Re: GDQ iimplementation
From | Hannu Krosing
---|---
Subject | Re: GDQ iimplementation
Date | |
Msg-id | 1274396657.12930.713.camel@hvost
In response to | Re: GDQ iimplementation (Simon Riggs <simon@2ndQuadrant.com>)
List | pgsql-cluster-hackers
On Thu, 2010-05-20 at 20:51 +0100, Simon Riggs wrote:
> On Tue, 2010-05-18 at 01:53 +0200, Hannu Krosing wrote:
> > On Mon, 2010-05-17 at 14:46 -0700, Josh Berkus wrote:
> > > Jan, Marko, Simon,
> > >
> > > I'm concerned that doing anything about the write overhead issue
> > > was discarded almost immediately in this discussion.
> >
> > The only thing we can do about write overhead _on_master_ is to
> > trade it for transaction boundary reconstruction on the slave (or
> > on a special intermediate node), effectively implementing a
> > "logical WAL" in addition to (or as an extension of) the current
> > WAL.
>
> That does sound pretty good to me.
>
> Fairly easy to make the existing triggers write XLOG_NOOP WAL records
> directly rather than writing to a queue table, which also gets logged
> to WAL. We could just skip the queue table altogether.
>
> Even better would be extending the WAL format to include all the
> information you need, so it gets written to WAL just once.

Maybe it is also possible (less intrusive / easier to implement) to add
some things to WAL which have met resistance as general trigger-based
features, like a "logical representation" of DDL. We already have the
equivalent of minimal ON COMMIT / ON ROLLBACK triggers in the form of
commit/rollback records in WAL.

Also, if we use extended WAL as the GDQ, then there should be a
possibility to write WAL in a form that supports only the "logical"
(plus, of course, Durability) features, but not full backup and
WAL-based replication. And a possibility to have "user-defined" WAL
records for specific tasks would also be a nice and PostgreSQL-ly
extensibility feature.

> > > This is not a trivial issue for performance; it means that each
> > > row which is being tracked by the GDQ needs to be written to disk
> > > a minimum of 4 times (once to WAL, once to the table, once to WAL
> > > for the queue, once to the queue).
> >
> > In reality the WAL record for the main table is forced to disk most
> > times in the same WAL write as the WAL record for the queue. And
> > the actual queue page does not reach disk at all if queue rotation
> > is fast.
>
> Josh, you really should do some measurements to show the overheads.
> Not sure you'll get people just to accept that assertion otherwise.

-- 
Hannu Krosing   http://www.2ndQuadrant.com
PostgreSQL Scalability and Availability
   Services, Consulting and Training
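[Editor's sketch] The "user-defined WAL records" idea discussed above can be illustrated with logical decoding messages (`pg_logical_emit_message()`, added in PostgreSQL 9.6, well after this thread), which write an application payload to WAL without any queue-table write. The trigger, function, and table names below are hypothetical, not from the thread:

```sql
-- Hypothetical capture trigger: instead of inserting into a queue table
-- (an extra heap write plus its own WAL record), emit the row change as
-- a transactional logical message -- it is written to WAL exactly once.
CREATE OR REPLACE FUNCTION gdq_capture() RETURNS trigger AS $$
BEGIN
    PERFORM pg_logical_emit_message(
        true,                        -- transactional: delivered only on commit
        'gdq.' || TG_TABLE_NAME,     -- prefix that consumers can filter on
        row_to_json(NEW)::text       -- "logical representation" of the change
    );
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER gdq_after_insert
    AFTER INSERT ON payments         -- hypothetical tracked table
    FOR EACH ROW EXECUTE FUNCTION gdq_capture();
```

A consumer would then read these records from WAL through a logical replication slot, reconstructing transaction boundaries from the commit records, much as the "logical WAL" proposal above envisions.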