Re: Compressing the AFTER TRIGGER queue
| From | Jim Nasby |
|---|---|
| Subject | Re: Compressing the AFTER TRIGGER queue |
| Date | |
| Msg-id | 4B502D16-0C61-408F-8664-AA432FD0A3DA@nasby.net |
| In reply to | Re: Compressing the AFTER TRIGGER queue (Simon Riggs <simon@2ndQuadrant.com>) |
| Responses | Re: Compressing the AFTER TRIGGER queue; Re: Compressing the AFTER TRIGGER queue |
| List | pgsql-hackers |
On Aug 2, 2011, at 7:09 AM, Simon Riggs wrote:
>>> The best compression and flexibility in
>>> that case is to store a bitmap since that will average out at about 1
>>> bit per row, with variable length bitmaps. Which is about 8 times
>>> better compression ratio than originally suggested, without any loss
>>> of metadata.
>>
>> Yeah that's probably possible in specific cases, but I'm still not
>> sure how to make it meet the full requirements of the after trigger
>> queue.
>
> I think you'd better explain what use case you are trying to optimise
> for. It seems unlikely that you will come up with a compression scheme
> that will fit all cases.
>
> The only cases that seem a problem to me are
> * bulk RI checks
> * large writes on tables using trigger based replication
> maybe you have others?

Not sure how much this relates to this discussion, but I have often wished we had AFTER FOR EACH STATEMENT triggers that provided OLD and NEW recordsets you could make use of. Sometimes it's very valuable to be able to look at *all* the rows that changed in a transaction in one shot.
--
Jim C. Nasby, Database Architect    jim@nasby.net
512.569.9461 (cell)                 http://jim.nasby.net
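To make the bitmap idea in the quoted text concrete, here is a minimal C sketch of a variable-length bitmap entry of the kind Simon describes: one entry covers a run of consecutive rows and marks each affected row with a single bit, so a large bulk operation costs roughly one bit per row plus a small fixed header. The names (`BitmapQueueEntry`, `entry_create`, `entry_set`) are hypothetical and this is not the actual after-trigger queue code in PostgreSQL's trigger.c; in particular, the sketch ignores the per-event metadata that the reply says the real queue must still carry.

```c
/*
 * Hypothetical sketch (not the real after-trigger queue): one queue entry
 * per run of consecutive rows, with a variable-length bitmap holding one
 * bit per row that has a pending after-trigger event.
 */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef struct BitmapQueueEntry
{
    uint32_t    first_row;      /* row number represented by bit 0 */
    uint32_t    nbits;          /* number of valid bits in bitmap[] */
    uint8_t     bitmap[];       /* variable length: (nbits + 7) / 8 bytes */
} BitmapQueueEntry;

/* Allocate an entry covering rows [first_row, first_row + nbits). */
static BitmapQueueEntry *
entry_create(uint32_t first_row, uint32_t nbits)
{
    size_t      nbytes = (nbits + 7) / 8;
    BitmapQueueEntry *e = malloc(sizeof(BitmapQueueEntry) + nbytes);

    if (e == NULL)
        return NULL;
    e->first_row = first_row;
    e->nbits = nbits;
    memset(e->bitmap, 0, nbytes);
    return e;
}

/* Mark a row as having a pending after-trigger event. */
static void
entry_set(BitmapQueueEntry *e, uint32_t row)
{
    uint32_t    off = row - e->first_row;

    if (off < e->nbits)
        e->bitmap[off / 8] |= (uint8_t) (1 << (off % 8));
}

int
main(void)
{
    /* A statement touching 1000 consecutive rows needs ~125 bytes of bitmap. */
    BitmapQueueEntry *e = entry_create(0, 1000);

    if (e == NULL)
        return 1;
    for (uint32_t row = 0; row < 1000; row++)
        entry_set(e, row);
    free(e);
    return 0;
}
```

As the example in `main` shows, 1000 consecutive rows fit in about 125 bytes of bitmap, which is where the roughly one-bit-per-row figure in the quoted text comes from; sparsely updated tables would not compress as well and would need some fallback representation.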