Re: [REVIEW] Re: Compression of full-page-writes
From: Michael Paquier
Subject: Re: [REVIEW] Re: Compression of full-page-writes
Msg-id: CAB7nPqR-0pDn8tgUJsHCPuq7n6Bb3Bj1rRNc1ZnXrv4vY2KkJg@mail.gmail.com
In reply to: Re: [REVIEW] Re: Compression of full-page-writes (Michael Paquier <michael.paquier@gmail.com>)
Responses: Re: [REVIEW] Re: Compression of full-page-writes
List: pgsql-hackers
On Thu, Nov 27, 2014 at 11:59 PM, Michael Paquier <michael.paquier@gmail.com> wrote:
> On Thu, Nov 27, 2014 at 11:42 PM, Andres Freund <andres@2ndquadrant.com> wrote:
>> One thing Heikki brought up somewhere, which I thought was a good
>> point, was that it might be worthwhile to forget about compressing FPWs
>> themselves, and instead compress entire records when they're large. I
>> think that might just end up being rather beneficial, both from a code
>> simplicity and from the achievable compression ratio.
> Indeed, that would be quite simple to do. Now, determining an ideal cap
> value is tricky. We could always use a GUC switch to control it, but
> that seems sensitive to set; still, we could recommend a value in the
> docs based on average record sizes observed in the regression tests.

Thinking more about this, it would be difficult to apply compression to
all records because of the buffer that needs to be pre-allocated for
compression; otherwise, each code path creating a WAL record would need
to forecast the size of that record and adapt the buffer size before
entering a critical section. Of course, we could still apply the idea to
records within a given window size. The FPW compression does not have
those concerns: the buffer used for compression is capped at BLCKSZ for
a single block, or nblk * BLCKSZ if blocks are grouped for compression.
Feel free to comment if I am missing something obvious.
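
To make the buffer-sizing point concrete, here is a minimal sketch of
what I have in mind. It is only an illustration: compression_scratch,
nblk and both function names are made up, while pglz_compress,
PGLZ_MAX_OUTPUT and PGLZ_strategy_default come from PostgreSQL's pglz
API.

#include "postgres.h"
#include "common/pg_lzcompress.h"

/* Scratch buffer, allocated once outside any critical section. */
static char *compression_scratch = NULL;

/*
 * Run before entering a critical section.  nblk * BLCKSZ is the cap
 * mentioned above; PGLZ_MAX_OUTPUT adds pglz's small worst-case
 * overhead for incompressible input.
 */
static void
init_fpw_compression_buffer(int nblk)
{
	compression_scratch = palloc(PGLZ_MAX_OUTPUT(nblk * BLCKSZ));
}

/*
 * Safe inside a critical section: compresses into the pre-allocated
 * scratch buffer, never allocates.  Returns the compressed length,
 * or -1 if compression did not reduce the size.
 */
static int32
compress_fpw_blocks(const char *blocks, int32 len)
{
	return pglz_compress(blocks, len, compression_scratch,
						 PGLZ_strategy_default);
}

The key property is that palloc runs before the critical section, so
the insertion path itself never allocates, and the scratch buffer never
needs to be larger than the nblk * BLCKSZ cap plus pglz's overhead.

Regards,
--
Michael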