Re: Compression of full-page-writes
From | Heikki Linnakangas
---|---
Subject | Re: Compression of full-page-writes
Date |
Msg-id | 54860B1B.2030401@vmware.com
In reply to | Re: Compression of full-page-writes (Andres Freund <andres@2ndquadrant.com>)
Responses | Re: Compression of full-page-writes
List | pgsql-hackers
On 12/08/2014 09:21 PM, Andres Freund wrote:
> I still think that just compressing the whole record if it's above a
> certain size is going to be better than compressing individual
> parts. Michael argued that that'd be complicated because of the varying
> size of the required 'scratch space'. I don't buy that argument
> though. It's easy enough to simply compress all the data in some fixed
> chunk size. I.e. always compress 64kb in one go. If there's more,
> compress that independently.

Doing it in fixed-size chunks doesn't help - you have to hold onto the compressed data until it's written to the WAL buffers. But you could just allocate a "large enough" scratch buffer, and give up if it doesn't fit. If the compressed data doesn't fit in e.g. 3 * 8kb, it didn't compress very well, so there's probably no point in compressing it anyway.

Now, an exception to that might be a record that contains something else than page data, like a commit record with millions of subxids, but I think we could live with not compressing those, even though it would be beneficial to do so.

- Heikki