Re: page compression
From | Robert Haas |
---|---|
Subject | Re: page compression |
Date | |
Msg-id | AANLkTim4WJpMrJ_EYZgXPTyj-d48DooWHLOr47iLhsu9@mail.gmail.com |
In reply to | Re: page compression (Jim Nasby <jim@nasby.net>) |
List | pgsql-hackers |
On Mon, Jan 3, 2011 at 4:02 AM, Jim Nasby <jim@nasby.net> wrote:
> FWIW, last time I looked at how Oracle handled compression, it would only compress existing data. As soon as you modified a row, it ended up un-compressed, presumably in a different page that was also un-compressed.

IIUC, InnoDB basically compresses a block as small as it'll go, and then stores it in a regular-size block. That leaves free space at the end, which can be used to cram additional tuples into the page. Eventually that free space is exhausted, at which point you try to recompress the whole page and see if that gives you room to cram in even more stuff. I thought that was a pretty clever approach.

> I wonder if it would be feasible to use a fork to store where a compressed page lives inside the heap... if we could do that I don't see any reason why indexes wouldn't work. The changes required to support that might not be too horrific either...

At first blush, that sounds like a recipe for large amounts of undesirable random I/O.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
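[Editor's note: for illustration only, here is a minimal C sketch of the scheme Robert describes. This is neither InnoDB's nor PostgreSQL's actual code; real InnoDB appends a modification log into the free space rather than raw tuples, zlib's compress() stands in for the real page compressor, and every name below is invented.]

```c
/*
 * Hypothetical sketch: compress a page's payload as small as it will go,
 * keep it at the front of a fixed-size block, and cram additional tuples
 * uncompressed into the leftover tail.  When the tail is exhausted, the
 * caller recompresses the whole page.  Build with: cc sketch.c -lz
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <zlib.h>

#define BLOCK_SIZE 8192                 /* fixed on-disk block size */

typedef struct
{
    uint32_t compressed_len;            /* bytes of compressed payload */
    uint32_t tail_used;                 /* bytes of appended raw tuples */
    unsigned char data[BLOCK_SIZE - 8]; /* compressed payload + raw tail */
} Block;

/* Bytes still free in the uncompressed tail of the block. */
static size_t
block_free_space(const Block *blk)
{
    return sizeof(blk->data) - blk->compressed_len - blk->tail_used;
}

/*
 * Compress 'payload' into the front of the block; returns 0 on success,
 * -1 if the compressed result would not fit in a single block.
 */
static int
block_compress(Block *blk, const unsigned char *payload, size_t len)
{
    uLongf clen = sizeof(blk->data);

    if (compress(blk->data, &clen, payload, len) != Z_OK)
        return -1;
    blk->compressed_len = (uint32_t) clen;
    blk->tail_used = 0;
    return 0;
}

/*
 * Append a tuple uncompressed after the compressed payload; -1 means the
 * free space is exhausted and the caller should recompress the page.
 */
static int
block_append(Block *blk, const unsigned char *tuple, size_t len)
{
    if (len > block_free_space(blk))
        return -1;
    memcpy(blk->data + blk->compressed_len + blk->tail_used, tuple, len);
    blk->tail_used += (uint32_t) len;
    return 0;
}

int
main(void)
{
    Block blk;
    unsigned char page[2 * BLOCK_SIZE];
    unsigned char tuple[128] = {0};

    memset(page, 'x', sizeof(page));    /* highly compressible payload */
    if (block_compress(&blk, page, sizeof(page)) != 0)
        return 1;
    printf("compressed %zu bytes into %u; tail free: %zu\n",
           sizeof(page), (unsigned) blk.compressed_len,
           block_free_space(&blk));

    /* Cram tuples into the tail until it fills up. */
    while (block_append(&blk, tuple, sizeof(tuple)) == 0)
        ;
    printf("appended %u raw bytes before needing recompression\n",
           (unsigned) blk.tail_used);
    return 0;
}
```

A caller that gets -1 from block_append() would recompress the old payload together with the tail's tuples and retry; only if the recompressed page still does not fit would it have to split the page.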