Re: Protecting against unexpected zero-pages: proposal
| From | Aidan Van Dyk |
|---|---|
| Subject | Re: Protecting against unexpected zero-pages: proposal |
| Date | |
| Msg-id | AANLkTin2+w7UW-xVvTs2N4+uoNZMMg2SVqophoR7TkCR@mail.gmail.com |
| In reply to | Re: Protecting against unexpected zero-pages: proposal (Greg Stark <gsstark@mit.edu>) |
| List | pgsql-hackers |
On Tue, Nov 9, 2010 at 3:25 PM, Greg Stark <gsstark@mit.edu> wrote:

> Then we might have to get rid of hint bits. But they're hint bits for
> a metadata file that already exists, creating another metadata file
> doesn't solve anything.

Is there any way to instrument the writes of dirty buffers from shared memory, and see how many of the pages normally being written are not backed by WAL (hint-only updates)?

Just "dumping" those buffers without writing them would allow at least *checksums* to go through without losing all the benefits of the hint bits.

I've got a hunch (with no proof) that the penalty of not writing them would be borne largely by small database installs. Large OLTP databases probably won't have pages with hint bits set but no WAL'ed change, and large data-warehouse ones will probably vacuum freeze big tables on load to avoid the huge write penalty the first time they scan the tables...

</waving hands>

--
Aidan Van Dyk                                             Create like a god,
aidan@highrise.ca                                         command like a king,
http://www.highrise.ca/                                   work like a slave.
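[Editor's note: the instrumentation idea above — classifying each dirty page at write-out time as WAL-backed or hint-only — can be illustrated with a toy model. This is a hedged sketch; all class and method names here are hypothetical and are not PostgreSQL internals, which track this via buffer flags in the C buffer manager.]

```python
from dataclasses import dataclass

@dataclass
class Buffer:
    dirty: bool = False
    wal_backed: bool = False  # set when a WAL-logged change dirtied the page

class BufferPool:
    """Toy model: count, at flush time, how many dirty pages were dirtied
    only by hint-bit updates (i.e. carry no WAL-logged change)."""

    def __init__(self, nbuffers: int):
        self.buffers = [Buffer() for _ in range(nbuffers)]
        self.wal_backed_writes = 0
        self.hint_only_writes = 0

    def mark_dirty(self, i: int, wal_logged: bool) -> None:
        # A single WAL-logged change makes the whole page WAL-backed,
        # even if hint-only updates also touched it.
        buf = self.buffers[i]
        buf.dirty = True
        buf.wal_backed = buf.wal_backed or wal_logged

    def flush_all(self) -> None:
        # Instrumentation point: classify each dirty page as it is written.
        for buf in self.buffers:
            if not buf.dirty:
                continue
            if buf.wal_backed:
                self.wal_backed_writes += 1
            else:
                self.hint_only_writes += 1
            buf.dirty = buf.wal_backed = False
```

Under the proposal in the mail, the `hint_only_writes` counter is exactly the set of buffers one could "dump" without writing: skipping them loses only hint bits, never a WAL-logged change.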