Re: Protecting against unexpected zero-pages: proposal
From | Jim Nasby
---|---
Subject | Re: Protecting against unexpected zero-pages: proposal
Date |
Msg-id | CEE702B9-D762-4BCD-A0A2-B1947C016F10@nasby.net
In reply to | Re: Protecting against unexpected zero-pages: proposal (Greg Stark <gsstark@mit.edu>)
Responses | Re: Protecting against unexpected zero-pages: proposal, Re: Protecting against unexpected zero-pages: proposal
List | pgsql-hackers
On Nov 9, 2010, at 9:27 AM, Greg Stark wrote:

> On Tue, Nov 9, 2010 at 3:25 PM, Greg Stark <gsstark@mit.edu> wrote:
>> Oh, I'm mistaken. The problem was that buffering the writes was
>> insufficient to deal with torn pages. Even if you buffer the writes, if
>> the machine crashes while only having written half the buffer out then
>> the checksum won't match. If the only changes on the page were hint
>> bit updates then there will be no full page write in the WAL log to
>> repair the block.
>
> Huh, this implies that if we did go through all the work of
> segregating the hint bits and could arrange that they all appear on
> the same 512-byte sector, and if we buffered them so that we were
> writing the same bits we checksummed, then we actually *could* include
> them in the CRC after all, since even a torn page will almost certainly
> not tear an individual sector.

If there's a torn page then we've crashed, which means we go through crash recovery, which puts a valid page (with a valid CRC) back in place from the WAL. What am I missing?

BTW, I agree that at minimum we need to leave the option of only raising a warning when we hit a checksum failure. Some people might want Postgres to treat it as an error by default, but most folks will at least want the option to look at their (corrupt) data.

--
Jim C. Nasby, Database Architect   jim@nasby.net
512.569.9461 (cell)                http://jim.nasby.net
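To make the warning-versus-error suggestion concrete, here is a minimal standalone C sketch, not PostgreSQL code: it verifies a page image against a stored CRC and, depending on a flag, either reports a hard error or merely warns so the (possibly corrupt) data can still be examined. BLCKSZ, crc32_buf(), verify_page(), and the checksum_is_error flag are names invented for this illustration, not anything from the actual proposal.

```c
/*
 * Standalone sketch of the "warn vs. error on checksum failure" policy.
 * Not PostgreSQL source; all names here are invented for the example.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLCKSZ 8192             /* page size assumed for the sketch */

/* Plain bitwise CRC-32 (reflected, polynomial 0xEDB88320); slow but self-contained. */
static uint32_t
crc32_buf(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++)
    {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ ((crc & 1u) ? 0xEDB88320u : 0u);
    }
    return crc ^ 0xFFFFFFFFu;
}

/*
 * Verify a page image against its stored checksum.  "checksum_is_error"
 * models the proposed knob: treat a mismatch as a hard error, or only
 * warn so the damaged page can still be read and inspected.
 */
static bool
verify_page(const uint8_t *page, uint32_t stored_crc, bool checksum_is_error)
{
    uint32_t actual = crc32_buf(page, BLCKSZ);

    if (actual == stored_crc)
        return true;

    fprintf(stderr, "%s: page checksum mismatch (computed %08" PRIx32
            ", stored %08" PRIx32 ")\n",
            checksum_is_error ? "ERROR" : "WARNING", actual, stored_crc);
    return false;
}

int
main(void)
{
    uint8_t page[BLCKSZ];

    memset(page, 0xAB, sizeof(page));           /* pretend page contents */
    uint32_t crc = crc32_buf(page, BLCKSZ);     /* checksum taken at write time */

    page[100] ^= 0x01;                          /* simulate on-disk corruption */

    verify_page(page, crc, false);              /* warn only: data stays readable */
    verify_page(page, crc, true);               /* strict mode: would abort the read */
    return 0;
}
```

In the server itself this choice would presumably be exposed as a configuration setting rather than a function argument; the sketch just shows the two behaviours side by side.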