Re: db corruption pg vs mysql
From:        Andrew Sullivan
Subject:     Re: db corruption pg vs mysql
Date:
Msg-id:      20070522191737.GH23486@phlogiston.dyndns.org
In reply to: db corruption pg vs mysql ("tim h" <timh@vyew.com>)
List:        pgsql-advocacy
On Tue, May 22, 2007 at 11:52:40AM -0700, tim h wrote:
> Is there a way to prevent or minimize corruption due to service or hardware
> failure?

This partly depends on what is causing the corruption. If your disk controller breaks and writes garbage all over the disk, or your operating system's fsck is broken and moves the whole data area into /lost+found on reboot, there's not much PostgreSQL or any other database system can do. But in the event of bog-standard failures, PostgreSQL is extremely reliable in how it handles your data.

Note that some hard drives lie about completing fsync, and in that case your data is indeed subject to corruption on failure. Again, no database can be reliable when the hardware lies about what it has done. (Buy better hardware, in that case :)

To the best of my knowledge, I have never had data corruption under Postgres that turned out to be a Postgres problem (I have had it happen from both a bad drive controller and from bad operating systems).

> also, will the use of Transactional queries prevent corruption, or is that a
> different issue?

To the extent a COMMIT means "the data is really actually on the disk", PostgreSQL's care with transactions helps avoid this problem. But the bigger problem with MyISAM's lack of transactions is that multi-statement events don't all happen at once (you've probably heard of ACID, and this is part of it). So you can end up in a situation where one table had a portion of the data added, but another table _didn't_ get the data. In other words, your data is inconsistent.

A

--
Andrew Sullivan | ajs@crankycanuck.ca
The plural of anecdote is not data.
        --Roger Brinner
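To make the multi-statement point above concrete, here is a minimal sketch in Python with psycopg2, using hypothetical accounts and audit_log tables and placeholder connection settings (none of these come from the original message). Both statements take effect together at COMMIT, or neither does if anything fails first.

import psycopg2

# Connection parameters are placeholders; adjust for your own setup.
conn = psycopg2.connect("dbname=example user=example")

try:
    with conn.cursor() as cur:
        # Two statements that must succeed or fail together.
        cur.execute(
            "UPDATE accounts SET balance = balance - %s WHERE id = %s",
            (100, 1),
        )
        cur.execute(
            "INSERT INTO audit_log (account_id, delta) VALUES (%s, %s)",
            (1, -100),
        )
    # COMMIT makes both changes visible (and durable) at once.
    conn.commit()
except Exception:
    # Any failure before COMMIT rolls back both statements,
    # so one table can't end up with the data while the other is missing it.
    conn.rollback()
    raise
finally:
    conn.close()

With a non-transactional engine like MyISAM there is nothing to roll back, so a crash between the two statements leaves the first change applied and the second missing, which is exactly the inconsistency described above.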