Re: Table and Index compression
From | Greg Stark
---|---
Subject | Re: Table and Index compression
Date | |
Msg-id | 407d949e0908070233r51cf352dxb28245262d329685@mail.gmail.com
In response to | Re: Table and Index compression (Pierre Frédéric Caillaud <lists@peufeu.com>)
Responses | Re: Table and Index compression
List | pgsql-hackers
2009/8/7 Pierre Frédéric Caillaud <lists@peufeu.com>:
>
> Also, about compressed NTFS: it can give you disk-full errors on read().

I suspect it's unavoidable, for reasons similar to the problems Postgres faces. When you issue a read() you have to find space in the filesystem cache to hold the data, so some other data has to be evicted. If that evicted data doesn't compress as well as it did previously, it can take more space on disk and cause the disk to become full.

This also implies that fsync() could generate that error...

> Back to the point of how to handle disk full errors:
> - we could write a file the size of shared_buffers at startup
> - if a write() reports disk full, delete the file above
> - we now have enough space to flush all of shared_buffers
> - flush and exit gracefully

Unfortunately that doesn't really help. It only addresses the issue for a single backend (or for however many backends are actually running when the error starts). The next connection could read in new data that then expands, and now you have no slop space left.

Put another way, we don't want to exit at all, gracefully or not. We want to throw an error, abort the transaction (or subtransaction), and keep going.

--
greg
http://mit.edu/~gsstark/resume.pdf
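A rough sketch of the quoted reserve-file idea might look like the following. The path, helper names, and single-retry policy are made up for illustration; this is not PostgreSQL code.

    /*
     * Sketch of the reserve-file ("slop space") scheme quoted above.
     * RESERVE_PATH and both helpers are hypothetical, for illustration only.
     */
    #include <errno.h>
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    #define RESERVE_PATH "pg_disk_reserve"   /* hypothetical slop file */

    /*
     * At startup: pre-write a file roughly the size of shared_buffers.
     * Note that on a compressing filesystem the filler would need to be
     * incompressible (e.g. random bytes) for the reservation to mean
     * anything -- the same class of problem discussed above.
     */
    static int
    reserve_create(size_t bytes)
    {
        char    block[8192];
        int     fd = open(RESERVE_PATH, O_CREAT | O_WRONLY | O_EXCL, 0600);

        if (fd < 0)
            return -1;
        memset(block, 0x5a, sizeof(block));
        for (size_t done = 0; done < bytes; done += sizeof(block))
        {
            if (write(fd, block, sizeof(block)) < 0)
            {
                close(fd);
                unlink(RESERVE_PATH);
                return -1;
            }
        }
        if (fsync(fd) < 0)
        {
            close(fd);
            unlink(RESERVE_PATH);
            return -1;
        }
        close(fd);
        return 0;
    }

    /*
     * On ENOSPC from write(): drop the reserve and retry once, so a pending
     * flush of shared_buffers has room to complete.
     */
    static ssize_t
    write_with_reserve(int fd, const void *buf, size_t len)
    {
        ssize_t n = write(fd, buf, len);

        if (n < 0 && errno == ENOSPC)
        {
            unlink(RESERVE_PATH);
            n = write(fd, buf, len);
        }
        return n;
    }

Note that the reserve is released exactly once, sized for the buffers that exist when the error first hits, which is the hole pointed out in the reply: backends that connect afterwards get no slop space at all.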