Re: Detrimental performance impact of ringbuffers on performance
From | Jeff Janes
Subject | Re: Detrimental performance impact of ringbuffers on performance
Msg-id | CAMkU=1zFPLq8Z+cgjox-3F3xMmf1G6i+-PpXnkF4DG0NFge+ow@mail.gmail.com
In reply to | Re: Detrimental performance impact of ringbuffers on performance (Andres Freund <andres@anarazel.de>)
List | pgsql-hackers
On Tue, Apr 12, 2016 at 11:38 AM, Andres Freund <andres@anarazel.de> wrote:

>> The bottom line here, IMHO, is not that there's anything wrong with our
>> ring buffer implementation, but that if you run PostgreSQL on a system
>> where the I/O is hitting a 5.25" floppy (not to say 8") the performance
>> may be less than ideal. I really appreciate IBM donating hydra - it's
>> been invaluable over the years for improving PostgreSQL performance -
>> but I sure wish they had donated a better I/O subsystem.

When I had this problem some years ago, I traced it down to the fact that you have to sync the WAL before you can evict a dirty page. If your vacuum is doing a meaningful amount of cleaning, you encounter a dirty page with a not-already-synced LSN about once per trip around the ring buffer. That really destroys your vacuuming performance with a 256kB ring if your fsync actually has to reach spinning disk.

What I ended up doing was hacking it so that it used a BAS_BULKWRITE ring when the vacuum was being run with a zero vacuum cost delay.

> It's really not just hydra. I've seen the same problem on 24 disk raid-0
> type installations. The small ringbuffer leads to reads/writes being
> constantly interspersed, apparently defeating readahead.

Was there a BBU on that? I would think slow fsyncs are more likely than defeated readahead. On the other hand, I don't hear about too many 24-disk RAIDs without a BBU.