[HACKERS] Re: Anyone have experience benchmarking very high effective_io_concurrency on NVME's?
From | Greg Stark
Subject | [HACKERS] Re: Anyone have experience benchmarking very high effective_io_concurrency on NVME's?
Date |
Msg-id | CAM-w4HMZpQ0CgZP=6zFkSo1LGBn2k3xviRJod2ksGgu1-rsWbQ@mail.gmail.com
In reply to | [HACKERS] Anyone have experience benchmarking very high effective_io_concurrency on NVME's? (Chris Travers <chris.travers@adjust.com>)
Responses | Re: [HACKERS] Re: Anyone have experience benchmarking very high effective_io_concurrency on NVME's?
List | pgsql-hackers
On 31 October 2017 at 07:05, Chris Travers <chris.travers@adjust.com> wrote:
> Hi;
>
> After Andres's excellent talk at PGConf we tried benchmarking
> effective_io_concurrency on some of our servers and found that those which
> have a number of NVME storage volumes could not fill the I/O queue even at
> the maximum setting (1000).

And was the system still I/O bound? If the CPU was 100% busy, then perhaps Postgres just can't keep up with the I/O system. It would depend on the workload, though: if you start many very large sequential scans you may be able to push the I/O system harder.

Keep in mind that effective_io_concurrency only really affects bitmap heap scans (and, to a small degree, index scans). It works by issuing posix_fadvise() calls for upcoming buffers one by one. That gets multiple spindles active, but it's not really going to scale to many thousands of prefetches (an effective_io_concurrency of 1000 actually means 7485 prefetches). At some point those I/Os are going to start completing before Postgres even has a chance to start processing the data.

--
greg
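For readers wondering where the 7485 figure comes from: PostgreSQL does not prefetch effective_io_concurrency pages directly; it derives a prefetch distance from the setting. A sketch of that derivation in Python, assuming the n·H(n) formula (n times the n-th harmonic number) used by ComputeIoConcurrency in the server's buffer manager code of that era:

```python
import math

def prefetch_target(effective_io_concurrency: int) -> int:
    """Approximate the prefetch distance PostgreSQL derives from
    effective_io_concurrency = n: the sum over i of n/i for i in 1..n,
    i.e. n * H(n) where H(n) is the n-th harmonic number."""
    n = effective_io_concurrency
    total = sum(n / i for i in range(1, n + 1))
    return round(total)

print(prefetch_target(1))     # 1 spindle -> 1 prefetch
print(prefetch_target(1000))  # the maximum setting -> 7485 prefetches
```

Since H(n) grows only like ln(n), the prefetch distance grows a bit faster than linearly in the setting, which is why even the maximum of 1000 tops out at a few thousand in-flight fadvise hints rather than the tens of thousands a large NVMe array can absorb.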