Re: [GENERAL] Bottlenecks with large number of relation segment files
| From | Amit Langote |
|---|---|
| Subject | Re: [GENERAL] Bottlenecks with large number of relation segment files |
| Date | |
| Msg-id | CA+HiwqGV5pjEps2fxw=Vs9+SwbChwqhyJTQsAF7tG214TAUkgw@mail.gmail.com |
| In reply to | Re: [GENERAL] Bottlenecks with large number of relation segment files (KONDO Mitsumasa <kondo.mitsumasa@lab.ntt.co.jp>) |
| Responses | Re: [GENERAL] Bottlenecks with large number of relation segment files |
| List | pgsql-hackers |
On Mon, Aug 5, 2013 at 5:01 PM, KONDO Mitsumasa <kondo.mitsumasa@lab.ntt.co.jp> wrote:
> Hi Amit,
>
> (2013/08/05 15:23), Amit Langote wrote:
>> May the routines in fd.c become a bottleneck with a large number of
>> concurrent connections to the above database, say something like
>> "pgbench -j 8 -c 128"? Is there any other place I should be paying
>> attention to?
>
> What kind of file system did you use?
>
> When opening a file, the ext3 or ext4 file system seems to search the
> directory sequentially for the file's inode. Also, PostgreSQL limits each
> process to 1000 file descriptors, which seems too small. Please change
> "max_files_per_process = 1000;" in src/backend/storage/file/fd.c;
> rewriting it changes the per-process FD limit. I have already created a
> fix patch for this problem in postgresql.conf and will submit it in the
> next CF.

Thank you for replying, Kondo-san.

The file system is ext4.

So, within the limits of max_files_per_process, the routines of fd.c should not become a bottleneck?

--
Amit Langote
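For readers following along, the limit Kondo-san refers to is the default of the max_files_per_process setting, defined in src/backend/storage/file/fd.c. The sketch below is an approximate excerpt from PostgreSQL source of that era; the comment wording is mine, not the original source's.

```c
/*
 * src/backend/storage/file/fd.c (approximate excerpt)
 *
 * Soft limit on the number of kernel file descriptors a single backend
 * process will try to keep open at once.  fd.c manages virtual file
 * descriptors (VFDs) above this limit by transparently closing
 * least-recently-used files and reopening them on demand, so exceeding
 * the limit costs extra open()/close() calls rather than failing.
 */
int			max_files_per_process = 1000;
```

The same value is also exposed as the max_files_per_process parameter in postgresql.conf (it takes effect at server start), so raising it does not strictly require editing fd.c and recompiling.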