Re: slow commits with heavy temp table usage in 8.4.0
From | Alex Hunsaker
---|---
Subject | Re: slow commits with heavy temp table usage in 8.4.0
Date |
Msg-id | 34d269d40908061113j5c883d91s5633a35e1e38bf87@mail.gmail.com
In reply to | Re: slow commits with heavy temp table usage in 8.4.0 ("Todd A. Cook" <tcook@blackducksoftware.com>)
Responses | Re: slow commits with heavy temp table usage in 8.4.0
List | pgsql-hackers
On Thu, Aug 6, 2009 at 11:32, Todd A. Cook <tcook@blackducksoftware.com> wrote:
> Tom Lane wrote:
>>
>> I took a look through the CVS history and verified that there were
>> no post-8.4 commits that looked like they'd affect performance in
>> this area.  So I think it's got to be a platform difference, not a
>> PG version difference.  In particular, I think we are probably looking
>> at a filesystem issue: how fast can you delete [...] 30000 files.
>
> I'm still on Fedora 7, so maybe this will be motivation to upgrade.
>
> FYI, on my 8.2.13 system, the test created 30001 files, which were all
> deleted during the commit.  On my 8.4.0 system, the test created 60001
> files, of which 30000 were deleted at commit and 30001 disappeared
> later (presumably during a checkpoint?).

Smells like the FSM?  In 8.4 each relation gets a separate free space map fork, which would account for the roughly doubled file count.  And with double the number of files, maybe something simple like turning on dir_index (if you are on ext3) will help.
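For anyone who wants to check whether the extra files really are FSM forks, here is a rough sketch; the database OID 16384 and the $PGDATA path are placeholders, not values from Todd's report:

    # Count relation files and FSM forks in the database directory.
    # 16384 is a placeholder database OID; adjust it and $PGDATA as needed.
    cd "$PGDATA/base/16384"

    ls | wc -l            # total files (Todd saw ~60001 on 8.4.0)
    ls | grep -c '_fsm'   # free space map forks, one per relation in 8.4
    ls | grep -vc '_fsm'  # main relation forks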
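And a sketch of the dir_index suggestion, assuming the cluster lives on an ext3 filesystem at /dev/sda1 (substitute your own device); the filesystem should be unmounted, or mounted read-only, while e2fsck rebuilds the existing directories:

    # Check whether dir_index is already enabled.
    tune2fs -l /dev/sda1 | grep 'features'

    # Enable hashed directory indexes; only directories created afterwards
    # get an index automatically, so rebuild the existing ones with e2fsck -D.
    tune2fs -O dir_index /dev/sda1
    e2fsck -fD /dev/sda1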