Re: LWLock contention: I think I understand the problem
| From | Tom Lane |
|---|---|
| Subject | Re: LWLock contention: I think I understand the problem |
| Date | |
| Msg-id | 29890.1010367425@sss.pgh.pa.us |
| In reply to | Re: LWLock contention: I think I understand the problem (Tom Lane <tgl@sss.pgh.pa.us>) |
| Responses | Re: LWLock contention: I think I understand the problem |
| List | pgsql-hackers |
Hannu Krosing <hannu@krosing.net> writes:
> Should this not be 'vacuum full' ?
>>
>> Don't see why I should expend the extra time to do a vacuum full.
>> The point here is just to ensure a comparable starting state for all
>> the runs.

> Ok. I thought that you would also want to compare performance for different
> concurrency levels where the number of dead tuples matters more as shown by
> the attached graph. It is for Dual PIII 800 on RH 7.2 with IDE hdd, scale 5,
> 1-25 concurrent backends and 10000 trx per run

VACUUM and VACUUM FULL will provide the same starting state as far as
number of dead tuples goes: none.  So that doesn't explain the difference
you see.

My guess is that VACUUM FULL looks better because all the new tuples will
get added at the end of their tables; possibly that improves I/O locality
to some extent.  After a plain VACUUM the system will tend to allow each
backend to drop new tuples into a different page of a relation, at least
until the partially-empty pages all fill up.

What -B setting were you using?

			regards, tom lane
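[Editor's note: a minimal sketch of the reset step under discussion, assuming the standard pgbench setup; the table name `accounts` is the one pgbench of that era created, and the `pg_stat_user_tables` check reflects modern PostgreSQL, not necessarily the version benchmarked here.]

```sql
-- Reset to a comparable starting state before each benchmark run.
-- Plain VACUUM reclaims dead tuples in place, leaving partially-empty
-- pages that later inserts can reuse; VACUUM FULL additionally compacts
-- the table, so new tuples are appended at the end of the relation.
VACUUM ANALYZE accounts;
-- or: VACUUM FULL ANALYZE accounts;

-- Either way, no dead tuples remain afterwards (modern PostgreSQL):
SELECT relname, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'accounts';
```

The point in the reply above is that the two commands differ not in how many dead tuples remain (zero in both cases) but in where subsequent inserts land on disk.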