Re: Impact of checkpoint_segments under continual load conditions
| From | Christopher Petrilli |
| --- | --- |
| Subject | Re: Impact of checkpoint_segments under continual load conditions |
| Date | |
| Msg-id | 59d991c405071909341a10143@mail.gmail.com |
| In response to | Re: Impact of checkpoint_segments under continual load conditions (PFC <lists@boutiquenumerique.com>) |
| List | pgsql-performance |
On 7/19/05, PFC <lists@boutiquenumerique.com> wrote:
>
> > I think PFC's question was not directed towards modeling your
> > application, but about helping us understand what is going wrong
> > (so we can fix it).
>
> Exactly, I was wondering if this delay would allow things to get flushed,
> for instance, which would give information about the problem (if giving it
> a few minutes of rest resumed normal operation, it would mean that some
> buffer somewhere is getting filled faster than it can be flushed).
>
> So, go ahead with a few minutes even if it's unrealistic, that is not the
> point, you have to tweak it in various possible manners to understand the
> causes.

Totally understand, and I apologize if I sounded dismissive. I definitely
appreciate the insight and input.

> And instead of a pause, why not just set the duration of your test to
> 6000 iterations and run it two times without dropping the test table ?

This I can do. I'll probably set it to 5,000 for the first run, and then
start the second. In non-benchmark experience, however, this didn't seem
to make much difference.

> I'm going into wild guesses, but first you should want to know if the
> problem is because the table is big, or if it's something else. So you run
> the complete test, stopping a bit after it starts to make a mess, then
> instead of dumping the table and restarting the test anew, you leave it as
> it is, do something, then run a new test, but on this table which already
> has data.
>
> 'something' could be one of those:
> disconnect, reconnect (well you'll have to do that if you run the test
> twice anyway)
> just wait
> restart postgres
> unmount and remount the volume with the logs/data on it
> reboot the machine
> analyze
> vacuum
> vacuum analyze
> cluster
> vacuum full
> reindex
> defrag your files on disk (stopping postgres and copying the database
> from your disk to another one and back will do)
> or even dump'n'reload the whole database
>
> I think useful information can be extracted that way. If one of these
> fixes your problem it'll give hints.

This could take a while :-)

Chris
--
| Christopher Petrilli
| petrilli@gmail.com
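A minimal sketch of the two-pass run discussed above, for readers following along. The driver script `run_load.sh` and its `--iterations` flag are hypothetical stand-ins for the actual benchmark client used in this thread; the point is simply that the test table is not dropped between passes:

```sh
# Run the same load twice without dropping the test table in between.
# run_load.sh and --iterations are hypothetical stand-ins for the real
# benchmark driver; only the two-pass structure matters here.
./run_load.sh --iterations 5000   # first pass: table starts empty
./run_load.sh --iterations 5000   # second pass: table already has data
```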
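And a sketch of the between-run experiments from PFC's list, assuming a database named `bench` and a test table named `test_table` (both names are illustrative, not from the thread). The idea is to stop the load once it degrades, try exactly one action, then re-run the load on the same table and compare:

```sh
# Between-run experiments: pick ONE per run, then re-test on the same table.
psql bench -c "ANALYZE test_table;"
psql bench -c "VACUUM test_table;"
psql bench -c "VACUUM ANALYZE test_table;"
psql bench -c "VACUUM FULL test_table;"
psql bench -c "REINDEX TABLE test_table;"
# pre-8.4 CLUSTER syntax; assumes an index named test_table_pkey exists
psql bench -c "CLUSTER test_table_pkey ON test_table;"
pg_ctl -D "$PGDATA" restart   # restart postgres entirely
```

If one of these restores normal performance on the second pass, that narrows the cause down considerably (for instance, VACUUM helping points at dead-tuple bloat, while only a full restart helping points at something filling up inside the server).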