Re: Rapidly decaying performance repopulating a large table

From: David Wilson
Subject: Re: Rapidly decaying performance repopulating a large table
Date:
Msg-id e7f9235d0804221359o72ecbb51mf685b0a17d32c45d@mail.gmail.com
In response to: Re: Rapidly decaying performance repopulating a large table  ("Scott Marlowe" <scott.marlowe@gmail.com>)
Responses: Re: Rapidly decaying performance repopulating a large table  ("Scott Marlowe" <scott.marlowe@gmail.com>)
List: pgsql-general
On Tue, Apr 22, 2008 at 4:38 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
>  The best bet is to issue an "analyze table" (with your table name in
>  there, of course) and see if that helps.  Quite often the real issue
>  is that pgsql is using a method to insert rows when you have 10million
>  of them that made perfect sense when you had 100 rows, but no longer
>  is the best way.
>

This has caused the behavior to become erratic. Individual copies are
now taking anywhere from 2 seconds (great!) to 30+ seconds (back where
we were before). I also clearly can't ANALYZE the table after every 4k
batch; even if that resulted in 2-second copies, the ANALYZE would take
as long as the copy itself would have. I could conceivably analyze
after every ~80k rows (the next larger unit of batching; I'd love to be
able to batch the copies at that level, but dependencies ensure that I
can't), but it seems odd to have to analyze so often.
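For what it's worth, the arithmetic above can be sketched as a small helper that decides after which batches an ANALYZE would run if it were issued every ~80k rows instead of every 4k batch. This is just an illustrative sketch with hypothetical names (`analyze_points`, `batch_size`, `analyze_every`), not anything from the actual loader:

```python
def analyze_points(total_rows, batch_size=4_000, analyze_every=80_000):
    """Return the (0-based) batch indices after which an ANALYZE
    would run, if we analyze every `analyze_every` rows rather than
    after every `batch_size`-row COPY batch."""
    points = []
    rows_since_analyze = 0
    for batch in range(total_rows // batch_size):
        rows_since_analyze += batch_size
        if rows_since_analyze >= analyze_every:
            points.append(batch)
            rows_since_analyze = 0
    return points

# With 4k batches and an 80k-row interval, ANALYZE fires after every
# 20th batch, i.e. a 20x reduction in ANALYZE overhead versus
# analyzing after each batch.
```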

Oh, barring COPY delays I'm generating the data at a rate of something
like a half million rows every few minutes, if that's relevant.
--
- David T. Wilson
david.t.wilson@gmail.com
