Re: "Healing" a table after massive updates

From: Scott Marlowe
Subject: Re: "Healing" a table after massive updates
Date:
Msg-id: dcc563d10809110922i3ec99047p5f90262e3383b71a@mail.gmail.com
In response to: Re: "Healing" a table after massive updates  (Bill Moran <wmoran@collaborativefusion.com>)
Responses: Re: "Healing" a table after massive updates
List: pgsql-general
On Thu, Sep 11, 2008 at 8:56 AM, Bill Moran
<wmoran@collaborativefusion.com> wrote:
> In response to Alvaro Herrera <alvherre@commandprompt.com>:
>
>> Bill Moran wrote:
>> > In response to "Gauthier, Dave" <dave.gauthier@intel.com>:
>> >
>> > > I might be able to answer my own question...
>> > >
>> > > vacuum FULL (analyze is optional)
>> >
>> > CLUSTER _may_ be a better choice, but carefully read the docs regarding
>> > its drawbacks first.  You may want to do some benchmarks to see if it's
>> > really needed before you commit to it as a scheduled operation.
>>
>> What drawbacks?
>
> There's the whole "there will be two copies of the table on-disk" thing
> that could be an issue if it's a large table.

I've also found CLUSTER to be pretty slow, even on 8.3.  On a server
that hits 30-40 MB/s write speed for random access during pgbench,
CLUSTER writes out at only 1 to 2 MB/s when it runs, and takes the
better part of a day on our biggest table.  vacuumdb -fz plus reindexdb
ran in about 6 hours, which means we could fit it into our maintenance
window.  VACUUM moves a lot more data per second than CLUSTER.
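For reference, the vacuumdb -fz + reindexdb maintenance pass described above looks roughly like this (a sketch: the database name "mydb" and table name "bigtable" are placeholders, not from the original post; run it against a live server during a maintenance window):

```shell
# -f = VACUUM FULL (compacts the table), -z = ANALYZE (refresh planner stats)
vacuumdb -f -z mydb

# On servers of this era, VACUUM FULL tends to bloat indexes,
# so rebuild them afterwards on the big table
reindexdb --table bigtable mydb
```

Both commands take standard connection options (-h, -p, -U) like other PostgreSQL client tools.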

In the pgsql-general list, by send date:

Previous
From: "Dave Page"
Date:
Message: Re: Windows ODBC Driver
Next
From: johnf
Date:
Message: Re: keep alive losing connections