Re: 121+ million record table perf problems

From Craig James
Subject Re: 121+ million record table perf problems
Date
Msg-id 464E29A4.4050309@emolecules.com
In reply to 121+ million record table perf problems  (cyber-postgres@midnightfantasy.com)
Responses Re: 121+ million record table perf problems  (Alvaro Herrera <alvherre@commandprompt.com>)
List pgsql-performance
>
> I've got a table with ~121 million records in it.  Select count on it
> currently takes ~45 minutes, and an update to the table to set a value
> on one of the columns I finally killed after it ran 17 hours and had
> still not completed.  Queries into the table are butt slow, and
>
> The update query that started this all I had to kill after 17hours.
> It should have updated all 121+ million records.  That brought my
> select count down to 19 minutes, but still a far cry from acceptable.

If you have a column that needs to be updated often for all rows,
separate it into a different table, and create a view that joins it back
to the main table so that your application still sees the old schema.
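
In terms of DDL, the idea is something along these lines; the table,
column, and index names here are invented purely for illustration:

CREATE TABLE items (
    item_id     bigint PRIMARY KEY,
    name        text,
    description text              -- ...plus the other rarely-changing columns
);

CREATE TABLE item_status (
    item_id bigint PRIMARY KEY REFERENCES items(item_id),
    status  integer               -- the column you keep mass-updating
);

CREATE INDEX item_status_status_idx ON item_status (status);

-- The view joins the two back together so the application
-- still sees the old single-table layout:
CREATE VIEW items_v AS
    SELECT i.item_id, i.name, i.description, s.status
    FROM items i
    LEFT JOIN item_status s USING (item_id);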

This will greatly speed up your update since (in Postgres) an update is
the same as a delete+insert.  By updating that one column, you're
rewriting all 121 million rows.  If you separate it, you're only
rewriting that one column.  Don't forget to vacuum/analyze and
reindex when you're done.
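
Against the hypothetical split-off table above, that cleanup is just:

VACUUM ANALYZE item_status;   -- reclaim dead row versions, refresh planner statistics
REINDEX TABLE item_status;    -- rebuild indexes bloated by the mass update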

Better yet, if you can stand a short down time, you can drop indexes on
that column, truncate, then do 121 million inserts, and finally
reindex.  That will be MUCH faster.
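
A rough sketch of that path, again using the made-up item_status table;
where the new values come from is up to you (here they're simply derived
from items):

BEGIN;
DROP INDEX item_status_status_idx;        -- the secondary index from the example above
TRUNCATE item_status;
INSERT INTO item_status (item_id, status)
    SELECT item_id, 0                     -- whatever the new value should be
    FROM items;
COMMIT;

CREATE INDEX item_status_status_idx ON item_status (status);
ANALYZE item_status;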

Craig


