Re: Inserts in 'big' table slowing down the database

From: Ivan Voras
Subject: Re: Inserts in 'big' table slowing down the database
Date:
Msg-id: k223s0$e83$1@ger.gmane.org
In reply to: Inserts in 'big' table slowing down the database  (Stefan Keller <sfkeller@gmail.com>)
Responses: Re: Inserts in 'big' table slowing down the database  (Stefan Keller <sfkeller@gmail.com>)
List: pgsql-performance
On 03/09/2012 13:03, Stefan Keller wrote:
> Hi,
>
> I'm having performance issues with a simple table containing 'Nodes'
> (points) from OpenStreetMap:
>
>   CREATE TABLE nodes (
>       id bigint PRIMARY KEY,
>       user_name text NOT NULL,
>       tstamp timestamp without time zone NOT NULL,
>       geom GEOMETRY(POINT, 4326)
>   );
>   CREATE INDEX idx_nodes_geom ON nodes USING gist (geom);
>
> The number of rows grows steadily and will soon reach one billion
> (1'000'000'000), hence the bigint id.
> Now, hourly inserts (as well as updates and deletes) are constantly
> slowing down the database (PostgreSQL 9.1).
> Before looking at non-durable settings [1], I'd like to know what
> choices I have for tuning it while keeping the database productive:
> cluster index? partition table? use tablespaces? reduce physical block size?
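
For illustration, a minimal sketch of what the "partition table" option
would look like on PostgreSQL 9.1, which only supports inheritance-based
partitioning driven by a trigger. The monthly range on tstamp and every
object name other than nodes are assumptions made up for the sketch:

  -- Hypothetical child table for one month, with a CHECK constraint so
  -- constraint_exclusion can skip it when queries filter on tstamp:
  CREATE TABLE nodes_2012_09 (
      CHECK (tstamp >= '2012-09-01' AND tstamp < '2012-10-01')
  ) INHERITS (nodes);
  CREATE INDEX idx_nodes_2012_09_geom ON nodes_2012_09 USING gist (geom);

  -- Route inserts on the parent into the matching child table:
  CREATE OR REPLACE FUNCTION nodes_insert_router() RETURNS trigger AS $$
  BEGIN
      IF NEW.tstamp >= '2012-09-01' AND NEW.tstamp < '2012-10-01' THEN
          INSERT INTO nodes_2012_09 VALUES (NEW.*);
      ELSE
          RAISE EXCEPTION 'no partition for tstamp %', NEW.tstamp;
      END IF;
      RETURN NULL;  -- the row has already been routed to a child table
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER nodes_insert_trigger
      BEFORE INSERT ON nodes
      FOR EACH ROW EXECUTE PROCEDURE nodes_insert_router();

Note that with inheritance the children do not inherit the PRIMARY KEY,
so uniqueness of id would have to be enforced per partition.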

You need to describe in detail what "slowing down" means in your
case. Do the disk drives somehow do more operations per transaction?
Does the database use more CPU cycles? Is there swapping? What is the
expected (previous) performance?

At a guess, it is very unlikely that using non-durable settings will
help you here.
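
As a starting point for answering those questions from inside the
database, here is a minimal sketch using the stock statistics views
(the views and columns below all exist in 9.1; which numbers matter
for this workload is exactly the detail being asked for above):

  -- Row churn and dead-tuple buildup on the table in question:
  SELECT n_tup_ins, n_tup_upd, n_tup_del, n_live_tup, n_dead_tup,
         last_autovacuum
  FROM pg_stat_user_tables
  WHERE relname = 'nodes';

  -- Checkpoint pressure: frequent requested (non-timed) checkpoints
  -- often accompany a heavy insert load:
  SELECT checkpoints_timed, checkpoints_req,
         buffers_checkpoint, buffers_backend
  FROM pg_stat_bgwriter;

  -- Cache effectiveness for the whole database:
  SELECT blks_read, blks_hit,
         round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2)
             AS cache_hit_pct
  FROM pg_stat_database
  WHERE datname = current_database();

OS-level numbers (e.g. from iostat and vmstat) are still needed to
answer the disk-operations and swapping questions.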


