insert performance
From | Jinhua Luo |
---|---|
Subject | insert performance |
Date | |
Msg-id | CAAc9rOz2+yL=SWdA-NMCVg-iPBCBdzW+uQyNi3+se9S6CrpvvA@mail.gmail.com |
Responses |
Re: insert performance
(Jim Nasby <Jim.Nasby@BlueTreble.com>)
Re: insert performance (Jeff Janes <jeff.janes@gmail.com>) |
List | pgsql-performance |
The database is PostgreSQL 9.3, running on Debian 7 with 8 CPU cores and 8096 MB of physical memory.
There is a big table with more than 70 columns. Rows are inserted constantly, at about 700 rows/sec. It's not feasible to use COPY, because the data is not predefined or provisioned; it's generated on demand by clients.
To make a clean test environment, I clone the table, removing the indexes (keeping the primary key) and triggers, and use pgbench to benchmark the INSERT statement alone.
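A sketch of such a setup (table, column, and database names here are hypothetical, since the original schema isn't shown; the primary-key collisions possible with a random id are ignored for benchmarking purposes):
--------------
-- Clone the table structure only; indexes and triggers are not
-- copied, and the primary key is re-added by hand.
CREATE TABLE big_table_test (LIKE big_table INCLUDING DEFAULTS);
ALTER TABLE big_table_test ADD PRIMARY KEY (id);
--------------
insert.sql, the statement pgbench runs repeatedly (\setrandom is the 9.3-era pgbench syntax for a random variable):
--------------
\setrandom id 1 1000000000
INSERT INTO big_table_test (id, col1, col2 /* , ... */)
VALUES (:id, 'x', 42 /* , ... */);
--------------
driven with, e.g., 8 clients for 60 seconds, skipping the default vacuum step:
--------------
pgbench -n -f insert.sql -c 8 -T 60 testdb
--------------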
Here are some key items in postgresql.conf:
--------------
shared_buffers = 1024MB
work_mem = 32MB
maintenance_work_mem = 128MB
bgwriter_delay = 20ms
synchronous_commit = off
checkpoint_segments = 64
checkpoint_completion_target = 0.9
effective_cache_size = 4096MB
log_min_duration_statement = 1000
--------------
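With synchronous_commit = off and these checkpoint settings, insert throughput is mostly bound by WAL and buffer writes. One way to check whether checkpoints or backend buffer writes are the bottleneck is to sample pg_stat_bgwriter (a standard view in 9.3) before and after a pgbench run:
--------------
SELECT checkpoints_timed, checkpoints_req,
       buffers_checkpoint, buffers_clean,
       buffers_backend, buffers_backend_fsync
FROM pg_stat_bgwriter;
--------------
If buffers_backend grows much faster than buffers_checkpoint and buffers_clean during the run, backends are evicting dirty buffers themselves, which suggests shared_buffers or the bgwriter settings are worth revisiting.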