Re: Bulk Insert into PostgreSQL

From Peter Geoghegan
Subject Re: Bulk Insert into PostgreSQL
Date
Msg-id CAH2-Wz=-V=-pO9u4jEtgbSH+y8zpKrzigmpxzh3PMhjnudo3Mg@mail.gmail.com
In reply to RE: Bulk Insert into PostgreSQL  ("Tsunakawa, Takayuki" <tsunakawa.takay@jp.fujitsu.com>)
Responses RE: Bulk Insert into PostgreSQL  ("Tsunakawa, Takayuki" <tsunakawa.takay@jp.fujitsu.com>)
Re: Bulk Insert into PostgreSQL  (Srinivas Karthik V <skarthikv.iitb@gmail.com>)
List pgsql-hackers
On Sun, Jul 1, 2018 at 5:19 PM, Tsunakawa, Takayuki
<tsunakawa.takay@jp.fujitsu.com> wrote:
> 400 GB / 15 hours = 7.6 MB/s
>
> That looks too slow.  I experienced a similar slowness.  While our user tried to INSERT (not COPY) a billion records,
> they reported INSERTs slowed down by 10 times or so after inserting about 500 million records.  Periodic pstack runs on
> Linux showed that the backend was busy in btree operations.  I didn't pursue the cause due to other businesses, but
> there might be something to be improved.
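[Editor's note: the quoted throughput figure can be verified with a quick back-of-the-envelope calculation; a minimal sketch in Python, using 1 GB = 1024 MB:]

```python
# Throughput of the reported bulk load: 400 GB inserted over 15 hours.
total_mb = 400 * 1024          # 400 GB expressed in MB
elapsed_s = 15 * 3600          # 15 hours expressed in seconds

mb_per_s = total_mb / elapsed_s
print(f"{mb_per_s:.1f} MB/s")  # ≈ 7.6 MB/s, matching the figure above
```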

What kind of data was indexed? Was it a bigserial primary key, or
something else?

--
Peter Geoghegan


In the pgsql-hackers list, by date sent:

Previous
From: Craig Ringer
Date:
Message: Re: Large Commitfest items
Next
From: "Tsunakawa, Takayuki"
Date:
Message: RE: Bulk Insert into PostgreSQL