Re: Performance die when COPYing to table with bigint PK
From: Robert Ayrapetyan
Subject: Re: Performance die when COPYing to table with bigint PK
Date:
Msg-id: CAAboi9tWD0d2X3vyjzmZP-wnvZWuBos7LFN6yK+OLVZmJtH_Wg@mail.gmail.com
In reply to: Re: Performance die when COPYing to table with bigint PK (Віталій Тимчишин <tivv00@gmail.com>)
List: pgsql-performance
Yes, you are right. Performance becomes even more awful. Can some techniques from pg_bulkload be implemented in the postgres core? The current performance is not suitable for any enterprise-wide production system.

2011/8/5 Віталій Тимчишин <tivv00@gmail.com>:
>
> In my tests it greatly depends on whether index writes are random or sequential.
> My test time goes down from a few hours to seconds if I append to the end of
> the index.
> As for me, the best comparison would be to make two equal int4 columns with the
> same data as in the int8 column, two indexes, then perform the test. My bet is it
> will be slower than int8.
>
> On Thursday, 4 August 2011, Robert Ayrapetyan
> <robert.ayrapetyan@comodo.com> wrote:
>> All you are saying disproves the following:
>>
>> in the experiment I replaced the bigint index:
>>
>> CREATE INDEX ix_t_big ON test.t USING btree (id_big) TABLESPACE tblsp_ix;
>>
>> with 4 (!) other indexes:
>>
>>>> If you look at the rest of my mail, you would notice a 50-times
>>>> difference in performance.
>>>> What would you say?
>>>
>>> That accessing a page from RAM is more than 50 times as fast as a
>>> random access of that page from disk.
>>>
>>> -Kevin
>>>
>>
>>
>>
>> --
>> Ayrapetyan Robert,
>> Comodo Anti-Malware Data Processing Analysis and Management System
>> (CAMDPAMS)
>> http://repo-qa.camdpams.odessa.office.comodo.net/mediawiki/index.php
>>
>
> --
> Best regards,
> Vitalii Tymchyshyn

--
Ayrapetyan Robert,
Comodo Anti-Malware Data Processing Analysis and Management System
(CAMDPAMS)
http://repo-qa.camdpams.odessa.office.comodo.net/mediawiki/index.php
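A minimal sketch of the comparison Vitalii proposes (two equal int4 columns carrying the same data as the int8 column, each with its own index, then the same COPY test run against both). The table, column, index, and file names below are illustrative assumptions, not taken from the original test setup:

-- Hypothetical comparison table: one bigint key plus two int4 columns
-- holding the same information split into high and low halves.
CREATE TABLE test.t_cmp (
    id_big bigint,    -- original 8-byte key
    id_hi  integer,   -- high 32 bits of id_big
    id_lo  integer    -- low 32 bits of id_big
);

-- One index on the bigint column versus two indexes on the int4 halves.
CREATE INDEX ix_t_cmp_big ON test.t_cmp USING btree (id_big);
CREATE INDEX ix_t_cmp_hi  ON test.t_cmp USING btree (id_hi);
CREATE INDEX ix_t_cmp_lo  ON test.t_cmp USING btree (id_lo);

-- Load identical data and compare timings, e.g. in psql:
-- \timing on
-- COPY test.t_cmp (id_big, id_hi, id_lo) FROM '/tmp/data.csv' WITH (FORMAT csv);

The point of the comparison is to separate the cost of key width (int8 vs. int4) from the cost of random index insertions: if the two int4 indexes together load more slowly than the single int8 index, the slowdown is dominated by index maintenance rather than by the bigint type itself.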