Re: Performance while loading data and indexing
From | Shridhar Daithankar
---|---
Subject | Re: Performance while loading data and indexing
Date |
Msg-id | 3D931D08.1695.135D474B@localhost
In reply to | Re: Performance while loading data and indexing ("Shridhar Daithankar" <shridhar_daithankar@persistent.co.in>)
List | pgsql-hackers
On 26 Sep 2002 at 10:51, paolo.cassago@talentmanager.c wrote:

> Hi,
> it seems you have to cluster it, I don't think you have another choice.

Hmm.. that didn't occur to me... I guess some real-time clustering like usogres would do, unless it turns out to be a performance hog.

But this is just insert and select, no updates and no deletes (unless the customer makes a 180-degree turn), so I doubt clustering will help. At most I can replicate the data across machines and spread the queries over them: replication overhead as a downside, lower query load on each machine as an upside..

> I'm retrieving the configuration of our postgres servers (I'm out of the office
> now), so I can send it to you. I was quite desperate about performance, and
> I was thinking of migrating the data to an Oracle database. Then I found this
> configuration on the net, and I got a successful increase in performance.

In this case we are going with PostgreSQL because we/our customer want to keep costs down.. :-) They are even asking now whether it's possible to keep hardware costs down as well. That's getting some funny responses here, but I digress..

> Maybe this can help you.
>
> Why do you use copy to insert records? I usually use perl scripts, and they
> work well.

Performance reasons. As I said in one of my earlier posts, inserting up to 100K records in one transaction, in steps of 10K, did not reach the performance of COPY. As Tom rightly said, it was a 4-to-1 ratio despite using transactions..

Thanks once again..

Bye
Shridhar

--
Secretary's Revenge: Filing almost everything under "the".
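[A minimal sketch of the two load paths compared above, not from the original thread: batched INSERTs inside transactions versus a single COPY. It assumes Python with psycopg2, a table named some_table(id, val), and illustrative batch sizes; none of these names come from the discussion.]

```python
# Sketch: batched INSERTs vs. COPY for bulk loading.
# Assumptions (not from the thread): psycopg2 is installed, a table
# "some_table(id int, val text)" exists, and "rows" holds the data to load.
import io
import psycopg2

conn = psycopg2.connect("dbname=test")
rows = [(i, "value %d" % i) for i in range(100000)]

# Path 1: INSERTs committed in batches of 10K rows -- the approach that,
# per the discussion above, still fell well short of COPY.
with conn.cursor() as cur:
    for start in range(0, len(rows), 10000):
        cur.executemany(
            "INSERT INTO some_table (id, val) VALUES (%s, %s)",
            rows[start:start + 10000],
        )
        conn.commit()

# Path 2: COPY from an in-memory tab-delimited buffer, one statement
# for the whole load.
buf = io.StringIO("".join("%d\t%s\n" % r for r in rows))
with conn.cursor() as cur:
    cur.copy_from(buf, "some_table", sep="\t", columns=("id", "val"))
    conn.commit()

conn.close()
```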