Re: performance problem
From:        scott.marlowe
Subject:     Re: performance problem
Date:
Msg-id:      Pine.LNX.4.33.0311181419320.26776-100000@css120.ihs.com
In reply to: performance problem ("Rick Gigger" <rick@alpinenetworking.com>)
List:        pgsql-general
On Tue, 18 Nov 2003, Rick Gigger wrote:

> I am currently trying to import a text data file with about 45,000
> records. At the end of the import it does an update on each of the 45,000
> records. Doing all of the inserts completes in a fairly short amount of
> time (about 2 1/2 minutes). Once it gets to the updates, though, it slows
> to a crawl. After about 10 minutes it's only done about 3,000 records.

Are you doing your updates like this:

update table1 set field1='1' where id=1;
update table1 set field2=4 where id=1;
...
update table1 set field10='something else' where id=1;
update table1 set field1='3' where id=2;
...

Maybe an analyze after the import and before the updates would help, as
might a vacuum [full] -- though if the table isn't updated by other
processes, probably not. Maybe you've got a foreign key mismatch going on
and a lot of sequential scanning?

> Is that normal? Is it because it's inside such a large transaction? Is
> there anything I can do to speed that up? It seems awfully slow to me.

Possibly. If you are creating a lot of dead tuples, the operations can get
slower and slower. Have you checked your fsm settings et al.?

> I didn't think that giving it more shared buffers would help but I tried
> anyway. It didn't help.

It usually doesn't. More sort_mem might, though. Make it something like
16384 or 32768 (it's measured in kbytes).

> I tried doing an analyze full on it (vacuumdb -z -f) and it cleaned up a
> lot of stuff but it didn't speed up the updates at all.

You probably need to do the analyze between the import and the update,
so the planner has fresh statistics when the updates run.

> I am using a dual 800mhz xeon box with 2 gb of ram. I've tried anywhere
> from about 16,000 to 65,000 shared buffers.

That's VERY high. When postgresql has to manage a lot of buffers it is
actually slower than letting the kernel in Linux or BSD do the caching
for you.

> What other factors are involved here?

Not sure. More concrete examples would help.
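If the updates really do look like the one-statement-per-field pattern above, a single set-based statement is usually far faster than 45,000 separate updates. A rough sketch (table and column names here are hypothetical, following the example above; the staging table is an assumption about how the import might be structured):

```sql
-- Touch each row once, setting all fields in one statement,
-- instead of issuing one UPDATE per field:
update table1
   set field1  = '1',
       field2  = 4,
       field10 = 'something else'
 where id = 1;

-- Better still, drive the whole post-import update from a staging
-- table in one set-based pass (UPDATE ... FROM is a PostgreSQL
-- extension; "staging" is a hypothetical import table):
update table1
   set field1 = s.field1,
       field2 = s.field2
  from staging s
 where table1.id = s.id;
```

The point of the set-based form is that the table and its indexes are scanned once for the whole batch, rather than once per row.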
Have you run your queries with explain analyze in front and looked for differences between estimated and actual numbers of rows / loops? Those are the dead giveaways. Take a look here: http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
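For what it's worth, the kind of check being suggested looks like this (a sketch only; table and column names are hypothetical):

```sql
-- Compare the planner's row estimates against the actual rows/loops
-- reported; big mismatches or repeated sequential scans are the
-- giveaways mentioned above.
explain analyze
update table1 set field1 = '1' where id = 1;

-- If a foreign key check is forcing sequential scans of a referencing
-- table, an index on the referencing column usually fixes it:
create index table2_table1_id_idx on table2 (table1_id);
analyze table2;
```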