Problem with large select - PostgreSQL starts eating memory/disk
From | calmarty@hotmail.com (Martin)
---|---
Subject | Problem with large select - PostgreSQL starts eating memory/disk
Date |
Msg-id | dae9b9ac.0108301651.3065521c@posting.google.com
Responses | Re: Problem with large select - PostgreSQL starts eating memory/disk
List | pgsql-general
I am having a problem with an INSERT into a table from a SELECT that joins over 15 tables, none of which has more than 3K tuples. I run it multiple times with different values as part of my application, and it randomly locks up when executed with different parameters. By "lock up" I mean that it never finishes executing: memory usage creeps up to the maximum (according to "top", from ~120 MB during normal operations to 1 GB), and it starts consuming disk space on the partition that holds the data files, /var/lib/pgsql (which goes from ~100 MB to ~12 GB over 45 minutes). It finally dies when there is no disk space left. If I drop and recreate the database, this goes back to 120 MB.

We are running PostgreSQL 7.1.3 compiled on Red Hat 7.1, on a dual Pentium III 1 GHz box with a RAID 5 setup. I would suspect a problem with our query, but the odd part is that it succeeds for a few iterations (anywhere from 2 to 20) before locking up. We have done all the normal optimizations and tuning, including creating indices on all the columns in the join and setting the OS parameters according to the PostgreSQL documentation.

I have looked everywhere for other examples of this behavior and could not find anything. I wondered whether it was an issue with PostgreSQL splitting processes across the CPUs, but I have found numerous examples of PostgreSQL running successfully on multi-CPU servers, and there should be no reason a database couldn't spread its processes over multiple CPUs. Has anyone ever come across this problem?
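For reference, the statement has roughly the following shape; this is a minimal sketch only, showing a few joins instead of the full 15, and all table and column names below are placeholders rather than the actual schema:

```sql
-- Minimal sketch of the shape of the problematic statement.
-- Table and column names are hypothetical placeholders; the real query
-- joins about 15 such tables, none larger than ~3K rows.
INSERT INTO result_table (order_id, customer_name, product_name, quantity)
SELECT o.id,
       c.name,
       p.name,
       oi.quantity
FROM   orders       o
JOIN   customers    c  ON c.id = o.customer_id
JOIN   order_items  oi ON oi.order_id = o.id
JOIN   products     p  ON p.id = oi.product_id
WHERE  o.batch_id = 42;   -- the parameter that changes between runs
```

Each iteration of the application runs the same statement with a different parameter value in the WHERE clause.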