how to improve perf of 131MM row table?
From | AJ Weber
Subject | how to improve perf of 131MM row table?
Date |
Msg-id | 53AB2CB1.8080201@comcast.net
Responses | Re: how to improve perf of 131MM row table?
List | pgsql-performance
Sorry for the semi-newbie question... I have a relatively sizable PostgreSQL 9.0.2 DB with a few large tables (keep in mind "large" is relative; I'm sure there are plenty larger out there).

One of the queries that seems to be bogging down performance is a join between two tables on each of their BIGINT PKs (so they have the default unique-constraint/PK indexes on them). One table is a detail table for the other. The "master" has about 6MM rows; the detail table has about 131MM rows (table size = 17GB, index size = 16GB).

I unfortunately have limited disks, so I can't actually move to multiple spindles, but I wonder if there is anything I can do (should I partition the data, etc.) to improve performance. Maybe some further tuning to my .conf, but I think it's already using as much memory as I can spare right now (happy to send it along if that would help).

The DB is vacuumed nightly with stats updates enabled. I can send the statistics info listed on the pgAdmin tab if that would help.

Any suggestions, tips, tricks, links, etc. are welcomed!

Thanks in advance,
AJ
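Since the post raises partitioning as one option: PostgreSQL 9.0 has no declarative PARTITION BY (that arrived in version 10), so partitioning is done with table inheritance, CHECK constraints on the children, and constraint_exclusion. Below is a minimal sketch of that approach for a detail table of this shape. The table and column names (detail, detail_id, master_id) and the range boundaries are hypothetical, not taken from the post.

CREATE TABLE detail (
    detail_id  bigint PRIMARY KEY,   -- NOTE: in 9.0 the PK is not inherited,
                                     -- so it does not enforce uniqueness in children
    master_id  bigint NOT NULL,
    payload    text
);

-- Child tables carry non-overlapping CHECK constraints so the planner can
-- skip partitions whose range is excluded by the query's WHERE clause.
CREATE TABLE detail_p0 (
    CHECK (detail_id >= 0 AND detail_id < 50000000)
) INHERITS (detail);

CREATE TABLE detail_p1 (
    CHECK (detail_id >= 50000000 AND detail_id < 100000000)
) INHERITS (detail);

-- Indexes are not inherited in 9.0 either; each child needs its own.
CREATE INDEX detail_p0_master_idx ON detail_p0 (master_id);
CREATE INDEX detail_p1_master_idx ON detail_p1 (master_id);

-- 'partition' is the default since 8.4; shown explicitly for clarity. It lets
-- the planner exclude children whose CHECK constraints rule them out.
SET constraint_exclusion = partition;

One caveat with this scheme: because indexes and primary-key constraints do not propagate to child tables in 9.0, uniqueness across partitions cannot be enforced by the parent's PK alone, and each child needs its own indexes to support the join.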