Large database help
From | xbdelacour@yahoo.com |
---|---|
Subject | Large database help |
Date | |
Msg-id | 5.0.2.1.0.20010422165107.02b46ec0@209.61.155.192 |
Replies |
Re: Large database help
Re: Large database help |
List | pgsql-admin |
Hi everyone,

I'm more or less new to PostgreSQL and am trying to set up a rather large database for a data analysis application. Data is collected and dropped into a single table, which will grow to roughly 20 GB. Analysis happens on a Windows client (over a network) that queries the data in chunks across parallel connections. I'm running the DB on a dual 1 GHz P3 with 512 MB of memory under Red Hat 6 (6.0, I think). A single index exists that gives the best case for lookups, and the table is clustered against this index.

My problem is this: during the query process the hard drive is being hit excessively, while the CPUs are idling at 50% (numbers from the Linux command: top), and this is bringing the speed down pretty dramatically since the process is waiting on the hard disk. How do I get the database to be completely resident in memory, so that selects don't cause any disk activity? How do I pin down exactly why the hard disk is being accessed?

I am setting 'echo 402653184 > /proc/sys/kernel/shmmax', which is being reflected in top. I also specify '-B 48000' when starting postmaster. My test DB is only 86 MB, so in theory the disk has no business being active once the data is read into memory, unless I perform a write operation. What am I missing?

I appreciate any help anyone could give me.

-Xavier
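[Editor's note: the two numbers quoted in the message can be sanity-checked against each other. PostgreSQL's -B flag counts shared buffer pages, which default to 8 kB each, so -B 48000 requests a buffer pool that must fit under the shmmax limit set above. A minimal shell sketch of that arithmetic (the 8192-byte page size is the stock build default and is an assumption here):]

```shell
# Size the requested shared buffer pool and compare it to the kernel limit.
BLOCK_SIZE=8192        # PostgreSQL's default page size in bytes (assumed stock build)
NBUFFERS=48000         # value passed to postmaster via -B
SHMMAX=402653184       # value the poster echoed into /proc/sys/kernel/shmmax

BUFFER_BYTES=$((BLOCK_SIZE * NBUFFERS))
echo "requested buffer pool: $BUFFER_BYTES bytes (shmmax limit: $SHMMAX)"
# requested buffer pool: 393216000 bytes (shmmax limit: 402653184)
```

So the 48000-buffer pool (about 375 MB) does fit inside the 384 MB shmmax segment, and comfortably holds an 86 MB test database; that makes the reported disk activity all the more worth tracing with a tool such as vmstat.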