Re: Reading data in bulk - help?
From | Josh Berkus
---|---
Subject | Re: Reading data in bulk - help?
Date |
Msg-id | 200309101016.14652.josh@agliodbs.com
In reply to | Re: Reading data in bulk - help? (Chris Huston <chuston@bangjafwac.com>)
List | pgsql-performance
Chris,

> The system is currently running on a single processor 500Mhz G4. We're
> likely to move to a two processor 2Ghz G5 in the next few months. Then
> each block may take only 30-60 milliseconds to complete and there can
> be two concurrent blocks processing at once.

What about explaining your disk setup? Or mentioning postgresql.conf? For
somebody who wants help, you're ignoring a lot of advice and questions.
Personally, I'm not going to be of any further help until you report back
on the other 3 of 4 options.

> RELATED QUESTION: How now do I speed up the following query: "select
> distinct group_id from datatable"? Which results in a sequential scan
> of the db. Why doesn't it use the group_id index? I only do this once
> per run so it's not as critical as the fetch speed which is done 6817
> times.

Because it can't until PostgreSQL 7.4, which has hash aggregates. Up to
7.3, we have to use seq scans for all group bys. I'd suggest that you keep
a table of group_ids, instead.

--
Josh Berkus
Aglio Database Solutions
San Francisco
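A minimal sketch of that "keep a table of group_ids" workaround, assuming the table and column names from the quoted question (datatable, group_id); the group_ids table, function, and trigger names are illustrative, not from the thread:

-- Companion table holding each distinct group_id exactly once, so the
-- once-per-run lookup becomes a read of a tiny table instead of a
-- sequential scan over all of datatable.
CREATE TABLE group_ids (
    group_id integer PRIMARY KEY
);

-- One-time backfill from the existing data.
INSERT INTO group_ids (group_id)
SELECT DISTINCT group_id FROM datatable;

-- Keep it current as rows arrive (7.x-era PL/pgSQL, so the function
-- body is a quoted string rather than dollar-quoted).
CREATE FUNCTION add_group_id() RETURNS trigger AS '
BEGIN
    PERFORM 1 FROM group_ids WHERE group_id = NEW.group_id;
    IF NOT FOUND THEN
        INSERT INTO group_ids (group_id) VALUES (NEW.group_id);
    END IF;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER datatable_group_id_trg
    BEFORE INSERT ON datatable
    FOR EACH ROW EXECUTE PROCEDURE add_group_id();

-- The once-per-run query then reads the small table:
SELECT group_id FROM group_ids;

Note that two concurrent inserts of the same brand-new group_id could both pass the NOT FOUND check, and the loser would hit the primary-key error; for a single-writer load pattern like the one described in this thread that is usually acceptable.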