Re: slow joining very large table to smaller ones
From | Dan Harris |
---|---|
Subject | Re: slow joining very large table to smaller ones |
Date | |
Msg-id | EF082D2E-96A0-4C63-A8FC-6EF1D3152A04@drivefaster.net |
In reply to | Re: slow joining very large table to smaller ones (Dan Harris <fbsd@drivefaster.net>) |
List | pgsql-performance |
On Jul 15, 2005, at 9:09 AM, Dan Harris wrote:
>
> On Jul 14, 2005, at 10:12 PM, John A Meinel wrote:
>
>> My biggest question is why the planner thinks the Nested Loop would be
>> so expensive.
>> Have you tuned any of the parameters? It seems like something is out of
>> whack. (cpu_tuple_cost, random_page_cost, etc...)
>
> here's some of my postgresql.conf. Feel free to blast me if I did
> something idiotic here.
>
> shared_buffers = 50000
> effective_cache_size = 1348000
> random_page_cost = 3
> work_mem = 512000
> max_fsm_pages = 80000
> log_min_duration_statement = 60000
> fsync = true ( not sure if I'm daring enough to run without this )
> wal_buffers = 1000
> checkpoint_segments = 64
> checkpoint_timeout = 3000
>
> #---- FOR PG_AUTOVACUUM --#
> stats_command_string = true
> stats_row_level = true

Sorry, I forgot to re-post my hardware specs.

HP DL585
4 x 2.2 GHz Opteron
12GB RAM
SmartArray RAID controller, 1GB hardware cache, 4x73GB 10k SCSI in RAID 0+1
ext2 filesystem

Also, there are 30 databases on the machine; 27 of them are identical schemas.
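[Editor's note: one low-risk way to test the cost parameters John mentions is to override them for a single session and re-run EXPLAIN ANALYZE, rather than editing postgresql.conf and restarting. The sketch below is illustrative only; the table and join in the query are hypothetical stand-ins for the poster's actual statement, and the values shown are experiments, not recommendations.]

```sql
-- Session-local experiment: does a cheaper random-page estimate make the
-- planner choose the nested loop?  SET affects only this connection.
SET random_page_cost = 2;      -- down from the 3 in postgresql.conf
SET cpu_tuple_cost = 0.01;     -- default value, shown for comparison

-- Hypothetical query standing in for the slow join under discussion:
EXPLAIN ANALYZE
SELECT b.id, s.label
FROM   big_table b
JOIN   small_table s ON s.id = b.small_id
WHERE  b.created > '2005-07-01';

-- Revert to the values in postgresql.conf for this session:
RESET random_page_cost;
RESET cpu_tuple_cost;
```

Comparing the estimated costs in the two EXPLAIN outputs shows how sensitive the plan choice is to these constants before committing a change server-wide.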