Re: two memory-consuming postgres processes
From: Alexy Khrabrov
Subject: Re: two memory-consuming postgres processes
Date:
Msg-id: 72E02D29-848B-467A-AE6B-401568010254@gmail.com
In response to: Re: two memory-consuming postgres processes (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: two memory-consuming postgres processes
List: pgsql-performance
On May 2, 2008, at 1:13 PM, Tom Lane wrote:
> I don't think you should figure on more than 1GB being usefully
> available to Postgres, and you can't give all or even most of that
> space to shared_buffers.

So how should I divide, say, 512 MB between shared_buffers and, um, what else? (New to pg tuning :)

I naively thought that if I have a 100,000,000-row table of the form (integer, integer, smallint, date) and add a real column to it, it would scroll through memory reasonably fast. Yet with shared_buffers=128MB it sat there for 8 hours before I killed it, and now with 1500MB it has been paging again for several hours with no end in sight. Why can't it just add the column one row at a time and be done with it soon enough? :) It takes inordinately long compared to a FORTRAN or even Python program, and there's no index use for this table, only a sequential scan; why all the paging?

Cheers,
Alexy
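As an illustration of the split being asked about, here is a minimal postgresql.conf sketch for carving up roughly 512 MB on a PostgreSQL 8.x-era box; the specific values are assumptions for illustration, not recommendations taken from the thread:

    shared_buffers = 128MB          # shared page cache; rarely needs most of RAM
    work_mem = 16MB                 # per sort/hash operation, so keep it modest
    maintenance_work_mem = 256MB    # used by ALTER TABLE, CREATE INDEX, VACUUM
    effective_cache_size = 384MB    # planner hint: shared_buffers plus expected OS cache

As for why the operation takes so long: in PostgreSQL versions of that era, a plain ADD COLUMN with no default is a catalog-only change, but adding the column with a DEFAULT, or filling it afterwards with an UPDATE, rewrites every one of the 100,000,000 rows; under MVCC each rewrite creates a new tuple version, which is what drives the hours of I/O. A sketch, with a hypothetical table and column name:

    -- Catalog-only change: returns almost immediately (no default).
    ALTER TABLE measurements ADD COLUMN reading real;
    -- Populating it rewrites all 100,000,000 rows in one transaction,
    -- which is where the hours of paging come from:
    UPDATE measurements SET reading = 0;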