Re: select count() out of memory
From | tfinneid@student.matnat.uio.no
Subject | Re: select count() out of memory
Date |
Msg-id | 42923.134.32.140.234.1193315284.squirrel@webmail.uio.no
In reply to | Re: select count() out of memory (Alvaro Herrera <alvherre@commandprompt.com>)
Responses | Re: select count() out of memory
| Re: select count() out of memory
List | pgsql-general
> tfinneid@student.matnat.uio.no wrote:
>
>> > are a dump of Postgres's current memory allocations and could be
>> > useful in showing if there's a memory leak causing this.
>>
>> The file is 20M, these are the last lines: (the first line continues
>> until ff_26000)
>>
>> idx_attributes_g1_seq_1_ff_4_value7: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
>
> You have 26000 partitions???

At the moment the db has 55000 partitions, and that's only a fifth of the complete volume the system will have in production.

The reason I chose this solution is that a partition will be loaded with new data every 3-30 seconds, and all of it will be read by up to 15 readers every time new data is available. The data will be approximately 2-4 TB in total in production, so it would be too slow if I put it in a single table with permanent indexes.

I did a test previously, where I created 1 million partitions (without data) and checked the limits of pg, so I think it should be ok.

thomas
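For readers following the thread: a scheme like the one described above would, on PostgreSQL of that era (pre-10), be built with inheritance-based partitioning. This is only a minimal sketch under that assumption; the table and column names here are invented, not taken from the poster's actual schema (the index names in the memory dump merely suggest a per-partition naming pattern).

```sql
-- Hypothetical parent table; names are illustrative only.
CREATE TABLE attributes (
    seq   integer NOT NULL,
    ff    integer NOT NULL,
    value double precision
);

-- One child table per data batch, created as new data arrives
-- (every 3-30 seconds in the setup described above).
CREATE TABLE attributes_seq_1_ff_4 (
    CHECK (seq = 1 AND ff = 4)
) INHERITS (attributes);

-- Index only the child, keeping each bulk load cheap; no global
-- index on the parent.
CREATE INDEX idx_attributes_seq_1_ff_4_value
    ON attributes_seq_1_ff_4 (value);
```

With `constraint_exclusion = on`, queries against the parent that filter on `seq` and `ff` can skip non-matching children, though the planner still opens metadata for every child, which is one reason tens of thousands of partitions can become a memory concern.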