Re: How should I specify work_mem/max_worker_processes if I want to do big queries now and then?
From | Laurenz Albe
---|---
Subject | Re: How should I specify work_mem/max_worker_processes if I want to do big queries now and then?
Date |
Msg-id | 31811a45f443e5dbf08352440d635ee739130ac4.camel@cybertec.at
In response to | How should I specify work_mem/max_worker_processes if I want to do big queries now and then? ("James(王旭)" <wangxu@gu360.com>)
Responses | Re: How should I specify work_mem/max_worker_processes if I want to do big queries now and then?
List | pgsql-general
On Wed, 2019-11-20 at 15:56 +0800, James(王旭) wrote:
> I am doing a query to fetch about 10000000 records in one time. But the query seems
> very slow, like "mission impossible".
> I am very confident that these records should fit into my shared_buffers setting (20G),
> and my query is totally on my index, which is this big: (19M x 100 partitions). This index
> size can also be put into shared_buffers easily. (Actually I even made a new partial index
> which is smaller and deleted the bigger old index.)
>
> This kind of situation makes me very disappointed. How can I make my queries much faster
> if my data grows to more than 10000000 in one partition? I am using pg11.6.

There are no parameters that make queries faster wholesale.

If you need help with a query, please include the table definitions and the
EXPLAIN (ANALYZE, BUFFERS) output for the query.

Including a list of parameters you changed from the default is helpful too.

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
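A rough sketch of how the requested information could be gathered in psql, and how work_mem can be raised just for an occasional big query rather than globally. The table name and the 256MB value are placeholders, not from the thread:

```sql
-- List every parameter changed from its built-in default
-- (handy to paste into a help request):
SELECT name, setting, source
FROM pg_settings
WHERE source <> 'default';

-- work_mem can be set per session (or per transaction with SET LOCAL)
-- instead of in postgresql.conf, so only the big query pays for it:
SET work_mem = '256MB';  -- hypothetical value

-- Capture the plan with timing and buffer statistics for the slow query
-- ("big_table" is a placeholder for the real partitioned table):
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM big_table WHERE id BETWEEN 1 AND 10000000;
```

A `SET LOCAL work_mem = '256MB';` inside a transaction reverts automatically at commit or rollback, which suits the "now and then" use case in the subject line.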