Re: Out of Memory errors are frustrating as heck!
| From | Tom Lane |
| --- | --- |
| Subject | Re: Out of Memory errors are frustrating as heck! |
| Date | |
| Msg-id | 14066.1555793163@sss.pgh.pa.us |
| In reply to | Re: Out of Memory errors are frustrating as heck! (Tomas Vondra <tomas.vondra@2ndquadrant.com>) |
| Responses | Re: Out of Memory errors are frustrating as heck!; Re: Out of Memory errors are frustrating as heck! |
| List | pgsql-performance |
Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
> I think it's really a matter of underestimate, which convinces the planner
> to hash the larger table. In this case, the table is 42GB, so it's
> possible it actually works as expected. With work_mem = 4MB I've seen 32k
> batches, and that's not that far off, I'd say. Maybe there are more common
> values, but it does not seem like a very contrived data set.

Maybe we just need to account for the per-batch buffers while estimating
the amount of memory used during planning. That would force this case into
a mergejoin instead, given that work_mem is set so small.

			regards, tom lane
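[Editorial note: a back-of-the-envelope sketch of the memory math behind that suggestion. It is not PostgreSQL source code; the 8 kB buffer per batch temp file and the doubling for inner plus outer sides are assumptions about how hash-join batching is commonly described, not figures stated in the message.]

```c
#include <stdio.h>

/* Rough illustration: if each hash-join batch keeps a temp-file buffer
 * for both the inner and the outer side (assumed 8 kB each), then the
 * per-batch buffers for 32k batches dwarf a 4 MB work_mem setting. */
int main(void)
{
    const long nbatch = 32L * 1024;              /* batches reported in the thread */
    const long buf_per_file = 8L * 1024;         /* assumed 8 kB buffer per temp file */
    const long per_batch = 2 * buf_per_file;     /* inner + outer side */
    const long work_mem = 4L * 1024 * 1024;      /* 4 MB */

    printf("per-batch buffers: %ld MB\n", nbatch * per_batch / (1024 * 1024));
    printf("work_mem:          %ld MB\n", work_mem / (1024 * 1024));
    return 0;
}
```

Under those assumptions, 32k batches imply roughly 512 MB of batch buffers alone, which is why charging them to the planner's memory estimate would push this plan toward a mergejoin at work_mem = 4MB.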