Re: Report planning memory in EXPLAIN ANALYZE
From: Andrey Lepikhov
Subject: Re: Report planning memory in EXPLAIN ANALYZE
Date:
Msg-id: c98f0fbd-c50f-50ba-48bb-f7ab9a0b2122@postgrespro.ru
In reply to: Re: Report planning memory in EXPLAIN ANALYZE (David Rowley <dgrowleyml@gmail.com>)
Responses: Re: Report planning memory in EXPLAIN ANALYZE
List: pgsql-hackers
On 14/8/2023 06:53, David Rowley wrote:
> On Thu, 10 Aug 2023 at 20:33, Ashutosh Bapat
> <ashutosh.bapat.oss@gmail.com> wrote:
>> My point is what's relevant here is how much net memory planner asked
>> for.
>
> But that's not what your patch is reporting. All you're reporting is
> the difference in memory that's *currently* palloc'd from before and
> after the planner ran. If we palloc'd 600 exabytes then pfree'd it
> again, your metric won't change.
>
> I'm struggling a bit to understand your goals here. If your goal is
> to make a series of changes that reduces the amount of memory that's
> palloc'd at the end of planning, then your patch seems to suit that
> goal, but per the quote above, it seems you care about how many bytes
> are palloc'd during planning and your patch does not seem to track that.
>
> With your patch as it is, to improve the metric you're reporting we
> could go off and do things like pfree Paths once createplan.c is done,
> but really, why would we do that? Just to make the "Planning Memory"
> metric look better doesn't seem like a worthy goal.
>
> Instead, if we reported the context's mem_allocated, then it would
> give us the flexibility to make changes to the memory context code to
> have the metric look better. It might also alert us to planner
> inefficiencies and problems with new code that may cause a large spike
> in the amount of memory that gets allocated. Now, I'm not saying we
> should add a patch that shows mem_allocated. I'm just questioning if
> your proposed patch meets the goals you're trying to achieve. I just
> suggested that you might want to consider something else as a metric
> for your memory usage reduction work.

Indeed, the current approach of reporting the final value of consumed memory smooths out peaks of memory consumption. I recall examples, such as massive million-element arrays or reparameterization with many partitions, where the optimizer consumes a lot of additional memory during planning.
Ideally, to dive into planner issues, we would want something like the in-progress reporting that VACUUM provides, reporting memory consumption at each subquery and join level. But that looks like overkill for typical queries.

--
regards, Andrey Lepikhov
Postgres Professional