Re: Improve output of BitmapAnd EXPLAIN ANALYZE
From: Stephen Frost
Subject: Re: Improve output of BitmapAnd EXPLAIN ANALYZE
Date: 2016-10-21 13:21:23
Msg-id: 20161021132123.GY13284@tamriel.snowman.net
In response to: Re: Improve output of BitmapAnd EXPLAIN ANALYZE (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Improve output of BitmapAnd EXPLAIN ANALYZE
List: pgsql-hackers
* Tom Lane (tgl@sss.pgh.pa.us) wrote:
> Stephen Frost <sfrost@snowman.net> writes:
> > Changing it in a new major release seems entirely reasonable.
>
> It's still a crock though.  I wonder whether it wouldn't be better to
> change the nodeBitmap code so that when EXPLAIN ANALYZE is active, it
> expends extra effort to try to produce a rowcount number.

I'm certainly all for doing something better; I just didn't think we
should worry about changing the EXPLAIN ANALYZE output in a major
release because Depesz might have to update the explain site.

> We could certainly run through the result bitmap and count the number
> of exact-TID bits.  I don't see a practical way of doing something
> with lossy page bits, but maybe those occur infrequently enough that
> we could ignore them?  Or we could arbitrarily decide that a lossy
> page should be counted as MaxHeapTuplesPerPage, or, a bit less
> arbitrarily, count it as the relation's average number of tuples per
> page.

Counting each lossy page as the relation's average number of tuples
per page seems entirely reasonable to me, for what that number is
trying to report (a sketch of that approach follows below).

That said, I'm a big fan of how we report more detail for things like
a HashJoin (buckets, batches, memory usage), and it would be nice to
have similar information for a BitmapAnd (and friends): in particular,
memory usage and exact vs. lossy page counts.  Knowing that the bitmap
has become lossy might indicate that a user could raise work_mem, for
example, and possibly avoid recheck costs.

Thanks!

Stephen
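[Editor's note: a minimal sketch of the lossy-page estimate discussed
above, using the existing tbm_begin_iterate()/tbm_iterate() API from
src/backend/nodes/tidbitmap.c, where TBMIterateResult->ntuples is -1
for a lossy page.  The function estimate_bitmap_rows() and its use of
the pg_class stats are hypothetical illustrations, not code from the
tree or from this thread.]

/*
 * Hypothetical sketch: estimate how many heap tuples a TIDBitmap
 * represents, so EXPLAIN ANALYZE could report a row count.  Exact
 * pages contribute their precise TID count; each lossy page is
 * charged the relation's average number of tuples per page.
 */
#include "postgres.h"

#include "nodes/tidbitmap.h"
#include "utils/rel.h"

static double
estimate_bitmap_rows(TIDBitmap *tbm, Relation heapRel)
{
	TBMIterator *iterator;
	TBMIterateResult *tbmres;
	double		ntuples = 0;
	double		tuples_per_page;

	/* Average tuples per page, from the relation's pg_class stats. */
	tuples_per_page = heapRel->rd_rel->reltuples /
		Max(heapRel->rd_rel->relpages, 1);

	iterator = tbm_begin_iterate(tbm);
	while ((tbmres = tbm_iterate(iterator)) != NULL)
	{
		if (tbmres->ntuples >= 0)
			ntuples += tbmres->ntuples;	/* exact page: count its TIDs */
		else
			ntuples += tuples_per_page;	/* lossy page: use the average */
	}
	tbm_end_iterate(iterator);

	return ntuples;
}

[One caveat with this sketch: reltuples and relpages can be stale, or
zero for a never-analyzed relation, so real code would want a sane
fallback (e.g. MaxHeapTuplesPerPage) when no stats are available.]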