Re: Performance regression with PostgreSQL 11 and partitioning
From: Robert Haas
Subject: Re: Performance regression with PostgreSQL 11 and partitioning
Date:
Msg-id: CA+TgmoZ2Xn2SyEG2KwLU6wK0ptgyqcW=uyYkji3ZzHWp3P1izQ@mail.gmail.com
In reply to: Re: Performance regression with PostgreSQL 11 and partitioning (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Performance regression with PostgreSQL 11 and partitioning
List: pgsql-hackers
On Fri, Jun 8, 2018 at 3:08 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> That being said, I don't mind a bit if you want to look for further
>> speedups here, but if you do, keep in mind that a lot of queries won't
>> even use partition-wise join, so all of the arrays will be of length
>> 1.  Even when partition-wise join is used, it is quite likely not to
>> be used for every table in the query, in which case it will still be
>> of length 1 in some cases.  So pessimizing nappinfos = 1 even slightly
>> is probably a regression overall.
>
> TBH, I am way more concerned about the pessimization introduced for
> every pre-existing usage of these functions by putting search loops
> into them at all.  I'd like very much to revert that.  If we can
> replace those with something along the line of root->index_array[varno]
> we'll be better off across the board.

I think David's analysis is correct -- this doesn't quite work. We're trying to identify whether a given varno is one of the ones to be translated, and it's hard to come up with a faster solution than iterating over a (very short) array of those values.

One thing we could do is have two versions of each function, or else an optimized path for the very common nappinfos = 1 case. I'm just not sure it would be worthwhile. Traversing a short array isn't free, but it's pretty cheap.

An early version of the patch that made these changes used a List here rather than a C array, and I asked for that to be changed on efficiency grounds, and also because constructing 1-element lists would have a cost of its own. I think in general we have way too much code that uses Lists for convenience even though C arrays would be faster. For the most part, the performance consequences of any individual place where we do this are probably beneath the noise floor, but in the aggregate I think it has nasty consequences for both performance and memory utilization.
I think if we are going to look at optimizing, we are likely to buy more by worrying about cases where we traverse lists, especially ones that may be long, rather than worrying about looping over short C arrays. Of course I'm open to being proved wrong...

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company