Re: Vacuum: allow usage of more than 1GB of work mem
From | Claudio Freire
---|---
Subject | Re: Vacuum: allow usage of more than 1GB of work mem
Date |
Msg-id | CAGTBQpYLNZFwxkHFMY-W17Ju7Ssof18+yKf7FgrZiXweZWb3Ag@mail.gmail.com
In response to | Re: Vacuum: allow usage of more than 1GB of work mem (Tom Lane <tgl@sss.pgh.pa.us>)
List | pgsql-hackers
On Tue, Sep 6, 2016 at 3:11 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> We could get around (1) by something like Robert's idea of segmented
> allocation, but TBH I've seen nothing on this thread to make me think
> it's necessary or would even result in any performance improvement
> at all. The bigger we make that array, the worse index-cleaning
> is going to perform, and complicating the data structure will add
> another hit on top of that.

I wouldn't be so sure. I've seen cases where two binary searches were faster than a single binary search, especially when working with humongous arrays like this tid array, because touching fewer (memory) pages per search pays off considerably. I'd try before giving up on the idea.

The test results (which I'll post in a second) do lend credit to your expectation that making the array bigger/more complex does impact index scan performance. It's still faster than scanning several times, though.
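To make the "two binary searches" idea concrete, here is a minimal sketch (not the actual patch) of a segmented dead-tuple array: one binary search over segment boundaries picks the segment, a second searches inside it. Plain ints stand in for ItemPointerData, and SEG_SIZE and all names here are illustrative assumptions, not PostgreSQL code.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch: ints stand in for ItemPointerData.  Each
 * segment is sorted, and segments are ordered among themselves. */
#define SEG_SIZE 4

typedef struct
{
    int     items[SEG_SIZE];
    size_t  nitems;
} Segment;

typedef struct
{
    Segment *segs;
    size_t   nsegs;
} SegArray;

/* First binary search: find the last segment whose first item
 * is <= key, i.e. the only segment that could contain it. */
static const Segment *
find_segment(const SegArray *a, int key)
{
    size_t  lo = 0, hi = a->nsegs;

    while (lo < hi)
    {
        size_t  mid = lo + (hi - lo) / 2;

        if (a->segs[mid].items[0] <= key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo == 0 ? NULL : &a->segs[lo - 1];
}

/* Second binary search: look up key inside one small segment. */
static int
segment_contains(const Segment *seg, int key)
{
    size_t  lo = 0, hi = seg->nitems;

    while (lo < hi)
    {
        size_t  mid = lo + (hi - lo) / 2;

        if (seg->items[mid] == key)
            return 1;
        if (seg->items[mid] < key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return 0;
}

/* The lookup index cleanup would do per index tuple: two narrow
 * searches touching at most two regions of memory, instead of one
 * search ranging over a single huge contiguous array. */
static int
segarray_contains(const SegArray *a, int key)
{
    const Segment *seg = find_segment(a, key);

    return seg != NULL && segment_contains(seg, key);
}
```

The point of the split is locality: the outer search walks a tiny directory of segment boundaries, so only the final segment's pages need to be brought into cache.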