Re: Tuplesort merge pre-reading
From: Robert Haas
Subject: Re: Tuplesort merge pre-reading
Date:
Msg-id: CA+TgmoaUhKShAr+yQnJ6LxzT=H=6Pmzkayw6FD=EpOzUAbqoZQ@mail.gmail.com
In reply to: Re: Tuplesort merge pre-reading (Peter Geoghegan <pg@heroku.com>)
List: pgsql-hackers
On Thu, Sep 29, 2016 at 11:38 AM, Peter Geoghegan <pg@heroku.com> wrote:
> On Thu, Sep 29, 2016 at 2:59 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>>> Maybe that was the wrong choice of words. What I mean is that it seems
>>> somewhat unprincipled to give over an equal share of memory to each
>>> active-at-least-once tape, ...
>>
>> I don't get it.  If the memory is being used for prereading, then the
>> point is just to avoid doing many small I/Os instead of one big I/O,
>> and that goal will be accomplished whether the memory is equally
>> distributed or not; indeed, it's likely to be accomplished BETTER if
>> the memory is equally distributed than if it isn't.
>
> I think it could hurt performance if preloading loads runs on a tape
> that won't be needed until some subsequent merge pass, in preference
> to using that memory proportionately, giving more to larger input runs
> for *each* merge pass (giving memory proportionate to the size of each
> run to be merged from each tape). For tapes with a dummy run, the
> appropriate amount of memory for their next merge pass is zero.

OK, true.  But I still suspect that unless the amount of data you need
to read from a tape is zero or very small, the size of the buffer
doesn't matter.  For example, if you have a 1GB tape and a 10GB tape,
I doubt there's any benefit in making the buffer for the 10GB tape 10x
larger.  They can probably be the same.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
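For concreteness, here is a minimal sketch of the two allocation
policies being debated above. This is illustrative C, not the actual
tuplesort.c code; the tape count, the run sizes, and the
divide_budget() helper are all made up for the example:

#include <stdbool.h>
#include <stdio.h>

#define NTAPES 4

/*
 * Divide a fixed preread budget among the input tapes of one merge
 * pass.  proportional == true gives each tape a share proportional to
 * the size of the run it feeds into this merge; false gives every
 * active tape an equal share.  Tapes holding only a dummy run get
 * nothing either way.
 */
static void
divide_budget(size_t budget, const size_t runsize[NTAPES],
              bool proportional, size_t bufsize[NTAPES])
{
    size_t  total = 0;
    int     nactive = 0;

    for (int i = 0; i < NTAPES; i++)
    {
        total += runsize[i];
        if (runsize[i] > 0)
            nactive++;
    }

    for (int i = 0; i < NTAPES; i++)
    {
        if (runsize[i] == 0)
            bufsize[i] = 0;                             /* dummy run */
        else if (proportional)
            bufsize[i] = budget * runsize[i] / total;   /* share ~ run size */
        else
            bufsize[i] = budget / nactive;              /* equal share */
    }
}

int
main(void)
{
    /* hypothetical run sizes in MB: a 1GB run, a 10GB run, a 2GB run,
     * and a tape holding only a dummy run */
    const size_t runsize[NTAPES] = {1024, 10240, 2048, 0};
    size_t  bufsize[NTAPES];

    divide_budget(256, runsize, true, bufsize); /* 256MB preread budget */
    for (int i = 0; i < NTAPES; i++)
        printf("tape %d: run %zu MB -> buffer %zu MB\n",
               i, runsize[i], bufsize[i]);
    return 0;
}

With proportional = true the 10GB tape gets roughly ten times the 1GB
tape's buffer; with proportional = false each of the three active tapes
gets budget/3. Whether that factor of ten buys anything once every
buffer is already large enough for efficient sequential I/O is exactly
the question raised above.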