Re: [Patch] Optimize dropping of relation buffers using dlist
| From | Amit Kapila |
|---|---|
| Subject | Re: [Patch] Optimize dropping of relation buffers using dlist |
| Date | |
| Msg-id | CAA4eK1+iTYaRYfXWGPJbFCy9CWH7U6fVoPP=bG-ZcYJNsC995A@mail.gmail.com |
| In reply to | Re: [Patch] Optimize dropping of relation buffers using dlist (Andres Freund <andres@anarazel.de>) |
| Responses | Re: [Patch] Optimize dropping of relation buffers using dlist |
| List | pgsql-hackers |
On Sat, Aug 1, 2020 at 1:53 AM Andres Freund <andres@anarazel.de> wrote:
>
> Hi,
>
> On 2020-07-31 15:50:04 -0400, Tom Lane wrote:
> > Andres Freund <andres@anarazel.de> writes:
> > > > Wonder if the temporary fix is just to do explicit hashtable probes for
> > > all pages iff the size of the relation is < s_b / 500 or so. That'll
> > > address the case where small tables are frequently dropped - and
> > > dropping large relations is more expensive from the OS and data loading
> > > perspective, so it's not gonna happen as often.
> >
> > Oooh, interesting idea. We'd need a reliable idea of how long the
> > relation had been (preferably without adding an lseek call), but maybe
> > that's do-able.
>
> IIRC we already do smgrnblocks nearby, when doing the truncation (to
> figure out which segments we need to remove). Perhaps we can arrange to
> combine the two? The layering probably makes that somewhat ugly :(
>
> We could also just use pg_class.relpages. It'll probably mostly be
> accurate enough?
>

Don't we need the accurate 'number of blocks' if we want to invalidate
all the buffers? Basically, I think we need to perform BufTableLookup
for all the blocks in the relation and then Invalidate all buffers.

--
With Regards,
Amit Kapila.
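[Editor's note] A rough sketch of the per-block probe idea discussed above, for illustration only; it is not the posted patch. The function name DropRelFileNodeBuffersByBlock is hypothetical, the block count is assumed to come from smgrnblocks(), and code like this would have to live in src/backend/storage/buffer/bufmgr.c because InvalidateBuffer() is static there. It uses existing buffer-manager routines (INIT_BUFFERTAG, BufTableHashCode, BufTableLookup, BufMappingPartitionLock) as found in the PostgreSQL 13-era sources.

```c
/*
 * Hypothetical sketch (not the actual patch): drop a small relation's
 * buffers by probing the buffer mapping hash table once per block,
 * instead of scanning all of shared_buffers.  Assumes bufmgr.c context,
 * where buf_internals.h is already included and InvalidateBuffer() is
 * visible.  "nblocks" must be the accurate relation size (e.g. from
 * smgrnblocks()); with a stale estimate such as pg_class.relpages,
 * buffers past the estimate would be missed -- which is Amit's point above.
 */
static void
DropRelFileNodeBuffersByBlock(RelFileNode rnode, ForkNumber forkNum,
                              BlockNumber nblocks)
{
    BlockNumber blkno;

    for (blkno = 0; blkno < nblocks; blkno++)
    {
        BufferTag   tag;
        uint32      hash;
        LWLock     *partitionLock;
        int         buf_id;
        BufferDesc *bufHdr;
        uint32      buf_state;

        /* Build the tag for this block and find its hash partition. */
        INIT_BUFFERTAG(tag, rnode, forkNum, blkno);
        hash = BufTableHashCode(&tag);
        partitionLock = BufMappingPartitionLock(hash);

        /* Probe the mapping table; skip blocks that aren't cached. */
        LWLockAcquire(partitionLock, LW_SHARED);
        buf_id = BufTableLookup(&tag, hash);
        LWLockRelease(partitionLock);

        if (buf_id < 0)
            continue;

        /* Recheck the tag under the buffer header lock, then drop it. */
        bufHdr = GetBufferDescriptor(buf_id);
        buf_state = LockBufHdr(bufHdr);
        if (BUFFERTAGS_EQUAL(bufHdr->tag, tag))
            InvalidateBuffer(bufHdr);   /* releases the header lock */
        else
            UnlockBufHdr(bufHdr, buf_state);
    }
}
```

On the threshold question, such a path would presumably be taken only when nblocks is well below NBuffers (the "s_b / 500 or so" heuristic above), falling back to the existing full scan of shared buffers otherwise.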