Re: Notes on lock table spilling
From | Alvaro Herrera |
---|---|
Subject | Re: Notes on lock table spilling |
Date | |
Msg-id | 20050331201828.GF23936@dcc.uchile.cl |
In response to | Re: Notes on lock table spilling (Tom Lane <tgl@sss.pgh.pa.us>) |
List | pgsql-hackers |
On Thu, Mar 31, 2005 at 12:19:08AM -0500, Tom Lane wrote:
> Alvaro Herrera <alvherre@dcc.uchile.cl> writes:
> > We have a problem as soon as somebody tries to delete a lot of rows from
> > a big table. We cannot possibly extend the memory requirements forever,
> > so we need to spill to disk without having an in-shared-memory index.
>
> Yes. I'm not sure that I see the point of the in-memory index at all...
> there is some intermediate regime where it would improve performance,
> but it surely does not solve the basic problem that shared memory is
> finite.

My idea was that it would help eliminate I/O. But you are probably right
that it is the wrong idea; it's probably better to have an on-disk index
for the on-disk storage of LOCK (we don't want to scan an on-disk lock
array sequentially, do we?). If that index is cached in memory, no I/O
would be needed until we started recording a lot of locks, achieving the
same effect with simpler code and better degradation.

I'm thinking of sketching some sort of simple btree on top of slru pages.
Nothing concrete yet.

> Maybe something involving lossy storage would work? Compare recent
> discussions about lossy bitmaps generated from index scans.

Hmm. Some problems come to mind:

- How to decide when to use lossy storage. Perhaps if the executor (or
  rather, the planner) thinks it is about to acquire lots of tuple locks,
  it could hint the lock manager.

- How to unlock? (Or: how do we know when the lock is released for sure?)
  I think this can be solved by counting lockers/unlockers.

-- 
Alvaro Herrera (<alvherre[@]dcc.uchile.cl>)
"Nadie esta tan esclavizado como el que se cree libre no siendolo" (Goethe)
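To make the lockers/unlockers counting idea a bit more concrete, here is a minimal standalone sketch in C. The names (LossyLockEntry, lossy_lock_acquire, lossy_lock_release) are hypothetical and not anything in the tree; the point is only that once several tuple locks collapse into one lossy entry, they can no longer be released individually, so the entry just tracks how many acquisitions are outstanding and is dropped when that count returns to zero.

```c
/*
 * Sketch of the "count lockers/unlockers" idea for lossy tuple locks.
 * All names here are illustrative; nothing below is PostgreSQL code.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct LossyLockEntry
{
    uint32_t    relid;      /* relation the lossy entry covers */
    uint32_t    page;       /* heap page the lossy entry covers */
    int         nholders;   /* acquisitions minus releases */
} LossyLockEntry;

static void
lossy_lock_acquire(LossyLockEntry *entry)
{
    entry->nholders++;
}

/* Returns true when the last holder is gone and the entry can be dropped. */
static bool
lossy_lock_release(LossyLockEntry *entry)
{
    assert(entry->nholders > 0);
    entry->nholders--;
    return entry->nholders == 0;
}

int
main(void)
{
    LossyLockEntry e = { .relid = 16384, .page = 42, .nholders = 0 };

    /* Two tuple locks on the same page collapse into one lossy entry. */
    lossy_lock_acquire(&e);
    lossy_lock_acquire(&e);

    printf("droppable after first release:  %d\n", lossy_lock_release(&e));
    printf("droppable after second release: %d\n", lossy_lock_release(&e));
    return 0;
}
```

The design trade-off is the same one the lossy bitmap discussion ran into: the counter answers "is anything still locked here?" cheaply, but it cannot say which tuples are locked, so any per-tuple question has to fall back to rechecking the tuples themselves.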