Re: old synchronized scan patch
| From | Jeff Davis |
|---|---|
| Subject | Re: old synchronized scan patch |
| Date | |
| Msg-id | 1165339957.4302.81.camel@dogma.v10.wvs |
| In reply to | Re: old synchronized scan patch ("Florian G. Pflug" <fgp@phlo.org>) |
| List | pgsql-hackers |
On Tue, 2006-12-05 at 15:54 +0100, Florian G. Pflug wrote:
> Hannu Krosing wrote:
> > The worst that can happen is a hash collision, in which case you lose
> > the benefits of sync scans, but you won't degrade compared to non-sync
> > scans.
>
> But it could cause "mysterious" performance regressions, no?
> Imagine that your app includes two large tables, which are both
> scanned frequently. Suppose that synchronous scanning gives this
> use-case a noticeable performance boost. Now, you dump and reload
> your schema, and suddenly the hashes of the OIDs of those tables
> collide. You perceive a noticeable drop in performance that you
> can neither explain nor fix without a rather deep understanding
> of postgres internals.

A good point. We can hopefully make this relatively rare with a decent hashing algorithm (right now I just mod by the table size) and a reasonably sized table. For your problem to occur, you'd need two relations that are both scanned very frequently at the same time and that also have a hash collision.

We can mitigate the problem by not reporting to the table unless the relation is a minimum size (perhaps related to effective_cache_size), so that tables that fit in memory anyway don't overwrite another table's hint. Or we could use a dynamic structure, use locking, and only write a hint every K pages, or something similar.

Regards,
	Jeff Davis
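To make the trade-off concrete, here is a minimal sketch of the kind of hint table under discussion. The names (ss_hint, ss_report_location, ss_get_location), the table size, and the report interval are hypothetical illustrations, not the actual patch:

```c
/*
 * Minimal sketch of a shared scan-hint table indexed by OID mod table
 * size.  Two relations whose OIDs collide simply overwrite each other's
 * hint, which is the regression Florian describes; the K-page report
 * interval is the write-traffic mitigation Jeff mentions.
 */
#include <stdio.h>

#define SS_TABLE_SIZE       101  /* assumed size of the hint table */
#define SS_REPORT_INTERVAL  16   /* assumed "every K pages" interval */

typedef unsigned int Oid;
typedef unsigned int BlockNumber;

typedef struct ss_hint
{
    Oid         relid;      /* which relation this hint belongs to */
    BlockNumber blockno;    /* last block a scan reported */
} ss_hint;

static ss_hint hint_table[SS_TABLE_SIZE];

/* Report the current scan position; collisions silently overwrite. */
static void
ss_report_location(Oid relid, BlockNumber blockno)
{
    if (blockno % SS_REPORT_INTERVAL != 0)
        return;             /* cheap way to limit write traffic */

    ss_hint *h = &hint_table[relid % SS_TABLE_SIZE];
    h->relid = relid;
    h->blockno = blockno;
}

/* Ask where a new scan of relid should start; fall back to block 0. */
static BlockNumber
ss_get_location(Oid relid)
{
    ss_hint *h = &hint_table[relid % SS_TABLE_SIZE];

    /* If another relation hashed to this slot, the hint is useless. */
    if (h->relid != relid)
        return 0;
    return h->blockno;
}

int
main(void)
{
    ss_report_location(16384, 4096);
    printf("scan of 16384 starts at block %u\n", ss_get_location(16384));
    printf("scan of 16485 starts at block %u\n", ss_get_location(16485));
    return 0;
}
```

With SS_TABLE_SIZE = 101, OIDs 16384 and 16485 collide (both map to slot 22), so the second lookup falls back to block 0: the sync-scan benefit is lost silently, with no error to point at.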