Re: Synchronized Scan WIP patch
From | Heikki Linnakangas
---|---
Subject | Re: Synchronized Scan WIP patch
Date |
Msg-id | 4663D0F3.7000102@enterprisedb.com
In reply to | Re: Synchronized Scan WIP patch (Jeff Davis <pgsql@j-davis.com>)
List | pgsql-patches
Jeff Davis wrote:
> On Thu, 2007-05-31 at 09:08 +0100, Heikki Linnakangas wrote:
>> * moved the sync scan stuff to a new file access/heapam/syncscan.c.
>> heapam.c is long enough already, and in theory the same mechanism could
>> be used for large bitmap heap scans in the future.
>
> Good idea, I hadn't thought of that. It seems like the bitmaps in two
> bitmap scans would have to match very closely, but that sounds
> plausible.

Yeah, it's a pretty narrow use case, but plausible in theory.

> This is similar to another idea I had considered (I forget who thought
> of it) to try to have a bitmap of "tuples still needed" and then try to
> optimize based on that information somehow (read the ones in cache
> first, etc). Seems substantially more complex though, more like a
> prefetch system at that point.
>
> I expected the general refactoring. Hopefully my next patch is a little
> closer to the code expectations and places less burden on the reviewers.

No worries, that's the easy part.

>> Testing:
>> * Multiple scans on different tables, causing movement in the LRU list
>> * Measure the CPU overhead for a single scan
>> * Measure the lock contention with multiple scanners
>
> Is there any way to measure the necessity of the hash table? I would
> think the conditions for that would be a large number of tables being
> actively scanned causing a lot of LRU activity such that the locks are
> held too long.

Yep, and it's hard to imagine a system like that.

> I also think the optimization of only reporting when the block is not
> found in cache would be useful to test if the lock contention is a problem.

I tried to demonstrate lock contention by running 10 backends all
repeatedly scanning different tables that are bigger than the sync scan
threshold but small enough to all fit in OS cache. The total runtime of
the tests was the same, ~45 s, with and without the patch. That's pretty
much the worst-case scenario I could think of, so it seems that
contention of the SyncScanLock is not an issue.

There were some "missed updates" of the scan location, due to the
LWLockConditionalAcquire that I put there to reduce lock contention
(see the sketch below), but not too much to worry about.

I'll post an updated patch shortly.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com
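To picture the trade-off behind those "missed updates", here is a minimal sketch of reporting the scan position under a conditional lock acquire. Only LWLockConditionalAcquire and the SyncScanLock name come from the mail itself; the struct, the ss_lookup_location() helper, and the exact function shape are hypothetical illustrations, not the contents of the WIP patch.

#include "postgres.h"
#include "storage/block.h"
#include "storage/lwlock.h"

/* Hypothetical shared-memory entry; field names are illustrative only. */
typedef struct ss_scan_location_t
{
    Oid         relid;      /* heap being scanned */
    BlockNumber location;   /* last reported block number */
} ss_scan_location_t;

/* Assumed to find (or create) the entry for relid in the shared LRU/hash. */
extern ss_scan_location_t *ss_lookup_location(Oid relid);

/*
 * Report the scan's current block.  If SyncScanLock is busy, give up
 * immediately instead of waiting: the occasional missed update is the
 * price paid for never stalling the scan on lock contention.
 */
static void
ss_report_location(Oid relid, BlockNumber location)
{
    if (LWLockConditionalAcquire(SyncScanLock, LW_EXCLUSIVE))
    {
        ss_scan_location_t *entry = ss_lookup_location(relid);

        if (entry != NULL)
            entry->location = location;
        LWLockRelease(SyncScanLock);
    }
}

Because LWLockConditionalAcquire returns false right away when another backend holds the lock, a contended report is simply dropped rather than blocking the scan, which is where the occasional missed update of the shared position comes from.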