Re: Sequential scans
From: Heikki Linnakangas
Subject: Re: Sequential scans
Msg-id: 4638E04F.4040100@enterprisedb.com
In reply to: Re: Sequential scans (Gregory Stark <stark@enterprisedb.com>)
Responses: Re: Sequential scans
List: pgsql-hackers
Gregory Stark wrote:
> "Heikki Linnakangas" <heikki@enterprisedb.com> writes:
>
>> Let's use a normal hash table instead, and use a lock to protect it. If we only
>> update it every 10 pages or so, the overhead should be negligible. To further
>> reduce contention, we could modify ReadBuffer to let the caller know if the
>> read resulted in a physical read or not, and only update the entry when a page
>> is physically read in. That way all the synchronized scanners wouldn't be
>> updating the same value, just the one performing the I/O. And while we're at
>> it, let's use the full relfilenode instead of just the table oid in the hash.
>
> It's probably fine to just do that. But if we find it's a performance
> bottleneck we could probably still manage to avoid the lock except when
> actually inserting a new hash element. If you just store in the hash an index
> into an array stored in global memory then you could get away without a lock
> on the element in the array.
>
> It starts to get to be a fair amount of code when you think about how you
> would reuse elements of the array. That's why I suggest only looking at this
> if down the road we find that it's a bottleneck.

Another trick you could do is to acquire the lock conditionally when
updating it. But I doubt it's a problem anyhow, if we put some sane lower
limit in there so that it's not used at all for small tables.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com
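For what it's worth, here is a minimal C sketch of the scheme discussed above: a shared hash table keyed by the full relfilenode, updated only every few pages, with a conditional lock acquire so a contended update is simply skipped rather than waited on. The identifiers (SyncScanLock, ss_report_location, the interval and small-table threshold constants) are illustrative placeholders, not anything settled in this thread.

```c
#include "postgres.h"
#include "storage/block.h"
#include "storage/lwlock.h"
#include "storage/relfilenode.h"
#include "utils/hsearch.h"

#define SYNC_SCAN_REPORT_INTERVAL 16	/* report only every N pages */
#define SYNC_SCAN_MIN_RELPAGES    1024	/* ignore tables smaller than this */

typedef struct ss_scan_location
{
	RelFileNode relfilenode;	/* hash key: full relfilenode, not just the table OID */
	BlockNumber location;		/* last reported block number */
} ss_scan_location;

/*
 * Shared hash table; would be created at startup with ShmemInitHash(),
 * using keysize = sizeof(RelFileNode) and entrysize = sizeof(ss_scan_location).
 */
static HTAB *scan_locations;

static void
ss_report_location(RelFileNode relfilenode, BlockNumber location,
				   BlockNumber rel_nblocks)
{
	ss_scan_location *entry;
	bool		found;

	/* Sane lower limit: don't bother tracking small tables at all. */
	if (rel_nblocks < SYNC_SCAN_MIN_RELPAGES)
		return;

	/* Only one update per SYNC_SCAN_REPORT_INTERVAL pages. */
	if (location % SYNC_SCAN_REPORT_INTERVAL != 0)
		return;

	/*
	 * Conditional acquire: if another backend holds the lock, just skip
	 * this report.  A slightly stale location is harmless, since a scan
	 * joining the pack only needs an approximate starting point.
	 */
	if (!LWLockConditionalAcquire(SyncScanLock, LW_EXCLUSIVE))
		return;

	entry = (ss_scan_location *) hash_search(scan_locations,
											 &relfilenode,
											 HASH_ENTER, &found);
	entry->location = location;

	LWLockRelease(SyncScanLock);
}
```

Since only the scanner that actually performed the physical read would call this, and a busy lock is skipped outright, the fast path costs a modulo test and nothing else.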