Re: Sequential scans
From: Gregory Stark
Subject: Re: Sequential scans
Date:
Msg-id: 87ejlz6rh8.fsf@oxford.xeocode.com
In reply to: Sequential scans (Heikki Linnakangas <heikki@enterprisedb.com>)
Responses: Re: Sequential scans
List: pgsql-hackers
"Heikki Linnakangas" <heikki@enterprisedb.com> writes: > Let's use a normal hash table instead, and use a lock to protect it. If we only > update it every 10 pages or so, the overhead should be negligible. To further > reduce contention, we could modify ReadBuffer to let the caller know if the > read resulted in a physical read or not, and only update the entry when a page > is physically read in. That way all the synchronized scanners wouldn't be > updating the same value, just the one performing the I/O. And while we're at > it, let's use the full relfilenode instead of just the table oid in the hash. It's probably fine to just do that. But if we find it's a performance bottleneck we could probably still manage to avoid the lock except when actually inserting a new hash element. If you just store in the hash an index into an array stored in global memory then you could get away without a lock on the element in the array. It starts to get to be a fair amount of code when you think about how you would reuse elements of the array. That's why I suggest only looking at this if down the road we find that it's a bottleneck. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com