Re: Hash tables in dynamic shared memory
| From | Thomas Munro |
|---|---|
| Subject | Re: Hash tables in dynamic shared memory |
| Date | |
| Msg-id | CAEepm=3Q7gCh24V8hYxKiQZTRGfX3tk6OCi1stGKVPrvkio1rA@mail.gmail.com |
| In reply to | Re: Hash tables in dynamic shared memory (Thomas Munro <thomas.munro@enterprisedb.com>) |
| Responses | Re: Hash tables in dynamic shared memory |
| List | pgsql-hackers |
On Wed, Oct 5, 2016 at 12:11 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:
> On Wed, Oct 5, 2016 at 11:22 AM, Andres Freund <andres@anarazel.de> wrote:
>>> Potential use cases for DHT include caches, in-memory database objects
>>> and working state for parallel execution.
>>
>> Is there a more concrete example, i.e. a user we'd convert to this at
>> the same time as introducing this hashtable?
>
> A colleague of mine will shortly post a concrete patch to teach an
> existing executor node how to be parallel aware, using DHT.  I'll let
> him explain.
>
> I haven't looked into whether it would make sense to convert any
> existing shmem dynahash hash table to use DHT.  The reason for doing
> so would be to move it out to DSM segments and enable dynamic growth.
> I suspect that the bounded size of things like the hash tables
> involved in (for example) predicate locking is considered a feature,
> not a bug, so any such cluster-lifetime core-infrastructure hash
> table would not be a candidate.  More likely candidates would be
> ephemeral data used by the executor, as in the above-mentioned patch,
> and long-lived caches of dynamic size owned by core code or
> extensions -- like a shared query plan cache, if anyone can figure
> out the invalidation magic required.

Another thought: it could be used to make things like
pg_stat_statements not have to be in shared_preload_libraries.

--
Thomas Munro
http://www.enterprisedb.com
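[Editor's note: to make the usage pattern under discussion concrete, here is a minimal sketch of how one backend might create a hash table in dynamic shared memory, publish a handle, and have another backend attach and probe it. Every type and function name below (dht_parameters, dht_hash_table, dht_hash_table_handle, dht_create, dht_get_handle, dht_attach, dht_find_or_insert, dht_release_lock) is a hypothetical stand-in modeled on the DHT proposal in this thread, not a committed PostgreSQL API.]

```c
/*
 * Illustrative sketch only.  The dht_* names are hypothetical stand-ins
 * for the DHT facility discussed in this thread, not a committed API.
 */
#include "postgres.h"
#include "utils/dsa.h"

/* Example entry layout: the key is assumed to be the first member. */
typedef struct CacheEntry
{
	uint32		key;			/* lookup key */
	int64		hit_count;		/* example payload */
} CacheEntry;

static const dht_parameters params = {
	sizeof(uint32),				/* key size */
	sizeof(CacheEntry)			/* entry size */
};

/*
 * Creating backend: build the table inside a dynamic shared memory area
 * and return a handle that can be published (e.g. in a small fixed-size
 * shmem struct) so other backends can find it.
 */
static dht_hash_table_handle
create_cache(dsa_area *area)
{
	dht_hash_table *table = dht_create(area, &params);

	return dht_get_handle(table);
}

/*
 * Any other backend: attach via the published handle, then look up or
 * create an entry.  The entry is assumed to come back locked, so it is
 * released when we are done with it.
 */
static void
bump_counter(dsa_area *area, dht_hash_table_handle handle, uint32 key)
{
	dht_hash_table *table = dht_attach(area, &params, handle);
	bool		found;
	CacheEntry *entry = dht_find_or_insert(table, &key, &found);

	if (!found)
		entry->hit_count = 0;
	entry->hit_count++;
	dht_release_lock(table, entry);
}
```

[The handle-based attach step matters because DSM segments can be mapped at different addresses in different backends, so the table cannot simply be shared as a raw pointer the way a static shmem dynahash table is.]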