Re: pg_stat_*_columns?
From | Robert Haas
---|---
Subject | Re: pg_stat_*_columns?
Date |
Msg-id | CA+TgmoYXkFRz80SnM9N5gy3kFnJ_t3oAqiB9nYenbK+KgUyzwQ@mail.gmail.com
In response to | Re: pg_stat_*_columns? (Magnus Hagander <magnus@hagander.net>)
Responses | Re: pg_stat_*_columns?
List | pgsql-hackers
On Sat, Jun 20, 2015 at 11:15 AM, Magnus Hagander <magnus@hagander.net> wrote:
> I've considered both that and to perhaps use a shared memory message queue
> to communicate. Basically, have a backend send a request when it needs a
> snapshot of the stats data and get a copy back through that method instead
> of disk. It would be much easier if we didn't actually take a snapshot of
> the data per transaction, but we really don't want to give that up (if we
> didn't care about that, we could just have a protocol asking for individual
> values).
>
> We'd need a way to actually transfer the whole hashtables over, without
> rebuilding them on the other end I think. Just the cost of looping over it
> to dump and then rehashing everything on the other end seems quite wasteful
> and unnecessary.

One idea would be to advertise a DSM ID in the main shared memory segment,
and have the individual backends read that value and attach to it. When new
stats are generated, the stats collector creates a new DSM (which might be
bigger or smaller than the old one), writes the new stats in there, and then
advertises the new DSM ID in the main shared memory segment. Backends that
still have the old segment attached can still use it, and it will go away
automatically once they all drop off.

But I'm not sure how this would work with the new per-database split of the
stats file. I don't think it'll work to have one DSM per database; we don't
support enough DSMs for that.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
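[Editor's illustration] A minimal sketch of the handle-advertisement idea described
above, assuming the collector can place a small control struct in the main shared
memory segment. StatsSnapshotControl, StatsSnapCtl, publish_stats_snapshot(), and
attach_stats_snapshot() are invented names for illustration; the dsm_* and
pg_atomic_* calls are the existing backend APIs. Error handling, segment pinning,
and retiring old segments are glossed over.

/*
 * Illustrative sketch only -- not proposed code.  The struct and function
 * names are hypothetical; only the dsm_* and pg_atomic_* APIs are real.
 */
#include "postgres.h"
#include "port/atomics.h"
#include "storage/dsm.h"

/* Lives in the main shared memory segment (allocation/init not shown). */
typedef struct StatsSnapshotControl
{
    pg_atomic_uint32 current_handle;    /* dsm_handle of the latest snapshot */
} StatsSnapshotControl;

static StatsSnapshotControl *StatsSnapCtl;

/* Collector side: build a fresh snapshot in a new DSM and advertise it. */
static void
publish_stats_snapshot(const char *stats_data, Size len)
{
    dsm_segment *seg = dsm_create(len, 0);

    memcpy(dsm_segment_address(seg), stats_data, len);
    pg_atomic_write_u32(&StatsSnapCtl->current_handle,
                        (uint32) dsm_segment_handle(seg));

    /*
     * The collector would need to keep (or pin) older segments until all
     * backends reading them have detached; they then go away on their own.
     */
}

/* Backend side: map whatever snapshot is currently advertised. */
static dsm_segment *
attach_stats_snapshot(void)
{
    dsm_handle h;

    h = (dsm_handle) pg_atomic_read_u32(&StatsSnapCtl->current_handle);
    return dsm_attach(h);       /* caller reads the stats, then dsm_detach()s */
}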