Re: Get memory contexts of an arbitrary backend process
From | Kasahara Tatsuhito |
---|---|
Subject | Re: Get memory contexts of an arbitrary backend process |
Date | |
Msg-id | CAP0=ZVK_FFijULfQWD87GriDKgD+S3HdgWhvyjPLxNiaGHrB3Q@mail.gmail.com |
In response to | Re: Get memory contexts of an arbitrary backend process (Tom Lane <tgl@sss.pgh.pa.us>) |
Responses | Re: Get memory contexts of an arbitrary backend process |
List | pgsql-hackers |
On Fri, Sep 4, 2020 at 2:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com> writes:
> > Yes, but it's not only for future expansion, but also for the
> > usability and the stability of this feature.
> > For example, if you want to read one dumped file multiple times and analyze it,
> > you will want the ability to just read the dump.
>
> If we design it to make that possible, how are we going to prevent disk
> space leaks from never-cleaned-up dump files?
In my view, along with features such as a view that lets us see a list of the dumped files, it would be better to have a function that simply deletes the dump files associated with a specific PID, or deletes all dump files.
Some files may be dumped with unexpected delays, so I think a cleanup feature will be necessary.
(Also, as with pgsql_tmp files, it might be better to delete dump files when PostgreSQL starts.)

Or should we try to delete the dump file as soon as we have read it?

Best regards,

--
Tatsuhito Kasahara
kasahara.tatsuhito _at_ gmail.com
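[Editor's note: to make the proposed cleanup interface concrete, here is a minimal usage sketch. The view and function names (pg_memory_context_dump_files, pg_remove_memory_context_dump, pg_remove_memory_context_dump_all) and the view's columns are purely illustrative assumptions, not part of any posted patch.]

    -- List the dump files that currently exist on disk, one row per file
    -- (hypothetical view; column names are assumed for illustration).
    SELECT pid, dump_path, dump_time
      FROM pg_memory_context_dump_files;

    -- Delete the dump files associated with a single backend PID
    -- (hypothetical cleanup function).
    SELECT pg_remove_memory_context_dump(10123);

    -- Delete all remaining dump files, e.g. from a periodic maintenance job
    -- (hypothetical cleanup function).
    SELECT pg_remove_memory_context_dump_all();

Under this sketch, stale dump files could additionally be removed at server start, in the same way leftover pgsql_tmp files are cleaned up.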