Re: Piggybacking vacuum I/O
From | Pavan Deolasee |
---|---|
Subject | Re: Piggybacking vacuum I/O |
Date | |
Msg-id | 2e78013d0701250317u77c15dfdkcf991a84e30b238d@mail.gmail.com |
In reply to | Re: Piggybacking vacuum I/O (Heikki Linnakangas <heikki@enterprisedb.com>) |
Responses | Re: Piggybacking vacuum I/O |
List | pgsql-hackers |
On 1/25/07, Heikki Linnakangas <heikki@enterprisedb.com> wrote:
> Pavan Deolasee wrote:
> >
> > Also is it worth optimizing on the total read() system calls which
> > might not cause physical I/O, but still consume CPU?
>
> I don't think it's worth it, but now that we're talking about it: What
> I'd like to do to all the slru files is to replace the custom buffer
> management with mmapping the whole file, and letting the OS take care of
> it. We would get rid of some guc variables, the OS would tune the amount
> of memory used for clog/subtrans dynamically, and we would avoid the
> memory copying. And I'd like to do the same for WAL.
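[A minimal sketch of the mmap idea being discussed: pre-size the file with ftruncate and then map the whole thing, so the OS page cache manages residency instead of a private buffer pool. The helper names and the 8 KB size are illustrative only, not PostgreSQL's actual slru.c code.]

```c
#include <assert.h>
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map an existing file of `size` bytes read/write, shared between
 * processes; returns MAP_FAILED on error. */
static unsigned char *map_whole_file(int fd, size_t size)
{
    return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}

/* Pre-size the file before mapping: every backend that maps after this
 * sees all `size` bytes, so no backend can miss an "extended portion"
 * (the visibility problem raised below). */
static unsigned char *create_and_map(const char *path, size_t size)
{
    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0)
        return MAP_FAILED;
    if (ftruncate(fd, (off_t) size) != 0) {
        close(fd);
        return MAP_FAILED;
    }
    unsigned char *p = map_whole_file(fd, size);
    close(fd);              /* the mapping remains valid after close */
    return p;
}
```

Writes through such a mapping go to the shared page cache, which is what would replace the explicit read()/write() and memcpy traffic of the current SLRU buffers.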
Yes, we can do that. One problem, though, is that mmapping wouldn't work
when the CLOG file is extended: some of the backends may not see the
extended portion. But maybe we can start with a sufficiently large
initialized file and mmap the whole file.
Another, simpler solution for VACUUM would be to read the entire CLOG file
into local memory. Most transaction status queries can be satisfied from
this local copy, and the normal CLOG is consulted only when the status is
unknown (TRANSACTION_STATUS_IN_PROGRESS).
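[The VACUUM-local caching idea could look roughly like this: snapshot the status array once, answer lookups from the copy, and fall back to the shared CLOG only for transactions that were still in progress at snapshot time. All names here (XidStatus, clog_lookup_shared, etc.) are hypothetical illustrations, not PostgreSQL's real clog API.]

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef enum {
    XID_IN_PROGRESS = 0,   /* status unknown at snapshot time */
    XID_COMMITTED   = 1,
    XID_ABORTED     = 2
} XidStatus;

/* Stand-in for the shared CLOG: in the real system this lookup would go
 * through the SLRU buffers and possibly hit disk. */
static XidStatus shared_clog[16];

static XidStatus clog_lookup_shared(unsigned xid)
{
    return shared_clog[xid];
}

/* One-time backend-local snapshot taken at the start of VACUUM. */
static void clog_snapshot(XidStatus *local, size_t n)
{
    memcpy(local, shared_clog, n * sizeof(XidStatus));
}

/* Answer from the local copy; consult the shared CLOG only when the
 * snapshot recorded the transaction as still in progress. */
static XidStatus clog_lookup(const XidStatus *local, unsigned xid)
{
    if (local[xid] != XID_IN_PROGRESS)
        return local[xid];
    return clog_lookup_shared(xid);
}
```

This is safe because committed/aborted statuses never change once set, so only the IN_PROGRESS entries can go stale and need the fallback.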
Thanks,
Pavan
EnterpriseDB http://www.enterprisedb.com