Re: [HACKERS] Priorities for 6.6

From: Tom Lane
Subject: Re: [HACKERS] Priorities for 6.6
Date:
Msg-id: 24776.928767446@sss.pgh.pa.us
In reply to: Re: [HACKERS] Priorities for 6.6  (Bruce Momjian <maillist@candle.pha.pa.us>)
Responses: Re: [HACKERS] Priorities for 6.6  (Brian E Gallew <geek+@cmu.edu>)
List: pgsql-hackers
Bruce Momjian <maillist@candle.pha.pa.us> writes:
> ... Another idea
> is to send a signal to each backend that has marked a bit in shared
> memory saying it has written to a relation, and have the signal handler
> fsync all its dirty relations, set a finished bit, and have the
> postmaster then fsync pglog.

I do not think it's practical to expect any useful work to happen inside
a signal handler.  The signal could come at any moment, such as when
data structures are being updated and are in a transient invalid state.
Unless you are willing to do a lot of fooling around with blocking &
unblocking the signal, about all the handler can safely do is set a flag
variable that will be examined somewhere in the backend main loop.
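
For what it's worth, a minimal sketch of that flag-setting pattern might
look like the following (illustrative C only, not PostgreSQL source; the
signal number and all names here are invented):

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    /* the handler touches nothing but this flag */
    static volatile sig_atomic_t flush_requested = 0;

    static void
    flush_request_handler(int signo)
    {
        /* async-signal-safe: just record that a flush was asked for */
        flush_requested = 1;
    }

    int
    main(void)
    {
        struct sigaction sa;

        sa.sa_handler = flush_request_handler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGUSR2, &sa, NULL);  /* signal choice is arbitrary here */

        for (;;)                        /* stand-in for the backend main loop */
        {
            if (flush_requested)
            {
                flush_requested = 0;
                /*
                 * Safe point: data structures are known-consistent here,
                 * so this is where you could fsync dirty relations and
                 * set the "finished" bit.
                 */
                printf("flushing at a safe point\n");
            }
            sleep(1);                   /* placeholder for normal work */
        }
        return 0;
    }

The real work happens only when the main loop notices the flag, never
inside the handler itself.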

However, if enough information is available in shared memory, perhaps
the postmaster could do this scan/update/flush all by itself?
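
Purely as a sketch of what that shared-memory bookkeeping might look like
(none of these names exist in the sources, and a real version would put
the array in an actual shared memory segment rather than a static
variable):

    #define MAX_BACKENDS 64

    typedef struct BackendFlushState
    {
        int  pid;            /* backend process id, 0 if slot unused */
        int  wrote_relation; /* set by backend when it dirties a relation */
        int  flush_done;     /* set once its dirty relations are fsync'd */
    } BackendFlushState;

    /* one slot per backend; would live in shared memory in reality */
    static BackendFlushState flush_state[MAX_BACKENDS];

    /*
     * Postmaster-side scan: once every backend that wrote anything has
     * finished flushing, it is safe to fsync pg_log itself.
     */
    static int
    all_writers_flushed(void)
    {
        int i;

        for (i = 0; i < MAX_BACKENDS; i++)
        {
            if (flush_state[i].pid != 0 &&
                flush_state[i].wrote_relation &&
                !flush_state[i].flush_done)
                return 0;
        }
        return 1;
    }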

> Of course, we have to prevent flush of pglog by OS, perhaps by making a
> copy of the last two pages of pg_log before this and remove it after. 
> If a backend starts up and sees that pg_log copy file, it puts that in
> place of the current last two pages of pg_log.

It seems to me that one or so disk writes per transaction is not all
that big a cost.  Does it take much more than one write to update
pg_log, and if so why?
        regards, tom lane

