Re: Controlling Load Distributed Checkpoints
From | Florian G. Pflug
---|---
Subject | Re: Controlling Load Distributed Checkpoints
Date |
Msg-id | 46706A09.4020504@phlo.org
In reply to | Re: Controlling Load Distributed Checkpoints (Heikki Linnakangas <heikki@enterprisedb.com>)
List | pgsql-hackers
Heikki Linnakangas wrote:
> Jim C. Nasby wrote:
>> On Thu, Jun 07, 2007 at 10:16:25AM -0400, Tom Lane wrote:
>>> Heikki Linnakangas <heikki@enterprisedb.com> writes:
>>>> Thinking about this whole idea a bit more, it occurred to me that the
>>>> current approach to write all, then fsync all is really a historical
>>>> artifact of the fact that we used to use the system-wide sync call
>>>> instead of fsyncs to flush the pages to disk. That might not be the
>>>> best way to do things in the new load-distributed-checkpoint world.
>>>> How about interleaving the writes with the fsyncs?
>>> I don't think it's a historical artifact at all: it's a valid reflection
>>> of the fact that we don't know enough about disk layout to do low-level
>>> I/O scheduling. Issuing more fsyncs than necessary will do little
>>> except guarantee a less-than-optimal scheduling of the writes.
>>
>> If we extended relations by more than 8k at a time, we would know a lot
>> more about disk layout, at least on filesystems with a decent amount of
>> free space.
>
> I doubt it makes that much difference. If there was a significant amount
> of fragmentation, we'd hear more complaints about seq scan performance.

OTOH, extending a relation that uses N pages by something like
min(ceil(N/1024), 1024) pages might help some filesystems to avoid
fragmentation, and would hardly introduce any waste (about 0.1% in the
worst case). So if it's not too hard to do, it might be worthwhile, even
if it turns out that most filesystems deal well with the current
allocation pattern.

greetings, Florian Pflug
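[The proposed growth policy above can be sketched as follows. This is a hypothetical illustration of the min(ceil(N/1024), 1024) formula, not code from PostgreSQL; the function name and the printed table are made up for the example.]

```python
import math

def extension_size(n_pages: int) -> int:
    """Pages to add when extending a relation that currently uses
    n_pages pages, per the proposed min(ceil(N/1024), 1024) policy:
    grow by ~0.1% of the current size, capped at 1024 pages (8 MB
    with PostgreSQL's default 8 kB block size)."""
    return min(math.ceil(n_pages / 1024), 1024)

# The over-allocation is at most ceil(N/1024) pages, i.e. roughly
# 0.1% of the relation's current size for large relations.
for n in (1, 1024, 1025, 1_000_000, 10_000_000):
    extra = extension_size(n)
    print(f"{n:>10} pages -> extend by {extra:>5} pages")
```

For a 10-million-page (~76 GB) relation the cap kicks in, so each extension adds at most 8 MB; for small relations the policy degenerates to extending one page at a time, as today.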