Re: [PATCHES] Full page writes improvement, code update
From:        Tom Lane
Subject:     Re: [PATCHES] Full page writes improvement, code update
Date:
Msg-id:      27693.1176226131@sss.pgh.pa.us
In reply to: Re: [PATCHES] Full page writes improvement, code update (Koichi Suzuki <suzuki.koichi@oss.ntt.co.jp>)
Responses:   Re: [PATCHES] Full page writes improvement, code update
List:        pgsql-hackers
Koichi Suzuki <suzuki.koichi@oss.ntt.co.jp> writes:
> My proposal is to remove unnecessary full page writes (they are needed
> in crash recovery from inconsistent or partial writes) when we copy WAL
> to archive log and rebuild them as a dummy when we restore from archive
> log.
> ...
> Benchmark: DBT-2
> Database size: 120WH (12.3GB)
> Total WAL size: 4.2GB (after 60min. run)
> Elapsed time:
>     cp:              120.6sec
>     gzip:            590.0sec
>     pg_compresslog:   79.4sec
> Resultant archive log size:
>     cp:              4.2GB
>     gzip:            2.2GB
>     pg_compresslog:  0.3GB
> Resource consumption:
>     cp:              user: 0.5sec    system: 15.8sec  idle: 16.9sec   I/O wait: 87.7sec
>     gzip:            user: 286.2sec  system: 8.6sec   idle: 260.5sec  I/O wait: 36.0sec
>     pg_compresslog:  user: 7.9sec    system: 5.5sec   idle: 37.8sec   I/O wait: 28.4sec

What checkpoint settings were used to make this comparison?  I'm wondering
whether much of the same benefit can't be bought at zero cost by increasing
the checkpoint interval, because that translates directly to a reduction in
the number of full-page images inserted into WAL.

Also, how much was the database run itself slowed down by the increased
volume of WAL (due to duplicated information)?  It seems rather pointless
to me to measure only the archiving effort without any consideration for
the impact on the database server proper.

			regards, tom lane

PS: there's something fishy about the gzip numbers ... why all the idle time?
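
For context on the checkpoint question, the checkpoint interval is controlled by a handful of ordinary postgresql.conf settings. The sketch below uses the stock GUC names of that era; the values shown are illustrative assumptions only, not the settings used in the benchmark:

    # postgresql.conf -- knobs that govern how many full-page images land in WAL
    # (values are illustrative, not from the benchmark)
    checkpoint_timeout  = 30min    # longer interval => checkpoints occur less often
    checkpoint_segments = 64       # allow more WAL between xlog-triggered checkpoints
    full_page_writes    = on       # first change to a page after a checkpoint logs
                                   # the entire page image
    archive_command = 'cp %p /mnt/server/archivedir/%f'   # or a compressing filter

Because a full-page image is written only the first time a page is modified after a checkpoint, stretching the distance between checkpoints directly reduces the number of such images, which is the same WAL volume that pg_compresslog is intended to strip out of the archived copy.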