Re: pg_upgrade failing for 200+ million Large Objects
| From | Laurenz Albe |
|---|---|
| Subject | Re: pg_upgrade failing for 200+ million Large Objects |
| Date | |
| Msg-id | 4a3ebf7d81bfc6dd4d545e5b27d6e8f6c32d8937.camel@cybertec.at |
| In reply to | Re: pg_upgrade failing for 200+ million Large Objects (Tom Lane <tgl@sss.pgh.pa.us>) |
| Responses | Re: pg_upgrade failing for 200+ million Large Objects |
| List | pgsql-hackers |
On Fri, 2024-03-15 at 19:18 -0400, Tom Lane wrote:
> This patch seems to have stalled out again. In hopes of getting it
> over the finish line, I've done a bit more work to address the two
> loose ends I felt were probably essential to deal with:

Applies and builds fine. I didn't scrutinize the code, but I gave it a
spin on a database with 15 million (small) large objects. I tried
pg_upgrade --link with and without the patch on a debug build with the
default configuration.

Without the patch:
  Runtime: 74.5 minutes
  Memory usage: ~7GB
  Disk usage: an extra 5GB dump file + log file during the dump

With the patch:
  Runtime: 70 minutes
  Memory usage: ~1GB
  Disk usage: an extra 0.5GB during the dump

Memory usage stayed stable once it reached its peak, so no noticeable
memory leaks.

The reduced memory usage is great. I was surprised by the difference
in disk usage: the lion's share is the dump file, and that got
substantially smaller. But the log file also shrank considerably,
because not every individual large object gets logged.

I had a look at "perf top", and the profile looked pretty similar in
both cases.

The patch is a clear improvement.

Yours,
Laurenz Albe
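For anyone who wants to reproduce a similar measurement, here is a minimal sketch of how a test database with many small large objects could be populated and then upgraded in hard-link mode. The exact commands are not part of the message above; the database name, cluster paths, and payload are illustrative assumptions:

```sh
# Populate a test database with many small large objects.
# lo_from_bytea(0, ...) creates a new large object with a
# server-assigned OID; 15 million matches the test above.
# (In practice you might batch this into smaller transactions.)
createdb lotest
psql -d lotest <<'EOF'
SELECT lo_from_bytea(0, 'x'::bytea)
FROM generate_series(1, 15000000);
EOF

# Upgrade in hard-link mode (both clusters must be stopped first).
# -b/-B: old/new bin directories; -d/-D: old/new data directories.
pg_upgrade --link \
    -b /usr/lib/postgresql/16/bin  -B /usr/lib/postgresql/17/bin \
    -d /var/lib/postgresql/16/main -D /var/lib/postgresql/17/main
```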