Re: BUG #14384: pg_dump uses excessive amounts of memory for LOBs
From: Tom Lane
Subject: Re: BUG #14384: pg_dump uses excessive amounts of memory for LOBs
Date:
Msg-id: 29613.1476969807@sss.pgh.pa.us
In reply to: BUG #14384: pg_dump uses excessive amounts of memory for LOBs (boleslaw.ziobrowski@yahoo.pl)
Responses: Re: BUG #14384: pg_dump uses excessive amounts of memory for LOBs
List: pgsql-bugs
boleslaw.ziobrowski@yahoo.pl writes:
> pg_dump seems to allocate memory proportional to the number of rows in
> pg_largeobject (not necessarily correlated with size of these objects)

Yes, it does. It also allocates memory proportional to the number of, e.g., tables, or any other DB object for that matter. This is a consequence of the fact that blobs grew owners and privileges in 9.0; pg_dump uses its usual per-object infrastructure to keep track of that. The argument was that this'd be okay because if your large objects are, well, large, then there couldn't be so many of them that the space consumption would be fatal. I had doubts about that at the time, but I think we're more or less locked into it now. It would take a lot of restructuring to change it, and we'd lose functionality too, because we couldn't have a separate TOC entry per blob. That means no ability to select out individual blobs during pg_restore.

TL;DR: blobs are not exactly lightweight objects. If you want something with less overhead, maybe you should just store the data in a plain bytea column.

regards, tom lane
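
For illustration, a minimal sketch of the per-blob TOC entries described above, assuming a custom-format dump of a hypothetical database named "mydb" that contains large objects. Each large object typically appears as its own BLOB entry in the archive's table of contents, which is what allows pg_restore to select individual blobs, and is also why pg_dump keeps per-blob bookkeeping in memory:

    # custom-format dump; pg_dump tracks a TOC entry per large object
    pg_dump -Fc -d mydb -f mydb.dump

    # list the archive TOC; large objects show up as separate BLOB entries
    pg_restore -l mydb.dump | grep -i blob

    # selective restore: save the TOC listing, comment out unwanted entries,
    # then restore only what remains (target database name is a placeholder)
    pg_restore -l mydb.dump > toc.list
    pg_restore -L toc.list -d mydb_copy mydb.dump

By contrast, data kept in an ordinary bytea column is dumped as part of its table's data, with no per-row TOC entry and no per-row ownership/ACL bookkeeping in pg_dump, which is the lower-overhead alternative suggested at the end of the message.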