pgdump, large objects and 7.0->7.1
From | Philip Crotwell |
---|---|
Subject | pgdump, large objects and 7.0->7.1 |
Date | |
Msg-id | Pine.GSO.4.10.10103161643130.439-100000@tigger.seis.sc.edu |
Replies | Re: pgdump, large objects and 7.0->7.1 |
List | pgsql-general |
Hi,

I am having problems with large objects in 7.0.3: high disk usage, slow access and deletes of large objects, and occasional selects that hang with the backend process going to 98% of the CPU and staying there. Having read that there are a lot of large object improvements in 7.1, I was thinking of trying the beta out to see whether these problems would disappear. But 7.0->7.1 needs a pg_dumpall/restore, which wouldn't be a problem except that pg_dumpall in 7.0 doesn't dump large objects. :(

So, three questions that basically boil down to "What is the best way to move large objects from 7.0 to 7.1?"

1) Can I use the 7.1 pg_dumpall to dump a 7.0.3 database? The docs say no, but it may be worth a try.

2) What does "large objects ... must be handled manually" in the 7.0 pg_dump docs mean? Does this mean there is a way to manually copy the xinvXXXX files? I have ~23000 of them at present.

3) Do I need to preserve OIDs with pg_dump when using large objects?

thanks,
Philip

PS It would be great if something about this could be added to the 7.1 docs. I would guess that others will have this same problem when 7.1 is released.
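[Editor's note: one plausible migration path, sketched from the 7.0/7.1 pg_dump and psql documentation. This is untested against those versions; the database name `mydb` and the backup paths are placeholders, and the OID list would have to come from your own tables.]

```shell
# Dump schema and ordinary table data from the old 7.0 cluster.
# -o preserves OIDs, which matters if tables reference large objects by OID.
pg_dump -o mydb > mydb.sql

# Export each large object by hand from 7.0, e.g. with psql's \lo_export
# (one line per large-object OID):
#   \lo_export 123456 '/backup/lo_123456'

# After initdb with 7.1, restore the schema and data:
psql -d mydb -f mydb.sql

# Re-import the objects with \lo_import:
#   \lo_import '/backup/lo_123456'
# Note: \lo_import assigns a NEW OID, so any columns that stored the old
# OID must be fixed up afterwards.

# Once running on 7.1, its pg_dump custom format can carry blobs directly:
pg_dump -Fc -b mydb > mydb.dump
pg_restore -d mydb mydb.dump
```

Since these commands need a live server, they are shown only as a sketch of the manual-handling approach the 7.0 docs allude to.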