Re: Problem w/ dumping huge table and no disk space
From: David Ford
Subject: Re: Problem w/ dumping huge table and no disk space
Date:
Msg-id: 3B994A34.9090100@blue-labs.org
In reply to: Problem w/ dumping huge table and no disk space (David Ford <david@blue-labs.org>)
Responses: Re: Problem w/ dumping huge table and no disk space
List: pgsql-general
$ postgres --version
postgres (PostgreSQL) 7.1beta5

1) If I run pg_dump, it runs for about 20 minutes, then aborts abruptly with an out-of-memory error; pg_dump is killed by the kernel and postgres spews pipe errors until it reaches the end of the table or I kill it. It starts with ~100 megs of regular RAM free and 300 megs of swap.

2) If I try to do a 'delete from ...' query, it runs for about 20 minutes and all of a sudden has 4 megs of disk space free and pg dies. It starts with ~500 megs of disk space free.

So in either situation I'm kind of screwed. The new machine is running 7.2devel, so I doubt I could just copy the data directory over. My WAL files setting is 8, and 8 * 16 megs is 128 megs, no?

Tom Lane wrote:

>David Ford <david@blue-labs.org> writes:
>
>>I have a 10million+ row table and I've only got a couple hundred megs
>>left. I can't delete any rows, pg runs out of disk space and crashes.
>>
>
>What is running out of disk space, exactly?
>
>If the problem is WAL log growth, an update to 7.1.3 might help
>(... you didn't say which version you're using).
>
>If the problem is lack of space for the pg_dump output file, I think you
>have little choice except to arrange for the dump to go to another
>device (maybe dump it across NFS, or to a tape, or something).
>
>			regards, tom lane
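A minimal sketch of Tom's "dump to another device" suggestion: pg_dump writes the dump to stdout, so it can be compressed and streamed over ssh without touching the nearly-full local disk. The database name, hostname, and remote path here are hypothetical, and the WAL arithmetic from the message above is shown as a sanity check:

```shell
#!/bin/sh
# Sketch: stream a dump off-box with no local disk usage.
# "mydb", "newbox", and /var/backups are placeholder names.
dump_remote() {
    # pg_dump emits plain SQL on stdout; gzip compresses in flight;
    # ssh writes the compressed dump on the remote machine.
    pg_dump "$1" | gzip | ssh "$2" "cat > /var/backups/$1.sql.gz"
}
# Would be invoked as:  dump_remote mydb newbox

# WAL sizing mentioned in the message: 8 segments * 16 MB per segment.
wal_mb=$((8 * 16))
echo "$wal_mb"
```

The key point is that nothing in the pipeline is buffered to a local file, so it sidesteps the "no space for the output file" problem entirely; only the dump process's own memory use remains an issue.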