pg_dump out of shared memory
From: tfo@alumni.brown.edu (Thomas F. O'Connell)
Subject: pg_dump out of shared memory
Msg-id: 80c38bb1.0406171334.4e0b5775@posting.google.com
Replies: Re: pg_dump out of shared memory
List: pgsql-general
In using pg_dump to dump an existing postgres database, I get the following:

    pg_dump: WARNING: out of shared memory
    pg_dump: attempt to lock table <table name> failed: ERROR: out of shared memory
    HINT: You may need to increase max_locks_per_transaction.

postgresql.conf just has the default of 1000 shared_buffers. The database itself has thousands of tables, some of which have rows numbering in the millions.

Am I correct in thinking that, despite the hint, it's more likely that I need to up the shared_buffers? Or is it that pg_dump is an example of "clients that touch many different tables in a single transaction" [from http://www.postgresql.org/docs/7.4/static/runtime-config.html#RUNTIME-CONFIG-LOCKS] and I actually ought to abide by the hint?

-tfo
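
For context: pg_dump takes an AccessShareLock on every table it dumps, so a database with thousands of tables can exhaust the shared lock table, whose capacity is roughly max_locks_per_transaction * max_connections entries and which is a separate pool from shared_buffers. A minimal sketch of checking how many tables would need locking and raising the limit, assuming psql access; the query and the value 4096 below are illustrative, not from the original message:

    -- count the ordinary tables pg_dump will need to lock
    SELECT count(*)
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r'
      AND n.nspname NOT IN ('pg_catalog', 'information_schema');

    # postgresql.conf -- illustrative value; changing it requires a server restart
    # make max_locks_per_transaction * max_connections comfortably exceed the table count
    max_locks_per_transaction = 4096    # default is 64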