Re: Problems restoring big tables
From | Tom Lane
Subject | Re: Problems restoring big tables
Date |
Msg-id | 25554.1168052565@sss.pgh.pa.us
In reply to | Problems restoring big tables (Arnau <arnaulist@andromeiberica.com>)
Responses | Re: Problems restoring big tables
List | pgsql-admin
Arnau <arnaulist@andromeiberica.com> writes:
> I have to restore a database whose dump, using the custom format (-Fc),
> takes about 2.3GB. To speed up the restore, I first restored everything
> except (playing with pg_restore -l) the contents of some tables, which
> is where most of the data is stored.

I think you've outsmarted yourself by creating indexes and foreign keys
before loading the data. That's *not* the way to make it faster.

> pg_restore: ERROR: out of memory
> DETAIL: Failed on request of size 32.
> CONTEXT: COPY statistics_operators, line 25663678: "137320348 58618027

I'm betting you ran out of memory for deferred-trigger event records.
It's best to load the data and then establish foreign keys ... indexes too.

See http://www.postgresql.org/docs/8.2/static/populate.html for some of
the underlying theory. (Note that pg_dump/pg_restore gets most of this
stuff right already; it's unlikely that you will improve matters by
manually fiddling with the load order. Instead, think about increasing
maintenance_work_mem and checkpoint_segments, which pg_restore doesn't
risk fooling with.)

			regards, tom lane
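For reference, a minimal sketch of the settings-based approach suggested
above, assuming a Unix shell; the database name, dump file, and the
specific values are hypothetical, not taken from the original post:

    # In postgresql.conf (8.2-era parameter names), raise the bulk-load
    # settings and reload the server before restoring:
    #   maintenance_work_mem = 512MB   # faster index builds and FK validation
    #   checkpoint_segments  = 32      # fewer checkpoints during the bulk COPY

    # maintenance_work_mem can also be raised for just this session through
    # libpq's PGOPTIONS, leaving the pg_restore invocation itself unchanged:
    PGOPTIONS="-c maintenance_work_mem=512MB" \
        pg_restore -d mydb mydump.dump

The point is to let pg_restore keep its default ordering (data first, then
indexes and constraints) and spend the tuning effort on memory and
checkpoint settings instead of on a hand-edited -l list.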