Discussion: db restore takes too long
Hi,
I'm trying to restore a 250GB database, but indexing takes too long, so the restore takes more than 12 hours.
Is it possible to disable indexing while restoring the database?
Thanks in advance
--
---------------------------------------
Viktor Bojović
---------------------------------------
Wherever I go, Murphy goes with me
It seems there is no option like that in pg_dump or pg_restore, but you can do it manually in the dump, or with a sed command that finds the lines beginning with CREATE INDEX and removes them.

We also have a big database, and since PostgreSQL 8.4 pg_restore is much faster with the option --jobs=x, where x is the number of cores in your processor. It then divides the restore into several jobs and runs faster in parallel.

Example:
pg_restore --username postgres --jobs=4 --dbname=your_db_name your_dump_file
That is for a quad core.

Adrien

--
Adrien DUCOS
Developer analyst
aducos@hbs-research.com
www.hbs-research.com
+33 (0)9 70 44 64 10
24 rue de l'Est
75020 Paris
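The sed approach above can be sketched as follows. The file names are placeholders, and this only applies to a plain-text dump (pg_dump -Fp), where pg_dump emits each CREATE INDEX statement on its own line:

```shell
# Example plain-text dump (in practice this would be pg_dump -Fp output).
printf '%s\n' \
  'CREATE TABLE t (a integer);' \
  'CREATE INDEX t_a_idx ON t (a);' \
  'CREATE UNIQUE INDEX t_a_uidx ON t (a);' > dump.sql

# Drop the index definitions; pg_dump writes each CREATE INDEX on one line.
sed -e '/^CREATE INDEX/d' -e '/^CREATE UNIQUE INDEX/d' dump.sql > dump_noindex.sql
```

For a custom-format dump (pg_dump -Fc) you don't need to touch the dump itself: list its table of contents with `pg_restore -l`, delete the INDEX entries from the list, and restore with `pg_restore -L edited.list`.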
Thanks Adrien,
I tried to skip the indexes, but constraints rely on them ("there is no unique constraint matching given keys for referenced table"), so I will try pg_restore as you mentioned.
If the indexes are the issue, you can also try increasing maintenance_work_mem during the restore and setting it back to normal afterwards. This is what the PostgreSQL documentation says about it:

maintenance_work_mem (integer)
Specifies the maximum amount of memory to be used in maintenance operations, such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY. It defaults to 16 megabytes (16MB). Since only one of these operations can be executed at a time by a database session, and an installation normally doesn't have many of them running concurrently, it's safe to set this value significantly larger than work_mem. Larger settings might improve performance for vacuuming and for restoring database dumps. Note that when autovacuum runs, up to autovacuum_max_workers times this memory may be allocated, so be careful not to set the default value too high.

Adrien
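One way to raise maintenance_work_mem for the restore only, without editing postgresql.conf, is the libpq PGOPTIONS environment variable, which pg_restore passes to the server when it connects. A sketch, with the database name, dump file, and the 1GB value as placeholder choices:

```shell
# Raise maintenance_work_mem for the restore session(s) only, via the
# libpq PGOPTIONS environment variable (no postgresql.conf change needed).
# Database name and dump file are placeholders; the last line is a dry
# run -- remove the echo to actually execute the command.
export PGOPTIONS='-c maintenance_work_mem=1GB'
cmd='pg_restore --username postgres --jobs=4 --dbname=your_db_name your_dump_file'
echo "$cmd"
```

Note that with --jobs=4 each worker uses its own session, so up to four allocations of that size can be in flight at once; size the setting accordingly.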
You could also try removing or disabling your constraints until after the import and index build.
--
Noodle
Toll Free: 866-258-6951 x 701
Tim.Lewis@vialect.com
http://www.vialect.com
Noodle is a product of Vialect Inc
Follow Noodle on Twitter
http://www.twitter.com/noodle_news
SET CONSTRAINTS ALL DEFERRED
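A caveat on the suggestion above: SET CONSTRAINTS ALL DEFERRED only affects constraints declared DEFERRABLE, and it postpones checking until COMMIT rather than skipping it. A small illustration of what deferral looks like (table and column names are made up); the SQL is written to a file that could be fed to psql:

```shell
# Write an illustrative SQL script (table/constraint names are made up).
# SET CONSTRAINTS ALL DEFERRED postpones checking of DEFERRABLE
# constraints until COMMIT; it does not disable the check.
cat > defer_demo.sql <<'SQL'
CREATE TABLE parent (id integer PRIMARY KEY);
CREATE TABLE child (
    parent_id integer REFERENCES parent (id) DEFERRABLE INITIALLY IMMEDIATE
);
BEGIN;
SET CONSTRAINTS ALL DEFERRED;
INSERT INTO child VALUES (1);   -- allowed for now: the FK check is deferred
INSERT INTO parent VALUES (1);  -- satisfies the FK before commit
COMMIT;
SQL
# psql -d your_db_name -f defer_demo.sql   # placeholder database name
```

Foreign keys created by a dump are NOT DEFERRABLE by default, so for a restore it is usually simpler to drop the constraints and re-add them after the data load, as suggested above.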