Copy database performance issue
From | Steve
Subject | Copy database performance issue
Date |
Msg-id | Pine.GSO.4.64.0610231734220.3930@kingcheetah.tanabi.org
Responses | Re: Copy database performance issue
          | Re: Copy database performance issue
List | pgsql-performance
Hello there;

I've got an application that has to copy an existing database to a new database on the same machine. I used to do this with a pg_dump command piped to psql; however, the database is 18 GB on disk and this takes a LONG time. So I read up, found some things in this list's archives, and learned that I can use createdb --template=old_database_name to do the copy much faster, since nobody is accessing the database while the copy happens.

The problem is, it's still too slow. My question is: is there any way I can use 'cp' or something similar to copy the data, and THEN, after that's done, modify the database system files/system tables to recognize the copied database?

For what it's worth, I've got fsync turned off, and I've read every tuning guide out there, so my settings are probably pretty good. It's a Solaris 10 machine (V440, 2 processors, 4 Ultra320 drives, 8 GB RAM), and here are the relevant settings:

shared_buffers = 300000
work_mem = 102400
maintenance_work_mem = 1024000
bgwriter_lru_maxpages = 0
bgwriter_lru_percent = 0
fsync = off
wal_buffers = 128
checkpoint_segments = 64

Thank you!

Steve Conley
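For readers following along, here is a minimal sketch of the two copy methods described above. The database names olddb and newdb are placeholders, since the actual names aren't given in the post; the template approach only works while no other sessions are connected to the source database.

    # Dump-and-restore copy: portable, but slow for an 18 GB database
    createdb newdb
    pg_dump olddb | psql newdb

    # Template-based copy: much faster, but requires that nobody else
    # is connected to olddb while it runs
    createdb --template=olddb newdb

    # SQL equivalent of the template-based copy
    psql -c "CREATE DATABASE newdb TEMPLATE olddb;"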