On 15/01/04 15:52, Zheng, Wendy wrote:
> Hi PgSql experts,
>
> I'm working on a task to move tables from one database to another within
> the same cluster on the same server. I tried the pg_dump and pg_restore
> commands (with the -Fc option) and noticed that it takes around half an
> hour, which is not acceptable. Then I tried moving the data files directly:
> I created the same tables in the new DB, found the directory that stores
> the data files, searched for the data files by table name, and moved the
> corresponding data files from the old DB to the new one. This is very
> quick, and the DB still seems to work. But I am concerned: will there be
> any other problems if I do this?
Yes, you'll experience massive data corruption. You can't just manually copy
data files around, even though it might seem to work.
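The files on disk are only meaningful in combination with the owning
database's system catalogs (pg_class, transaction status and so on), none
of which move with them. You can inspect the mapping yourself
('some_table' is a placeholder):

    -- A data file is named after pg_class.relfilenode, not after the
    -- table name, and is interpreted through that database's catalogs:
    SELECT pg_relation_filepath('some_table');
    -- e.g. base/16384/16402  (database directory / relfilenode)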
> Another workaround is to access the old DB from the new DB with dblink
> (I created a view using dblink and query it as if the table were in the
> new DB). But we soon noticed that even though we specify the criteria in
> the SELECT command, dblink still retrieves all the records first and only
> then applies the search criteria. This results in poor performance. Do
> you have any better ideas for how I can handle this?
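That is expected behaviour: dblink() runs the given query on the remote
side exactly as written, ships the whole result set back, and only then
is the local WHERE clause applied. A sketch of the pattern you describe
(connection, table and column names are placeholders):

    CREATE VIEW old_table_v AS
        SELECT *
        FROM dblink('dbname=old_db',
                    'SELECT id, data FROM old_table')
             AS t(id integer, data text);

    -- The filter below is applied locally, after every row of
    -- old_table has already been fetched from old_db:
    SELECT * FROM old_table_v WHERE id = 42;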
Which PostgreSQL version are you using? If it's 9.3 or later, you can use
a foreign data wrapper (postgres_fdw) to access data in another database
(including on the same cluster).
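A minimal sketch of the postgres_fdw setup, reusing the placeholder names
from the dblink example above (server name, user mapping and credentials
are illustrative, not prescriptive):

    CREATE EXTENSION postgres_fdw;

    CREATE SERVER old_db_server
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'localhost', dbname 'old_db');

    CREATE USER MAPPING FOR CURRENT_USER
        SERVER old_db_server
        OPTIONS (user 'wendy', password 'secret');

    CREATE FOREIGN TABLE old_table (
        id   integer,
        data text
    ) SERVER old_db_server
      OPTIONS (schema_name 'public', table_name 'old_table');

    -- Unlike the dblink view, the planner can push this WHERE clause
    -- down to the remote server, so only matching rows are fetched:
    SELECT * FROM old_table WHERE id = 42;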
Regards
Ian Barwick
--
Ian Barwick http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, RemoteDBA, Training & Services