Re: Any Good Way To Do Sync DB's?
From: Gordan Bobic
Subject: Re: Any Good Way To Do Sync DB's?
Date:
Msg-id: Pine.LNX.4.33.0110130527210.28869-100000@sentinel.bobich.net
In reply to: Re: Any Good Way To Do Sync DB's? (Doug McNaught <doug@wireboard.com>)
Responses: Re: Any Good Way To Do Sync DB's?
List: pgsql-general
On 12 Oct 2001, Doug McNaught wrote:

> Joseph Koenig <joe@jwebmedia.com> writes:
>
> > I have a project where a client has products stored in a large Progress
> > DB on an NT server. The web server is a FreeBSD box, though, and the
> > client wants to avoid the $5,500 license for unlimited connections via
> > the OpenLink software, and would like to take advantage of the 'free'
> > non-expiring 2-connection (concurrent) license. This wouldn't be a huge
> > problem, but the DB can easily reach 1 million records. Is there any
> > good way to pull this data out of Progress and get it into Postgres?
> > This is way too large a DB to do a "SELECT * FROM table" and do an
> > insert for each row. Any brilliant ideas? Thanks,
>
> Probably the best thing to do is to export the data from Progress in a
> format that the PostgreSQL COPY command can read. See the docs for
> details.

I'm going to have to rant now. The "dump" and "restore" which use the COPY
method are actually totally useless for large databases. The reason is
simple: copying a 4 GB table with 40M rows requires over 40 GB of
temporary scratch space, due to the WAL temp files. That sounds totally
silly. Why doesn't pg_dump insert commits every 1000 rows or so???
(Rough sketches of both a COPY-based load and a batched-commit loop follow
below.)

Cheers,

Gordan
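To make Doug's suggestion concrete, here is a minimal sketch of a COPY-based
load, written in Python with psycopg2 purely for illustration. It assumes the
Progress data has already been exported to a tab-delimited flat file; the
table name "products", its columns, the file path, and the connection string
are all hypothetical.

    # A rough sketch, not the actual tool: bulk-load a tab-delimited export
    # into PostgreSQL via COPY. Table name, columns, file path, and DSN are
    # all hypothetical.
    import psycopg2

    conn = psycopg2.connect("dbname=webstore")  # hypothetical DSN
    cur = conn.cursor()

    with open("/tmp/products.tsv") as f:
        # copy_from streams the file through COPY ... FROM STDIN, so rows
        # are loaded in bulk instead of one INSERT round trip per row.
        cur.copy_from(f, "products", sep="\t")

    conn.commit()
    cur.close()
    conn.close()

For a million-plus rows, streaming the file through COPY like this is far
faster than a per-row INSERT loop, which is the point of Doug's advice.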
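And a minimal sketch of the batched-commit idea from the rant: commit every
1000 rows instead of holding one enormous transaction. The row source
rows_from_progress(), the table, and its columns are hypothetical
placeholders.

    # A rough sketch of committing every 1000 rows during a bulk load.
    # rows_from_progress() is a hypothetical stand-in for whatever reads
    # rows out of the Progress export.
    import psycopg2

    BATCH = 1000

    def rows_from_progress():
        """Hypothetical source: yields (sku, name, price) tuples."""
        yield ("SKU-0001", "Example widget", 9.99)

    conn = psycopg2.connect("dbname=webstore")  # hypothetical DSN
    cur = conn.cursor()

    for i, row in enumerate(rows_from_progress(), start=1):
        cur.execute(
            "INSERT INTO products (sku, name, price) VALUES (%s, %s, %s)",
            row,
        )
        if i % BATCH == 0:
            conn.commit()  # bound how much any one transaction holds

    conn.commit()  # flush the final partial batch
    cur.close()
    conn.close()

Smaller transactions trade some speed for bounded resource use per
transaction, which is the trade-off the post is asking pg_dump to make.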