Re: Re: pg_dump and LOs (another proposal)
From:        Tom Lane
Subject:     Re: Re: pg_dump and LOs (another proposal)
Date:
Msg-id:      3345.962816793@sss.pgh.pa.us
In reply to: Re: Re: pg_dump and LOs (another proposal) (Philip Warner <pjw@rhyme.com.au>)
Responses:   Re[2]: Re: pg_dump and LOs (another proposal)
             Re: Re: pg_dump and LOs (another proposal)
             Re: Re: pg_dump and LOs (another proposal)
List:        pgsql-hackers
Philip Warner <pjw@rhyme.com.au> writes:
> The thing that bugs me about this is that for 30,000 rows, I do 30,000
> updates after the restore. It seems *really* inefficient, not to mention
> slow.

Shouldn't be a problem. For one thing, I can assure you there are no
databases with 30,000 LOs in them ;-) --- the existing two-tables-per-LO
infrastructure won't support it. (I think Denis Perchine has started to
work on a replacement one-table-for-all-LOs solution, btw.)

Possibly more to the point, there's no reason for pg_restore to grovel
through the individual rows for itself. Having identified a column that
contains (or might contain) LO OIDs, you can do something like

	UPDATE userTable SET oidcolumn = tmptable.newLOoid
	FROM tmptable
	WHERE userTable.oidcolumn = tmptable.oldLOoid;

which should be quick enough, especially given indexes. (This fix-up
pass is sketched more fully below.)

> I'll also have to modify pg_restore to talk to the database directly (for
> lo import). As a result I will probably send the entire script directly
> from within pg_restore. Do you know if comment parsing ('--') is done in
> the backend, or psql?

Both, I believe --- psql discards comments, but so will the backend.
Not sure you really need to abandon use of psql, though.

			regards, tom lane
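A minimal sketch of that fix-up pass, assuming the restore imports each
archived blob with the server-side lo_import() function and records the
old-to-new OID mapping in a temporary table. The table and column names
(lo_xref, oldLOoid, newLOoid), the example OID, and the file path are
hypothetical, not pg_restore's actual schema:

	-- Mapping table (illustrative names): the OID each blob had in the
	-- archive, and the OID it received on re-import into the target DB.
	CREATE TEMP TABLE lo_xref (oldLOoid oid, newLOoid oid);

	-- For each blob in the archive: re-import it and record the pair.
	-- lo_import() reads a file visible to the backend and returns the
	-- newly assigned large-object OID.
	INSERT INTO lo_xref
	    SELECT 16789, lo_import('/tmp/blob_16789.dat');

	-- Index the old OIDs so the join in the fix-up UPDATE stays cheap.
	CREATE INDEX lo_xref_old_idx ON lo_xref (oldLOoid);

	-- One UPDATE per column known (or suspected) to hold LO OIDs remaps
	-- all rows in a single pass, instead of one UPDATE per row.
	UPDATE userTable SET oidcolumn = lo_xref.newLOoid
	FROM lo_xref
	WHERE userTable.oidcolumn = lo_xref.oldLOoid;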