Re: 7.2.3?

From:        Alvaro Herrera
Subject:     Re: 7.2.3?
Date:
Msg-id:      Pine.LNX.4.44.0209290022370.5621-100000@alvh.no-ip.org
In reply to: Re: 7.2.3? (Bruce Momjian <pgman@candle.pha.pa.us>)
Responses:   Re: 7.2.3?
List:        pgsql-hackers
Bruce Momjian said:
> Justin Clift wrote:
> > Alvaro Herrera wrote:
> > As a "simple for the user approach", would it be
> > too-difficult-to-bother-with to add to the postmaster an ability to
> > start up with the data files from the previous version, for it to
> > recognise an old data format automatically, then for it to do the
> > conversion process of the old data format to the new one before going
> > any further?
>
> Yes, we could, but if we are going to do that, we may as well just
> automate the dump/reload.

I don't think that's an acceptable solution.  It requires too much free
disk space and too much time.  On-line upgrading, meaning altering the
databases on a table-by-table basis (or even page-by-page), solves both
problems: binary conversion surely takes less time than converting to a
text representation and parsing it back into binary.

I think a converting postmaster would be a waste, because it's unneeded
functionality 99.999% of the time.  I'm leaning towards an external
program doing the conversion, with the backend simply aborting if it
finds old or in-conversion data.  The converter should be able to detect
that it has aborted and resume the conversion.

What would that converter need:

- the old system catalog (including user-defined data)
- the new system catalog (ditto, including the schema)
- the storage manager subsystem

I think that should be enough for converting table files.  I'd like to
experiment with something like this when I have some free time.  Maybe
next year...

--
Alvaro Herrera (<alvherre[a]atentus.com>)
"I think my standards have lowered enough that now I think 'good design'
is when the page doesn't irritate the living fuck out of me." (JWZ)
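[Editor's note: a minimal C sketch of the restartable, page-by-page external
converter described above.  Everything here is assumed for illustration: the
PageStub header, the OLD_PAGE_VERSION/NEW_PAGE_VERSION constants, and the
idea of a per-page version marker are invented stand-ins.  A real converter
would read the old and new system catalogs and go through the storage
manager, as the message lists, rather than doing raw file I/O on heap files.]

    /*
     * Hypothetical sketch: convert table files in place, one 8 KB page at
     * a time.  Pages that already carry the new version marker are skipped,
     * so rerunning the program after an abort resumes where it left off.
     */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    #define BLCKSZ 8192            /* PostgreSQL block size */
    #define OLD_PAGE_VERSION 0     /* hypothetical: old page layout */
    #define NEW_PAGE_VERSION 1     /* hypothetical: new page layout */

    /* Hypothetical minimal page header, just enough to carry a version flag. */
    typedef struct PageStub
    {
        uint16_t pd_lower;
        uint16_t pd_upper;
        uint16_t pd_special;
        uint16_t pd_version;       /* lets the converter detect old pages */
    } PageStub;

    static int
    convert_table_file(const char *path)
    {
        char   page[BLCKSZ];
        FILE  *fp = fopen(path, "r+b");
        long   blkno = 0;

        if (fp == NULL)
        {
            perror(path);
            return -1;
        }

        while (fread(page, 1, BLCKSZ, fp) == BLCKSZ)
        {
            PageStub *hdr = (PageStub *) page;

            if (hdr->pd_version == OLD_PAGE_VERSION)
            {
                /*
                 * The real work would go here: rewriting tuple headers,
                 * adjusting the line-pointer array, etc., driven by the old
                 * and new system catalogs.  This sketch only flips the
                 * version marker.
                 */
                hdr->pd_version = NEW_PAGE_VERSION;

                /* Write the converted page back in place before moving on. */
                if (fseek(fp, blkno * BLCKSZ, SEEK_SET) != 0 ||
                    fwrite(page, 1, BLCKSZ, fp) != BLCKSZ)
                {
                    perror(path);
                    fclose(fp);
                    return -1;
                }
                fflush(fp);
                fseek(fp, (blkno + 1) * BLCKSZ, SEEK_SET);
            }
            blkno++;
        }

        fclose(fp);
        return 0;
    }

    int
    main(int argc, char **argv)
    {
        /* Usage (hypothetical): convert_heap <table file> ... */
        for (int i = 1; i < argc; i++)
            if (convert_table_file(argv[i]) != 0)
                return EXIT_FAILURE;
        return EXIT_SUCCESS;
    }

[The per-page version flag is what makes the scheme restartable and lets a
backend refuse to touch old or in-conversion data, matching the behaviour
proposed in the message; where such a flag would actually live in the page
format is left open here.]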