Re: Wiki Page Draft for upcoming release
From        | David Johnston
Subject     | Re: Wiki Page Draft for upcoming release
Date        |
Msg-id      | 1395107975236-5796503.post@n5.nabble.com
In reply to | Wiki Page Draft for upcoming release (Josh Berkus <josh@agliodbs.com>)
List        | pgsql-hackers
I sent a post to -general with a much more detailed brain dump of my current understanding of this topic. The main point I'm addressing here is how to recover from this problem.

Since one symptom of the problem is that pg_dump/restore can itself fail, saying that (in some instances) the only viable recovery mechanism is pg_dump/restore means that someone so afflicted is going to lose data since their last good dump - if they still have one. However, if the underlying table does not actually contain any duplicate data, then such a dump/restore cycle (or, I would think, a REINDEX or a DROP/CREATE INDEX sequence) should resolve the problem. If there is duplicate data, the user needs to - and can - identify and remove the offending records so that subsequent actions do not fail with a duplicate-key error.

If this is true, then providing a query (or queries) that can identify the problem records and delete them from the table - along with any staging that is necessary (like first dropping the affected indexes, if applicable) - would be a nice addition.

David J.
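A minimal sketch of the kind of identify-and-delete queries I have in mind, assuming a hypothetical table named accounts whose email column is supposed to be unique (both names are placeholders, not anything from the wiki draft):

    -- Step 1: find the keys that violate the intended uniqueness.
    -- Dropping (or otherwise not relying on) the affected unique index
    -- first, as suggested above, ensures this reads the heap rather than
    -- the possibly corrupt index.
    SELECT email, count(*) AS copies
    FROM accounts
    GROUP BY email
    HAVING count(*) > 1;

    -- Step 2: delete all but one physical row for each duplicated key.
    -- row_number() with no ORDER BY keeps an arbitrary survivor per key;
    -- add an ORDER BY on a real column if one copy is preferable.
    DELETE FROM accounts
    WHERE ctid IN (
        SELECT ctid
        FROM (SELECT ctid,
                     row_number() OVER (PARTITION BY email) AS rn
              FROM accounts) AS numbered
        WHERE rn > 1
    );

Inspecting the Step 1 output (and running the DELETE inside a transaction) before committing anything would obviously be prudent; the wiki page could spell that out as part of the staging.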