Re: Losing data from Postgres
From:        Jean-Marc Pigeon
Subject:     Re: Losing data from Postgres
Date:
Msg-id:      200011151602.eAFG28J25550@new-york.safe.ca
In reply to: Losing data from Postgres (Paul Breen <pbreen@computerpark.co.uk>)
List:        pgsql-admin
Bonjour Paul Breen,

> Hello everyone,
>
> Can anyone help us?
>
> We are using Postgres in a hotspare configuration, that is, we have 2
> separate boxes both running identical versions of Postgres and everytime
> we insert|update|delete from the database we write to both boxes (at the
> application level). All communications to the databases are in
> transaction blocks and if we cannot commit to both databases then we
> rollback.
[...]
> Originally we were vacuuming twice a day but because some of the reports
> we produce regularly were taking too long as the database grew, we added
> multiple indexes onto the key tables and began vacuuming every hour. It's
> only after doing this that we noticed the data loss - don't know if this
> is coincidental or not. Yesterday we went back to vacuuming only twice a
> day.

We found something similar in our application. It seems to be a vacuum+index problem: after the vacuum, the index no longer refers to ALL of the data. If I am right, drop the index and create it again, and your data should be found again...

On our side, before running vacuum we now drop the index, do the vacuum, then rebuild the index. The overall time is the same as doing a 'simple' vacuum.

Hoping that helps...

A bientot
==========================================================================
Jean-Marc Pigeon                    Internet: Jean-Marc.Pigeon@safe.ca
SAFE Inc.                Phone: (514) 493-4280    Fax: (514) 493-1946
REGULUS, a real time accounting/billing package for ISP
REGULUS' Home base <"http://www.regulus.safe.ca">
==========================================================================
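[Editor's note: a minimal sketch of the drop/vacuum/rebuild sequence Jean-Marc describes, for reference. The table name "orders", the index name "orders_cust_idx", and the indexed column are hypothetical placeholders, not from the original thread.]

    -- drop the index before vacuuming
    DROP INDEX orders_cust_idx;

    -- vacuum (and analyze) the table; run outside a transaction block
    VACUUM ANALYZE orders;

    -- rebuild the index from the vacuumed table
    CREATE INDEX orders_cust_idx ON orders (customer_id);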