Re: [HACKERS] Index corruption
From | Adriaan Joubert
---|---
Subject | Re: [HACKERS] Index corruption
Date |
Msg-id | 3864BFBA.E5CF7D5C@albourne.com
In reply to | Index corruption (Adriaan Joubert <a.joubert@albourne.com>)
Responses | Re: [HACKERS] Index corruption
List | pgsql-hackers
> No, that's a new one AFAIK. I don't suppose you saved the state of your
> DB before rebuilding it? I'd like to try to reproduce the problem...

No, sorry. I got increasingly desperate as this was a production system and I was under a bit of pressure to get it back up. A day earlier I had had a complaint about the number of tuples in the index being incorrect. At the third attempt I managed to run vacuum over it without the backend crashing, and then it seemed to behave well. Next morning I ran vacuum again, and then I ended up with the endless file-creation loop.

Oh yes, to get it to vacuum I had to delete all my functions (pg_proc) and then reload them. I know that all my procedures are small enough not to break the 8K limit, as I used to have trouble with that. I tried the same trick, i.e. dropping and reloading my functions, but no luck. As most of what they do is to enforce referential integrity, Jan's foreign key stuff may solve a large part of the problem!

I had the system logging at debug level 3 and there was nothing in the logs.

Did anything get fixed in this area between 6.5.2 and 6.5.3? I.e. should I upgrade? I'd rather not just at the moment.

Merry Christmas!

Adriaan
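For anyone following along, the drop-and-reload workaround described above amounts to something like the sketch below. The function name ri_check_account and its body are hypothetical placeholders (the actual functions are not shown in this thread); the syntax follows the 6.5-era PL/pgSQL trigger style.

```sql
-- Hypothetical example: drop one of the referential-integrity trigger
-- functions from pg_proc, then recreate it from its original source.
DROP FUNCTION ri_check_account();

CREATE FUNCTION ri_check_account() RETURNS opaque AS '
BEGIN
    -- Reject rows whose account_id has no matching parent row.
    IF NOT EXISTS (SELECT 1 FROM account WHERE id = NEW.account_id) THEN
        RAISE EXCEPTION ''account % does not exist'', NEW.account_id;
    END IF;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

-- Then retry the vacuum that previously crashed the backend.
VACUUM ANALYZE;
```

With the declarative FOREIGN KEY support Jan was working on, most of these hand-written trigger functions could presumably be replaced by REFERENCES constraints on the tables themselves.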