heap_update is broken in current sources
From:    Tom Lane
Subject: heap_update is broken in current sources
Date:
Msg-id:  1490.978903505@sss.pgh.pa.us
Replies: Re: heap_update is broken in current sources
List:    pgsql-hackers
heap_update() currently ends with

    if (newbuf != buffer)
    {
        LockBuffer(newbuf, BUFFER_LOCK_UNLOCK);
        WriteBuffer(newbuf);
    }
    LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
    WriteBuffer(buffer);

    /* invalidate caches */
    RelationInvalidateHeapTuple(relation, &oldtup);
    RelationMark4RollbackHeapTuple(relation, newtup);

    return HeapTupleMayBeUpdated;

This is broken because WriteBuffer releases our refcounts on the buffer
pages that are holding the old and new tuples.  By the time
RelationInvalidateHeapTuple gets to do its thing, some other backend may
have swapped a new disk page into the shared buffer that oldtup points
at.  catcache.c will then be using the wrong data to compute the hash
index of the old tuple.  This will at minimum result in failure to
invalidate the old tuple out of our catcache (because we'll be searching
the wrong hashchains), and can lead to a flat-out crash or Assert
failure due to invalid data being fed to the hashing code.

I have seen several nonrepeatable failures in the parallel regress tests
in recent weeks, which I now believe are all traceable to this error.

I will commit a fix for this error shortly, and have recommended to Marc
that he re-roll the beta2 tarball before announcing it...

			regards, tom lane