Re: Reducing tuple overhead

From: Jim Nasby
Subject: Re: Reducing tuple overhead
Msg-id: 55415B4B.5000203@BlueTreble.com
In response to: Re: Reducing tuple overhead (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
On 4/29/15 12:18 PM, Robert Haas wrote:
> On Mon, Apr 27, 2015 at 5:01 PM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:
>> The problem with just having the value is that if *anything* changes between
>> how you evaluated the value when you created the index tuple and when you
>> evaluate it a second time you'll corrupt your index. This is actually an
>> incredibly easy problem to have; witness how we allowed indexing
>> timestamptz::date until very recently. That was clearly broken, but because
>> we never attempted to re-run the index expression to do vacuuming at least
>> we never corrupted the index itself.
>
> True.  But I guess what I don't understand is: how big a deal is this,
> really?  The "uncorrupted" index can still return wrong answers to
> queries.  The fact that you won't end up with index entries pointing
> to completely unrelated tuples is nice, but if index scans are missing
> tuples that they should see, aren't you still pretty hosed?

Maybe, maybe not. You could argue it's better to miss some rows than 
have completely unrelated ones.
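(To make the timestamptz::date example above concrete: the cast is not
immutable because the same instant maps to different calendar dates
depending on the session's TimeZone setting, so re-evaluating the index
expression later can yield a different key. A minimal sketch of that
timezone dependence, using Python's zoneinfo as a stand-in for the
server-side cast:)

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# One fixed instant in time (what a timestamptz stores internally).
instant = datetime(2015, 4, 30, 2, 0, tzinfo=timezone.utc)

# The "date" of that instant depends on which zone you evaluate it in,
# which is why indexing timestamptz::date was broken: the index key
# changes whenever the session timezone does.
date_utc = instant.astimezone(ZoneInfo("UTC")).date()
date_ny = instant.astimezone(ZoneInfo("America/New_York")).date()

print(date_utc)  # 2015-04-30
print(date_ny)   # 2015-04-29
```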

My recollection is that there are other scenarios where this causes 
problems, but that's from several years ago and I wasn't able to find 
anything on a quick search of the archives. I've wondered the same in 
the past and Tom had reasons it was bad, but perhaps they're overstated.
-- 
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
