Re: patch submission: truncate trailing nulls from heap rows to reduce the size of the null bitmap
From:        Tom Lane
Subject:     Re: patch submission: truncate trailing nulls from heap rows to reduce the size of the null bitmap
Date:
Msg-id:      20765.1334725033@sss.pgh.pa.us
In reply to: Re: patch submission: truncate trailing nulls from heap rows to reduce the size of the null bitmap (Jameison Martin <jameisonb@yahoo.com>)
Responses:   Re: patch submission: truncate trailing nulls from heap rows to reduce the size of the null bitmap
             Re: patch submission: truncate trailing nulls from heap rows to reduce the size of the null bitmap
             Re: patch submission: truncate trailing nulls from heap rows to reduce the size of the null bitmap
List:        pgsql-hackers
Jameison Martin <jameisonb@yahoo.com> writes:
> The use-case I'm targeting is a schema that has multiple tables with
> ~800 columns, most of which have only the first 50 or so values set.
> 800 columns would require 800 bits in a bitmap, which equates to 100
> bytes. With 8-byte alignment the row bitmap would take up 104 bytes
> with the current implementation. If only the first 50 or so columns
> are actually non-null, then the minimum bitmap size wouldn't need to
> be more than 8 bytes, which means the proposed change would save 96
> bytes. For the data set I have in mind, roughly 90% of the rows would
> fall into the category of needing only 8 bytes for the null bitmap.

I can't help thinking that (a) this is an incredibly narrow use-case,
and (b) you'd be well advised to rethink your schema design anyway.
There are a whole lot of inefficiencies associated with having that
many columns; the size of the null bitmap is probably one of the
smaller ones.  I don't really want to suggest an EAV design, but
perhaps some of the columns could be collapsed into arrays, or
something like that?

> What kind of test results would prove that this is a net win (or not
> a net loss) for typical cases?  Are you interested in some insert
> performance tests?  Also, how would you define a typical case (e.g.
> what kind of data shape)?

Hmm, well, most of the tables I've seen have fewer than 64 columns,
so that the probability of win is exactly zero.  Which would mean
that you've got to demonstrate that the added overhead is unmeasurably
small.  Which maybe you can do, because there's certainly plenty of
cycles involved in a tuple insertion, but we need to see the numbers.

I'd suggest an INSERT/SELECT into a temp table as probably stressing
tuple formation speed the most.  Or maybe you could write a C function
that just exercises heap_form_tuple followed by heap_freetuple in a
tight loop --- if there's no slowdown measurable in that context, then
a fortiori we don't have to worry about it in the real world.

			regards, tom lane
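A quick standalone sketch to check the figures quoted at the top of the
thread (this mirrors the email's framing of padding the bitmap itself
to 8 bytes; in the actual tuple header it is the whole header, bitmap
included, that gets MAXALIGN'ed via t_hoff, but the savings come out
the same):

    #include <stdio.h>

    /* Round a byte count up to the next multiple of 8, standing in
     * for MAXALIGN on a typical 64-bit build. */
    #define ALIGN8(n)  (((n) + 7) & ~7)

    int
    main(void)
    {
        int total_cols = 800;   /* the bitmap must cover every column */
        int used_cols = 50;     /* only the first 50 or so are non-null */

        int full = ALIGN8((total_cols + 7) / 8);        /* 100 -> 104 */
        int truncated = ALIGN8((used_cols + 7) / 8);    /* 7 -> 8 */

        printf("full: %d bytes, truncated: %d bytes, saved: %d bytes\n",
               full, truncated, full - truncated);
        return 0;
    }

This prints "full: 104 bytes, truncated: 8 bytes, saved: 96 bytes",
matching the arithmetic in the quoted message.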
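For the tight-loop idea, here is a minimal sketch of such a C function,
assuming a recent PostgreSQL source tree (the function name
tuple_form_bench, the 800-column/50-set data shape, and the int4 column
type are all illustrative; on older branches CreateTemplateTupleDesc
also took a hasoid argument and heap_form_tuple was declared in
access/htup.h rather than access/htup_details.h):

    #include "postgres.h"
    #include "fmgr.h"
    #include "access/htup_details.h"
    #include "access/tupdesc.h"
    #include "catalog/pg_type.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(tuple_form_bench);

    /* Form and free a heap tuple 'loops' times against an 800-column
     * descriptor with only the first 50 columns non-null. */
    Datum
    tuple_form_bench(PG_FUNCTION_ARGS)
    {
        int32       loops = PG_GETARG_INT32(0);
        int         natts = 800;
        TupleDesc   tupdesc;
        Datum      *values;
        bool       *isnull;
        int         i;

        /* Build a throwaway descriptor: 800 int4 columns. */
        tupdesc = CreateTemplateTupleDesc(natts);
        for (i = 0; i < natts; i++)
            TupleDescInitEntry(tupdesc, (AttrNumber) (i + 1), NULL,
                               INT4OID, -1, 0);

        values = (Datum *) palloc(natts * sizeof(Datum));
        isnull = (bool *) palloc(natts * sizeof(bool));
        for (i = 0; i < natts; i++)
        {
            values[i] = Int32GetDatum(i);
            isnull[i] = (i >= 50);  /* trailing 750 columns are null */
        }

        /* The tight loop: nearly all cycles go to tuple formation
         * and destruction, so any bitmap-truncation overhead should
         * show up here if it shows up anywhere. */
        for (i = 0; i < loops; i++)
        {
            HeapTuple   tup = heap_form_tuple(tupdesc, values, isnull);

            heap_freetuple(tup);
        }

        PG_RETURN_INT32(loops);
    }

Built as a loadable module and declared with CREATE FUNCTION ...
LANGUAGE C, timing a call like SELECT tuple_form_bench(10000000) under
psql's \timing on patched and unpatched builds would isolate the
heap_form_tuple cost from executor and I/O overhead, per the a fortiori
argument above.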