Re: [HACKERS] compression in LO and other fields
From | wieck@debis.com (Jan Wieck) |
---|---|
Subject | Re: [HACKERS] compression in LO and other fields |
Date | |
Msg-id | m11mSsA-0003kzC@orion.SAPserv.Hamburg.dsh.de |
In reply to | Re: [HACKERS] compression in LO and other fields (Hannu Krosing <hannu@tm.ee>) |
List | pgsql-hackers |
Ech - wrong key :-)

Hannu Krosing wrote:
> Jan Wieck wrote:
> >
> >     The next step would be tweaking the costs for sequential scans
> >     vs. index scans.
>
> I guess that the indexes would stay uncompressed ?

I'm sure about this. On a database of significant size, anyone
indexing a field with a possible size over 100 bytes is doing
something wrong (and only idiots go above 500 bytes). They are
IMPLEMENTING a not well thought out database DESIGN. A database
engine should support indices on bigger fields, but it's still a bad
schema and thus idiotic.

Currently, we don't check the size of indexed fields. The only
problems I've seen with it were some reports that huge PL functions
could not be created, because there was an unused (idiotic) index on
the prosrc attribute and they exceeded the 4K limit for index tuples.
I've removed that index already in the v7.0 tree.

The ?bug? in the btree code, failing to split a page if the key
values exceed 4K, is still there. But I don't think anyone really
cares about it. Thus, I assume there aren't many idiots out there,
and I don't expect that anyone would ever create an index on a
compressed data type.

?bug? -> The difference between a bug and a feature is DOCUMENTATION.
Thomas, would you please add this limit on index tuples to the docs,
so we have a new FEATURE to tell about in the v7.0 announcement?

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#========================================= wieck@debis.com (Jan Wieck) #
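For illustration only, a minimal SQL sketch of the kind of index being warned against above. The table and column names are hypothetical, md5() is used simply as a convenient built-in digest, and the exact size at which an oversized key is rejected depends on the PostgreSQL version (the 4K figure applies to the v7.0-era btree code discussed here):

```sql
-- Hypothetical schema: "body" can easily hold values far larger than
-- what fits into a single btree index tuple.
CREATE TABLE documents (
    id   serial PRIMARY KEY,
    body text
);

-- Creating the index succeeds, but inserting a row whose "body" value
-- exceeds the per-tuple index limit fails (4K in the code discussed
-- above; still well under a full page on later releases).
CREATE INDEX documents_body_idx ON documents (body);

-- A better-thought-out schema indexes a fixed-size digest instead,
-- which keeps index tuples small while still supporting equality
-- lookups on the large column.
CREATE INDEX documents_body_md5_idx ON documents (md5(body));
```

With the digest index, an equality lookup can be written as `WHERE md5(body) = md5($1) AND body = $1`, so the index narrows the candidate rows and the second condition filters out any hash collisions.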