Re: [HACKERS] Storing rows bigger than one block
From | Mattias Kregert
---|---
Subject | Re: [HACKERS] Storing rows bigger than one block
Date |
Msg-id | 34BA5077.1A880D21@algonet.se
In reply to | Re: [HACKERS] varchar/char size (darrenk@insightdist.com (Darren King))
List | pgsql-hackers
Darren King wrote:
> > A related question: Is it possible to store tuples over more than one
> > block? Would it be possible to split a big TEXT into multiple blocks?
>
> Possible, but would cut the access speed to (1 / # blocks), no?

For "big" (multiple-block) rows, maybe. Consecutive blocks should be
buffered by the disk or the OS, so I don't think the difference would be
big, or even noticeable.

> There is a var in the tuple header in 6.2.1, t_chain, that has since been
> removed for 6.3. I think its original purpose was with time-travel,
> _but_, if we go with a ROWID instead of an oid in the future, this could
> be put back in the header and would be the actual address of the next
> block in the chain.
>
> Oracle has this concept of chained rows. It is how they implement all
> of their LONG* types and also handle rows of normal types that are
> larger than the block size.

Yes! I can't see why PostgreSQL should not be able to store rows bigger
than one block. I have seen people referring to this limitation every now
and then, but I don't understand why it has to be that way. Is this
something fundamental to PostgreSQL?

/* m */
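To make the chained-row idea concrete, here is a minimal toy sketch in C. It is
not the real PostgreSQL heap tuple layout or storage manager API, just an
illustration of the mechanism Darren describes: each block holds part of the
row plus a t_chain-style pointer to the next block, and the reader follows the
chain until the whole row is reassembled. All names (Block, store_chained_row,
fetch_chained_row) and sizes are made up for the example.

/*
 * Toy model of chained rows: a row larger than one block is split across
 * several blocks, each carrying the index of the next block in the chain.
 */
#include <stdio.h>
#include <string.h>

#define BLOCK_DATA_SIZE 8          /* tiny on purpose, to force chaining */
#define INVALID_BLOCK   (-1)
#define N_BLOCKS        16

typedef struct
{
    int  t_len;                    /* bytes of row data in this block          */
    int  t_chain;                  /* next block of this row, or INVALID_BLOCK */
    char data[BLOCK_DATA_SIZE];
} Block;

static Block disk[N_BLOCKS];       /* stand-in for the relation's blocks */

/* Split a row across as many blocks as needed; return its first block. */
static int
store_chained_row(const char *row, int len)
{
    static int next_free = 0;
    int        first = next_free;

    while (len > 0)
    {
        Block *b = &disk[next_free++];
        int    chunk = len < BLOCK_DATA_SIZE ? len : BLOCK_DATA_SIZE;

        memcpy(b->data, row, chunk);
        b->t_len = chunk;
        b->t_chain = (len > chunk) ? next_free : INVALID_BLOCK;

        row += chunk;
        len -= chunk;
    }
    return first;
}

/* Reassemble a chained row by following t_chain from its first block. */
static int
fetch_chained_row(int first, char *out)
{
    int total = 0;

    for (int blk = first; blk != INVALID_BLOCK; blk = disk[blk].t_chain)
    {
        memcpy(out + total, disk[blk].data, disk[blk].t_len);
        total += disk[blk].t_len;
    }
    return total;
}

int
main(void)
{
    const char *row = "a TEXT value longer than one block";
    char        out[128];
    int         first = store_chained_row(row, (int) strlen(row));
    int         len = fetch_chained_row(first, out);

    printf("%.*s\n", len, out);    /* prints the reassembled row */
    return 0;
}

The point about access speed is visible here too: fetching a row that spans
N blocks means N block reads, but when the blocks are allocated consecutively
the extra reads should mostly be absorbed by the disk or OS cache.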