Re[2]: [HACKERS] Fwd: Joins and links
From | Leon |
---|---|
Subject | Re[2]: [HACKERS] Fwd: Joins and links |
Date | |
Msg-id | 2965.990705@udmnet.ru |
In reply to | Re: [HACKERS] Fwd: Joins and links (Tom Lane <tgl@sss.pgh.pa.us>) |
Replies |
Re: Re[2]: [HACKERS] Fwd: Joins and links
Re: Re[2]: [HACKERS] Fwd: Joins and links |
List | pgsql-hackers |
Hello Tom,

Monday, July 05, 1999 you wrote:

T> If we did have such a concept, the speed penalties for supporting
T> hard links from one tuple to another would be enormous. Every time
T> you change a tuple, you'd have to try to figure out what other tuples
T> reference it, and update them all.

I'm afraid that's mainly because fields in Postgres have variable
length, so after an update the tuple moves to the end of the table. Am I
right? In that case such referencing could be done only for tables with
fixed-width rows, whose updates can naturally be done in place, without
moving. It is a small sacrifice, but it is worth it.

T> Finally, I'm not convinced that the results would be materially faster
T> than a standard mergejoin (assuming that you have indexes on both the
T> fields being joined) or hashjoin (in the case that one table is small
T> enough to be loaded into memory).

Consider this: no indices, no optimizer work, no index lookups -- no
nothing! Just the sequential number of the record multiplied by the
record size. Essentially three CPU operations: read, multiply, look up.
Can you see the gain now?

Best regards, Leon