Re: queries with lots of UNIONed relations
| From | Andy Colson |
|---|---|
| Subject | Re: queries with lots of UNIONed relations |
| Date | |
| Msg-id | 4D2F8103.7010805@squeakycode.net |
| In reply to | Re: queries with lots of UNIONed relations (Robert Haas <robertmhaas@gmail.com>) |
| Responses | Re: queries with lots of UNIONed relations |
| List | pgsql-performance |
On 1/13/2011 4:42 PM, Robert Haas wrote:
> On Thu, Jan 13, 2011 at 5:41 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>> On Thu, Jan 13, 2011 at 5:26 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> Robert Haas <robertmhaas@gmail.com> writes:
>>>> On Thu, Jan 13, 2011 at 3:12 PM, Jon Nelson <jnelson+pgsql@jamponi.net> wrote:
>>>>> I still think that having UNION do de-duplication of each contributory
>>>>> relation is a beneficial thing to consider -- especially if postgresql
>>>>> thinks the uniqueness is not very high.
>>>
>>>> This might be worth a TODO.
>>>
>>> I don't believe there is any case where hashing each individual relation
>>> is a win compared to hashing them all together. If the optimizer were
>>> smart enough to be considering the situation as a whole, it would always
>>> do the latter.
>>
>> You might be right, but I'm not sure. Suppose that there are 100
>> inheritance children, and each has 10,000 distinct values, but none of
>> them are common between the tables. In that situation, de-duplicating
>> each individual table requires a hash table that can hold 10,000
>> entries. But deduplicating everything at once requires a hash table
>> that can hold 1,000,000 entries.
>>
>> Or am I all wet?
>
> Yeah, I'm all wet, because you'd still have to re-de-duplicate at the
> end. But then why did the OP get a speedup? *scratches head*
>

Because it all fit in memory and didn't swap to disk?

-Andy
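[Editor's note: a minimal sketch of the scenario discussed above, for readers who want to try it themselves. The table names child1/child2 and the row counts are hypothetical stand-ins for the thread's "100 inheritance children with 10,000 distinct values each", not anything posted by the participants.]

```sql
-- Two inputs whose value ranges do not overlap, with duplicates inside each.
CREATE TABLE child1 (val integer);
CREATE TABLE child2 (val integer);
INSERT INTO child1 SELECT g % 10000          FROM generate_series(1, 100000) AS g;
INSERT INTO child2 SELECT 10000 + (g % 10000) FROM generate_series(1, 100000) AS g;
ANALYZE child1; ANALYZE child2;

-- Plain UNION: the planner de-duplicates the combined (appended) result in
-- one step, so the de-duplication structure must be sized for all distinct
-- values of every input together.
EXPLAIN ANALYZE
SELECT val FROM child1
UNION
SELECT val FROM child2;

-- De-duplicating each contributory relation first (the idea raised in the
-- thread): each inner DISTINCT only has to track one table's distinct
-- values, and the final UNION then works on already-reduced inputs.
EXPLAIN ANALYZE
SELECT val FROM (SELECT DISTINCT val FROM child1) AS a
UNION
SELECT val FROM (SELECT DISTINCT val FROM child2) AS b;
```

Comparing the EXPLAIN ANALYZE output of the two forms under different work_mem settings is one way to check Andy's suggestion: whether the observed speedup comes from the smaller per-input de-duplication staying in memory rather than spilling to disk.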