Re: DBT-3 with SF=20 got failed
From | Robert Haas |
---|---|
Subject | Re: DBT-3 with SF=20 got failed |
Date | |
Msg-id | CA+TgmoZ3rsBJNUq6MfaD-bHecH0t1v-CB5++1HwBOpvVSNF2KQ@mail.gmail.com |
In response to | Re: DBT-3 with SF=20 got failed (Tomas Vondra <tomas.vondra@2ndquadrant.com>) |
Responses | Re: DBT-3 with SF=20 got failed |
List | pgsql-hackers |
On Thu, Sep 24, 2015 at 12:40 PM, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
> There are two machines - one with 32GB of RAM and work_mem=2GB, the other
> one with 256GB of RAM and work_mem=16GB. The machines are hosting about the
> same data, just scaled accordingly (~8x more data on the large machine).
>
> Let's assume there's a significant over-estimate - we expect to get about
> 10x the actual number of tuples, and the hash table is expected to almost
> exactly fill work_mem. Using the 1:3 ratio (as in the query at the beginning
> of this thread) we'll use ~512MB and ~4GB for the buckets, and the rest is
> for entries.
>
> Thanks to the 10x over-estimate, ~64MB and 512MB would be enough for the
> buckets, so we're wasting ~448MB (13% of RAM) on the small machine and
> ~3.5GB (~1.3%) on the large machine.
>
> How does it make any sense to address the 1.3% and not the 13%?

One of us is confused, because from here it seems like 448MB is 1.3% of
32GB, not 13%.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
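For reference, a quick back-of-the-envelope check of the figures quoted above (not part of the original exchange). It assumes, per the quoted mail, that the bucket array gets 1/4 of work_mem (the 1:3 bucket-to-entry split) and that under the 10x over-estimate the buckets actually needed round to 1/8 of that, matching Tomas's ~64MB figure; it then expresses the wasted bucket space as a share of each machine's total RAM.

```python
# Sanity check of the numbers quoted above. Assumptions (taken from the
# quoted mail, not from PostgreSQL source): buckets get 1/4 of work_mem
# (the 1:3 split), and the buckets actually needed under the 10x
# over-estimate round to 1/8 of that (consistent with the ~64MB figure).

MB = 1
GB = 1024 * MB

for ram, work_mem in [(32 * GB, 2 * GB), (256 * GB, 16 * GB)]:
    buckets = work_mem // 4        # bucket array sized for the estimate
    needed = buckets // 8          # what the real tuple count would need
    wasted = buckets - needed
    print(f"RAM {ram // GB}GB: wasted {wasted}MB "
          f"= {100 * wasted / ram:.1f}% of RAM")
```

This prints ~448MB (1.4% of RAM) for the small machine and ~3584MB (1.4% of RAM) for the large one, i.e. both cases come out to roughly the same single-digit percentage of RAM, which is consistent with Robert's reading that 448MB is on the order of 1% of 32GB rather than 13%.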