Re: increasing effective_cache_size slows down join queries by a factor of 4000x
From | Tomas Vondra
Subject | Re: increasing effective_cache_size slows down join queries by a factor of 4000x
Date |
Msg-id | d41dcd64-3ce0-4e75-7690-8ea056c5c9f4@enterprisedb.com
In reply to | Re: increasing effective_cache_size slows down join queries by a factor of 4000x (Artyom Shaposhnikov <artyom@gmail.com>)
Responses | Re: increasing effective_cache_size slows down join queries by a factor of 4000x
List | pgsql-general
On 2/2/22 22:10, Artyom Shaposhnikov wrote:
> the row estimate became ~1000x smaller with the stat in place, so it
> looks like it grossly miscalculates the query plans without the stats
> for large tables representing M:M relations.

Well, if the estimates are significantly off (and 3 orders of magnitude certainly qualifies), then all bets are off. There's no magical option that'd fix planning in such conditions. Ultimately, fixing the estimates (e.g. by creating extended statistics) is the right "fix", as it gives the optimizer the information needed to pick the right plan.

> On Wed, Feb 2, 2022 at 11:47 AM Michael Lewis <mlewis@entrata.com> wrote:
>>
>> What does the row estimate look like on the scan of the data table with that statistic in place? Anytime the stats give a mis-estimate this far off, I wouldn't expect that plans would be optimal except by luck.
>>
>> Index Scan using data_pkey on data t (cost=0.57..21427806.53 rows=58785023 width=131) (actual time=0.024..0.482 rows=854 loops=1)
>> Index Cond: (id > 205284974)

It'd be interesting to see the plans without the LIMIT, as that makes the "actual" values low simply by terminating early.

regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
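A minimal sketch of the extended-statistics fix suggested above. The table and column names (`data`, `user_id`, `item_id`) are hypothetical stand-ins for the poster's M:M table, not taken from the thread:

```sql
-- Hypothetical M:M table; names are illustrative only.
-- Extended statistics let the planner account for correlation between
-- columns, which independent per-column statistics cannot capture.
CREATE STATISTICS data_mm_stats (ndistinct, dependencies)
    ON user_id, item_id FROM data;

-- The statistics object is only populated on the next ANALYZE.
ANALYZE data;

-- Compare the planner's row estimate ("rows=") against the actual count;
-- run without LIMIT so the "actual" values are not cut short by the
-- executor terminating early.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM data WHERE id > 205284974;
```

This requires PostgreSQL 10 or later (the `dependencies` and `ndistinct` statistics kinds) and a live database, so it is shown here as a DDL sketch rather than a runnable test.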