Re: large number dead tup - Postgres 9.5
From | Francisco Olarte |
---|---|
Subject | Re: large number dead tup - Postgres 9.5 |
Date | |
Msg-id | CA+bJJbxm0FD-1s9S6GSnJLO7hgnKAGShG+iXV53gghLQEXO+FQ@mail.gmail.com |
In reply to | large number dead tup - Postgres 9.5 (Patrick B <patrickbakerbr@gmail.com>) |
Responses | Re: large number dead tup - Postgres 9.5 |
List | pgsql-general |
Hi:

On Mon, Sep 12, 2016 at 1:17 AM, Patrick B <patrickbakerbr@gmail.com> wrote:

>> schemaname relname       n_live_tup n_dead_tup
>> ---------- ------------- ---------- ----------
>> public     parts         191623953  182477402

> Because of that the table is very slow...
> When I do a select on that table it doesn't use an index, for example:

> \d parts;
>> "index_parts_id" btree (company_id)
>> "index_parts_id_and_country" btree (company_id, country)

> explain select * from parts WHERE company_id = 12;
>> Seq Scan on parts (cost=0.00..6685241.40 rows=190478997 width=223)
>>   Filter: (company_id = 12)

You've already been directed to check that the table is really getting vacuumed/analyzed, but I'd like to point out that if the row-count estimates are nearly correct, that plan is good. It estimates fetching more than 99% of the table, and a seq scan easily beats an index scan when selecting that large a fraction of a table. Even accounting for dead tuples the query still touches about 50% of the table, and a seq scan is much faster PER TUPLE than an index scan; besides, for that big a fraction an index scan would likely touch every data page anyway, so reading all the pages sequentially and doing a quick filter is cheaper. (Two quick checks along these lines are sketched below.)

Francisco Olarte.
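A minimal sketch of the vacuum/analyze check mentioned above, using the standard pg_stat_user_tables view (the table name comes from the thread; nothing else here is from the original message):

-- When was the table last vacuumed / analyzed, and how many dead
-- tuples does the stats collector currently report for it?
SELECT relname,
       last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze,
       n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'parts';

-- If autovacuum never reached it, reclaim the dead tuples and refresh
-- the planner statistics by hand (VERBOSE reports what was removed):
VACUUM (VERBOSE, ANALYZE) parts;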
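And a minimal sketch for confirming that the seq scan is a cost-based choice rather than an unusable index, by penalizing seq scans for the current session only with the standard enable_seqscan planner setting:

-- Make seq scans look prohibitively expensive in this session; if the
-- planner now picks index_parts_id, the index is perfectly usable and
-- the seq scan was simply the cheaper plan for this selectivity.
SET enable_seqscan = off;
EXPLAIN SELECT * FROM parts WHERE company_id = 12;
RESET enable_seqscan;

If the forced index-scan plan shows a higher estimated cost than the original seq scan, the planner's choice was the right call for a predicate matching ~99% of the rows.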