Re: : Tracking Full Table Scans
From | Venkat Balaji |
---|---|
Subject | Re: : Tracking Full Table Scans |
Date | |
Msg-id | CAFrxt0jKLh3mHgJJ54D=Tdy=4ywTV4nbuEmt6Z0R9ZQhAJO5Ug@mail.gmail.com |
In reply to | Re: : Tracking Full Table Scans (Craig Ringer <ringerc@ringerc.id.au>) |
List | pgsql-performance |
Yes. I am looking for the justified full table scans.
If bigger tables are getting scanned, I would like to know the percentage of rows returned as output against the percentage of rows scanned.
If the query needs 80% of the rows as the output, then a full table scan is always better.
I believe there is a possibility for this in Postgres. I think we can get this using pg_stat_user_tables, pg_statio_user_tables and pg_stats.
I will post the calculation once I get it.
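Something along these lines might work (a sketch only, using the standard columns of pg_stat_user_tables; the exact calculation may turn out differently):

```sql
-- Rough estimate: what fraction of a table does an average
-- sequential scan read?
-- seq_scan / seq_tup_read come from the statistics collector;
-- n_live_tup is an estimated current row count.
SELECT relname,
       seq_scan,
       seq_tup_read,
       n_live_tup,
       CASE
         WHEN seq_scan > 0 AND n_live_tup > 0
         THEN round(100.0 * (seq_tup_read / seq_scan) / n_live_tup, 1)
         ELSE NULL
       END AS pct_of_table_per_seq_scan
FROM pg_stat_user_tables
ORDER BY seq_tup_read DESC;
```

Tables where pct_of_table_per_seq_scan is high would be the "justified" full scans; large tables with a low percentage would be the ones worth investigating.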
Thanks
VB
On Wed, Sep 28, 2011 at 6:25 AM, Craig Ringer <ringerc@ringerc.id.au> wrote:
On 09/28/2011 12:26 AM, Venkat Balaji wrote:
> Thanks a lot Kevin !!
> Yes. I intended to track full table scans first to ensure that only
> small tables or tables with very few pages are (as you said) getting
> fully scanned.

It can also be best to do a full table scan of a big table for some queries. If the query needs to touch all the data in a table - for example, for an aggregate - then the query will often complete fastest and with less disk use by using a sequential scan.
I guess what you'd really want to know is to find out about queries that do seqscans to match relatively small fractions of the total tuples scanned, ie low-selectivity seqscans. I'm not sure whether or not it's possible to gather this data with PostgreSQL's current level of stats detail.
--
Craig Ringer
In the pgsql-performance list, by date sent: