Re: Slow Query- Bad Row Estimate

From: Kevin Grittner
Subject: Re: Slow Query- Bad Row Estimate
Date:
Msg-id: 4CCAF8DC0200002500036FD5@gw.wicourts.gov
In reply to: Slow Query- Bad Row Estimate  ("Ozer, Pam" <pozer@automotive.com>)
List: pgsql-performance
"Ozer, Pam" <pozer@automotive.com> wrote:

> Is more information needed?

Table layouts of the tables involved (including indexes) would be
interesting.  A description of the machine would be useful,
including OS, CPUs, RAM, and disk system.
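
For the table layouts, the output of psql's \d command for each table
in the query is usually enough (with your actual table names, of
course):

  \d tablename

That shows the columns, types, and indexes in one shot.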

I know you said you might have trouble changing the config, but some
of these seem problematic.

> shared_buffers = 500MB
> effective_cache_size = 1000MB
> max_connections = 100
> temp_buffers = 100MB

So you will allow up to 10GB to be tied up in space reserved for
temporary tables, but only expect to cache 1GB of your database?
That hardly seems optimal.
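
That 10GB is just the worst case implied by your own settings:

  max_connections (100) x temp_buffers (100MB) = 10GB

temp_buffers memory is only allocated when a session actually uses
temporary tables, but once allocated it is kept until the session
ends.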

> work_mem = 100MB

That could be another 10GB or more in work memory at any moment, if
each connection was running a query which needed one work_mem
allocation.
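
Note that work_mem can also be set per session, so if only a few big
reporting queries really need 100MB you could keep the global setting
lower and bump it just for those, for example:

  SET work_mem = '100MB';

right before running the heavy query.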

> seq_page_cost = 1.0
> random_page_cost = 1.1
> cpu_tuple_cost = 0.1
> cpu_index_tuple_cost = 0.05
> cpu_operator_cost = 0.01

Those settings are OK if the active portion of the database is fully
cached.  Is it?
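
A rough way to check, at least for shared_buffers, is the hit ratio
in pg_stat_database:

  SELECT sum(blks_hit)::float8
           / nullif(sum(blks_hit) + sum(blks_read), 0) AS hit_ratio
    FROM pg_stat_database;

Keep in mind blks_read counts reads satisfied by the OS cache as well
as real disk reads, so this only tells part of the story.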

> default_statistics_target = 1000

If plan times get long with complex queries, you might want to back
that off; otherwise, OK.
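
If it does become an issue, you can keep the global target lower and
raise it only on the columns where the estimates are bad, e.g.:

  ALTER TABLE tablename ALTER COLUMN columnname SET STATISTICS 1000;
  ANALYZE tablename;

(using your real table and column names, of course).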

> autovacuum_max_workers = 1

That seems like a bad idea.  Allowing multiple workers helps reduce
bloat and improve statistics.  If autovacuum is affecting
performance, you would be better off tweaking the autovacuum cost
limits.
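
Something along these lines (values are only a starting point, not
tuned for your hardware) is usually a better trade-off:

  autovacuum_max_workers = 3           # the default; workers share the cost limit
  autovacuum_vacuum_cost_delay = 20ms  # raise this to throttle autovacuum I/O
  autovacuum_vacuum_cost_limit = 200   # or lower this; -1 means use vacuum_cost_limit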

-Kevin
