Re: Thousands of schemas and ANALYZE goes out of memory
From:        Jeff Janes
Subject:     Re: Thousands of schemas and ANALYZE goes out of memory
Date:
Msg-id:      CAMkU=1wLjAsmJNuB6ZObZmGHqi9jLbK6n1eSgnOc5J1-AUsvUA@mail.gmail.com
In reply to: Re: Thousands of schemas and ANALYZE goes out of memory (Jeff Janes <jeff.janes@gmail.com>)
Responses:   Re: Thousands of schemas and ANALYZE goes out of memory
List:        pgsql-general
On Tue, Oct 2, 2012 at 5:09 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
> I don't know how the transactionality of analyze works. I was
> surprised to find that I even could run it in an explicit transaction
> block; I thought it would behave like vacuum and create index
> concurrently in that regard.
>
> However, I think that that would not solve your problem. When I run
> analyze on each of 220,000 tiny tables by name within one session
> (using autocommit, so each in a transaction), it does run about 4
> times faster than just doing a database-wide vacuum which covers those
> same tables. (Maybe this is the lock/resource manager issue that has
> been fixed for 9.3?)

For the record, the culprit that causes "analyze;" of a database with a
large number of small objects to be quadratic in time is
"get_tabstat_entry", and it is not fixed for 9.3.

Cheers,

Jeff
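[Editor's note: a minimal sketch, not PostgreSQL source. It assumes, as the
post implies, that get_tabstat_entry looks up each relation's per-backend
stats entry by scanning all entries created so far in the session. Under
that assumption, analyzing N tables in one session does O(N^2) total work,
which is the quadratic behavior described above. Names here are
illustrative only.]

```python
def analyze_all(num_tables):
    """Simulate one session analyzing num_tables tables.

    Each table triggers a lookup in the session's table-stats list.
    A linear scan over all previously created entries means the total
    number of comparisons grows quadratically with the table count.
    """
    entries = []           # per-backend stats entries, in creation order
    comparisons = 0
    for oid in range(num_tables):
        for _entry in entries:   # linear scan: O(len(entries)) per lookup
            comparisons += 1     # miss on every prior entry (all oids unique)
        entries.append(oid)      # not found, so create a new entry
    return comparisons

# Total comparisons: 0 + 1 + ... + (n-1) = n*(n-1)/2, i.e. O(n^2).
print(analyze_all(1000))  # 499500
```

A hash-based lookup keyed on the relation OID would make each lookup O(1)
and the whole run linear, which is the usual fix for this access pattern.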