Re: Beginner optimization questions, esp. regarding Tsearch2
From | Richard Huxton |
---|---|
Subject | Re: Beginner optimization questions, esp. regarding Tsearch2 |
Date | |
Msg-id | 44DAFB2B.4070307@archonet.com |
In reply to | Beginner optimization questions, esp. regarding Tsearch2 configuration ("Carl Youngblood" <carl@youngbloods.org>) |
Replies | Re: Beginner optimization questions, esp. regarding Tsearch2 |
List | pgsql-performance |
Carl Youngblood wrote:
> - I noticed that there are six different postmaster daemons running.
> Only one of them is taking up a lot of RAM (1076m virtual and 584m
> resident). The second one is using 181m resident while the others are
> less than 20m each. Is it normal to have multiple postmaster
> processes?

You should have one master (postmaster) process and one backend per connection. PG is a classic multi-process server design.

> Even the biggest process doesn't seem to be using near as
> much RAM as I have on this machine. Is that bad? What percentage of
> my physical memory should I expect postgres to use for itself? How
> can I encourage it to cache more query results in memory?

OK - one of the key things with PostgreSQL is that it relies on the O.S. to cache its disk files, so allocating too much memory to PG can be counterproductive.

From your figures, you're allocating about 64MB to work_mem, which applies per sort, so a complex query could use several times that amount. If you don't have many concurrent queries, that might be what you want.

Also, you've allocated 1GB to shared_buffers, which is more than I'd use as a starting point.

You've only mentioned one main table with 100,000 rows, so presumably you can cache the entire DB in RAM. In that case, you'll want to increase effective_cache_size and reduce random_page_cost.

--
Richard Huxton
Archonet Ltd
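As a sketch of the kind of postgresql.conf adjustment being suggested here (the numbers are illustrative placeholders, not values taken from this thread, and the unit-suffix syntax assumes PostgreSQL 8.2 or later; older releases expect raw page/kB counts):

    # Illustrative postgresql.conf excerpt -- placeholder values, not recommendations
    shared_buffers = 256MB           # well under the 1GB above; leave RAM for the OS file cache
    work_mem = 16MB                  # applies per sort/hash, so one query may use several times this
    effective_cache_size = 2GB       # rough estimate of how much data the OS will keep cached
    random_page_cost = 2.0           # below the default 4.0 when the data set fits in memory

Note that changing shared_buffers requires a server restart, while the other three settings take effect on a configuration reload; SHOW work_mem; (and likewise for the others) confirms what the server is actually using.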