Re: [rfc] overhauling pgstat.stat

| From | Pavel Stehule |
|---|---|
| Subject | Re: [rfc] overhauling pgstat.stat |
| Date | |
| Msg-id | CAFj8pRDA_TT0N_LeZ=mtm4ZDRCDkhuYLQ21oDBYETtGrgpKazA@mail.gmail.com |
| In reply to | Re: [rfc] overhauling pgstat.stat (Tomas Vondra <tv@fuzzy.cz>) |
| Responses | Re: [rfc] overhauling pgstat.stat |
| List | pgsql-hackers |
>> we very successfully use a tmpfs volume for pgstat files (use a backport
>> of multiple statfiles from 9.3 to 9.1)
>
> It works quite well as long as you have the objects (tables, indexes,
> functions) spread across multiple databases. Once you have one database
> with a very large number of objects, tmpfs is not as effective.
>
> It's going to help with stats I/O, but it's not going to help with high
> CPU usage (you're reading and parsing the stat files over and over), and
> every rewrite creates a copy of the file. So if you have 400MB of stats,
> you will need 800MB of tmpfs + some slack (say, 200MB). That means you'll
> use ~1GB of tmpfs although 400MB would be just fine. And this 600MB won't
> be available for the page cache etc.
>
> OTOH, it's true that if you have that many objects, 600MB of RAM is not
> going to help you anyway.
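The sizing arithmetic quoted above can be sketched as follows (an illustration, not part of the original mail; the 2x factor comes from the rewrite keeping the old and new copy of the stat file alive at the same time):

```python
def tmpfs_needed_mb(stats_mb, slack_mb=200):
    """Peak tmpfs usage for pgstat files: the rewrite creates a full copy
    of the file before replacing the old one, so peak usage is roughly
    2x the stats size, plus some slack."""
    return 2 * stats_mb + slack_mb

total = tmpfs_needed_mb(400)   # 400MB of stats
overhead = total - 400         # memory beyond the stats themselves
print(total, overhead)         # 1000 600
```

So with 400MB of stats you provision ~1GB of tmpfs, and the extra ~600MB is RAM that is no longer available to the page cache, matching the numbers in the quoted text.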
And just an idea - could we use a database for storing these data? Unlogged tables could be used for that. A second idea - run a single bgworker as a persistent in-memory key-value database and hold the data in memory with some optimizations - using anti-caching and similar in-memory database features.
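To make the second idea concrete, here is a minimal sketch of the anti-caching pattern (purely illustrative Python, not PostgreSQL code; the class and its methods are hypothetical names): hot entries stay in an in-memory map in LRU order, cold entries are spilled to disk and faulted back in on access.

```python
import os
import pickle
import tempfile
from collections import OrderedDict

class AntiCacheStore:
    """Illustrative key-value store: keeps at most `hot_limit` entries in
    memory (LRU order); colder entries are spilled to per-key files on
    disk and reloaded transparently on access."""

    def __init__(self, hot_limit=2):
        self.hot_limit = hot_limit
        self.hot = OrderedDict()            # in-memory "hot" set
        self.cold_dir = tempfile.mkdtemp()  # spill area for evicted entries

    def _cold_path(self, key):
        return os.path.join(self.cold_dir, key)

    def put(self, key, value):
        self.hot[key] = value
        self.hot.move_to_end(key)
        while len(self.hot) > self.hot_limit:   # evict least-recently-used
            old_key, old_val = self.hot.popitem(last=False)
            with open(self._cold_path(old_key), "wb") as f:
                pickle.dump(old_val, f)

    def get(self, key):
        if key not in self.hot:                 # fault the entry back in
            with open(self._cold_path(key), "rb") as f:
                value = pickle.load(f)
            os.remove(self._cold_path(key))
            self.put(key, value)
        self.hot.move_to_end(key)
        return self.hot[key]

store = AntiCacheStore(hot_limit=2)
store.put("db1", {"xact_commit": 10})
store.put("db2", {"xact_commit": 20})
store.put("db3", {"xact_commit": 30})   # evicts "db1" to disk
print(store.get("db1"))                 # faulted back from the cold store
```

A real bgworker would of course speak the stats protocol and manage shared memory rather than pickled files; the point is only the hot/cold split that anti-caching systems use.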
Pavel
> Tomas
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers