Re: PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?
| From | Jeff Janes |
|---|---|
| Subject | Re: PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory? |
| Date | |
| Msg-id | CAMkU=1y2ea=ZLT-zqyQpNDuuDiNVuWw7xMgpt6R7oVTrHwHVVQ@mail.gmail.com |
| In reply to | PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory? (Stefan Keller <sfkeller@gmail.com>) |
| Responses | Re: [PERFORM] PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?; Re: PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory? |
| List | pgsql-performance |
On Sat, Feb 25, 2012 at 4:16 PM, Stefan Keller <sfkeller@gmail.com> wrote:
>
> I'd like to come back to the issue of using PG as an in-memory key-value database.
>
> As a reminder, it contains the table definition and queries indicated in
> the appendix [0]. There are 4 other tables of similar structure. There
> are indexes on each column. The tables contain around 10 million tuples.
> The database is "read-only"; it's completely updated every day. I don't
> expect more than 5 concurrent users at any time. A typical query looks
> like [1] and varies in an unforeseeable way (that's why hstore is used).
> EXPLAIN tells me that the indexes are used [2].
>
> The problem is that the initial queries are too slow - and there is no
> second chance. I do have to trash the buffer every night. There is
> enough main memory to hold all table contents.

Just that table, or the entire database?

> 1. How can I warm up or re-populate shared buffers of Postgres?

Instead, warm the OS cache. Then data will get transferred into the
postgres shared_buffers pool from the OS cache very quickly.

tar -c $PGDATA/base/ | wc -c

If you need to warm just one table, because the entire base directory
won't fit in the OS cache, then you need to do a bit more work to find
out which files to use (a sketch of one way to do that is at the end of
this message).

You might feel clever and try this instead:

tar -c /dev/null $PGDATA/base/ > /dev/null

But my tar program is too clever by half. It detects that it is writing
to /dev/null, and just does not actually read the data.

> 2. Are there any hints on how to tell Postgres to read in all table
> contents into memory?

I don't think so, at least not in core. I've wondered if it would make
sense to suppress the ring-buffer strategy when there are buffers on the
free list. That way a sequential scan would populate shared_buffers
after a restart. But it wouldn't help you get the indexes into cache.

Cheers,

Jeff
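
[Editor's note] For the "find out which files to use" step above, here is a
minimal, untested sketch. It assumes psql can connect as a superuser (so
current_setting('data_directory') is readable); 'mydb' and 'mytable' are
placeholder names, and each index you want warm needs its own
pg_relation_filepath() call:

    # Resolve the on-disk path of the table's main fork,
    # relative to the data directory (e.g. base/16384/16385).
    F=$(psql -At -d mydb -c "select current_setting('data_directory')
        || '/' || pg_relation_filepath('mytable')")

    # Read the file plus any extra segments (.1, .2, ...) into the OS cache;
    # stderr is discarded in case no extra segments exist.
    cat "$F" "$F".* > /dev/null 2>&1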