Re: Millions of tables
| From | Greg Spiegelberg |
|---|---|
| Subject | Re: Millions of tables |
| Date | |
| Msg-id | CAEtnbpWCSM81Sf2_DFQ6Xio9FfrkWuacELoRiYuLnJr2radDFw@mail.gmail.com |
| In reply to | Re: Millions of tables (Yves Dorfsman <yves@zioup.com>) |
| List | pgsql-performance |
Consider the problem though. Random access to trillions of records, with no guarantee any one will be fetched twice in a short time frame, nullifies the effectiveness of a cache unless the cache is enormous. If such a cache had to be that big, hundreds of TBs, I wouldn't be looking at on-disk storage options. :)
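The intuition above can be sketched with a scaled-down simulation: under uniform random access, the steady-state hit rate of a cache is roughly cache_size / keyspace_size, so a cache covering 1% of the records hits about 1% of the time no matter how the cache is warmed. The sizes below are hypothetical stand-ins, not figures from the thread.

```python
# Scaled-down sketch: cache hit rate under uniform random access.
# Keyspace and cache sizes are illustrative placeholders for "trillions
# of records" vs. a cache that covers only a small fraction of them.
import random

random.seed(42)             # reproducible run
keyspace = 10_000_000       # stand-in for a much larger record count
cache_capacity = 100_000    # cache covers 1% of the keyspace
cache = set()
hits = 0
trials = 200_000

for _ in range(trials):
    key = random.randrange(keyspace)
    if key in cache:
        hits += 1
    else:
        if len(cache) >= cache_capacity:
            cache.pop()     # evict an arbitrary entry once the cache is full
        cache.add(key)

rate = hits / trials
# Steady-state expectation is about cache_capacity / keyspace, i.e. ~1%.
print(f"hit rate: {rate:.4f}")
```

Scaling the same arithmetic up, a trillion-record keyspace would need a cache in the hundreds of TBs before the hit rate became meaningful, which is the point being made above.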
-Greg
On Mon, Sep 26, 2016 at 6:54 AM, Yves Dorfsman <yves@zioup.com> wrote:
Something that has not been talked about at all in this thread is caching. A bunch
of memcache servers in front of the DB should be able to help with the 30ms
constraint (it doesn't have to be memcache; any caching technology would do).
--
http://yves.zioup.com
gpg: 4096R/32B0F416
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance