Re: alternative compression algorithms?
| From | Tom Lane |
|---|---|
| Subject | Re: alternative compression algorithms? |
| Date | |
| Msg-id | 35661.1430347492@sss.pgh.pa.us |
| In reply to | Re: alternative compression algorithms? (Robert Haas <robertmhaas@gmail.com>) |
| Responses | Re: alternative compression algorithms? |
| List | pgsql-hackers |
Robert Haas <robertmhaas@gmail.com> writes:
> On Mon, Apr 20, 2015 at 9:03 AM, Tomas Vondra
> <tomas.vondra@2ndquadrant.com> wrote:
>> Sure, it's not an ultimate solution, but it might help a bit. I do have
>> other ideas how to optimize this, but in the planner every millisecond
>> counts. Looking at 'perf top' and seeing pglz_decompress() in top 3.

> I suggested years ago that we should not compress data in
> pg_statistic.  Tom shot that down, but I don't understand why.  It
> seems to me that when we know data is extremely frequently accessed,
> storing it uncompressed makes sense.

I've not been following this thread, but I do not think your argument
here holds any water.  pg_statistic entries are generally fetched via
the syscaches, and we fixed things years ago so that toasted tuple
entries are detoasted before insertion in syscache.  So I don't believe
that preventing on-disk compression would make for any significant
improvement, at least not after the first reference within a session.

Also, it's a very long way from "some pg_statistic entries are frequently
accessed" to "all pg_statistic entries are frequently accessed".

			regards, tom lane
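[Editor's note: for readers unfamiliar with the access path Tom describes, the following is a minimal sketch of how planner code typically fetches a pg_statistic row through the syscache. The function name `peek_at_stats` is hypothetical; the cache id and key usage follow the pattern of existing callers in the backend, and the comments restate Tom's point rather than document the internals authoritatively.]

```c
#include "postgres.h"

#include "access/htup.h"
#include "utils/syscache.h"

/*
 * Illustrative sketch: look up the pg_statistic row for (relid, attnum)
 * via the STATRELATTINH syscache.  Because toasted tuple entries are
 * detoasted before insertion into the syscache, the tuple returned here
 * involves no pglz_decompress() work on repeat lookups within a session,
 * which is the point made in the message above.
 */
static void
peek_at_stats(Oid relid, AttrNumber attnum)
{
	HeapTuple	statstup;

	statstup = SearchSysCache3(STATRELATTINH,
							   ObjectIdGetDatum(relid),
							   Int16GetDatum(attnum),
							   BoolGetDatum(false));	/* not inherited */
	if (!HeapTupleIsValid(statstup))
		return;					/* no statistics collected yet */

	/* ... examine the tuple, e.g. via get_attstatsslot() ... */

	ReleaseSysCache(statstup);
}
```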