Re: Page Checksums + Double Writes
From | Jeff Janes |
---|---|
Subject | Re: Page Checksums + Double Writes |
Date | |
Msg-id | CAMkU=1yDc_OK4Rs=YrK7cANQXSuYkaOezJ3LbiG_XVeNauE7TQ@mail.gmail.com |
In reply to | Re: Page Checksums + Double Writes (Robert Haas <robertmhaas@gmail.com>) |
Responses | Re: Page Checksums + Double Writes, Re: Page Checksums + Double Writes |
List | pgsql-hackers |
On 12/23/11, Robert Haas <robertmhaas@gmail.com> wrote:
> On Fri, Dec 23, 2011 at 11:14 AM, Kevin Grittner
> <Kevin.Grittner@wicourts.gov> wrote:
>> Thoughts?
>
> Those are good thoughts.
>
> Here's another random idea, which might be completely nuts. Maybe we
> could consider some kind of summarization of CLOG data, based on the
> idea that most transactions commit.

I had a perhaps crazier idea. Aren't CLOG pages older than the global xmin effectively read-only? Could backends that need them bypass locking and shared memory altogether?

> An obvious problem is that, if the abort rate is significantly
> different from zero, and especially if the aborts are randomly mixed
> in with commits rather than clustered together in small portions of
> the XID space, the CLOG rollup data would become useless. On the
> other hand, if you're doing 10k tps, you only need to have a window of
> a tenth of a second or so where everything commits in order to start
> getting some benefit, which doesn't seem like a stretch.

Could we get some major OLTP users to post their CLOG for analysis? I wouldn't think there would be many security/proprietary issues with CLOG data.

Cheers,

Jeff
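For readers following along: the rollup idea discussed above can be sketched in a few lines. This is purely an illustrative toy (not PostgreSQL code); the class name, block size, and recompute-on-write strategy are all invented for the example. The point is that a lookup which hits an "all committed" summary bit never has to consult the detailed (shared, lock-protected) CLOG page, and that a single abort in a block permanently disables that block's fast path.

```python
# Hypothetical sketch of CLOG rollup summarization: one summary bit
# per block of XIDs meaning "every transaction in this block committed".
# All names and the block size are invented for illustration.

BLOCK = 64  # XIDs summarized per rollup bit (arbitrary for the example)
COMMITTED, ABORTED = 1, 2

class ClogWithRollup:
    def __init__(self):
        self.status = {}         # xid -> status (the detailed "CLOG")
        self.all_committed = {}  # block index -> True iff whole block committed

    def set_status(self, xid, st):
        self.status[xid] = st
        blk = xid // BLOCK
        # Naive recompute of the block's rollup bit on every write;
        # a real implementation would maintain this incrementally.
        self.all_committed[blk] = all(
            self.status.get(x) == COMMITTED
            for x in range(blk * BLOCK, (blk + 1) * BLOCK)
        )

    def get_status(self, xid):
        # Fast path: a set rollup bit answers without touching the
        # detailed CLOG entry at all.
        if self.all_committed.get(xid // BLOCK):
            return COMMITTED
        return self.status[xid]

clog = ClogWithRollup()
for x in range(64):
    clog.set_status(x, COMMITTED)  # a "good" window: everything commits
clog.set_status(64, ABORTED)       # one abort poisons block 1's rollup bit

print(clog.get_status(10) == COMMITTED)  # served from the rollup bitmap
print(clog.get_status(64) == ABORTED)    # falls back to the detailed entry
```

This matches the abort-rate concern in the quoted text: blocks containing even one abort lose their summary bit, so the scheme only pays off when commits cluster, e.g. the tenth-of-a-second all-commit window mentioned for a 10k tps workload.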