Re: [HACKERS] Speed up Clog Access by increasing CLOG buffers
From | Amit Kapila
---|---
Subject | Re: [HACKERS] Speed up Clog Access by increasing CLOG buffers
Date |
Msg-id | CAA4eK1K0YdjL0A7kXGxgVHsh7fpE35MRmzPu9x8jdKAYPmi0mg@mail.gmail.com
In reply to | Re: [HACKERS] Speed up Clog Access by increasing CLOG buffers (Amit Kapila <amit.kapila16@gmail.com>)
List | pgsql-hackers
On Fri, Mar 10, 2017 at 11:43 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Fri, Mar 10, 2017 at 10:51 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>
>> Also, I see clam reported in green just now, so it's not 100%
>> reproducible :-(
>>
>
> Just to let you know that I think I have figured out the reason for the
> failure. If we run the regressions with the attached patch, it will make
> the regression tests fail consistently in the same way. The patch just
> makes all transaction status updates go via the group clog update
> mechanism. Now, the reason for the problem is that the patch relies on
> the XidCache in PGPROC for subtransactions when they have not overflowed,
> which is okay for Commits, but not for Rollback to Savepoint and
> Rollback. For Rollback to Savepoint, we just pass the particular
> (sub)-transaction ids to abort, but the group mechanism will abort all
> the sub-transactions in that top transaction. I am still analysing what
> could be the best way to fix this issue. I think there could be multiple
> ways to fix this problem. One way is that we can advertise the fact that
> the status update for the transaction involves subtransactions and then
> use the XidCache for actually processing the status update. The second is
> to advertise all the subtransaction ids for which the status needs to be
> updated, but I am sure that is not at all efficient, as it will consume a
> lot of memory. The last resort could be that we don't use the group clog
> update optimization when the transaction has sub-transactions.
>

On further analysis, I don't think the first way mentioned above can work
for Rollback To Savepoint, because it can pass just a subset of
sub-transactions, in which case we can never identify them by looking at
the subxids in PGPROC unless we advertise all such subxids. The case I am
talking about is something like:

Begin;
Savepoint one;
Insert ...
Savepoint two;
Insert ...
Savepoint three;
Insert ...
Rollback to Savepoint two;

Now, for Rollback to Savepoint two, we pass the transaction ids
corresponding to Savepoints three and two. So, I think we can apply this
optimization only for transactions that commit, which will anyway be the
most common use case. Another alternative, as mentioned above, is to do
this optimization only when there are no subtransactions involved. The two
attached patches implement these two approaches
(fix_clog_group_commit_opt_v1.patch - allow the optimization only for
commits; fix_clog_group_commit_opt_v2.patch - allow the optimization only
for transaction status updates that don't involve subxids). I think the
first approach is the better way to deal with this; let me know your
thoughts?

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
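[Editor's note] To make the two proposed gates easier to compare, here is a
minimal, illustrative C sketch. The function and enum names are hypothetical
and this is not the actual PostgreSQL clog.c code; it only shows the shape of
the condition each patch would place in front of the group clog-update path.

    /*
     * Illustrative sketch only -- hypothetical names, not PostgreSQL's
     * actual clog.c code.  Shows the two gates proposed by the v1 and v2
     * fix patches for the group clog-update optimization.
     */
    #include <stdbool.h>

    typedef enum XactStatus
    {
        XACT_STATUS_COMMITTED,
        XACT_STATUS_ABORTED
    } XactStatus;

    /*
     * v1 approach: only commits may use the group mechanism.  A commit
     * always covers the top transaction plus every subxid cached in the
     * backend's PGPROC, whereas ROLLBACK TO SAVEPOINT passes only a subset
     * of subxids that the group leader cannot reconstruct from the cache.
     */
    static bool
    group_update_allowed_v1(XactStatus status)
    {
        return status == XACT_STATUS_COMMITTED;
    }

    /*
     * v2 approach: the group mechanism is used only when no
     * subtransactions are involved, so the question of which subxids to
     * mark never arises.
     */
    static bool
    group_update_allowed_v2(int nsubxids)
    {
        return nsubxids == 0;
    }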
Attachments
fix_clog_group_commit_opt_v1.patch
fix_clog_group_commit_opt_v2.patch