Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
From | Marina Polyakova
---|---
Subject | Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
Date |
Msg-id | 453fa52de88477df2c4a2d82e09e461c@postgrespro.ru
In reply to | Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors (Marina Polyakova <m.polyakova@postgrespro.ru>)
Responses | Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
List | pgsql-hackers
Hello, hackers!

Here is the seventh version of the patch for error handling and retrying of transactions with serialization/deadlock failures in pgbench (based on commit a08dc711952081d63577fc182fcf955958f70add).

I added the option --max-tries-time, which is an implementation of Fabien Coelho's proposal in [1]: a transaction with a serialization or deadlock failure can be retried if the total time of all its tries is less than this limit (in ms). This option can be combined with the option --max-tries. If neither of them is used, failed transactions are not retried at all.

Also:

* When the first failure occurs in a transaction, it is always reported as a failure, since only after the remaining commands of the transaction have been executed do we find out whether we can try again or not. Therefore the messages about retrying or ending the failed transaction were added at the "fails" debugging level, so you can distinguish failures (which are retried) from errors (which are not retried).

* Fix the report of the latency average, because the total time includes the time of both errors and successful transactions.

* Code cleanup (including tests).

[1] https://www.postgresql.org/message-id/alpine.DEB.2.20.1803292134380.16472%40lancre

> Maybe the max retry should rather be expressed in time rather than
> number of attempts, or both approach could be implemented?

--
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
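For illustration only, here is a minimal Python sketch of the retry decision described above (not the patch's actual C code; the function name and parameters are hypothetical). A failed transaction may be retried only while every given limit still holds, and if neither --max-tries nor --max-tries-time is given, it is never retried:

```python
def can_retry(tries_used, elapsed_ms, max_tries=None, max_tries_time_ms=None):
    """Decide whether a transaction that hit a serialization or deadlock
    failure may be retried.

    max_tries / max_tries_time_ms set to None mean the corresponding
    option was not given on the command line. If neither option is given,
    failed transactions are not retried at all.
    """
    if max_tries is None and max_tries_time_ms is None:
        return False  # no retry options given: never retry
    if max_tries is not None and tries_used >= max_tries:
        return False  # attempt limit exhausted
    if max_tries_time_ms is not None and elapsed_ms >= max_tries_time_ms:
        return False  # total time of all tries reached the limit
    return True
```

Both limits can be combined, in which case the first one reached stops the retries.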
Attachments