Re: long-standing data loss bug in initial sync of logical replication
From: Benoit Lobréau
Subject: Re: long-standing data loss bug in initial sync of logical replication
Date:
Msg-id: fa7e4243-3e82-46a4-af29-73ab00df4b28@dalibo.com
In reply to: RE: long-standing data loss bug in initial sync of logical replication ("Zhijie Hou (Fujitsu)" <houzj.fnst@fujitsu.com>)
List: pgsql-hackers
On 3/3/25 8:41 AM, Zhijie Hou (Fujitsu) wrote:
> A nitpick with the data for the Concurrent Transaction (2000) case. The results
> show that the HEAD's data appears worse than the patch data, which seems
> unusual. However, I confirmed that the details in the attachment are as expected,
> so, this seems to be a typo. (I assume you intended to use a
> decimal point instead of a comma in the data like (8,43500...))

Hi,

Argh, yes, sorry! I didn't pay enough attention and accidentally swapped the
Patch and Head numbers in the last line when copying them from the ODS file
into the email to match the previous report's layout. The comma is just how
decimals are written in my language (a comma instead of a dot); I forgot to
"translate" it. Here is the corrected table:

Concurrent Txn | Head (sec) | Patch (sec) | Degradation in %
---------------------------------------------------------------------
            50 | 0.1797647  | 0.1920949   |  6.85907744957
           100 | 0.3693029  | 0.3823425   |  3.53086856344
           500 | 1.62265755 | 1.91427485  | 17.97158617972
          1000 | 3.01388635 | 3.57678295  | 18.67676928162
          2000 | 6.4713304  | 7.0171877   |  8.43500897435

> as Amit pointed out, we will share a new test script soon
> that uses the SQL API xxx_get_changes() to test. It would be great if you could
> verify the performance using the updated script as well.

Will do.

--
Benoit Lobréau
Consultant
http://dalibo.com
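For reference, until the updated script is shared, a decoding run like the ones
measured above can be timed entirely through the SQL interface. The snippet
below is only a minimal sketch, not the script from this thread: it assumes the
test_decoding plugin and a throwaway slot name, and uses
pg_logical_slot_get_changes(), one of the "xxx_get_changes()" SQL functions
referred to above.

    -- Create a slot before starting the workload so that the changes of the
    -- concurrent transactions accumulate behind it.
    SELECT pg_create_logical_replication_slot('perf_slot', 'test_decoding');

    -- ... run the workload here (e.g. 2000 concurrent transactions) ...

    -- Time how long it takes to decode the accumulated changes through the
    -- SQL API (\timing is a psql meta-command).
    \timing on
    SELECT count(*) FROM pg_logical_slot_get_changes('perf_slot', NULL, NULL);
    \timing off

    -- Clean up the slot once the measurement is done.
    SELECT pg_drop_replication_slot('perf_slot');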