Re: Performance degradation on concurrent COPY into a single relation in PG16.
From: Andres Freund
Subject: Re: Performance degradation on concurrent COPY into a single relation in PG16.
Date:
Msg-id: 20231014023344.xyeadhsszttznpng@awork3.anarazel.de
In reply to: Re: Performance degradation on concurrent COPY into a single relation in PG16. (Andres Freund <andres@anarazel.de>)
List: pgsql-hackers
Hi,

On 2023-10-13 11:30:35 -0700, Andres Freund wrote:
> On 2023-10-13 10:39:10 -0700, Andres Freund wrote:
> > On 2023-10-12 09:24:19 -0700, Andres Freund wrote:
> > > I kind of had hoped somebody would comment on the approach. Given that nobody
> > > has, I'll push the minimal fix of resetting the state in
> > > ReleaseBulkInsertStatePin(), even though I think architecturally that's not
> > > great.
> >
> > I spent some time working on a test that shows the problem more cheaply than
> > the case upthread. I think it'd be desirable to have a test that's likely to
> > catch an issue like this fairly quickly. We've had other problems in this
> > realm before - there's only a single test that fails if I remove the
> > ReleaseBulkInsertStatePin() call, and I don't think that's guaranteed at all.
> >
> > I'm a bit on the fence on how large to make the relation. For me the bug
> > triggers when filling both relations up to the 3rd block, but how many rows
> > that takes is somewhat dependent on space utilization on the page and stuff.
> >
> > Right now the test uses data/desc.data and ends up with 328kB and 312kB in two
> > partitions. Alternatively I could make the test create a new file to load with
> > copy that has fewer rows than data/desc.data - I didn't see another data file
> > that works conveniently and has fewer rows. The copy is reasonably fast, even
> > under something as expensive as rr (~60ms). So I'm inclined to just go with
> > that?
>
> Patch with fix and test attached (0001).

Pushed that.

Greetings,

Andres
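For readers following the thread without the attached patch: below is a minimal sketch of the shape such a fix could take, assuming the PG16-era BulkInsertState fields next_free/last_free that cache the block range obtained by bulk relation extension. This is not the committed patch itself, just an illustration of "resetting the state in ReleaseBulkInsertStatePin()".

```c
/*
 * Sketch only, not the committed fix. Assumes BulkInsertState carries
 * next_free/last_free, the free-block range cached by PG16's bulk
 * relation extension. If they are not reset here, a block range cached
 * while extending one partition could still be consulted after COPY
 * switches the bulk-insert state over to another partition.
 */
void
ReleaseBulkInsertStatePin(BulkInsertState bistate)
{
	/* Drop the pin on the current target buffer, if we hold one. */
	if (bistate->current_buf != InvalidBuffer)
		ReleaseBuffer(bistate->current_buf);
	bistate->current_buf = InvalidBuffer;

	/* Also forget any cached bulk-extension state. */
	bistate->next_free = InvalidBlockNumber;
	bistate->last_free = InvalidBlockNumber;
}
```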