Re: Segmentation fault when calling BlessTupleDesc in a C function in parallel on PostgreSQL-(12.6, 12.7, 13.2, 13.3)
From | Bharath Rupireddy
---|---
Subject | Re: Segmentation fault when calling BlessTupleDesc in a C function in parallel on PostgreSQL-(12.6, 12.7, 13.2, 13.3)
Date |
Msg-id | CALj2ACXuG-M_6=QY4D8kUWFJ4z334WDW=0Zq8fgxCVZSrvbHjQ@mail.gmail.com
In reply to | Re: Segmentation fault when calling BlessTupleDesc in a C function in parallel on PostgreSQL-(12.6, 12.7, 13.2, 13.3) (Eric Thinnes <e.thinnes@gmx.de>)
List | pgsql-bugs
On Fri, May 14, 2021 at 5:56 PM Eric Thinnes <e.thinnes@gmx.de> wrote:
>
> I do think it should be possible.
>
> The function always delivers the same result with the same call
> parameters except for the determination of the result types and the
> generation of the TupleDesc, the function has no side effects.
>
> If BlessTupleDesc inevitably leads to side effects, I am happy to be
> instructed to improve something.
> So far I haven't found any information on this.

I can't say for certain that BlessTupleDesc is actually causing the problem, because there are a good number of parallel-safe functions in core (see [1]) that don't cause any problem.

Since setof_kpos is a custom function, it may be a good idea to debug it: with 1 or 2 workers, with parallel_leader_participation off, with force_parallel_mode on, with some sleep code added to the function, with a smaller data set, and with settings that encourage parallel plans (see [2]; a combined session sketch is appended below).

[1]
hash_page_items
pg_buffercache_pages
pg_prepared_xact
pg_lock_status
pg_get_catalog_foreign_keys
pg_partition_tree

[2]
-- encourage use of parallel plans
set parallel_setup_cost=0;
set parallel_tuple_cost=0;
set min_parallel_table_scan_size=0;
set max_parallel_workers_per_gather=2;

With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com
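A minimal sketch of a combined session setup for the suggested debugging run, assuming the settings from [2] plus the two GUCs mentioned above; the exact setof_kpos call and its arguments are from the original report and are not reproduced here:

-- settings from [2], encouraging parallel plans
set parallel_setup_cost = 0;
set parallel_tuple_cost = 0;
set min_parallel_table_scan_size = 0;
set max_parallel_workers_per_gather = 2;
-- run the parallel part of the plan only in workers, never in the leader
set parallel_leader_participation = off;
-- put the plan under a Gather node even when the planner would not choose one (for testing)
set force_parallel_mode = on;
-- confirm workers are actually launched before re-running the failing query, e.g.
-- EXPLAIN (ANALYZE, VERBOSE) SELECT * FROM setof_kpos(...);

With parallel_leader_participation off, any crash in the parallel part of the plan has to happen in a worker process, which makes it easier to attach a debugger (or add sleeps) to the worker and check whether BlessTupleDesc is really the code path that faults.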