Re: [HACKERS] SERIALIZABLE with parallel query
From: Thomas Munro
Subject: Re: [HACKERS] SERIALIZABLE with parallel query
Date:
Msg-id: CAEepm=1UhWBpB77zN7k+=D7ajWAFgrz+2ZwWockrTHJa2aL1Qg@mail.gmail.com
In reply to: Re: [HACKERS] SERIALIZABLE with parallel query (Amit Kapila <amit.kapila16@gmail.com>)
List: pgsql-hackers
On Fri, Feb 23, 2018 at 7:56 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Fri, Feb 23, 2018 at 8:48 AM, Thomas Munro
> <thomas.munro@enterprisedb.com> wrote:
>> On Fri, Feb 23, 2018 at 3:29 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>> By the way, in which case leader can exit early?  As of now, we do
>>> wait for workers to end both before the query is finished or in error
>>> cases.
>>
>> create table foo as select generate_series(1, 10)::int a;
>> alter table foo set (parallel_workers = 2);
>> set parallel_setup_cost = 0;
>> set parallel_tuple_cost = 0;
>> select count(a / 0) from foo;
>>
>> That reliably gives me:
>> ERROR: division by zero [from leader]
>> ERROR: could not map dynamic shared memory segment [from workers]
>>
>> I thought this was coming from resource manager cleanup, but you're
>> right: that happens after we wait for all workers to finish.  Perhaps
>> this is a race within DestroyParallelContext() itself: when it is
>> called by AtEOXact_Parallel() during an abort, it asks the postmaster
>> to SIGTERM the workers, then it immediately detaches from the DSM
>> segment, and then it waits for the worker to start up.
>>
>
> I guess you mean to say worker waits to shutdown/exit.  Why would it
> wait for startup at that stage?

Right, I meant to say shutdown/exit.

--
Thomas Munro
http://www.enterprisedb.com
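For anyone following along, here is a rough sketch of the ordering being described in DestroyParallelContext() (src/backend/access/transam/parallel.c). It is simplified from memory rather than copied from the tree, so the exact fields, error-queue handling, and helper calls may differ; it only illustrates the terminate / detach / wait ordering that can produce the worker-side error quoted above.

    /* Simplified sketch, not the actual source: shows the ordering only. */
    void
    DestroyParallelContext(ParallelContext *pcxt)
    {
        int         i;

        /* Ask the postmaster to SIGTERM each launched worker. */
        for (i = 0; i < pcxt->nworkers_launched; ++i)
            TerminateBackgroundWorker(pcxt->worker[i].bgwhandle);

        /* Detach from the DSM segment straight away... */
        if (pcxt->seg != NULL)
            dsm_detach(pcxt->seg);

        /*
         * ...and only afterwards wait for the workers to exit.  A worker
         * that is still starting up can lose the race: by the time it
         * tries to attach, the segment may already be gone, hence
         * "could not map dynamic shared memory segment".
         */
        for (i = 0; i < pcxt->nworkers_launched; ++i)
            WaitForBackgroundWorkerShutdown(pcxt->worker[i].bgwhandle);
    }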