Re: Unlinking Parallel Hash Join inner batch files sooner

From: Heikki Linnakangas
Subject: Re: Unlinking Parallel Hash Join inner batch files sooner
Date:
Msg-id: 871e80d4-e8cf-a117-ceb2-141586a3bcd2@iki.fi
In reply to: Re: Unlinking Parallel Hash Join inner batch files sooner  (Jehan-Guillaume de Rorthais <jgdr@dalibo.com>)
Responses: Re: Unlinking Parallel Hash Join inner batch files sooner  (John Naylor <johncnaylorls@gmail.com>)
List: pgsql-hackers
On 11/05/2023 00:00, Jehan-Guillaume de Rorthais wrote:
> On Wed, 10 May 2023 15:11:20 +1200
> Thomas Munro <thomas.munro@gmail.com> wrote:
>> The reason I didn't do this earlier is that sharedtuplestore.c
>> continues the pre-existing tradition where each parallel process
>> counts what it writes against its own temp_file_limit.  At the time I
>> thought I'd need to have one process unlink all the files, but if a
>> process were to unlink files that it didn't create, that accounting
>> system would break.  Without some new kind of shared temp_file_limit
>> mechanism that doesn't currently exist, per-process counters could go
>> negative, creating free money.  In the attached patch, I realised
>> something that I'd missed before: there is a safe point for each
>> backend to unlink just the files that it created, and there is no way
>> for a process that created files not to reach that point.
> 
> Indeed.
> 
> For what it's worth, from my new and inexperienced understanding of the
> parallel mechanism, waiting for all workers to reach
> WAIT_EVENT_HASH_GROW_BATCHES_REPARTITION, after re-dispatching old batches
> into new ones, seems like a safe place to instruct each worker to clean up
> its old temp files.

Looks good to me too at a quick glance. There's this one "XXX free" 
comment though:

>     for (int i = 1; i < old_nbatch; ++i)
>     {
>         ParallelHashJoinBatch *shared =
>             NthParallelHashJoinBatch(old_batches, i);
>         SharedTuplestoreAccessor *accessor;
> 
>         accessor = sts_attach(ParallelHashJoinBatchInner(shared),
>                               ParallelWorkerNumber + 1,
>                               &pstate->fileset);
>         sts_dispose(accessor);
>         /* XXX free */
>     }

I think that's referring to the fact that sts_dispose() doesn't free the 
'accessor', or any of the buffers etc. that it contains. That's a 
pre-existing problem, though: ExecParallelHashRepartitionRest() already 
leaks the SharedTuplestoreAccessor structs and their buffers etc. for the 
old batches. I'm a little surprised there isn't already an sts_free() 
function.
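
For illustration, here is a rough sketch of what such an sts_free() could 
look like. This is only an assumption about how it might be written: the 
accessor struct is private to sharedtuplestore.c, so the field names below 
(read_file, write_file, read_buffer, write_chunk) are illustrative guesses 
rather than the actual layout.

/*
 * Sketch only: release an accessor and the buffers it owns.  Field names
 * are assumed for illustration and would have to match whatever
 * sharedtuplestore.c actually uses.
 */
void
sts_free(SharedTuplestoreAccessor *accessor)
{
    /* Close any BufFiles that are still open. */
    if (accessor->read_file)
        BufFileClose(accessor->read_file);
    if (accessor->write_file)
        BufFileClose(accessor->write_file);

    /* Free the chunk buffers used for reading and writing, if allocated. */
    if (accessor->read_buffer)
        pfree(accessor->read_buffer);
    if (accessor->write_chunk)
        pfree(accessor->write_chunk);

    pfree(accessor);
}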

Another thought is that it's a bit silly to have to call sts_attach() 
just to delete the files. Maybe sts_dispose() should take the same three 
arguments that sts_attach() does, instead.
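
Sketched out, that alternative (mirroring sts_attach()'s parameters, purely 
as an illustration of the idea) would let the repartition loop delete the 
files without materializing an accessor first:

/* Sketch: delete one participant's files without attaching first. */
extern void sts_dispose(SharedTuplestore *sts,
                        int my_participant_number,
                        SharedFileSet *fileset);

/* The loop quoted above could then simply do: */
sts_dispose(ParallelHashJoinBatchInner(shared),
            ParallelWorkerNumber + 1,
            &pstate->fileset);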

So that freeing would be nice to tidy up, although the amount of memory 
leaked is tiny, so it might not be worth it, and it's a pre-existing issue. 
I'm marking this as Ready for Committer.

-- 
Heikki Linnakangas
Neon (https://neon.tech)



