Re: BitmapHeapScan streaming read user and prelim refactoring

From: Tomas Vondra
Subject: Re: BitmapHeapScan streaming read user and prelim refactoring
Date:
Msg-id: 35bdb7db-7412-4d18-9b06-fea5fcea37bc@enterprisedb.com
In reply to: Re: BitmapHeapScan streaming read user and prelim refactoring (Tomas Vondra <tomas.vondra@enterprisedb.com>)
Responses: Re: BitmapHeapScan streaming read user and prelim refactoring (Melanie Plageman <melanieplageman@gmail.com>)
List: pgsql-hackers
On 2/29/24 23:44, Tomas Vondra wrote:
>
> ...
> 
>>>
>>> I do have some partial results, comparing the patches. I only ran one of
>>> the more affected workloads (cyclic) on the xeon, attached is a PDF
>>> comparing master and the 0001-0014 patches. The percentages are timing
>>> vs. the preceding patch (green - faster, red - slower).
>>
>> Just confirming: the results are for uncached?
>>
> 
> Yes, cyclic data set, uncached case. I picked this because it seemed
> like one of the most affected cases. Do you want me to test some other
> cases too?
> 

BTW I decided to look at the data from a slightly different angle and
compare the behavior with increasing effective_io_concurrency. Attached
are charts for three "uncached" cases:

 * uniform, work_mem=4MB, workers_per_gather=0
 * linear-fuzz, work_mem=4MB, workers_per_gather=0
 * uniform, work_mem=4MB, workers_per_gather=4
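
For reference, each of those runs boils down to a session configured
roughly like the sketch below. The table name and the range predicate
are made up for illustration; the actual benchmark scripts vary the
selectivity and effective_io_concurrency across runs:

  SET work_mem = '4MB';
  SET max_parallel_workers_per_gather = 0;  -- 4 for the parallel case
  SET effective_io_concurrency = 16;        -- varied across runs (x-axis)

  -- force the bitmap scan path so we measure BitmapHeapScan
  SET enable_seqscan = off;
  SET enable_indexscan = off;

  EXPLAIN (ANALYZE, BUFFERS)
  SELECT * FROM bench_uniform
   WHERE a BETWEEN 0 AND 10000;             -- selectivity varies per run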

Each page has charts for the master and patched builds (with all
patches applied). I think there's a pretty obvious difference in how
increasing e_i_c affects the two builds:

1) On master there's a clear difference between the eic=0 and eic=1
cases, but on the patched build there's literally no difference - for
example, the "uniform" distribution is clearly not great for
prefetching, yet with the patches eic=0 regresses to the same poor
behavior as eic=1.

Note: This is where the "red bands" in the charts come from.


2) For some reason, prefetching with eic>1 performs much better with
the patches, except at very low selectivity values (close to 0%). Not
sure why this is happening - either the overhead is much lower (which
would matter for these "adversarial" data distributions, but how could
that be when fadvise is not free), or it ends up not doing any
prefetching (but then what about (1)?).
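
One way to tell those two explanations apart might be to run the same
query on both builds with track_io_timing enabled and compare the I/O
read timings reported by EXPLAIN (a sketch only, using the same made-up
table name as above, each run against cold caches):

  SET track_io_timing = on;
  SET effective_io_concurrency = 32;

  EXPLAIN (ANALYZE, BUFFERS)
  SELECT * FROM bench_uniform
   WHERE a BETWEEN 0 AND 10000;

If the patched build reports roughly the same (low) read wait time as
master, the fadvise calls are presumably still being issued and the win
has to come from lower overhead; if the read waits are much higher on
the patched build, it's effectively not prefetching.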


3) I'm not sure about the linear-fuzz case; the only explanation I have
is that we're able to skip almost all of the prefetches (and read-ahead
likely works pretty well here).


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments
