Thread: Optimization for lazy_scan_heap


Optimization for lazy_scan_heap

From: Masahiko Sawada

Hi all,

While reviewing the freeze map code, Andres pointed out[1] that
lazy_scan_heap can access the visibility map twice and that its logic
seems a bit tricky.
As discussed before, that is not nice, especially when a large
relation is entirely frozen.

I posted a patch for this before, but since it is an optimization, not
a bug fix, I'd like to propose it again.
Please give me feedback.

[1] https://www.postgresql.org/message-id/20160505000840.epatoq6d2e556446%40alap3.anarazel.de

Regards,

--
Masahiko Sawada

Attachments

Re: Optimization for lazy_scan_heap

From: Anastasia Lubennikova

The following review has been posted through the commitfest application:
make installcheck-world:  tested, passed
Implements feature:       not tested
Spec compliant:           not tested
Documentation:            not tested

Hi,
I haven't tested the performance yet, but the patch itself looks pretty
good and like a reasonable improvement.
I have a question about removing the comment; it seems to be a really
tricky point. How do we know that an all-frozen block hasn't changed
since the moment we checked it?

-                 * Tricky, tricky.  If this is in aggressive vacuum, the page
-                 * must have been all-frozen at the time we checked whether it
-                 * was skippable, but it might not be any more.  We must be
-                 * careful to count it as a skipped all-frozen page in that
-                 * case, or else we'll think we can't update relfrozenxid and
-                 * relminmxid.  If it's not an aggressive vacuum, we don't
-                 * know whether it was all-frozen, so we have to recheck; but
-                 * in this case an approximate answer is OK.
+                 * We know that there are n_skipped pages by the visibilitymap scan we
+                 * did just before.                 */

I'm going to test the performance this week.
I wonder if you could send a test script or describe the steps to test it?

The new status of this patch is: Waiting on Author

Re: Optimization for lazy_scan_heap

From: Masahiko Sawada

On Thu, Aug 25, 2016 at 1:41 AM, Anastasia Lubennikova
<lubennikovaav@gmail.com> wrote:
> The following review has been posted through the commitfest application:
> make installcheck-world:  tested, passed
> Implements feature:       not tested
> Spec compliant:           not tested
> Documentation:            not tested
>
> Hi,
> I haven't tested the performance yet, but the patch itself looks pretty
> good and like a reasonable improvement.
> I have a question about removing the comment; it seems to be a really
> tricky point. How do we know that an all-frozen block hasn't changed
> since the moment we checked it?

I think we don't need to check whether an all-frozen block has changed
since we looked at it.
Even if the block has changed after we saw it as an all-frozen page, we
can still update relfrozenxid, because any new XIDs added to that page
are newer than the GlobalXmin we computed.

Am I missing something?

In this patch, since we count the number of all-frozen pages even
during a non-aggressive scan, I thought we don't need to recheck
whether those blocks were all-frozen.
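
To put it another way, the patch counts the skipped and all-frozen
pages while the look-ahead scan already has the visibility map bits in
hand, roughly as below (just a sketch of the idea based on the hunks in
the attached patch, not the patch itself):

    while (next_unskippable_block < nblocks &&
           !FORCE_CHECK_PAGE(next_unskippable_block))
    {
        uint8       vmstatus;

        vmstatus = visibilitymap_get_status(onerel, next_unskippable_block,
                                            &vmbuffer);
        if (aggressive)
        {
            if ((vmstatus & VISIBILITYMAP_ALL_FROZEN) == 0)
                break;
            n_all_frozen++;
        }
        else
        {
            if ((vmstatus & VISIBILITYMAP_ALL_VISIBLE) == 0)
                break;

            /* Count the number of all-frozen pages */
            if (vmstatus & VISIBILITYMAP_ALL_FROZEN)
                n_all_frozen++;
        }
        vacuum_delay_point();
        next_unskippable_block++;
        n_skipped++;
    }

    /*
     * The skipped blocks can now be accounted for from n_skipped and
     * n_all_frozen directly, so the per-block loop no longer needs a
     * second VM_ALL_FROZEN() check when it skips a block.
     */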

> I'm going to test the performance this week.
> I wonder if you could send a test script or describe the steps to test it?

This optimization reduces the number of visibility map scans when the
table is almost entirely all-visible or all-frozen.
So it should be effective for a very large table (more than several
hundred GB) that consists mostly of all-visible or all-frozen pages.

For example:
1. Create a very large table.
2. Execute VACUUM FREEZE to freeze all pages of the table.
3. Measure the execution time of VACUUM FREEZE again.
    I hope the second VACUUM FREEZE becomes a little faster.

I modified the comment and attached the v2 patch.

Regards,

--
Masahiko Sawada

Attachments

Re: Optimization for lazy_scan_heap

From: Simon Riggs

On 4 August 2016 at 05:57, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

> While reviewing the freeze map code, Andres pointed out[1] that
> lazy_scan_heap can access the visibility map twice and that its logic
> seems a bit tricky.
> As discussed before, that is not nice, especially when a large
> relation is entirely frozen.
>
> I posted a patch for this before, but since it is an optimization, not
> a bug fix, I'd like to propose it again.
> Please give me feedback.
>
> [1] https://www.postgresql.org/message-id/20160505000840.epatoq6d2e556446%40alap3.anarazel.de

If we have a freeze map now, surely tables will no longer be entirely frozen?

What performance difference does this make, in a realistic use case?

How would we test this to check it is exactly correct?

-- 
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Optimization for lazy_scan_heap

From: Masahiko Sawada

On Mon, Sep 5, 2016 at 6:27 PM, Simon Riggs <simon@2ndquadrant.com> wrote:
> On 4 August 2016 at 05:57, Masahiko Sawada <sawada.mshk@gmail.com> wrote:
>
>> While reviewing the freeze map code, Andres pointed out[1] that
>> lazy_scan_heap can access the visibility map twice and that its logic
>> seems a bit tricky.
>> As discussed before, that is not nice, especially when a large
>> relation is entirely frozen.
>>
>> I posted a patch for this before, but since it is an optimization, not
>> a bug fix, I'd like to propose it again.
>> Please give me feedback.
>>
>> [1] https://www.postgresql.org/message-id/20160505000840.epatoq6d2e556446%40alap3.anarazel.de
>
> If we have a freeze map now, surely tables will no longer be entirely frozen?

Well, if the table is completely frozen, freezing is skipped for all pages.

> What performance difference does this make, in a realistic use case?

I have yet to measure the performance effect, but it should benefit
very large frozen tables.

> How would we test this to check it is exactly correct?

One possible idea is to emit the number of pages skipped according to
the visibility map in the VACUUM VERBOSE output, and check that.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: Optimization for lazy_scan_heap

From: Robert Haas

On Mon, Sep 5, 2016 at 8:57 PM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:
>> What performance difference does this make, in a realistic use case?
>
> I have yet to measure the performance effect, but it should benefit
> very large frozen tables.

I think if you are proposing this patch because you think that the
existing code won't perform well, you definitely need to submit some
performance results showing that your approach is better.  If you
can't show that your approach is practically better (rather than just
theoretically better), then I'm not sure we should change anything.
It's very easy to screw up the code in this area and we do not want to
risk corrupting data for the sake of an optimization that isn't really
needed in the first place.

Of course, if you can prove that the change has a significant benefit,
then it's definitely worth considering.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Optimization for lazy_scan_heap

From: Masahiko Sawada

On Wed, Sep 7, 2016 at 1:47 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Mon, Sep 5, 2016 at 8:57 PM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:
>>> What performance difference does this make, in a realistic use case?
>>
>> I have yet to measure the performance effect, but it should benefit
>> very large frozen tables.
>
> I think if you are proposing this patch because you think that the
> existing code won't perform well, you definitely need to submit some
> performance results showing that your approach is better.  If you
> can't show that your approach is practically better (rather than just
> theoretically better), then I'm not sure we should change anything.
> It's very easy to screw up the code in this area and we do not want to
> risk corrupting data for the sake of an optimization that isn't really
> needed in the first place.
>
> Of course, if you can prove that the change has a significant benefit,
> then it's definitely worth considering.
>

I understood, thank you.

I've measured the performance benefit of this patch with the following steps:
1. Create a very large table and set the all-visible flag on all blocks.
2. Measure the execution time of a vacuum that skips scanning all heap pages.

* 1TB Table(visibility map size is 32MB)
HEAD : 11012.274 ms (00:11.012)
Patched : 6742.481 ms (00:06.742)

* 8TB Table(visibility map size is 64MB)
HEAD : 69886.605 ms (01:09.887)
Patched : 35562.131 ms (00:35.562)

* 32TB Table(visibility map size is 258MB)
HEAD: 265901.096 ms (04:25.901)
Patched: 131779.082 ms (02:11.779)

Since current HEAD can scan the visibility map twice, the execution
time with the patch is approximately half that of HEAD.
But when the table is several hundred gigabytes, the performance
benefit would not be large.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: Optimization for lazy_scan_heap

From: Simon Riggs

On 7 September 2016 at 04:13, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

> Since current HEAD can scan the visibility map twice, the execution
> time with the patch is approximately half that of HEAD.

Sounds good.

To ensure we are doing exactly the same amount of work as before, did
you see the output of VACUUM VERBOSE?

Can we produce a test that verifies the result patched/unpatched?

-- 
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Optimization for lazy_scan_heap

From: Robert Haas

On Tue, Sep 6, 2016 at 11:13 PM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> I understood, thank you.
>
> I've measured the performance benefit of this patch with the following steps:
> 1. Create a very large table and set the all-visible flag on all blocks.
> 2. Measure the execution time of a vacuum that skips scanning all heap pages.
>
> * 1TB Table(visibility map size is 32MB)
> HEAD : 11012.274 ms (00:11.012)
> Patched : 6742.481 ms (00:06.742)
>
> * 8TB Table(visibility map size is 64MB)
> HEAD : 69886.605 ms (01:09.887)
> Patched : 35562.131 ms (00:35.562)
>
> * 32TB Table(visibility map size is 258MB)
> HEAD: 265901.096 ms (04:25.901)
> Patched: 131779.082 ms (02:11.779)
>
> Since current HEAD can scan the visibility map twice, the execution
> time with the patch is approximately half that of HEAD.
> But when the table is several hundred gigabytes, the performance
> benefit would not be large.

Wow, those are surprisingly good results.  Interesting.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Optimization for lazy_scan_heap

From: Masahiko Sawada

On Wed, Sep 7, 2016 at 4:11 PM, Simon Riggs <simon@2ndquadrant.com> wrote:
> On 7 September 2016 at 04:13, Masahiko Sawada <sawada.mshk@gmail.com> wrote:
>
>> Since current HEAD can scan the visibility map twice, the execution
>> time with the patch is approximately half that of HEAD.
>
> Sounds good.
>
> To ensure we are doing exactly the same amount of work as before, did
> you see the output of VACUUM VERBOSE?

Sorry, the previous test results I posted were wrong.
I reran the performance test, and the results are:

* 1TB Table(visibility map size is 32MB)
HEAD : 4853.250 ms (00:04.853)
Patched : 3805.789 ms (00:03.806)

* 8TB Table(visibility map size is 257MB)
HEAD : 37853.891 ms (00:37.854)
Patched : 30153.943 ms (00:30.154)

* 32TB Table(visibility map size is 1GB)
HEAD: 151908.344 ms (02:31.908)
Patched: 120560.037 ms (02:00.560)

Since the visibility map pages can be cached in shared buffers or the OS
cache by the first scan, the benefit of this patch does not seem to be large.

Here is the VACUUM VERBOSE output for the 32TB table.

* HEAD
INFO:  vacuuming "public.vm_skip_test"
INFO:  "vm_skip_test": found 0 removable, 0 nonremovable row versions
in 0 out of 4294967294 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
Skipped 0 pages due to buffer pins.
Skipped 4294967294 all-frozen pages according to visibility map.
0 pages are entirely empty.
CPU 1.06s/148.11u sec elapsed 149.20 sec.
VACUUM
Time: 151908.344 ms (02:31.908)

* Patched
INFO:  vacuuming "public.vm_skip_test"
INFO:  "vm_skip_test": found 0 removable, 0 nonremovable row versions
in 0 out of 4294967294 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
Skipped 0 pages due to buffer pins.
Skipped 4294967294 all-frozen pages according to visibility map.
0 pages are entirely empty.
CPU 0.65s/117.15u sec elapsed 117.81 sec.
VACUUM
Time: 120560.037 ms (02:00.560)

Currently, manual vacuum doesn't report how many all-frozen pages we
skipped according to the visibility map.
That's why I attached the 0001 patch, which makes manual VACUUM emit
that information.
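
The change is essentially one more line in the VERBOSE report built in
lazy_scan_heap(), something like the following (only a sketch; the real
0001 patch is attached, and I'm assuming the existing pinskipped_pages /
frozenskipped_pages counters in LVRelStats):

    /* existing line in the VACUUM VERBOSE report */
    appendStringInfo(&buf, _("Skipped %u pages due to buffer pins.\n"),
                     vacrelstats->pinskipped_pages);

    /* proposed addition: report pages skipped as all-frozen via the VM */
    appendStringInfo(&buf,
                     _("Skipped %u all-frozen pages according to visibility map.\n"),
                     vacrelstats->frozenskipped_pages);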

>
> Can we produce a test that verifies the result patched/unpatched?
>

I've attached a test shell script, but because I don't have such a large
disk, I measured the performance benefit in a somewhat unofficial way.

To simulate a situation where the table is extremely large and has a
corresponding visibility map, I applied the 0002 patch and built a fake
visibility map.
The attached 0002 patch adds a GUC parameter, cheat_vacuum_table_size,
which artificially defines the size of the table being vacuumed.
For example, if we do:
  SET cheat_vacuum_table_size = 4;
  VACUUM vm_test;
then in lazy_scan_heap the vm_test table is processed as an 8TB
(MaxBlockNumber / 4) table.
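
Conceptually, the hack just overrides the number of blocks that
lazy_scan_heap sees; a hypothetical illustration (not the actual 0002
patch, which is attached) would be:

    nblocks = RelationGetNumberOfBlocks(onerel);
    if (cheat_vacuum_table_size > 0)
        nblocks = MaxBlockNumber / cheat_vacuum_table_size; /* 4 => ~8TB */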

The attached test shell script creates the fake visibility map files and
runs the performance tests for the 1TB, 8TB, and 32TB tables.
Please check it.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

Attachments

Re: Optimization for lazy_scan_heap

From: Jim Nasby

On 9/8/16 3:03 AM, Masahiko Sawada wrote:
> Currently, manual vacuum doesn't report how many all-frozen pages we
> skipped according to the visibility map.
> That's why I attached the 0001 patch, which makes manual VACUUM emit
> that information.

I think we should add that, and info about all-frozen skips, regardless 
of the rest of these changes. Can you submit that separately to the 
commitfest? (Or I can if you want.)
-- 
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461



Re: Optimization for lazy_scan_heap

From: Robert Haas

On Thu, Sep 8, 2016 at 4:03 AM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> On Wed, Sep 7, 2016 at 4:11 PM, Simon Riggs <simon@2ndquadrant.com> wrote:
>> On 7 September 2016 at 04:13, Masahiko Sawada <sawada.mshk@gmail.com> wrote:
>>
>>> Since current HEAD can scan the visibility map twice, the execution
>>> time with the patch is approximately half that of HEAD.
>>
>> Sounds good.
>>
>> To ensure we are doing exactly the same amount of work as before, did
>> you see the output of VACUUM VERBOSE?
>
> Sorry, the previous test results I posted were wrong.
> I reran the performance test, and the results are:
>
> * 1TB Table(visibility map size is 32MB)
> HEAD : 4853.250 ms (00:04.853)
> Patched : 3805.789 ms (00:03.806)
>
> * 8TB Table(visibility map size is 257MB)
> HEAD : 37853.891 ms (00:37.854)
> Patched : 30153.943 ms (00:30.154)
>
> * 32TB Table(visibility map size is 1GB)
> HEAD: 151908.344 ms (02:31.908)
> Patched: 120560.037 ms (02:00.560)
>
> Since the visibility map pages can be cached in shared buffers or the OS
> cache by the first scan, the benefit of this patch does not seem to be large.

Yeah, that's not nearly as good as the first set of results.  Maybe
it's still worth doing, but if you need to have a 32TB table to save
as much as 30 seconds, we don't have much of a problem here.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Optimization for lazy_scan_heap

From: Masahiko Sawada

On Thu, Sep 8, 2016 at 11:27 PM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:
> On 9/8/16 3:03 AM, Masahiko Sawada wrote:
>>
>> Currently, manual vacuum doesn't report how many all-frozen pages we
>> skipped according to the visibility map.
>> That's why I attached the 0001 patch, which makes manual VACUUM emit
>> that information.
>
>
> I think we should add that, and info about all-frozen skips, regardless of
> the rest of these changes. Can you submit that separately to the commitfest?
> (Or I can if you want.)

Yeah, autovacuum emits that information but manual vacuum doesn't.
I submitted such a patch before[1], but it was not committed because the
verbose output is already complicated.
I will submit it again and start a discussion about that.

[1] https://www.postgresql.org/message-id/CAD21AoCQGTKDxQ6YfrJ0%2B%3Dev6A3i3pt2ULdWxCtwPLKR2E77jg%40mail.gmail.com

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: Optimization for lazy_scan_heap

From: Rahila Syed

Hello,
 
Some initial comments on optimize_lazy_scan_heap_v2.patch.

>-               while (next_unskippable_block < nblocks)
>+               while (next_unskippable_block < nblocks &&
>+                      !FORCE_CHECK_PAGE(next_unskippable_block))

Don't we need a similar !FORCE_CHECK_PAGE(next_unskippable_block) check
in the code below, which appears at line 556 of vacuumlazy.c?

    next_unskippable_block = 0;
    if ((options & VACOPT_DISABLE_PAGE_SKIPPING) == 0)
    {
        while (next_unskippable_block < nblocks)


>           {
>                if ((vmstatus & VISIBILITYMAP_ALL_FROZEN) == 0)
>                    break;
>+               n_all_frozen++;
>            }
>            else
>            {
>                if ((vmstatus & VISIBILITYMAP_ALL_VISIBLE) == 0)
>                    break;
>+
>+               /* Count the number of all-frozen pages */
>+               if (vmstatus & VISIBILITYMAP_ALL_FROZEN)
>+                   n_all_frozen++;
>            }

I think it's better to have just one place where we increment
n_all_frozen, before the if-else block.

>            }
>            vacuum_delay_point();
>            next_unskippable_block++;
>+           n_skipped++;
>        }
>    }

Also, instead of incrementing n_skipped every time, it can be set to
'next_unskippable_block' or 'next_unskippable_block - blkno' once at the
end of the loop, as sketched below.
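
Putting both suggestions together, the loop could look something like the
sketch below (only an illustration, reusing the variable names from the
hunks above; I haven't tested it against the actual patch):

    while (next_unskippable_block < nblocks &&
           !FORCE_CHECK_PAGE(next_unskippable_block))
    {
        uint8       vmstatus;

        vmstatus = visibilitymap_get_status(onerel, next_unskippable_block,
                                            &vmbuffer);

        /* Count all-frozen pages in one place, before the branches. */
        if (vmstatus & VISIBILITYMAP_ALL_FROZEN)
            n_all_frozen++;

        if (aggressive)
        {
            if ((vmstatus & VISIBILITYMAP_ALL_FROZEN) == 0)
                break;
        }
        else
        {
            if ((vmstatus & VISIBILITYMAP_ALL_VISIBLE) == 0)
                break;
        }
        vacuum_delay_point();
        next_unskippable_block++;
    }

    /* Derive the skip count once, instead of incrementing per iteration. */
    n_skipped = next_unskippable_block - blkno; /* or next_unskippable_block
                                                 * alone for the initial scan */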

Thank you,
Rahila Syed



Re: Optimization for lazy_scan_heap

From: Michael Paquier

On Mon, Sep 26, 2016 at 5:26 AM, Rahila Syed <rahilasyed90@gmail.com> wrote:
> Some initial comments on optimize_lazy_scan_heap_v2.patch.

Seems worth pursuing. Marking as returned with feedback because of
lack of activity and some basic reviews sent.
-- 
Michael



Re: Optimization for lazy_scan_heap

From: Masahiko Sawada

On Mon, Oct 3, 2016 at 10:59 AM, Michael Paquier
<michael.paquier@gmail.com> wrote:
> On Mon, Sep 26, 2016 at 5:26 AM, Rahila Syed <rahilasyed90@gmail.com> wrote:
>> Some initial comments on optimize_lazy_scan_heap_v2.patch.
>
> Seems worth pursuing. Marking as returned with feedback because of
> lack of activity and some basic reviews sent.

Thank you for reviewing this patch, and sorry for my late reply.

I've measured the performance improvement of this patch, but the
results show that it improves VACUUM FREEZE performance by only about
30 seconds on a 32TB table. I don't think this patch is worth pursuing
any further, so I'd like to withdraw it.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center