Thread: Exclusive Locks on Insert into large Logging tables


Exclusive Locks on Insert into large Logging tables

From: Peter Neave
Date:

Hi,

I’ve been trying to get down to the source of some locks. I’ve enabled log_lock_waits and had a script monitor the log file and dump a query’s output to a file whenever it detects a lock wait. From there, I’ve been able to determine that, intermittently, the largest tables in my database have ExclusiveLocks on them for 1-5 seconds (for example: LOG:  process 116665 acquired ExclusiveLock on extension of relation 59049887 of database 59049867 after 5838.015 ms).

I’m running PG10. The machine has 16GB of RAM.

The tables are mainly log tables. The queries that are waiting and those blocking are both INSERT statements. The tables are 41GB, 34GB and 33GB and contain many years of historical logs.

My question is: why would locking prevent an insertion? From what I’ve read (https://www.citusdata.com/blog/2018/02/15/when-postgresql-blocks/), INSERT statements should not block other INSERTs.

Would partitioning the table help? Or would having a smaller record set mean fewer locks?

Thanks, Peter

 

--------------------------------------------------------------------------------

 

P.S. If anyone else would like to use this approach, I’ve added the code here. The original SELECT query is from the PGWatch2 project.

 

Enable in Postgres Config

log_lock_waits = on

 

-----BEGIN CODE - getlocks.sql-----
SELECT
    (extract(epoch from now()) * 1e9)::int8 AS epoch_ns,
    now()                      AS time_stamp,
    waiting.locktype           AS tag_waiting_locktype,
    waiting_stm.usename        AS tag_waiting_user,
    coalesce(waiting.mode, 'null'::text) AS tag_waiting_mode,
    coalesce(waiting.relation::regclass::text, 'null') AS tag_waiting_table,
    waiting_stm.query          AS waiting_query,
    waiting.pid                AS waiting_pid,
    other.locktype             AS other_locktype,
    other.relation::regclass   AS other_table,
    other_stm.query            AS other_query,
    other.mode                 AS other_mode,
    other.pid                  AS other_pid,
    other_stm.usename          AS other_user
FROM
    pg_catalog.pg_locks AS waiting
JOIN
    public.get_stat_activity() AS waiting_stm
    ON waiting_stm.pid = waiting.pid
JOIN
    pg_catalog.pg_locks AS other
    ON (
        (
            waiting."database" = other."database"
        AND waiting.relation  = other.relation
        )
        OR waiting.transactionid = other.transactionid
    )
JOIN
    public.get_stat_activity() AS other_stm
    ON other_stm.pid = other.pid
WHERE
    NOT waiting.granted
    AND waiting.pid <> other.pid
    AND other.granted
    AND waiting_stm.datname = current_database();
-----END CODE-----

 

-----BEGIN CODE - monitor.sh-----
#!/usr/bin/env bash
# Strip the trailing semicolon so the query can be embedded in COPY ( ... ).
query=$(sed -e 's/;$//' getlocks.sql)

# Write an initial snapshot with a CSV header row.
psql -U postgres -d the_database --pset="footer=off" \
  -c "COPY ( $query ) TO STDOUT WITH CSV HEADER" > locks.csv

# Follow the Postgres log; whenever a lock message appears, append a snapshot.
tail -fn0 /var/log/postgresql/postgresql-10-main.log | \
  while read -r line; do
    if echo "$line" | grep --color -E '^.*LOG:.*\w*Lock\b.*$'; then
      psql -U postgres -d the_database --pset="footer=off" \
        -c "COPY ( $query ) TO STDOUT WITH CSV" >> locks.csv
    fi
  done
-----END CODE-----

 

 




Jim's Group
 Peter Neave | DevOps Lead | Peter.Neave@jims.net

 Jim's Group Pty Ltd
 48 Edinburgh Rd | Mooroolbark | VIC, 3138
 P 1300 130 490 | Intl +61 3 8419 2910



Re: Exclusive Locks on Insert into large Logging tables

From: Laurenz Albe
Date:
Peter Neave wrote:
> I’ve been trying to get down to the source of some locks. I’ve enabled log_lock_waits
> and had a script monitor the log file and run a query to a file
> when it detects locks. From there, I’ve been able to determine that intermittently
> the largest tables in my database have ExclusiveLocks on them for up to 1-5
> seconds (for example:
> LOG:  process 116665 acquired ExclusiveLock on extension of relation 59049887 of database 59049867 after 5838.015 ms).
>  
> I’m running PG10. The machine has 16GB of RAM.
>  
> The tables are mainly log tables. The queries that are waiting and those blocking
> are both insert statements. The tables are 41GB, 34GB and 33GB and contain many
> years of historical logs.
>  
> My question is: why would locking prevent an insertion? From what I’ve read
> (https://www.citusdata.com/blog/2018/02/15/when-postgresql-blocks/), insert
> statements should not block other inserts.

Exclusive locks conflict with everything but SELECTs, see
https://www.postgresql.org/docs/current/explicit-locking.html#TABLE-LOCK-COMPATIBILITY

This particular exclusive lock is taken when the relation has to be extended
with new blocks because there is no more room for the new row in the
existing blocks of the table.

The table extension lock is queued behind all other INSERTs that came
earlier, so it takes 5 seconds for those to finish. I can't tell if that is
because there are so many of them queued or because the transactions are kept
open longer than necessary.

> Would partitioning the table help? Or would having a smaller record set mean fewer locks?

Perhaps, if the INSERTs affect different partitions.

I'd check whether the inserting transactions take longer than strictly necessary.
The shorter they are, the shorter the extension lock has to queue.
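
As a minimal sketch of PostgreSQL 10's declarative partitioning (table, column and partition names are hypothetical), a log table could be range-partitioned by timestamp. Note this only relieves extension-lock contention if concurrent INSERTs actually land in different partitions:

```sql
-- Hypothetical log table, range-partitioned by month (PostgreSQL 10+).
CREATE TABLE app_log (
    logged_at  timestamptz NOT NULL,
    message    text
) PARTITION BY RANGE (logged_at);

-- In PG10 each partition must be created explicitly
-- (PostgreSQL 11 adds a default partition).
CREATE TABLE app_log_2019_02 PARTITION OF app_log
    FOR VALUES FROM ('2019-02-01') TO ('2019-03-01');

-- Rows are routed to the matching partition automatically.
INSERT INTO app_log VALUES (now(), 'example entry');
```

Each partition is its own physical relation, so it is extended (and extension-locked) independently of its siblings.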

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



Re: Exclusive Locks on Insert into large Logging tables

From: Tom Lane
Date:
Laurenz Albe <laurenz.albe@cybertec.at> writes:
> Peter Neave wrote:
>> I’ve been trying to get down to the source of some locks. ...
>> LOG:  process 116665 acquired ExclusiveLock on extension of relation 59049887 of database 59049867 after 5838.015 ms).

> Exclusive locks conflict with everything but SELECTs, see
> https://www.postgresql.org/docs/current/explicit-locking.html#TABLE-LOCK-COMPATIBILITY

True, but not very relevant, because:

> This particular exclusive lock is taken when the relation has to be extended
> with new blocks because there is no more room for the new row in the
> existing blocks of the table.

Yeah.  This is an implementation-level lock that is independent of
anything you can take at the SQL level.  It's needed because only one
backend can extend (physically add space to) a table at a time.
If you have lots of transactions all trying to insert into one table
concurrently, some contention for that lock is to be expected.
Blocking for ~6 sec is pretty darn horrid though --- the expected
lock hold time is only what's needed to write out a page full of
zeroes.
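
One way to see whether backends are stuck on exactly this lock: since PostgreSQL 9.6, a session waiting on relation extension is reported in pg_stat_activity with wait_event_type = 'Lock' and wait_event = 'extend'. A sketch of a check query:

```sql
-- Sessions currently waiting on a relation extension lock (PostgreSQL 9.6+).
SELECT pid,
       query,
       now() - query_start AS running_for
FROM pg_stat_activity
WHERE wait_event_type = 'Lock'
  AND wait_event = 'extend';
```

Sampling this during a slow period shows how many INSERTs are queued on the extension at once.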

We have heard reports of excessive contention for relation extension
locks before, and there's been work done to mitigate that by extending
the relation more than one block at a time.  Maybe you're hitting a
case where that doesn't work well?

Or maybe the problem is just that you've got under-specced mass
storage?  If the filesystem goes out to lunch when asked to do
a lot of writing, it'd very possibly result in this symptom.

            regards, tom lane


RE: Exclusive Locks on Insert into large Logging tables

From: Peter Neave
Date:

> We have heard reports of excessive contention for relation extension locks before, and there's been work done to mitigate that by extending the relation more than one block at a time.  Maybe you're hitting a case where that doesn't work well?
One of the larger tables holds a lot of binary data and for that table the relation that is exclusively locked is the TOAST table.
The other tables are log tables. All three tables have a column that is 'text'. Would that make a difference?

> Or maybe the problem is just that you've got under-specced mass storage?  If the filesystem goes out to lunch when asked to do a lot of writing, it'd very possibly result in this symptom.
We are running on an 8000 IOPS drive. Partitioning is definitely on the roadmap. Log records from 10+ years ago really need to be moved onto a slower-specced tablespace.
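
For reference, moving cold data onto slower storage can be sketched as follows (tablespace name and filesystem path are hypothetical; note that SET TABLESPACE rewrites the relation and holds an ACCESS EXCLUSIVE lock while it runs):

```sql
-- Hypothetical tablespace on slower storage for historical log data.
CREATE TABLESPACE slow_logs LOCATION '/mnt/slow_disk/pg_tablespace';

-- Move an old table (or partition) there; this rewrites the relation
-- under an ACCESS EXCLUSIVE lock, so schedule it off-peak.
ALTER TABLE app_log_2009_01 SET TABLESPACE slow_logs;
```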




-----Original Message-----
From: Tom Lane <tgl@sss.pgh.pa.us>
Sent: Tuesday, 19 February 2019 12:58 PM
To: Laurenz Albe <laurenz.albe@cybertec.at>
Cc: Peter Neave <Peter.Neave@jims.net>; pgsql-novice@lists.postgresql.org
Subject: Re: Exclusive Locks on Insert into large Logging tables
