[JDBC] Advisory lock deadlock issue using PG JDBC driver

From David Rosenstrauch
Subject [JDBC] Advisory lock deadlock issue using PG JDBC driver
Msg-id 097e202c-1b75-2422-0e30-77009b4a5d30@darose.net
Replies Re: [JDBC] Advisory lock deadlock issue using PG JDBC driver
Re: Advisory lock deadlock issue using PG JDBC driver
List pgsql-jdbc
I'm running a Spark job that is writing to a postgres db (v9.6), using
the JDBC driver (v42.0.0), and running into a puzzling error:

2017-06-06 16:05:17.718 UTC [36661] dmx@dmx ERROR:  deadlock detected
2017-06-06 16:05:17.718 UTC [36661] dmx@dmx DETAIL:  Process 36661 waits for ExclusiveLock on advisory lock [16649,0,102440,5]; blocked by process 36662.
        Process 36662 waits for ExclusiveLock on advisory lock [16649,0,102439,5]; blocked by process 36661.
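
If I'm reading the pg_locks docs right, the four bracketed numbers map to
its database, classid, objid, and objsubid columns for locktype =
'advisory'.  A quick way to watch these locks while the job runs would be
something along these lines (connection details are made up; the query
itself is just a standard pg_locks select):

import java.sql.*;

public class AdvisoryLockPeek {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        String url = "jdbc:postgresql://localhost:5432/dmx";
        try (Connection conn = DriverManager.getConnection(url, "dmx", "secret");
             Statement st = conn.createStatement();
             // pg_locks lists every advisory lock currently held or awaited,
             // with the holder's pid and whether the lock has been granted.
             ResultSet rs = st.executeQuery(
                 "SELECT pid, granted, database, classid, objid, objsubid " +
                 "FROM pg_locks WHERE locktype = 'advisory'")) {
            while (rs.next()) {
                System.out.printf("pid=%d granted=%b key=[%d,%d,%d,%d]%n",
                    rs.getLong("pid"), rs.getBoolean("granted"),
                    rs.getLong("database"), rs.getLong("classid"),
                    rs.getLong("objid"), rs.getInt("objsubid"));
            }
        }
    }
}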

However, I can't for the life of me figure out a) how the advisory locks
are being acquired (I'm not requesting them myself), and b) how to fix
the issue.


A bit of background about my setup.  My Spark job obviously runs as
multiple processes on multiple machines, and each process performs its
writes to pgsql using the JDBC driver.  The writes are performed a) as
PG UPSERTs (INSERT ... ON CONFLICT DO UPDATE), b) as JDBC batches, and
c) using JDBC prepared statements.  So each process, when it's time to
write to the db, creates several prepared statements, adds a bunch of
UPSERTs to each one (i.e., each prepared statement carries a batch of a
few hundred UPSERTs), and then calls executeBatch() on each statement to
perform the write.  That executeBatch() call is where I'm hitting the
error.
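
In code, the write path looks roughly like this (table and column names
are stand-ins for illustration; the real job follows the same pattern):

import java.sql.*;

public class UpsertWriter {
    // Upsert via INSERT ... ON CONFLICT, the standard PG 9.5+ idiom.
    private static final String UPSERT_SQL =
        "INSERT INTO user_stats (user_id, metric, value) " +
        "VALUES (?, ?, ?) " +
        "ON CONFLICT (user_id, metric) DO UPDATE SET value = EXCLUDED.value";

    static void writeBatch(Connection conn, Iterable<Row> rows) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(UPSERT_SQL)) {
            for (Row r : rows) {
                ps.setLong(1, r.userId);
                ps.setString(2, r.metric);
                ps.setLong(3, r.value);
                ps.addBatch();             // accumulate a few hundred UPSERTs
            }
            ps.executeBatch();             // <-- this is the call that deadlocks
        }
    }

    // Minimal stand-in for whatever the Spark job actually emits.
    static final class Row {
        final long userId; final String metric; final long value;
        Row(long userId, String metric, long value) {
            this.userId = userId; this.metric = metric; this.value = value;
        }
    }
}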

In theory, since there are multiple processes issuing these batched DB
writes, there could be a record-locking problem if, say, 2 processes
tried to update the same user record.  But in practice that shouldn't be
possible.  Spark partitions everything by a key - in my case userID - so
all DB writes for a given user should happen in the same process.  At
worst a single batch might contain multiple UPSERTs to the same user
record - but never updates to the same user from different processes.
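
Concretely, the routing looks like this (sketched in Dataset terms; the
column name and details are stand-ins):

import org.apache.spark.api.java.function.ForeachPartitionFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

public class PartitionedWrite {
    // All records for a given user land in one partition, so only one task
    // should ever be writing a given user's row - or so the theory goes.
    static void write(Dataset<Row> updates) {
        updates.repartition(updates.col("user_id"))
               .foreachPartition((ForeachPartitionFunction<Row>) rows -> {
                   // Open one JDBC connection per partition and feed the rows
                   // through batched prepared-statement UPSERTs, as above.
                   while (rows.hasNext()) {
                       Row r = rows.next();
                       // ... add to a PreparedStatement batch ...
                   }
               });
    }
}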


So, I'm very puzzled by that deadlock error.  Specifically:

* How can there even be 2 PG backends contending over the same pair of
advisory locks?  Spark's partitioning should ensure that all updates to
a given user record get routed to the same process, so this situation
shouldn't arise in the first place.

* How/why am I winding up acquiring advisory locks in the first place?
I'm never requesting them - the only way I know to do that explicitly is
sketched at the end of this message, and nothing like it appears in my
code.  Is the PG JDBC driver taking them automatically for some reason?
(E.g., does the driver use advisory locks with prepared statements?
With batches?  With UPSERTs?)

And, last but not least:

* How do I resolve this "waits for ExclusiveLock on advisory lock"
issue?  There's precious little info available on exactly what that
error message means and how to fix it.
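
For reference, the explicit way to take an advisory lock through JDBC
would look like the sketch below - pg_advisory_xact_lock() with an
application-chosen key - and nothing of the sort exists anywhere in my
code:

import java.sql.*;

public class ExplicitAdvisoryLock {
    // Transaction-scoped advisory lock, released automatically at commit
    // or rollback.  Assumes autoCommit is off; otherwise the lock is gone
    // as soon as the statement's implicit transaction ends.
    static void lockKey(Connection conn, long key) throws SQLException {
        try (PreparedStatement ps =
                 conn.prepareStatement("SELECT pg_advisory_xact_lock(?)")) {
            ps.setLong(1, key);  // arbitrary application-chosen lock key
            ps.execute();
        }
    }
}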


Any help or pointers greatly appreciated!

Thanks,

DR

