Discussion: max_locks_per_transactions ...


max_locks_per_transactions ...

From:
Hans-Juergen Schoenig
Date: Thu, 2007-02-01 09:15 +0100
Right now max_locks_per_transaction defines the average number of locks 
taken by a transaction; thus, shared memory is limited to 
max_locks_per_transaction * (max_connections + max_prepared_transactions).
This is basically perfect. However, recently we have seen a couple of 
people having trouble with this. Partitioned tables are becoming more 
and more popular, so it is very likely that a single transaction eats 
up a great deal of shared memory. Some people with a lot of data create 
daily tables; if done for 3 years, that is already more than 1,000 locks 
per inheritance structure.
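
To make the arithmetic concrete, here is a quick sizing sketch using 
common default values (the numbers are assumptions, not taken from any 
particular installation):

    -- shared lock table capacity = max_locks_per_transaction *
    --                (max_connections + max_prepared_transactions)
    SHOW max_locks_per_transaction;   -- assume the default: 64
    SHOW max_connections;             -- assume the default: 100
    SHOW max_prepared_transactions;   -- assume: 5
    -- capacity: 64 * (100 + 5) = 6720 lock slots shared by ALL backends;
    -- one scan over ~3700 partitions already uses more than half of it.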

I wonder if it would make sense to split max_locks_per_transaction into 
two variables: max_locks (global size) and max_transaction_locks (local 
size). If set properly, this would prevent "good" short-running 
transactions from running out of shared memory when some "evil" 
long-running transactions start to suck up shared memory.
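
For illustration, a hypothetical configuration sketch of that split 
(neither parameter exists in PostgreSQL; the names simply follow the 
suggestion above):

    -- in postgresql.conf (hypothetical parameters):
    --   max_locks = 10000             # global size of the shared lock pool
    --   max_transaction_locks = 2000  # ceiling for any single transaction
    -- a lock-hungry session could then raise its own ceiling up front:
    SET max_transaction_locks = 5000;  -- hypothetical; today this errors
                                       -- with "unrecognized configuration
                                       -- parameter"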

If people find this useful, we would gladly implement this new feature 
for 8.3.
   many thanks,
      hans

-- 
Cybertec Geschwinde & Schönig GmbH
Schöngrabern 134; A-2020 Hollabrunn
Tel: +43/1/205 10 35 / 340
www.postgresql.at, www.cybertec.at



Re: max_locks_per_transactions ...

From:
"Simon Riggs"
Date:
On Thu, 2007-02-01 at 09:15 +0100, Hans-Juergen Schoenig wrote:
> Right now max_locks_per_transaction defines the average number of locks 
> taken by a transaction; thus, shared memory is limited to 
> max_locks_per_transaction * (max_connections + max_prepared_transactions).
> This is basically perfect. However, recently we have seen a couple of 
> people having trouble with this. Partitioned tables are becoming more 
> and more popular, so it is very likely that a single transaction eats 
> up a great deal of shared memory. Some people with a lot of data create 
> daily tables; if done for 3 years, that is already more than 1,000 locks 
> per inheritance structure.
> 
> I wonder if it would make sense to split max_locks_per_transaction into 
> two variables: max_locks (global size) and max_transaction_locks (local 
> size). If set properly, this would prevent "good" short-running 
> transactions from running out of shared memory when some "evil" 
> long-running transactions start to suck up shared memory.

Do partitioned tables use a lock even when they are removed from the
plan as a result of constraint_exclusion? I thought not. So you have
lots of concurrent multi-partition scans.
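
One way to check empirically (a sketch; run it while a multi-partition 
scan is active in another session) is to count the relation locks each 
backend currently holds:

    -- Relation locks held per backend, busiest first.
    SELECT pid, count(*) AS relation_locks
      FROM pg_locks
     WHERE locktype = 'relation'
     GROUP BY pid
     ORDER BY relation_locks DESC;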

I'm not sure I understand your suggestion. It sounds like you want to
limit the number of locks an individual backend can take, which simply
makes the partitioned queries fail, no?

Perhaps we should just set the default higher?

-- 
Simon Riggs
EnterpriseDB   http://www.enterprisedb.com




Re: max_locks_per_transactions ...

From:
Hans-Juergen Schoenig
Date:
Simon Riggs wrote:
> On Thu, 2007-02-01 at 09:15 +0100, Hans-Juergen Schoenig wrote:
>> Right now max_locks_per_transaction defines the average number of locks 
>> taken by a transaction; thus, shared memory is limited to 
>> max_locks_per_transaction * (max_connections + max_prepared_transactions).
>> This is basically perfect. However, recently we have seen a couple of 
>> people having trouble with this. Partitioned tables are becoming more 
>> and more popular, so it is very likely that a single transaction eats 
>> up a great deal of shared memory. Some people with a lot of data create 
>> daily tables; if done for 3 years, that is already more than 1,000 locks 
>> per inheritance structure.
>>
>> I wonder if it would make sense to split max_locks_per_transaction into 
>> two variables: max_locks (global size) and max_transaction_locks (local 
>> size). If set properly, this would prevent "good" short-running 
>> transactions from running out of shared memory when some "evil" 
>> long-running transactions start to suck up shared memory.
>
> Do partitioned tables use a lock even when they are removed from the
> plan as a result of constraint_exclusion? I thought not. So you have
> lots of concurrent multi-partition scans.

Maybe I was a bit imprecise before; let me clarify.
The application we are talking about contains more than 1 TB of data. 
The main table (about 90% of the data) is partitioned into about 3,700 
subtables. For this kind of application this makes perfect sense, as 
subsets of the data (= individual subtables) change frequently.

Two types of queries are executed by the system:
   - short OLTP operations adding data to the huge tables
   - a large set of analysis jobs which torture the database with more 
complex queries

The main issue is that, to a large extent, those analysis queries have 
to run concurrently. If there are many concurrent operations which need 
this partitioned structure, the number of locks grows quite fast (in 
this case more than 3,700 locks per transaction). So it can happen that 
we run out of shared memory inside some OLTP transaction just because 
too many background processes are sucking up shared memory.
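
A minimal sketch of the failure mode (the parent table "events" and its 
daily partitions are made up for illustration):

    BEGIN;
    -- A scan over the parent takes an AccessShareLock on every partition
    -- it touches, filling up the shared lock table:
    SELECT count(*) FROM events;
    COMMIT;
    -- With several such backends active, an unrelated small transaction
    -- can then fail with:
    --   ERROR:  out of shared memory
    --   HINT:  You might need to increase max_locks_per_transaction.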

Of course it would be simple to pump up max_locks_per_transaction, but 
this is not the point. The idea is rather that max_locks_per_transaction 
is a somewhat obscure way of putting things; many people are simply 
misled by it. Most people assume that it is indeed a per-transaction 
limit, and then they are surprised when a transaction which hardly needs 
any locks fails.

I would suggest replacing the existing parameter with something else:
   - a switch to define the global size of the lock pool (e.g. "max_locks")
   - a switch which defines the upper limit for the current backend / 
transaction

We could make a transaction fail when it takes too many locks. The 
advantage would be that the transaction causing the problem fails, and 
not some other "innocent" small operation.
   best regards,
      hans


-- 
Cybertec Geschwinde & Schönig GmbH
Schöngrabern 134; A-2020 Hollabrunn
Tel: +43/1/205 10 35 / 340
www.postgresql.at, www.cybertec.at



Re: max_locks_per_transactions ...

From:
Tom Lane
Date:
Hans-Juergen Schoenig <postgres@cybertec.at> writes:
> I would suggest replacing the existing parameter with something else:
>     - a switch to define the global size of the lock pool (e.g. "max_locks")
>     - a switch which defines the upper limit for the current backend / 
> transaction

The problem with that is that it's pretty much guaranteed to break
pg_dump, as pg_dump always needs a lot of locks.  We could perhaps
change pg_dump to increase its limit value (assuming that that's not a
privileged operation), but the fact that a counterexample is so handy
makes me doubt that this is a better design than what we have.
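
For illustration, pg_dump's lock appetite is easy to observe: it takes 
an AccessShareLock on every table it dumps. A sketch (the database name 
is made up; run the query from a second session while the dump is 
active):

    -- session 1 (shell):  pg_dump -s bigdb > /dev/null
    -- session 2, while the dump runs:
    SELECT count(*) AS relation_locks
      FROM pg_locks
     WHERE locktype = 'relation';
    -- roughly one lock per dumped table, i.e. ~3700 in the scenario above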
        regards, tom lane