Discussion: repeated out of shared memory error - not related to max_locks_per_transaction


repeated out of shared memory error - not related to max_locks_per_transaction

From:
"Alfonso Moscato"
Date:
We are going crazy with "out of shared memory" errors, and we can't figure
out the reason.
We are running "PostgreSQL 9.6.9 on x86_64-pc-linux-gnu (Ubuntu
9.6.9-2.pgdg16.04+1), compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0
20160609, 64-bit". The server has 92 GB of RAM and runs a mixed environment
(mostly OLTP, with some DW), with 100 sessions constantly open (a CRM) and
some long queries running every half an hour.
Everything works fine, except that after a day and a half we start receiving a
lot of "out of shared memory" messages.
I am sure it is not related to the usual max_locks_per_transaction issue,
because we have set max_locks_per_transaction to 384, and when we receive
these messages we have no more than 50-100 locks in total.
Restarting the server usually works fine for another day and a half, and then
the messages start again.
Looking at the log, we see that this error starts at random, sometimes on
very small queries returning a few kilobytes of data.
We have tried a lot of different configurations, including the suggestions
from pgtune and pgconfig 2.0.

Currently, we have: 
max_connections = 200
shared_buffers = 23GB
effective_cache_size = 69GB
maintenance_work_mem = 2GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 4
effective_io_concurrency = 2
work_mem = 60293kB
min_wal_size = 2GB
max_wal_size = 4GB
max_worker_processes = 4
max_parallel_workers_per_gather = 2
max_locks_per_transaction = 384
but we have also tried work_mem at 130MB, shared_buffers up to a maximum of
40GB, and effective_cache_size down to 4GB

shared memory limits are very big: 
max number of segments = 4096
max seg size (kbytes) = 18014398509465599
max total shared memory (kbytes) = 18014398442373116
min seg size (bytes) = 1

thanks




Re: repeated out of shared memory error - not related to max_locks_per_transaction

From:
066ce286@free.fr
Date:
Hi,

Try running:

ipcs -m
(and also ipcs -pm)

to check which processes are using the shm.







Re: repeated out of shared memory error - not related to max_locks_per_transaction

From:
Fabio Pardi
Date:

Hi Alfonso,

It is best if you post the sysctl settings with their names. Could you do that, to avoid confusion?


If I recall correctly, this setting is not OK:

max number of segments = 4096

You are referring to 'kernel.shmall', right?

SHMALL should always be at least shmmax/PAGE_SIZE.
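
As a worked check against the values posted later in the thread (an illustration only, assuming the usual PAGE_SIZE of 4096 bytes), the SHMALL floor would be:

-- shmmax from this thread divided by an assumed PAGE_SIZE of 4096
SELECT round(18446744073692774399::numeric / 4096) AS min_shmall_pages;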



Also, some other settings are not quite OK, like:

max seg size (kbytes) = 18014398509465599
max total shared memory (kbytes) = 18014398442373116

(and I think those are expressed in bytes, perhaps?) but those two should not be the cause of your out of memory.

Please post effective_cache_size too, to be accurate.


regards,

fabio pardi










Re: repeated out of shared memory error - not related to max_locks_per_transaction

From:
"Alfonso Moscato"
Date:

Hi Fabio,

thanks for your answer.

The output of sysctl is:

kernel.sem = 32000      1024000000      500     32000

kernel.sem_next_id = -1

kernel.sg-big-buff = 32768

kernel.shm_next_id = -1

kernel.shm_rmid_forced = 0

kernel.shmall = 18446744073692774399

kernel.shmmax = 18446744073692774399

kernel.shmmni = 4096

 

effective_cache_size is 69GB (it was in the third row of my post)

 

this is the output of ipcs -lm:

------ Shared Memory Limits --------

max number of segments = 4096

max seg size (kbytes) = 18014398509465599

max total shared memory (kbytes) = 18014398442373116

min seg size (bytes) = 1

 

I agree they should be bytes, as the number is very big, but ipcs says kbytes.

 

Regards

Alfonso Moscato

 


Re: repeated out of shared memory error - not related to max_locks_per_transaction

From:
"Alfonso Moscato"
Date:
I did.
This is a dedicated server.

root@pg:/var/log/postgresql# ipcs -pm

------ Shared Memory Creator/Last-op PIDs --------
shmid      owner        cpid       lpid
131072     postgres   12879      15299

root@pg:/var/log/postgresql# ipcs -m

------ Shared Memory Segments --------
key        shmid      owner        perms      bytes      nattch     status
0x0052e2c1 131072     postgres   600        56         83







Re: repeated out of shared memory error - not related to max_locks_per_transaction

From:
"Campbell, Lance"
Date:
I would also look up the definitions of shared_buffers and effective_cache_size. If I remember correctly, you can think of
shared buffers as how much memory in total PostgreSQL has to work with. Effective cache is how much memory is available
for PostgreSQL to run (shared buffers) plus an estimate of how much memory is available to the OS to cache files
in memory. So effective cache should be equal to or larger than shared buffers. Effective cache is used to help with
SQL planning.

Double check the documentation.

Lance

Sent from my iPad
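
A quick way to double-check those values on the running server (a sketch using the standard pg_settings view):

SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'effective_cache_size', 'work_mem');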



Re: repeated out of shared memory error - not related to max_locks_per_transaction

From:
MichaelDBA
Date:
Perhaps your "work_mem" setting is causing the memory problems.  Try reducing it to see if that alleviates the problem.

Regards,
Michael Vitale




Re: repeated out of shared memory error - not related to max_locks_per_transaction

From:
"Alfonso Moscato"
Date:
I experimented a lot with shared_buffers, but I haven't seen any difference.
With shared_buffers set to 16GB I start seeing errors after a day and a half,
and with 40GB they start at exactly the same moment.
One would expect that if something is consuming shared memory, setting it to
more than double would make some difference, but this isn't the case.
Alfonso




Re: repeated out of shared memory error - not related to max_locks_per_transaction

From:
"Alfonso Moscato"
Date:

I thought it could have been.

work_mem was 131MB and I reduced it to 60MB. Errors keep starting after a day and a half.

Regards

alfonso

 


Re: repeated out of shared memory error - not related to max_locks_per_transaction

From:
MichaelDBA
Date:
Try reducing it further just to make sure.




Re: repeated out of shared memory error - not related to max_locks_per_transaction

From:
"Alfonso Moscato"
Date:

I took the setting from pgtune:

https://pgtune.leopard.in.ua/#/

 

Anyway, I will try to further reduce it.

It is high because we have a very complex database, with queries joining a lot of tables and millions of records.

Furthermore, I see a lot of free memory.

 


Re: repeated out of shared memory error - not related to max_locks_per_transaction

From:
Fabio Pardi
Date:

work_mem cannot be the cause of it, for the simple reason that if the memory needed by your query overflows work_mem, it will spill to disk.


regards,

fabio pardi
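
Those spills can be made visible in the logs (a sketch; log_temp_files = 0 logs every temporary file together with its size):

ALTER SYSTEM SET log_temp_files = 0;
SELECT pg_reload_conf();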







Re: repeated out of shared memory error - not related to max_locks_per_transaction

From:
MichaelDBA
Date:
I do not think that is true. Stuff only gets spilled to disk when a buffer would exceed the work_mem constraint. Those buffers are not constrained by what real memory is actually available, hence the memory error! They will try to allocate memory even if it is not available, as long as the work_mem threshold is not reached.

Regards,
Michael Vitale





Re: repeated out of shared memory error - not related to max_locks_per_transaction

From:
066ce286@free.fr
Date:
What are your values for:

autovacuum_max_workers
max_prepared_transactions
block_size
wal_block_size

?







Re: repeated out of shared memory error - not related to max_locks_per_transaction

From:
"Alfonso Moscato"
Date:
autovacuum_max_workers = 3
max_prepared_transactions = 0
block_size = 8192
wal_block_size = 8192







Re: repeated out of shared memory error - not related to max_locks_per_transaction

From:
Fabio Pardi
Date:

Nope Michael,

if 'stuff' gets spilled to disk, it does not end up in an error. It will silently write a file to disk for the time being and then delete it when your operation is finished.

Period.

Depending on your log settings, it might appear in the logs, under 'temporary file created..'.


regards,

fabio pardi







Re: repeated out of shared memory error - not related to max_locks_per_transaction

From:
MichaelDBA
Date:
Wrong again, Fabio. PostgreSQL is not coded to manage memory usage in the way you think it does with work_mem. Here is a quote from Citus about the dangers of setting work_mem too high:

When you consume more memory than is available on your machine you can start to see out of memory errors within your Postgres logs, or in worse cases the OOM killer can start to randomly kill running processes to free up memory. An out of memory error in Postgres simply errors on the query you're running, whereas the OOM killer in linux begins killing running processes which in some cases might even include Postgres itself.

When you see an out of memory error you either want to increase the overall RAM on the machine itself by upgrading to a larger instance OR you want to decrease the amount of memory that work_mem uses. Yes, you read that right: on out-of-memory it's better to decrease work_mem instead of increase it, since that is the amount of memory that can be consumed by each process, and too many operations are leveraging up to that much memory.


https://www.citusdata.com/blog/2018/06/12/configuring-work-mem-on-postgres/

Regards,
Michael Vitale



Re: repeated out of shared memory error - not related to max_locks_per_transaction

From:
MichaelDBA
Date:
Here is another quote, from depesz, about the dangers of setting work_mem too high...

To show some perspective: let's assume you have some queries that do 4-10 sort/hash operations, and you have a 100 max_connections limit. This means that theoretically you can get up to 1000 x work_mem usage (100 connections, each running the 10-sort/hash query)!

The above means that while increasing work_mem is cool, you have to be sure not to create a situation where pg will try to allocate more memory than there physically is. A lot of "out of memory" errors reported to PostgreSQL mailing lists came from users who set their work_mem to large values like 512MB or more, and then ran some complex queries on a machine with a (relatively) small amount of RAM.
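
Plugging in the numbers from this thread (an illustration only, assuming roughly 5 sort/hash nodes per query):

-- hypothetical worst case: 200 connections, ~5 sort/hash nodes each,
-- work_mem = 60293kB (the values from this thread)
SELECT pg_size_pretty(200 * 5 * 60293::bigint * 1024);
-- roughly 58 GB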




Re: repeated out of shared memory error - not related to max_locks_per_transaction

From:
Fabio Pardi
Date:

Michael,

I think we are talking about two different scenarios.

1) A single operation uses more than work_mem -> it gets spilled to disk, like a big sort. That's what I mentioned.

2) There are many, many concurrent operations, and one more of them wants to allocate work_mem, but the memory on the server is exhausted at that point -> in that case you will get 'out of memory'. That's what you are referring to.


Given the description of the problem (RAM and Postgres settings) and the fact that Alfonso says that "there is a lot of free memory", I think it is unlikely that we are in the second situation described above.

regards,

fabio pardi









Re: repeated out of shared memory error - not related to max_locks_per_transaction

From:
Tom Lane
Date:
"Alfonso Moscato" <alfonso.moscato@merqurio.it> writes:
> We are getting crazy with "out of shared memory" errors, and we can't figure
> the reason.

I don't think any of the advice posted so far has anything to do with your
problem --- certainly, fooling with work_mem is unrelated.  PG shared
memory is a fixed-size arena (for any one setting of shared_buffers,
max_connections, max_locks_per_transaction, and a couple other variables)
and most of its contents are pre-allocated at postmaster start.  What you
are describing sounds like a long-term leak of additional, post-startup
shmem allocations, eventually running out of the available slop in the
shmem arena.  work_mem, and other user-visible knobs, have nothing to do
with this because those control allocations in process private memory
not shmem.

I'm pretty sure that the *only* post-startup shmem allocations in the core
code are for lock table entries.  However, if you're running any non-core
extensions, it's possible that one of them does such allocations and has
a logic error that results in a shmem leak.

As an amelioration measure, you could raise max_locks_per_transaction,
which will increase the arena size without actually eating any additional
space immediately at startup.  That might not cure the problem, but at
least it would increase the interval at which you have to restart the
server.
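
That amelioration step would look like this (a sketch; 1024 is only an example value, and the parameter takes effect only at server restart):

ALTER SYSTEM SET max_locks_per_transaction = 1024;
-- takes effect only after the postmaster is restarted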

As for real solutions, I'd first look harder at the question of how many
lock table entries you need.  The fact that you only see a few dozen
active entries when you look (after a failure) doesn't prove a thing about
what the max transient requirement is.  Do you have any applications that
touch a whole lot of tables in a single transaction?  Are you using any
user-defined (advisory) locks, and if so what's the usage pattern like for
those?
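
One way to chase those questions is to sample pg_locks while the heavy transactions are running (a sketch; both queries use the standard pg_locks view):

-- lock-table usage broken down by lock type
SELECT locktype, count(*)
FROM pg_locks
GROUP BY locktype
ORDER BY count(*) DESC;

-- advisory locks currently held
SELECT count(*) FROM pg_locks WHERE locktype = 'advisory';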

The "bug in an extension" theory also needs investigation.

            regards, tom lane


Re: repeated out of shared memory error - not related to max_locks_per_transaction

From:
"Alfonso Moscato"
Date:

Michael, Fabio,

moreover, I get the message "out of shared memory", not "out of memory".

Anyway, I can confirm that when the errors began there were more than 10GB of free memory.

Regards

Alfonso

 


Re: repeated out of shared memory error - not related to max_locks_per_transaction

From:
MichaelDBA
Date:
Coolio, work_mem is a red herring here. I think Fabio and I are on the same page now, after his last elaboration about work_mem.

Regards,
Michael Vitale




Re: repeated out of shared memory error - not related to max_locks_per_transaction

From:
"Alfonso Moscato"
Date:
Hi Tom,
I agree with you: there is something allocating and not releasing memory,
which is why the error only starts after many hours.
I have some extensions installed, namely:
"plpgsql"
"mysql_fdw"
"multicorn"
"pg_cron"
"tds_fdw"
"pg_stat_statements"
I have 200 connections and 384 max_locks_per_transaction in postgresql.conf,
which brings the total number of lock slots to 76,800.
I certainly have some very big queries, some with more than 50 tables,
subqueries, and so on.
As for your questions:
1) Lock table entries needed: we check the number of locks regularly, and
they don't usually seem to exceed a few hundred. I will create a cron job to
check them regularly (see the sketch below).
2) We have some very complex functions, touching many tables and returning
1/2 million rows.
3) We don't use any kind of custom (advisory) locks. The DB is set to the
read committed isolation level.
Just two questions:
How can I work out which extension is consuming shared memory? I can't
figure out how.
Is there some way, other than increasing max_locks_per_transaction, to get
Postgres to allocate more shared memory?
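
A sketch of the periodic check from point 1, using the pg_cron extension that is already installed (the lock_samples table name is made up for this example):

CREATE TABLE lock_samples (sampled_at timestamptz, total_locks bigint);
SELECT cron.schedule('* * * * *',
    $$INSERT INTO lock_samples SELECT now(), count(*) FROM pg_locks$$);

A once-a-minute sample can still miss very short spikes, but it gives a baseline for the transient lock requirement in question.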





R: repeated out of shared memory error - not related to max_locks_per_transaction

От
"Alfonso Moscato"
Дата:

Hi Evan,

I have tried shared_buffers from 8gb to 40gb, and max_locks_per_transaction from 128 to 384.

I don’t see relevant differences: errors start more or less after one day and half.

Regards

alfonso

 

Da: Evan Bauer <evanbauer@mac.com>
Inviato: venerdì 20 luglio 2018 16:40
A: Alfonso Moscato <alfonso.moscato@merqurio.it>
Cc: Fabio Pardi <f.pardi@portavita.eu>; MichaelDBA <MichaelDBA@sqlexec.com>; pgsql-admin@lists.postgresql.org
Oggetto: Re: repeated out of shared memory error - not related to max_locks_per_transaction

 

Alfonso,

 

There was a lot of buzz when 9.6 came out around getting the “Out of shared memory” message due to virtual lock space consumption.  I see that you have set a high value (max_locks_per_transaction = 384), but have you tried adjusting it to see if that either increases or decreases the amount of time before the messages appear and you have to restart pg?

 

Regards,

 

- Evan

 

Evan Bauer
eb@evanbauer.com
+1 646 641 2973
Skype: evanbauer



On Jul 20, 2018, at 10:23, Alfonso Moscato <alfonso.moscato@merqurio.it> wrote:

 

Michael, Fabio,

Moreover, I get the message “out of shared memory”, not “out of memory”.

Anyway, I can confirm that when the errors began there were more than 10GB of free memory.

Regards

Alfonso

 

Da: Fabio Pardi <f.pardi@portavita.eu> 
Inviato: venerdì 20 luglio 2018 15:57
A: MichaelDBA <MichaelDBA@sqlexec.com>
Cc: pgsql-admin@lists.postgresql.org
Oggetto: Re: repeated out of shared memory error - not related to max_locks_per_transaction

 

Michael,

I think we are talking about 2 different scenarios.

1) the single operation is using more than work_mem -> it gets spilled to disk, like a big sort. That's what I mentioned.

2) there are many concurrent operations, and one more of them wants to allocate work_mem, but the memory on the server is exhausted at that point -> in that case you will get 'out of memory'. That's what you are referring to.

 

Given the description of the problem (RAM and Postgres settings) and the fact that Alfonso says that "there is a lot of free memory", I think it is unlikely that we are in the second situation described above.


regards,

fabio pardi

 

 

 

 

On 20/07/18 15:28, MichaelDBA wrote:

Wrong again, Fabio. PostgreSQL is not coded to manage memory usage in the way you think it does with work_mem. Here is a quote from Citus about the dangers of setting work_mem too high.

When you consume more memory than is available on your machine you can start to see out of memory errors within your Postgres logs, or in worse cases the OOM killer can start to randomly kill running processes to free up memory. An out of memory error in Postgres simply errors on the query you're running, whereas the OOM killer in Linux begins killing running processes, which in some cases might even include Postgres itself.

When you see an out of memory error you either want to increase the overall RAM on the machine itself by upgrading to a larger instance, OR you want to decrease the amount of memory that work_mem uses. Yes, you read that right: on out-of-memory it's better to decrease work_mem instead of increasing it, since that is the amount of memory that can be consumed by each process, and too many operations are leveraging up to that much memory.


https://www.citusdata.com/blog/2018/06/12/configuring-work-mem-on-postgres/

Regards,
Michael Vitale


Friday, July 20, 2018 9:19 AM

Nope Michael,

if 'stuff' gets spilled to disk, it does not end up in an error. It will silently write a file to disk for the time being and then delete it when your operation is finished.

period.

Based on your log settings, it might appear in the logs as 'temporary file created...'.

 

regards,

fabio pardi

 

 

On 20/07/18 15:00, MichaelDBA wrote:

 

Friday, July 20, 2018 9:00 AM

I do not think that is true.  Stuff just gets spilled to disk when the work_mem buffers would exceed the work_mem constraint.  They are not constrained by what real memory is available, hence the memory error!  They will try to get memory even if it is not available, as long as the work_mem buffers threshold is not reached.

Regards,
Michael Vitale



Friday, July 20, 2018 8:47 AM

work_mem cannot be the cause of it for the simple reason that if the memory needed by your query overflows work_mem, it will spill to disk

 

regards,

fabio pardi

 

 

On 20/07/18 14:35, MichaelDBA wrote:

 

Friday, July 20, 2018 8:35 AM

Perhaps your "work_mem" setting is causing the memory problems.  Try reducing it to see if that alleviates the problem.

Regards,
Michael Vitale


Friday, July 20, 2018 8:32 AM

I would also look up the definitions of shared_buffers and effective_cache_size. If I remember correctly, you can think of shared_buffers as the memory PostgreSQL itself has to work with, while effective_cache_size is an estimate of the total memory available for caching: shared_buffers plus however much memory the OS has available to cache files. So effective_cache_size should be equal to or larger than shared_buffers. It is only used to help the SQL planner.

Double check the documentation. 

Lance

Sent from my iPad

 

 

 

 

Re: repeated out of shared memory error - not related to max_locks_per_transaction

От
Rui DeSousa
Дата:
What are the system's memory settings (cat /proc/meminfo)? How much swap space is allocated to the system? And, more
importantly, what's the CommitLimit?




R: repeated out of shared memory error - not related to max_locks_per_transaction

От
"Alfonso Moscato"
Дата:
Hi Rui.

root@pg:/etc/postgresql/9.6/main# cat /proc/meminfo
MemTotal:       92813728 kB
MemFree:        21016632 kB
MemAvailable:   80814496 kB
Buffers:          196328 kB
Cached:         67843352 kB
SwapCached:            0 kB
Active:         67040236 kB
Inactive:        3374184 kB
Active(anon):    7700380 kB
Inactive(anon):  2937896 kB
Active(file):   59339856 kB
Inactive(file):   436288 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       3905532 kB
SwapFree:        3905532 kB
Dirty:               496 kB
Writeback:             0 kB
AnonPages:       2374804 kB
Mapped:          6664260 kB
Shmem:           8263524 kB
Slab:             696048 kB
SReclaimable:     658492 kB
SUnreclaim:        37556 kB
KernelStack:        4464 kB
PageTables:       422148 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    50312396 kB
Committed_AS:   27622712 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       81856 kB
DirectMap2M:    94289920 kB

-----Messaggio originale-----
Da: Rui DeSousa <rui@crazybean.net>
Inviato: venerdì 20 luglio 2018 16:45
A: Alfonso Moscato <alfonso.moscato@merqurio.it>
Cc: Tom Lane <tgl@sss.pgh.pa.us>; pgsql-admin@lists.postgresql.org
Oggetto: Re: repeated out of shared memory error - not related to max_locks_per_transaction

What are the system's memory settings (cat /proc/meminfo)? How much swap space is allocated to the system? And, more
importantly, what's the CommitLimit?





Re: repeated out of shared memory error - not related to max_locks_per_transaction

От
Shreeyansh Dba
Дата:
Hi Alfonso,

Normally we see "out of shared memory" errors come up due to disk space or kernel parameter issues, and it seems your kernel parameters have default values.

Verify disk space, or set the kernel shared memory parameters based on your RAM size.
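For what it's worth, a quick way to inspect those limits on Linux (a sketch; these are the standard SysV sysctl names, and the suggested sizing is only a rule of thumb):

# show the current SysV shared memory limits (max segment bytes / total pages / segment count)
sysctl kernel.shmmax kernel.shmall kernel.shmmni
# to size them to RAM, set values in /etc/sysctl.conf and apply with: sysctl -p
# kernel.shmmax = <bytes, e.g. roughly half of RAM>
# kernel.shmall = <pages, i.e. shmmax / page size>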









Re: repeated out of shared memory error - not related to max_locks_per_transaction

От
Rui DeSousa
Дата:
Alfonso,

Do you happen to know how large the process is when it starts giving error messages, anywhere near 47GB?

— but we tried with work_mem to 130mb, shared_buffer to a maximum of 40gb,

Having shared_buffers at 40GB would be too high for the current configuration, and I would expect to see out of memory
errors given that your system commit limit is 47GB. However, with shared_buffers at 23GB I don't think you should be
hitting the 47GB limit — are you?

To increase the commit limit you can add more swap and/or adjust the overcommit ratio.
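For reference, the kernel derives that value as CommitLimit = SwapTotal + MemTotal * vm.overcommit_ratio / 100 (and enforces it only when vm.overcommit_memory = 2). Alfonso's numbers check out: 3905532 + 92813728 * 50/100 = 50312396 kB, exactly the CommitLimit in his /proc/meminfo. A sketch of the knobs involved:

# show the current overcommit policy and ratio
sysctl vm.overcommit_memory vm.overcommit_ratio
# raise the ratio (takes effect immediately; persist it in /etc/sysctl.conf)
sysctl -w vm.overcommit_ratio=80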






Re: R: repeated out of shared memory error - not related to max_locks_per_transaction

От
Tom Lane
Дата:
"Alfonso Moscato" <alfonso.moscato@merqurio.it> writes:
> I have some extensions installed, namely:
> "plpgsql"
> "mysql_fdw"
> "multicorn"
> "pg_cron"
> "tds_fdw"
> "pg_stat_statements"

plpgsql doesn't use any shared memory.  pg_stat_statements does but it's a
one-time, fixed-size allocation.  The others you mention perhaps deserve
investigation; I'm not familiar with their innards.

The fact that the problem seems to be related to the passage of time and
not to any particular application action might perhaps lead one to suspect
pg_cron.  But that's a leap of logic unsupported by any facts.

> I have 200 connections and 384 max_locks_per_transaction in postgresql.conf,
> which brings the total number of lock slots to 76,800.

Right, or more accurately, enough shmem space for that many locks will
be reserved at postmaster start.

> I have surely very big queries, some with more than 50 tables, subqueries,
> and so on.

Well, 50-table queries are not your problem, no matter how much data
they process.  Transactions that touch, say, 100000 different tables
might be your problem.  I've heard of people having to raise
max_locks_per_transaction in order to run pg_dump against databases
with lots of tables, for example.
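As a quick sanity check (a sketch; on 9.6, relkind 'r' covers ordinary tables):

# how many tables a pg_dump-style transaction might need to lock
psql -At -c "SELECT count(*) FROM pg_class WHERE relkind = 'r'"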

> As for your questions:
> 1) lock table entries needed: we check regularly the number of locks. It
> doesn't seem they usually exceed some hundreds. I will create a cron to
> check them regularly.

You would really need to catch the state of the lock table just when the
error is reported --- a check even a few seconds before that might not
show any distress.
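One crude way to do that (a sketch, assuming local passwordless psql access; the path is illustrative) is a tight polling loop rather than a once-a-minute cron:

# sample the lock table once a second so the state at the failure moment is captured
while sleep 1; do
  psql -At -c "SELECT now(), count(*) FROM pg_locks" >> /tmp/pg_locks_history.log
done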

> How could I understand which extension is consuming shared memory? I can't
> figure how.

AFAIK there isn't any good way except to check their source code for
ShmemAlloc* and ShmemInit* calls.

> Is there some way other than increasing max_locks_per_transaction to drive
> postgres to allocate more shared memory?

That's the only knob that wouldn't also produce a matching increase
in the size of postmaster-startup allocations.  There isn't any real
downside to raising it; the only effect is to increase the size of
the shmem segment.

            regards, tom lane


Re: repeated out of shared memory error - not related to max_locks_per_transaction

От
Andres Freund
Дата:
Hi,

On 2018-07-20 13:14:53 +0200, Alfonso Moscato wrote:
> We are getting crazy with "out of shared memory" errors, and we can't figure
> the reason.

What makes them "crazy"?

Could you please give the exact error message and statement it ran?

Are any of your sessions running with serializable isolation?

Greetings,

Andres Freund


Re: repeated out of shared memory error - not related to max_locks_per_transaction

От
Shreeyansh Dba
Дата:
In addition to this.

We have faced this same out of shared memory error in one of our customer environments, where we observed the same values shown below:

max seg size (kbytes) = 18014398509465599
max total shared memory (kbytes) = 18014398442373116

Because of those limits, memory allocation did not happen correctly at the process level, which resulted in out of memory errors. Setting new values sized to our RAM resolved the out of shared memory error.









R: repeated out of shared memory error - not related to max_locks_per_transaction

От
"Alfonso Moscato"
Дата:
Hi andres,
I said crazy because when it starts, there are many of them every few
seconds, and the only way to stop them is to restart Postgres.
All of our sessions run in read committed mode.
Regards
Alfonso

-----Messaggio originale-----
Da: Andres Freund <andres@anarazel.de>
Inviato: venerdì 20 luglio 2018 17:14
A: Alfonso Moscato <alfonso.moscato@merqurio.it>
Cc: pgsql-admin@lists.postgresql.org
Oggetto: Re: repeated out of shared memory error - not related to
max_locks_per_transaction


Hi,

On 2018-07-20 13:14:53 +0200, Alfonso Moscato wrote:
> We are getting crazy with "out of shared memory" errors, and we can't
> figure the reason.

What makes them "crazy"?

Could you please give the exact error message and statement it ran?

Are any of your sessions running with serializable isolation?

Greetings,

Andres Freund



R: repeated out of shared memory error - not related to max_locks_per_transaction

От
"Alfonso Moscato"
Дата:
Hi Rui,
we have 100 forked postgres processes, and I don't know how to determine if we are hitting the 47GB limit.
Any suggestion?


-----Messaggio originale-----
Da: Rui DeSousa <rui@crazybean.net>
Inviato: venerdì 20 luglio 2018 17:03
A: Alfonso Moscato <alfonso.moscato@merqurio.it>
Cc: Tom Lane <tgl@sss.pgh.pa.us>; pgsql-admin@lists.postgresql.org
Oggetto: Re: repeated out of shared memory error - not related to max_locks_per_transaction


Alfonso,

Do you happen to know how large the process is when it starts giving error messages, anywhere near 47GB?

— but we tried with work_mem to 130mb, shared_buffer to a maximum of 40gb,

Having shared_buffers at 40GB would be too high for the current configuration, and I would expect to see out of memory
errors given that your system commit limit is 47GB. However, with shared_buffers at 23GB I don't think you should be
hitting the 47GB limit — are you?

To increase the commit limit you can add more swap and/or adjust the overcommit ratio.







R: repeated out of shared memory error - not related to max_locks_per_transaction

От
"Alfonso Moscato"
Дата:

Definitely disk space is not an issue:

 

Filesystem     1K-blocks    Used Available Use% Mounted on

udev             46388372        0  46388372   0% /dev

tmpfs             9281376     9396   9271980   1% /run

/dev/sda1        30830588  1801120  27440324   7% /

tmpfs            46406864        4  46406860   1% /dev/shm

tmpfs                5120        0      5120   0% /run/lock

tmpfs            46406864        0  46406864   0% /sys/fs/cgroup

/dev/sdd1       464323324 71361120 369352916  17% /var

/dev/sdb2        29049840    45008  27506108   1% /tmp

tmpfs             9281376        0   9281376   0% /run/user/1005

tmpfs             9281376        0   9281376   0% /run/user/1000

 

I will try setting the shmem limits differently

Thanks

alfonso

 

Da: Shreeyansh Dba <shreeyansh2014@gmail.com>
Inviato: venerdì 20 luglio 2018 17:18
A: Alfonso Moscato <alfonso.moscato@merqurio.it>
Cc: pgsql-admin@lists.postgresql.org
Oggetto: Re: repeated out of shared memory error - not related to max_locks_per_transaction

 

In addition to this.

We have faced this same out of shared memory error in one of our customer environments, where we observed the same values shown below:

max seg size (kbytes) = 18014398509465599
max total shared memory (kbytes) = 18014398442373116

Because of those limits, memory allocation did not happen correctly at the process level, which resulted in out of memory errors. Setting new values sized to our RAM resolved the out of shared memory error.


 



 

 

Re: repeated out of shared memory error - not related to max_locks_per_transaction

От
Rui DeSousa
Дата:
Just using the common tools like htop or ps aux; specifically, the VIRT column in htop or the VSZ column in ps. Basically,
how much virtual memory is allocated to the process (the total address space of the process).
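A sketch of such a check with procps ps on Linux (the head limit is arbitrary):

# largest postgres processes by virtual size (VSZ and RSS are in kB)
ps -o pid,vsz,rss,comm -C postgres --sort=-vsz | head -n 20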



> On Jul 20, 2018, at 11:38 AM, Alfonso Moscato <alfonso.moscato@merqurio.it> wrote:
>
> Hi Rui,
> we have 100 forked postgres processes, I don't know how to determine if we are hitting the 47Gb limit.
> Any suggestion?
>
>
> -----Messaggio originale-----
> Da: Rui DeSousa <rui@crazybean.net>
> Inviato: venerdì 20 luglio 2018 17:03
> A: Alfonso Moscato <alfonso.moscato@merqurio.it>
> Cc: Tom Lane <tgl@sss.pgh.pa.us>; pgsql-admin@lists.postgresql.org
> Oggetto: Re: repeated out of shared memory error - not related to max_locks_per_transaction
>
>
> Alfonso,
>
> Do you happen to know how large the process is when it starts giving error messages, anywhere near 47GB?
>
> — but we tried with work_mem to 130mb, shared_buffer to a maximum of 40gb,
>
> Having shared_buffers at 40GB would be too high for the current configuration, and I would expect to see out of memory
> errors given that your system commit limit is 47GB. However, with shared_buffers at 23GB I don't think you should be
> hitting the 47GB limit — are you?
>
> To increase the commit limit you can add more swap and/or adjust the overcommit ratio.
>
>
>
>
>
>



Re: repeated out of shared memory error - not related to max_locks_per_transaction

От
"Alfonso Moscato (Merqurio)"
Дата:
Ah yes, you meant VIRT. It's around 32GB. On Monday I'll be more precise.
Regards




On Fri, Jul 20, 2018 at 6:36 PM +0200, "Rui DeSousa" <rui@crazybean.net> wrote:

Just using the common tools like htop or (ps aux);  specifically VIRT column in htop or VSZ column in ps.   Basically, how much virtual memory is allocated to the process (the total address space of the process).



> On Jul 20, 2018, at 11:38 AM, Alfonso Moscato  wrote:
> 
> Hi Rui,
> we have 100 forked postgres processes, I don't know how to determine if we are hitting the 47Gb limit.
> Any suggestion?
> 
> 
> -----Messaggio originale-----
> Da: Rui DeSousa  
> Inviato: venerdì 20 luglio 2018 17:03
> A: Alfonso Moscato 
> Cc: Tom Lane ; pgsql-admin@lists.postgresql.org
> Oggetto: Re: repeated out of shared memory error - not related to max_locks_per_transaction
> 
> 
> Alfonso,
> 
> Do you happen to know how large the process is when it starts giving error messages, anywhere near 47GB?
> 
> — but we tried with work_mem to 130mb, shared_buffer to a maximum of 40gb,
> 
> Having shared_buffers at 40GB would be too high for the current configuration, and I would expect to see out of memory errors given that your system commit limit is 47GB. However, with shared_buffers at 23GB I don't think you should be hitting the 47GB limit — are you?
> 
> To increase the commit limit you can add more swap and/or adjust the overcommit ratio.
> 
> 
> 
> 
> 
> 


Re: repeated out of shared memory error - not related to max_locks_per_transaction

От
David Morton
Дата:

This thread went cold very quickly … we are experiencing the exact same symptoms on one of our test systems. Was a solution found or are the investigations / work ongoing ?

 

We are happy to provide any information that may be useful for our case.

 

David Morton
Senior Platform Engineer


 

Re: repeated out of shared memory error - not related to max_locks_per_transaction

От
Rui DeSousa
Дата:

On Jul 29, 2018, at 10:31 PM, David Morton <david.morton@eroad.com> wrote:

This thread went cold very quickly … we are experiencing the exact same symptoms on one of our test systems. Was a solution found or are the investigations / work ongoing ?
 
We are happy to provide any information that may be useful for our case.
 
What's the configuration of your system, including /proc/meminfo?

Re: repeated out of shared memory error - not related to max_locks_per_transaction

От
Fabio Pardi
Дата:

Hi David,


while symptoms might be the same, the cause is likely to be different.


If I were you, I would open a new thread, and properly describe your problem and your setup.


regards,


fabio pardi



On 30/07/18 04:31, David Morton wrote:

This thread went cold very quickly … we are experiencing the exact same symptoms on one of our test systems. Was a solution found or are the investigations / work ongoing ?

 

We are happy to provide any information that may be useful for our case.

 

 


R: repeated out of shared memory error - not related to max_locks_per_transaction

От
"Alfonso Moscato"
Дата:

I’ve not given up. I’m just collecting more data to better understand the root cause.

It was suggested that I thoroughly investigate the extensions.

Just to recap: I started with max_locks_per_transaction at 384, 92GB of RAM, 23GB of shared_buffers (I am including only the relevant data here; you can find the complete configuration in the first email), and these extensions:

"plpgsql"

"mysql_fdw"

"multicorn" (ldap access)

"pg_cron"

"tds_fdw"

"pg_stat_statements"

Since then, I removed tds_fdw and pg_stat_statements, and I stopped all the queries using mysql_fdw. Multicorn refreshes a materialized view every 30 minutes, so I don't think it could be responsible for a continuous, minute-after-minute increase.

We are working to install pgagent and remove pg_cron too.

Then, I’ve been collecting shared memory usage every minute.

I see a constant increase of shared memory, more or less 200 kB every minute. When all users stop, it decreases and sometimes pauses, but then the growth resumes without a clear reason.
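For the record, the collector behind the minute-by-minute samples shown further below can be as simple as a one-line cron entry (a sketch; the log path is illustrative):

# crontab: append a timestamped Shmem sample from /proc/meminfo every minute
* * * * * echo "$(date) | $(grep '^Shmem:' /proc/meminfo)" >> /var/log/shmem_usage.log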

AFTER removing pg_stat_statements and increasing max_locks_per_transaction to 512, we haven't experienced any more “out of shared memory” messages.

I don't know if I've just postponed the moment they will start, or if something has really changed (I'm inclined not to be too optimistic, as the increase continues).

As of now, we are at

 

lun 30 lug 2018, 11.07.01, CEST | Shmem:          20255156 kB

 

You can see here shared memory increase during last Friday morning:

ven 27 lug 2018, 10.50.01, CEST | Shmem:          19767352 kB

ven 27 lug 2018, 10.49.01, CEST | Shmem:          19767136 kB

ven 27 lug 2018, 10.48.01, CEST | Shmem:          19766320 kB

ven 27 lug 2018, 10.47.01, CEST | Shmem:          19766208 kB

ven 27 lug 2018, 10.46.01, CEST | Shmem:          19765952 kB

ven 27 lug 2018, 10.45.01, CEST | Shmem:          19765792 kB

ven 27 lug 2018, 10.44.01, CEST | Shmem:          19765616 kB

ven 27 lug 2018, 10.43.01, CEST | Shmem:          19765480 kB

ven 27 lug 2018, 10.42.01, CEST | Shmem:          19765408 kB

ven 27 lug 2018, 10.41.01, CEST | Shmem:          19765232 kB

ven 27 lug 2018, 10.40.01, CEST | Shmem:          19765072 kB

ven 27 lug 2018, 10.39.01, CEST | Shmem:          19764920 kB

ven 27 lug 2018, 10.38.01, CEST | Shmem:          19764768 kB

ven 27 lug 2018, 10.37.01, CEST | Shmem:          19764576 kB

ven 27 lug 2018, 10.36.01, CEST | Shmem:          19764520 kB

ven 27 lug 2018, 10.35.01, CEST | Shmem:          19764312 kB

ven 27 lug 2018, 10.34.01, CEST | Shmem:          19764112 kB

ven 27 lug 2018, 10.33.01, CEST | Shmem:          19763984 kB

ven 27 lug 2018, 10.32.01, CEST | Shmem:          19763976 kB

ven 27 lug 2018, 10.31.01, CEST | Shmem:          19763976 kB

ven 27 lug 2018, 10.30.01, CEST | Shmem:          19763560 kB

ven 27 lug 2018, 10.29.01, CEST | Shmem:          19763424 kB

ven 27 lug 2018, 10.28.01, CEST | Shmem:          19763224 kB

ven 27 lug 2018, 10.27.01, CEST | Shmem:          19763144 kB

ven 27 lug 2018, 10.26.01, CEST | Shmem:          19763080 kB

ven 27 lug 2018, 10.25.01, CEST | Shmem:          19762920 kB

ven 27 lug 2018, 10.24.01, CEST | Shmem:          19762776 kB

ven 27 lug 2018, 10.23.02, CEST | Shmem:          19762648 kB

ven 27 lug 2018, 10.22.02, CEST | Shmem:          19762296 kB

ven 27 lug 2018, 10.21.01, CEST | Shmem:          19762056 kB

ven 27 lug 2018, 10.20.01, CEST | Shmem:          19761848 kB

ven 27 lug 2018, 10.19.01, CEST | Shmem:          19761576 kB

ven 27 lug 2018, 10.18.01, CEST | Shmem:          19761488 kB

ven 27 lug 2018, 10.17.01, CEST | Shmem:          19761408 kB

ven 27 lug 2018, 10.16.01, CEST | Shmem:          19761900 kB

ven 27 lug 2018, 10.15.01, CEST | Shmem:          19761680 kB

ven 27 lug 2018, 10.14.01, CEST | Shmem:          19760544 kB

ven 27 lug 2018, 10.13.01, CEST | Shmem:          19760472 kB

ven 27 lug 2018, 10.12.01, CEST | Shmem:          19760336 kB

ven 27 lug 2018, 10.11.01, CEST | Shmem:          19760240 kB

ven 27 lug 2018, 10.10.01, CEST | Shmem:          19760168 kB

ven 27 lug 2018, 10.09.01, CEST | Shmem:          19760024 kB

ven 27 lug 2018, 10.08.01, CEST | Shmem:          19759232 kB

ven 27 lug 2018, 10.07.01, CEST | Shmem:          19759096 kB

ven 27 lug 2018, 10.06.01, CEST | Shmem:          19758848 kB

ven 27 lug 2018, 10.05.01, CEST | Shmem:          19758664 kB

ven 27 lug 2018, 10.04.01, CEST | Shmem:          19758528 kB

ven 27 lug 2018, 10.03.01, CEST | Shmem:          19758328 kB

ven 27 lug 2018, 10.02.01, CEST | Shmem:          19758040 kB

ven 27 lug 2018, 10.01.01, CEST | Shmem:          19757840 kB

ven 27 lug 2018, 10.00.01, CEST | Shmem:          19757696 kB

ven 27 lug 2018, 09.59.01, CEST | Shmem:          19757656 kB

ven 27 lug 2018, 09.58.01, CEST | Shmem:          19757488 kB

ven 27 lug 2018, 09.57.01, CEST | Shmem:          19757376 kB

ven 27 lug 2018, 09.56.01, CEST | Shmem:          19757240 kB

ven 27 lug 2018, 09.55.01, CEST | Shmem:          19757184 kB

ven 27 lug 2018, 09.54.01, CEST | Shmem:          19757152 kB

ven 27 lug 2018, 09.53.01, CEST | Shmem:          19746856 kB

ven 27 lug 2018, 09.52.01, CEST | Shmem:          19740408 kB

ven 27 lug 2018, 09.51.01, CEST | Shmem:          19739976 kB

ven 27 lug 2018, 09.50.01, CEST | Shmem:          19739848 kB

ven 27 lug 2018, 09.49.01, CEST | Shmem:          19739584 kB

ven 27 lug 2018, 09.48.01, CEST | Shmem:          19739480 kB

ven 27 lug 2018, 09.47.01, CEST | Shmem:          19739344 kB

ven 27 lug 2018, 09.46.01, CEST | Shmem:          19739152 kB

ven 27 lug 2018, 09.45.01, CEST | Shmem:          19739096 kB

ven 27 lug 2018, 09.44.01, CEST | Shmem:          19739072 kB

ven 27 lug 2018, 09.43.01, CEST | Shmem:          19739000 kB

ven 27 lug 2018, 09.42.01, CEST | Shmem:          19738920 kB

ven 27 lug 2018, 09.41.01, CEST | Shmem:          19738792 kB

ven 27 lug 2018, 09.40.01, CEST | Shmem:          19738600 kB

ven 27 lug 2018, 09.39.01, CEST | Shmem:          19738464 kB

ven 27 lug 2018, 09.38.01, CEST | Shmem:          19738384 kB

ven 27 lug 2018, 09.37.01, CEST | Shmem:          19738136 kB

ven 27 lug 2018, 09.36.01, CEST | Shmem:          19738008 kB

ven 27 lug 2018, 09.35.01, CEST | Shmem:          19737872 kB

ven 27 lug 2018, 09.34.01, CEST | Shmem:          19737824 kB

ven 27 lug 2018, 09.33.01, CEST | Shmem:          19737728 kB

ven 27 lug 2018, 09.32.01, CEST | Shmem:          19737600 kB

ven 27 lug 2018, 09.31.01, CEST | Shmem:          19732200 kB

ven 27 lug 2018, 09.30.01, CEST | Shmem:          19732200 kB

ven 27 lug 2018, 09.29.01, CEST | Shmem:          19732200 kB

ven 27 lug 2018, 09.28.01, CEST | Shmem:          19732200 kB

ven 27 lug 2018, 09.27.01, CEST | Shmem:          19732200 kB

ven 27 lug 2018, 09.26.01, CEST | Shmem:          19732200 kB

ven 27 lug 2018, 09.25.01, CEST | Shmem:          19732200 kB

ven 27 lug 2018, 09.24.01, CEST | Shmem:          19732200 kB

ven 27 lug 2018, 09.23.01, CEST | Shmem:          19732200 kB

ven 27 lug 2018, 09.22.01, CEST | Shmem:          19732200 kB

ven 27 lug 2018, 09.21.01, CEST | Shmem:          19732200 kB

ven 27 lug 2018, 09.20.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 09.19.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 09.18.02, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 09.17.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 09.16.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 09.15.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 09.14.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 09.13.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 09.12.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 09.11.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 09.10.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 09.09.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 09.08.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 09.07.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 09.06.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 09.05.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 09.04.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 09.03.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 09.02.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 09.01.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 09.00.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.59.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.58.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.57.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.56.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.55.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.54.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.53.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.52.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.51.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.50.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.49.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.48.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.47.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.46.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.45.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.44.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.43.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.42.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.41.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.40.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.39.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.38.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.37.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.36.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.35.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.34.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.33.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.32.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.31.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.30.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.29.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.28.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.27.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.26.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.25.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.24.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.23.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.22.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.21.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.20.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.19.01, CEST | Shmem:          19724008 kB

ven 27 lug 2018, 08.18.01, CEST | Shmem:          19724008 kB

 

Regards

Alfonso

 

 

Da: David Morton <david.morton@eroad.com>
Inviato: lunedì 30 luglio 2018 04:32
A: Alfonso Moscato (Merqurio) <alfonso.moscato@merqurio.it>; Rui DeSousa <rui@crazybean.net>
Cc: Tom Lane <tgl@sss.pgh.pa.us>; pgsql-admin@lists.postgresql.org
Oggetto: Re: repeated out of shared memory error - not related to max_locks_per_transaction

 

This thread went cold very quickly … we are experiencing the exact same symptoms on one of our test systems. Was a solution found or are the investigations / work ongoing ?

 

We are happy to provide any information that may be useful for our case.

 

