Discussion: Error when execute insert/update
Hi guys, I have a PostgreSQL 9.5 server which started showing the following message in the log:

,ERROR,54000,"multixact ""members"" limit exceeded","This command would create a multixact with 2 members, but the remaining space is only enough for 0 members.","Run a database-wide VACUUM in database with OID 139059 with reduced vacuum_multixact_freeze_min_age and vacuum_multixact_freeze_table_age settings.",,,"SQL statement ""SELECT 1 FROM ONLY ""public"".""alf_server"" x WHERE ""id"" OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x""","insert into

I started a vacuum on the database, but given its size I believe it will take a long time. Is there any way to increase this member limit while the vacuum is running, to unblock the application?

Thanks a lot,
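[For the archives: the hint in the error message itself describes the escape hatch, a vacuum of the affected database with the multixact freeze thresholds lowered. A sketch, using the OID and the two settings taken from the hint above (run the VACUUM from a session connected to that database):]

```sql
-- Find which database the hint's OID 139059 refers to
SELECT datname FROM pg_database WHERE oid = 139059;

-- Connected to that database, lower the multixact freeze thresholds
-- for this session so VACUUM reclaims multixact space aggressively
SET vacuum_multixact_freeze_min_age = 0;
SET vacuum_multixact_freeze_table_age = 0;
VACUUM;  -- database-wide, as the hint suggests
```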
On Sun, 11 Sep 2022, 11:29 am Leandro Bugalho, <leandrohb@gmail.com> wrote:
> Hi guys, I have a PostgreSQL 9.5 server, which started showing the
> following message in the log:
Why not move to a recent PostgreSQL release, like 14 or 13?
> ,ERROR,54000,"multixact ""members"" limit exceeded","This command
> would create a multixact with 2 members, but the remaining space is
> only enough for 0 members.","Run a database-wide VACUUM in database
> with OID 139059 with reduced vacuum_multixact_freeze_min_age and
> vacuum_multixact_freeze_table_age settings.",,,"SQL statement ""SELECT
> 1 FROM ONLY ""public"".""alf_server"" x WHERE ""id""
> OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x""","insert into
>
> I started the vacuum in the database , but the same for the size ,I
> believe it will take a long time. Is there any way to increase this
> limit member while the vacuum is running to free up the application?
>
> thanks a lot,
This upgrade is on our roadmap, but unfortunately this issue occurred before the upgrade. Now we need to fix it. Any suggestions?

On Sun, Sep 11, 2022 at 6:08 AM Zaid Shabbir <zaidshabbir@gmail.com> wrote:
> On Sun, 11 Sep 2022, 11:29 am Leandro Bugalho, <leandrohb@gmail.com> wrote:
>>
>> Hi guys, I have a PostgreSQL 9.5 server, which started showing the
>> following message in the log:
>
> Why not you move on latest PostgreSQL like 14, 13.
>
>> ,ERROR,54000,"multixact ""members"" limit exceeded","This command
>> would create a multixact with 2 members, but the remaining space is
>> only enough for 0 members.","Run a database-wide VACUUM in database
>> with OID 139059 with reduced vacuum_multixact_freeze_min_age and
>> vacuum_multixact_freeze_table_age settings.",,,"SQL statement ""SELECT
>> 1 FROM ONLY ""public"".""alf_server"" x WHERE ""id""
>> OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x""","insert into
>>
>> I started the vacuum in the database , but the same for the size ,I
>> believe it will take a long time. Is there any way to increase this
>> limit member while the vacuum is running to free up the application?
>>
>> thanks a lot,
Leandro Bugalho <leandrohb@gmail.com> writes:
> This upgrade is in our follow up. But unfortunately this issue
> occurred previously our upgrade. Now we need to fix this. ANy
> suggestions ?

There is no magic bullet that will avoid the need for the cleanup VACUUM, if that's what you mean. What you should be thinking about is how to prevent a recurrence of the situation. In ordinary circumstances autovacuum ought to have kept you out of trouble ... have you disabled that, or changed settings to render it insufficiently aggressive?

			regards, tom lane
Thank you. I understand the need for a vacuum; my question is whether, in my situation, it is possible to increase some parameter to clear the "multixact ""members"" limit exceeded" error now, so that the application can write again while the aggressive vacuum is running. Yes, I'm also checking how to avoid this in the future: autovacuum is enabled, and I'll try to tune it once I get through this situation.

Thank you so much

On Sun, Sep 11, 2022 at 12:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Leandro Bugalho <leandrohb@gmail.com> writes:
> > This upgrade is in our follow up. But unfortunately this issue
> > occurred previously our upgrade. Now we need to fix this. ANy
> > suggestions ?
>
> There is no magic bullet that will avoid the need for the cleanup
> VACUUM, if that's what you mean. What you should be thinking
> about is how to prevent a recurrence of the situation. In ordinary
> circumstances autovacuum ought to have kept you out of trouble ...
> have you disabled that, or changed settings to render it
> insufficiently aggressive?
>
> regards, tom lane
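[For the archives: Tom's question about whether autovacuum is disabled or insufficiently aggressive can be answered with a couple of catalog queries, roughly like the sketch below. mxid_age() is available from PostgreSQL 9.5 on.]

```sql
-- Check that autovacuum is enabled and see the multixact-related settings
SELECT name, setting
FROM pg_settings
WHERE name IN ('autovacuum',
               'autovacuum_multixact_freeze_max_age',
               'vacuum_multixact_freeze_min_age',
               'vacuum_multixact_freeze_table_age');

-- Watch how far the multixact age has advanced in each database
SELECT datname, mxid_age(datminmxid) AS multixact_age
FROM pg_database
ORDER BY multixact_age DESC;
```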
On 2022-Sep-11, Leandro Bugalho wrote:

> Hi guys, I have a PostgreSQL 9.5 server, which started showing the
> following message in the log:
>
> ,ERROR,54000,"multixact ""members"" limit exceeded","This command
> would create a multixact with 2 members, but the remaining space is
> only enough for 0 members.","Run a database-wide VACUUM in database
> with OID 139059 with reduced vacuum_multixact_freeze_min_age and
> vacuum_multixact_freeze_table_age settings.",,,"SQL statement ""SELECT
> 1 FROM ONLY ""public"".""alf_server"" x WHERE ""id""
> OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x""","insert into
>
> I started the vacuum in the database , but the same for the size ,I
> believe it will take a long time. Is there any way to increase this
> limit member while the vacuum is running to free up the application?

Sorry I'm late to the party; I hope you're already out of this hole. So this reply is for the archives' sake.

There's no configuration parameter you can change that will help you in this case.

One way out of this that's perhaps a tad quicker than vacuuming the entire database is to vacuum just the table(s) that have the oldest multixacts. If you're lucky, it's just a not-so-big table that can be dealt with quicker than the entire database. You're looking for the table(s) with the highest values of mxid_age(pg_class.relminmxid). Have VACUUM process them successfully and their relminmxid will advance to a current value, which will enable VACUUM to move the global minmxid limit forward and delete the oldest multixact data, giving some extra room for new multixacts to be created.

Now, it is surprising that autovacuum didn't already do this by itself; it definitely should have. However, there are some problem cases. Sometimes this is caused by tables that have corrupted data: those cause vacuum to terminate with an error without advancing the limit, so your database as a whole is stuck in old-multixactid land.
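[For the archives: the relminmxid check described above can be sketched as follows; the table name in the final VACUUM is hypothetical.]

```sql
-- Tables holding back the oldest multixacts come first
SELECT c.oid::regclass AS table_name,
       mxid_age(c.relminmxid) AS multixact_age,
       pg_size_pretty(pg_table_size(c.oid)) AS table_size
FROM pg_class c
WHERE c.relkind IN ('r', 't', 'm')  -- plain, TOAST, materialized
  AND c.relminmxid <> '0'
ORDER BY multixact_age DESC
LIMIT 20;

-- Vacuum the worst offenders with reduced freeze thresholds
SET vacuum_multixact_freeze_min_age = 0;
SET vacuum_multixact_freeze_table_age = 0;
VACUUM VERBOSE public.some_table;  -- hypothetical table name
```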
If you can fix the corruption and let vacuum complete, that will work. If you cannot, you could just DROP the table. If it has valuable data, maybe you can CREATE TABLE AS SELECT FROM corrupt_table, with enough smarts to avoid falling over due to the corruption.

Another cause we've seen is leftover temp tables: autovacuum cannot process them, because temp tables cannot be read/written by sessions other than the one that created them. What you can do in this case is just drop those temp tables. We added some additional protections for this case in Postgres 10, but 9.5 still had some holes. This is particularly problematic if the server crashes while some session is using temp tables and the session ID is not reused afterwards. Those temp tables would be dropped at startup by another session that uses the same ID, but if -- say -- the ID is very high and you don't have that many sessions again afterwards, they won't be.

(If you do DROP any tables, you will need to vacuum some other table afterwards, to let the code that advances the minimum have a chance to run.)

-- 
Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
"How we press our fingers into each other's clay. That is friendship:
playing potter and seeing what shapes can be drawn from the other"
(C. Halloway in La Feria de las Tinieblas, R. Bradbury)
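[For the archives: the leftover-temp-table case above can be spotted with a query along these lines; the DROP at the end is illustrative only, with hypothetical schema and table names.]

```sql
-- Temp tables from other sessions, which autovacuum cannot process
SELECT n.nspname AS temp_schema,
       c.relname,
       mxid_age(c.relminmxid) AS multixact_age
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relpersistence = 't'        -- temporary relations
  AND n.nspname LIKE 'pg_temp_%'
ORDER BY multixact_age DESC;

-- If a pg_temp_NN schema belongs to no live backend, its tables are
-- orphaned and can be dropped, e.g.:
-- DROP TABLE pg_temp_12.some_leftover;  -- hypothetical names
```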