Re: Fwd: "SELECT COUNT(*) FROM" still causing issues (deadlock) in PostgreSQL 14.3/4?
From: Marco Boeringa
Subject: Re: Fwd: "SELECT COUNT(*) FROM" still causing issues (deadlock) in PostgreSQL 14.3/4?
Date:
Msg-id: 6c9bf4fd-f22b-4894-d7c9-30bf68ffcf6d@boeringa.demon.nl
In reply to: Re: "SELECT COUNT(*) FROM" still causing issues (deadlock) in PostgreSQL 14.3? (Marco Boeringa <marco@boeringa.demon.nl>)
Responses: Re: Fwd: "SELECT COUNT(*) FROM" still causing issues (deadlock) in PostgreSQL 14.3/4?
List: pgsql-bugs
Ok, I found it out using 'sudo baobab'. It is the '/var/lib/postgresql/14/main/pg_wal' folder that has filled up with 890 GB of data, causing the file system root to run out of space; Ubuntu opened the disk usage analyzer and showed a warning as a consequence. I have never seen this happen under normal operation, when this issue does not pop up. So this is something to do with WAL.

On 22-7-2022 at 23:08, Marco Boeringa wrote:
> Unfortunately, it now also again, as I have seen before in the same
> situation, took my PG14.4 server down by filling up my entire Ubuntu
> file system root; the PostgreSQL server process shut down the
> connection automatically as a consequence.
>
> It is not clear to me what is actually being written out that causes
> this. I am not really an Ubuntu / Linux expert. Any suggestions for
> commands to quickly find out where PostgreSQL may have dumped hundreds
> of GB of data in the file system root, as I had over 700 GB of free
> space there?
>
> On 22-7-2022 at 21:07, Bruce Momjian wrote:
>> On Fri, Jul 22, 2022 at 09:56:06AM +0200, Marco Boeringa wrote:
>>> Unfortunately, after more testing, it turns out this issue still
>>> persists in PostgreSQL 14.4.
>>>
>>> I have now encountered exactly the same problem as described in the
>>> original issue below: one autovacuum session that never finishes with
>>> no wait event (or is just incredibly slow and not finishing after
>>> many hours, although it should finish in minutes considering the
>>> relatively small dataset and normal operation), and the "client
>>> backend" and "parallel worker" stuck on the same wait events as
>>> listed below with the same "SELECT COUNT(*)" SQL statement.
>>>
>>> One thing to note as well, besides this being workstation-level
>>> hardware with ECC RAM, is that I have now also activated
>>> 'pg_checksums' on the PostgreSQL databases and reloaded all data, so
>>> all data should now have checksums. No PostgreSQL error at all is
>>> generated via the ODBC connection I use to access and update the
>>> database when this happens, and PostgreSQL appears stuck on the
>>> autovacuum. So I guess this means I can now pretty much exclude a
>>> hardware error, and this must be some software issue, considering
>>> the checksums.
>> You might want to run these queries and show us the output, in case it
>> suggests a cause:
>>
>> SELECT version();
>>
>> -- non-default server settings
>> SELECT name, current_setting(name), source
>> FROM pg_settings
>> WHERE source NOT IN ('default', 'override');
>>
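A note for readers hitting the same symptom: runaway pg_wal growth like this is usually WAL that the server cannot recycle, most commonly because a replication slot is holding it back, WAL archiving is failing, or wal_keep_size / max_wal_size is set very high. The queries below are a minimal diagnostic sketch against standard PostgreSQL 14 catalog views; they are illustrative and not taken from this thread.

-- How far is the current WAL position ahead of each replication
-- slot's restart point? A large retained value on an inactive slot
-- is a common cause of pg_wal filling the disk.
SELECT slot_name,
       active,
       pg_size_pretty(
           pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)
       ) AS retained_wal
FROM pg_replication_slots
ORDER BY pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) DESC;

-- Is WAL archiving failing? A growing failed_count with a recent
-- last_failed_time means completed segments cannot be removed.
SELECT archived_count, failed_count,
       last_archived_time, last_failed_time
FROM pg_stat_archiver;

-- WAL-related settings that can pin large amounts of WAL on disk.
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('wal_keep_size', 'max_wal_size',
               'archive_mode', 'archive_command');

If pg_replication_slots shows an inactive slot retaining hundreds of GB, dropping it with pg_drop_replication_slot('<slot_name>') lets the server recycle that WAL at the next checkpoint.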