unlink large objects

From: Philip Crotwell
Subject: unlink large objects
Date:
Msg-id: Pine.GSO.4.10.10106081017020.26888-100000@tigger.seis.sc.edu
Responses: Re: unlink large objects
List: pgsql-jdbc
Hi,

I am having trouble with a 7.1rc4 database filling up my disks. What I do
is put a large number of "small" large objects of seismic data into the
database in one process, and use another process to unlink them once they
reach a certain age, so the database acts as a rolling buffer. The unlinks
seem to be working, and some disk space is reclaimed, but the size of the
database continues to grow until the disk fills and the postgres backend
dies. I have tried vacuuming, but that doesn't help.
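For what it's worth, here is the kind of check I could run from JDBC to see
which relation is actually growing. This is an untested sketch; the
connection URL, database name, and user are made up, and I believe relpages
is only refreshed by VACUUM/ANALYZE, so the numbers lag:

import java.sql.*;

// Untested sketch: list the ten largest relations by on-disk pages
// (8k blocks by default). relpages is only updated by VACUUM/ANALYZE,
// so vacuum first to get fresh numbers. Connection details are made up.
public class TableSizes {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/seis", "postgres", "");
        Statement st = conn.createStatement();
        ResultSet rs = st.executeQuery(
                "SELECT relname, relpages FROM pg_class " +
                "ORDER BY relpages DESC LIMIT 10");
        while (rs.next()) {
            System.out.println(rs.getString(1) + "  "
                    + rs.getInt(2) + " pages");
        }
        rs.close();
        st.close();
        conn.close();
    }
}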

I poked around in the database directory and found a file named 16948 that
is 960MB, which is almost all of the space on my partition. If the unlinks
were completely cleaning up, then my 8-day data buffer should be about
150MB. Is there a way to tell what this file is? I guess it is all the
large objects dumped in together??? Does anyone know why my unlinks
wouldn't be completely freeing the disk space?

lgelg pg> ls -l 16948
-rw-------    1 postgres postgres 959438848 Jun  8 14:31 16948
lgelg pg> pwd
/home/postgres/data/base/18721
lgelg pg>
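My guess at a way to map that file back to a relation, in case it helps
anyone answer: I'm assuming 7.1 names data files after pg_class.relfilenode,
and reusing the conn from the sketch above. Untested:

// Untested sketch: look up which relation owns data file 16948,
// assuming 7.1 data files are named after pg_class.relfilenode.
// conn is a java.sql.Connection opened as in the sketch above.
Statement st = conn.createStatement();
ResultSet rs = st.executeQuery(
        "SELECT relname, relkind FROM pg_class WHERE relfilenode = 16948");
if (rs.next()) {
    System.out.println("file 16948 is " + rs.getString(1)
            + " (relkind " + rs.getString(2) + ")");
}
rs.close();
st.close();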

I have put some more info below in case it helps. Basically I think the
messages are all related to the disk filling, but they don't explain why it
filled.

thanks,
Philip


Here is a snippet of the Java code for my unlink; I am running with
autocommit off:
            lobj = ((org.postgresql.Connection)conn).getLargeObjectAPI();
...snip...

            logger.debug("before large object delete");
            // loop over all large objects, deleting them
            it = oid.iterator();
            while (it.hasNext()) {
                Integer nextId = (Integer)it.next();

                logger.debug("Deleting large object " + nextId);
                // delete the data for this large object
                lobj.delete(nextId.intValue());
            }
            it = null;

            // commit changes
            logger.debug("Committing...");
            jdbcDataChunk.commit();
            conn.commit();
            logger.info("Committing done.");


Here is the Java exception I get:
An I/O error has occured while flushing the output - Exception:
java.io.IOException: Broken pipe
Stack Trace:

java.io.IOException: Broken pipe
        at java.net.SocketOutputStream.socketWrite(Native Method)
        at java.net.SocketOutputStream.write(SocketOutputStream.java, Compiled Code)
        at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java, Compiled Code)
        at java.io.BufferedOutputStream.flush(BufferedOutputStream.java, Compiled Code)
        at org.postgresql.PG_Stream.flush(PG_Stream.java, Compiled Code)
        at org.postgresql.Connection.ExecSQL(Connection.java, Compiled Code)
        at org.postgresql.jdbc2.Statement.execute(Statement.java, Compiled Code)
        at org.postgresql.jdbc2.Statement.executeQuery(Statement.java, Compiled Code)
        at org.postgresql.jdbc2.PreparedStatement.executeQuery(PreparedStatement.java, Compiled Code)
        at edu.sc.seis.anhinga.database.JDBCChannelId.getDBId(JDBCChannelId.java, Compiled Code)
        at edu.sc.seis.anhinga.database.JDBCDataChunk.put(JDBCDataChunk.java, Compiled Code)
        at edu.sc.seis.anhinga.symres.Par4ToDB.run(Par4ToDB.java, Compiled Code)
End of Stack Trace


Here are the messages in the server log:
DEBUG:  MoveOfflineLogs: remove 00000000000000D7
DEBUG:  MoveOfflineLogs: remove 00000000000000D8
DEBUG:  MoveOfflineLogs: remove 00000000000000D9
ERROR:  Write to hashjoin temp file failed
[the above ERROR repeated 24 times]
DEBUG:  MoveOfflineLogs: remove 00000000000000DA
FATAL 2:  ZeroFill(/home/postgres/data/pg_xlog/xlogtemp.19371) failed: No such file or directory
ERROR:  Write to hashjoin temp file failed
Server process (pid 19371) exited with status 512 at Thu Jun  7 03:32:52 2001
Terminating any active server processes...
NOTICE:  Message from PostgreSQL backend:
        The Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.
        I have rolled back the current transaction and am going to terminate your database system connection and exit.
        Please reconnect to the database system and repeat your query.
[the above NOTICE repeated 5 times]
Server processes were terminated at Thu Jun  7 03:32:53 2001
Reinitializing shared memory and semaphores
DEBUG:  database system was interrupted at 2001-06-07 03:32:47 UTC
DEBUG:  CheckPoint record at (0, 3686817652)
DEBUG:  Redo record at (0, 3686817652); Undo record at (0, 0); Shutdown FALSE
DEBUG:  NextTransactionId: 9905192; NextOid: 846112
DEBUG:  database system was not properly shut down; automatic recovery in progress...
DEBUG:  ReadRecord: record with zero len at (0, 3686817716)
DEBUG:  redo is not required
DEBUG:  database system is in production state


