C client memory usage grows
From: Patrick L. Nolan
Subject: C client memory usage grows
Date:
Msg-id: agvbj4$2tmk$1@news.hub.org
List: pgsql-interfaces
I'm writing a C program that uses libpq to read a big table.  First I
tried a dumb query like "SELECT * FROM mytable".  It ran out of memory
after fetching about 9 million rows.

Tom Lane suggested that I should use a cursor to fetch the data in more
manageable chunks.  I have tried that, and it doesn't really seem to cure
the problem.  My program's memory usage grows steadily, no matter how many
rows I FETCH at a time.  The relevant portion of my program looks sort of
like this:

    res = PQexec(conn, "BEGIN WORK");
    res = PQexec(conn, "DECLARE mycur BINARY CURSOR FOR SELECT * FROM mytable");
    while (1) {
        res = PQexec(conn, "FETCH 8192 FROM mycur");
        nrows = PQntuples(res);
        if (nrows <= 0)
            break;
        for (i = 0; i < nrows; i++) {
            /* Extract data from row */
        }
    }
    res = PQexec(conn, "COMMIT");

I have experimented with other values for the number of rows in the FETCH
command, and it doesn't seem to make much difference in speed or memory
usage.  The client grows from 4 MB to 72 MB over about a minute, and on a
sufficiently large table it will continue to grow until it dies.

I don't do any mallocs at all in my code, so it's libpq that is using all
the memory.  It acts as if each FETCH operation opens a whole new set of
buffers, ignoring the ones that were used before.  I suppose you might
need that for reverse fetching and such, but it works like a memory leak
in my application.  Is there some way around this?

Details: this is postgres 7.1 on Red Hat Linux 7.1.

--
*  Patrick L. Nolan
*  W. W. Hansen Experimental Physics Laboratory (HEPL)
*  Stanford University
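Below is a minimal, self-contained sketch of the cursor-based fetch loop
described above, assuming a placeholder connection string and the same
hypothetical table and cursor names; it declares a plain (non-BINARY)
cursor so PQgetvalue() returns text values.  Unlike the snippet in the
post, it frees every PGresult returned by PQexec() with PQclear(), which
is how libpq releases the memory holding each result set.

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        /* Connection string is a placeholder. */
        PGconn *conn = PQconnectdb("dbname=mydb");
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        PGresult *res;

        res = PQexec(conn, "BEGIN WORK");
        PQclear(res);

        res = PQexec(conn, "DECLARE mycur CURSOR FOR SELECT * FROM mytable");
        PQclear(res);

        for (;;) {
            res = PQexec(conn, "FETCH 8192 FROM mycur");
            if (PQresultStatus(res) != PGRES_TUPLES_OK) {
                fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
                PQclear(res);
                break;
            }

            int nrows = PQntuples(res);
            if (nrows == 0) {              /* cursor exhausted */
                PQclear(res);
                break;
            }

            for (int i = 0; i < nrows; i++) {
                /* Extract data from row i, e.g. PQgetvalue(res, i, 0). */
            }

            PQclear(res);                  /* free this chunk before the next FETCH */
        }

        res = PQexec(conn, "COMMIT");
        PQclear(res);

        PQfinish(conn);
        return 0;
    }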