psql memory usage when creating delimited files
From | David Brain |
---|---|
Subject | psql memory usage when creating delimited files |
Date | |
Msg-id | 450EDCCA.1050006@bandwidth.com |
Replies | Re: psql memory usage when creating delimited files |
List | pgsql-general |
I had an interesting issue the other day while trying to generate delimited files from a query in psql, using:

`\f '|' \t \a \o out.file select * from really_big_table sort by createddate;`

The quantity of data involved here is fairly large (maybe 2-4GB). Watching the memory usage, the postmaster consumed a fair chunk of RAM (as expected) while running the query, but I was surprised to see psql take increasingly large quantities of RAM, to the point that the machine's memory was eventually exhausted and the postmaster died (it restarted OK), causing psql to quit.

I had assumed that psql would simply read rows from the postmaster and write them to disk, requiring very little RAM, but it appears it tries to load the entire result set into memory. Is there some option I'm missing in my export script that would prevent this from happening? I managed to work around the issue by issuing a number of smaller queries, but that's not something I want to do on a regular basis.

Thanks,

David.
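(Editor's note, not part of the original thread: the behaviour described is how libpq-based clients work by default; psql retrieves the whole result set before printing anything. A common way to keep memory flat is to fetch through a server-side cursor so rows stream to the client in batches. The sketch below reuses the table and column names from the original query; the cursor name `big_cur` and the batch size are hypothetical choices.)

```
-- Stream the result through a server-side cursor instead of letting
-- psql buffer the entire result set in memory.
\f '|'
\t
\a
\o out.file
BEGIN;
DECLARE big_cur NO SCROLL CURSOR FOR
    SELECT * FROM really_big_table ORDER BY createddate;
FETCH FORWARD 10000 FROM big_cur;   -- repeat until no rows are returned
CLOSE big_cur;
COMMIT;
\o
```

In later psql releases (8.2 and up), `\set FETCH_COUNT 10000` makes psql do this cursor-based batching automatically for plain SELECTs, and `\copy (SELECT ...) TO 'out.file' WITH DELIMITER '|'` is another streaming alternative for delimited exports.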