Re: Large Tables(>1 Gb)
From | Tom Lane |
---|---|
Subject | Re: Large Tables(>1 Gb) |
Date | |
Msg-id | 5329.962344201@sss.pgh.pa.us |
In reply to | Large Tables(>1 Gb) (Fred_Zellinger@seagate.com) |
List | pgsql-general |
Fred_Zellinger@seagate.com writes:
> After a few weeks, my table eclipsed approximately 1Gb, and when I looked
> at it in my PG_DATA/database directory, I noticed that there were two
> files: MYTABLE and MYTABLE.1.  I was curious why this happened, but I
> figured that Postgres must break up tables over 1Gb into multiple
> files. (right?)

Check.  It's to work around OSes that don't handle large files.

> Then, while running psql, I did a "select * from MYTABLE;"  Well, psql just
> sits there while the hard drive light blinks like crazy, pulling the table
> up into memory.  I have 256Mb of RAM, so this takes awhile.  When I start
> up "top" and watch my process table, the postgres backend is sucking up the
> CPU time pulling the data and the psql frontend is sucking up the memory
> accepting the results.

Yeah.  libpq has this nifty little API that provides random access to a
query result set --- so it wants to suck the entire result set into the
client application's RAM before it will let the app have any of it.
Actually, there are error-handling reasons for doing it that way too.
But anyway the point is that that client-side API is not well designed
for huge result sets.  It's not a backend problem.

The usual workaround is to use DECLARE CURSOR and FETCH to grab the
result in bite-size chunks, like a few hundred or thousand rows at a time.

Sooner or later someone will probably extend libpq to offer some kind of
"streaming" API for scanning through large result sets without buffering
them in client RAM.  Doesn't seem to have gotten to the top of anyone's
TODO list yet though... the FETCH solution works well enough to keep the
annoyance level down...

			regards, tom lane
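[Editor's note: a minimal sketch of the DECLARE CURSOR / FETCH workaround as a psql session. MYTABLE is the table named in the original post; the cursor name mycur and the 1000-row batch size are arbitrary choices for illustration.]

    BEGIN;                                -- cursors must run inside a transaction
    DECLARE mycur CURSOR FOR SELECT * FROM MYTABLE;
    FETCH 1000 FROM mycur;                -- first 1000 rows, buffered client-side
    FETCH 1000 FROM mycur;                -- next 1000 rows, and so on until empty
    CLOSE mycur;
    END;

Each FETCH returns only that batch to the client, so psql never has to hold the whole table in memory at once.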