Re: Best way to reduce server rounds getting big BLOBs
From | Merlin Moncure
Subject | Re: Best way to reduce server rounds getting big BLOBs
Date |
Msg-id | CAHyXU0z=3ctdOY38uRrKsrO+nOgfWqGYXv25WXBO7+uj7BhK3A@mail.gmail.com
In reply to | Best way to reduce server rounds getting big BLOBs (Jorge Arévalo <jorgearevalo@libregis.org>)
Responses | Re: Best way to reduce server rounds getting big BLOBs
List | pgsql-general
On Wed, May 15, 2013 at 11:31 AM, Jorge Arévalo <jorgearevalo@libregis.org> wrote:
> Hello,
>
> I'd like to know what's the best way to reduce the number of server rounds
> in a libpq C app that fetches BLOBs from a remote PostgreSQL server.
>
> About 75% of the time my app uses is spent querying the database. I basically
> get binary objects (images). I have to fetch all the images from a table. This
> table can be really big (in number of rows) and each image can be big too.

The #1 thing to make sure of when getting big blobs is that you are fetching
the data in binary format. If you are not, do so before changing anything else
(I wrote a library to help do that, libpqtypes).

> I guess I should go for cursors. If I understood the concept of "cursor",
> basically the query is executed, a ResultSet is generated inside the database
> server, and the client receives a "pointer" to this ResultSet. You can get
> all the rows by moving this pointer over the ResultSet, calling the right
> functions. But you still have to go to the database for each chunk of data.
> Am I right?

Cursors are a way to page through a query result without fetching all the data
at once. This would be most useful if you are processing one row at a time on
the client side. But if the client needs all the data held in memory, cursors
will only help in terms of reducing the temporary memory demands while doing
the transfer. So it's hard to say if it's worth using them until you describe
the client-side requirements a little better.

merlin
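[Editorial note, not part of the original mail.] On the binary-format point above: with plain libpq (libpqtypes aside), binary results can be requested by passing resultFormat = 1 to PQexecParams, so bytea columns come back as raw bytes instead of a hex/escape text encoding. A minimal sketch follows; the connection string, the table images(id int, data bytea), and the column layout are assumptions for illustration only.

```c
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* hypothetical connection string */
    PGconn *conn = PQconnectdb("dbname=mydb");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* last argument = 1 asks for all result columns in binary format */
    PGresult *res = PQexecParams(conn,
                                 "SELECT id, data FROM images",
                                 0, NULL, NULL, NULL, NULL, 1);
    if (PQresultStatus(res) != PGRES_TUPLES_OK) {
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
    } else {
        for (int i = 0; i < PQntuples(res); i++) {
            const char *bytes = PQgetvalue(res, i, 1);   /* raw image bytes  */
            int         len   = PQgetlength(res, i, 1);  /* exact byte count */
            printf("row %d: %d bytes\n", i, len);
            (void) bytes;  /* hand the buffer to image-processing code here */
        }
    }
    PQclear(res);
    PQfinish(conn);
    return 0;
}
```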
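And on the cursor question: a sketch of the "page through the result" pattern Merlin describes, again under the same assumed table and connection string. Each FETCH is one server round trip, so the batch size trades round trips against client memory; only one batch of images is held at a time. The cursor name img_cur and the batch size of 100 are arbitrary choices for the example.

```c
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=mydb");  /* hypothetical connection */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* cursors only exist inside a transaction */
    PGresult *res = PQexec(conn, "BEGIN");
    PQclear(res);
    res = PQexec(conn, "DECLARE img_cur CURSOR FOR SELECT id, data FROM images");
    PQclear(res);

    for (;;) {
        /* one round trip per batch of 100 rows, requested in binary format */
        res = PQexecParams(conn, "FETCH 100 FROM img_cur",
                           0, NULL, NULL, NULL, NULL, 1);
        if (PQresultStatus(res) != PGRES_TUPLES_OK || PQntuples(res) == 0) {
            PQclear(res);
            break;
        }
        for (int i = 0; i < PQntuples(res); i++) {
            /* process PQgetvalue(res, i, 1) / PQgetlength(res, i, 1) here */
        }
        PQclear(res);
    }

    res = PQexec(conn, "CLOSE img_cur");
    PQclear(res);
    res = PQexec(conn, "COMMIT");
    PQclear(res);
    PQfinish(conn);
    return 0;
}
```

As Merlin notes, this only pays off if the client can process rows batch by batch; if everything must end up in memory anyway, the cursor mainly smooths out the peak memory use during transfer.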