Re: JDBC and processing large numbers of rows
| From | Guido Fiala |
|---|---|
| Subject | Re: JDBC and processing large numbers of rows |
| Date | |
| Msg-id | 200405120837.42865.guido.fiala@dka-gmbh.de |
| In reply to | Re: JDBC and processing large numbers of rows (Sean Shanny <shannyconsulting@earthlink.net>) |
| Responses | Re: JDBC and processing large numbers of rows |
| | Re: JDBC and processing large numbers of rows |
| | Re: JDBC and processing large numbers of rows |
| List | pgsql-jdbc |
Reading all this, I'd like to know whether this isn't just a trade-off over _where_ the memory is consumed.

If the JDBC client holds everything in memory, it gets an OutOfMemoryError. If the backend uses cursors, it caches the whole result set and probably starts swapping and gets slow (it needs that memory for every concurrent user). If you use LIMIT and OFFSET, the database has to do more work to find each slice of data, and in the worst case (the last few records) it may still need the whole result set temporarily (not sure here).

Is that just a "choose your poison"?

At least in the first case the client's memory _gets_ used too, instead of putting all the load on the backend. On the other hand, most of the time the user never actually reads all the data, so it puts unnecessary load on all the hardware.

I'd really like to know what the best way to go is.

Guido
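For reference, a minimal sketch of the cursor-based option discussed above, assuming the PostgreSQL JDBC driver: with autocommit off and a non-zero fetch size, the driver streams rows through a backend cursor in batches rather than buffering the whole result set on the client. The connection URL, credentials, and table name below are placeholders, not from the original thread.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CursorFetchExample {
    public static void main(String[] args) throws Exception {
        // Placeholder URL, credentials and table name, for illustration only.
        Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "password");
        try {
            // Autocommit must be off for the driver to use a backend cursor.
            con.setAutoCommit(false);

            Statement st = con.createStatement();
            // Fetch rows in batches of 50 instead of loading everything.
            st.setFetchSize(50);

            ResultSet rs = st.executeQuery("SELECT * FROM big_table");
            while (rs.next()) {
                // Process one row at a time; only about one batch of rows
                // is held in client memory at any point.
            }
            rs.close();
            st.close();
            con.commit();
        } finally {
            con.close();
        }
    }
}
```

This keeps client memory bounded at roughly one batch of rows, at the cost of holding a transaction (and the cursor's backend resources) open while the result is consumed.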