Jennie,
You say it returns a million rows, but you have a LIMIT of 788? How
is that possible?
What version of Postgres, and which driver, are you using?
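One thing worth checking: as far as I know, the driver only does cursor-based fetching (i.e. actually honors setFetchSize) when the result set is TYPE_FORWARD_ONLY and autocommit is off; with TYPE_SCROLL_INSENSITIVE it buffers the entire result set in memory, which would explain the 80MB. An untested sketch of that configuration:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CursorFetch {
    // Stream rows instead of buffering the whole result set.
    // Cursor-based fetching requires: autocommit off, a
    // TYPE_FORWARD_ONLY result set, and a non-zero fetch size.
    static void streamQuery(Connection conn, String query) throws SQLException {
        conn.setAutoCommit(false);              // required for cursors
        PreparedStatement stmt = conn.prepareStatement(
            query,
            ResultSet.TYPE_FORWARD_ONLY,        // not TYPE_SCROLL_INSENSITIVE
            ResultSet.CONCUR_READ_ONLY);
        stmt.setFetchSize(1000);                // rows fetched per round trip
        ResultSet rs = stmt.executeQuery();
        while (rs.next()) {
            // process each row here instead of materializing all of them
        }
        rs.close();
        stmt.close();
    }
}
```

If you really need a scrollable result set, you are stuck with the driver materializing it, so the fix would be to drop down to forward-only for this query.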
Dave
On 18-Jul-06, at 12:42 PM, jennie browne wrote:
> We have a prepared statement example below:
>
>
>
> Query = SELECT consumer.consumer_id, consumer.address,
>                consumer.operator_id
>         FROM consumer
>         INNER JOIN registration_list_consumer AS rlc
>           ON (consumer.consumer_id = rlc.consumer_id
>               AND rlc.registration_list_id = 15)
>         INNER JOIN registration_list AS rl
>           ON (rl.registration_list_id = rlc.registration_list_id
>               AND rl.status_id = 25)
>         WHERE consumer.address_type_id IN (1)
>           AND NOT EXISTS (SELECT 'x' FROM target_run_transaction trt
>                           WHERE consumer.consumer_id = trt.consumer_id
>                             AND trt.target_run_id = 468)
>           AND NOT EXISTS (SELECT 'x'
>                           FROM registration_list_consumer AS rlc_exclude,
>                                registration_list rl
>                           WHERE rlc_exclude.registration_list_id = rl.registration_list_id
>                             AND rlc_exclude.consumer_id = consumer.consumer_id
>                             AND rl.status_id = 25
>                             AND rlc_exclude.registration_list_id IN (34))
>         ORDER BY RANDOM()
>         LIMIT 788
>
>
>
> With no dynamic parameters populated at runtime,
> it will return roughly 1 million rows. During execution of the query,
> the PreparedStatement grows in size up to 80MB and beyond.
>
>
>
> We have the fetchSize set to 1000
>
> and autocommit set to false.
>
>
>
> And the following parameters set:
>
>
>
>
> stmt = conn.prepareStatement(query, ResultSet.TYPE_SCROLL_INSENSITIVE,
>                              ResultSet.CONCUR_READ_ONLY);
>
>
>
> Can you suggest a workaround to fix this problem?
> Why is it taking up so much memory?
>
>
>
> Thanks,
>
> jennie
>
>
>
> ---------------------------(end of
> broadcast)---------------------------
> TIP 2: Don't 'kill -9' the postmaster
>