Discussion: SELECT ... FOR UPDATE and ResultSet
PostgreSQL will lock the rows identified by SELECT ... FOR UPDATE even if I don't do anything with the ResultSet after statement.executeQuery(), at least in simple tests. Is this the semantically correct result from a database standards standpoint? Can I count on this behavior for PostgreSQL? Or will it fail to lock all rows if I have a sufficiently large ResultSet and Connection.setFetchSize() does its magic?

I notice that there is PostgreSQL documentation saying that the FOR UPDATE clause logically happens after LIMIT. The same logic might then conceptually apply to ResultSets that don't acquire all query results from the server.

Thanks for any tips.
Jeffrey Tenny <jeffrey.tenny@comcast.net> writes:
> Can I count on this behavior for PostgreSQL? Or will it fail to lock
> all rows if I have a sufficiently large ResultSet and
> Connection.setFetchSize() does its magic?

In the current implementation, the backend will only lock those rows actually returned to the client. If setFetchSize() causes not all the rows to be fetched, you lose ...

regards, tom lane
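Tom Lane's answer implies a defensive pattern on the JDBC side: since only rows actually returned to the client get locked, either leave the fetch size at 0 (the PostgreSQL driver's default, meaning fetch everything in one round trip) or iterate the whole ResultSet before relying on the locks. A minimal sketch, with table and column names assumed for illustration; the drain() helper stands in for the loop:

```java
import java.util.Iterator;
import java.util.List;

/**
 * Sketch of the caveat above: with a non-zero fetch size the driver may
 * pull rows in batches, and the backend only locks rows it has actually
 * returned. Defensive pattern: drain the ResultSet completely.
 */
public class DrainResultSet {
    // With a live Connection conn, the locking query would look roughly like:
    //   try (Statement st = conn.createStatement()) {
    //       st.setFetchSize(0);   // 0 = fetch all rows at once (driver default)
    //       ResultSet rs = st.executeQuery(
    //           "SELECT id FROM mytable WHERE id IN (1, 2, 3) FOR UPDATE");
    //       while (rs.next()) { } // drain: each row must reach the client
    //   }

    /** Drain any row source completely, returning how many rows were seen. */
    static <T> int drain(Iterator<T> rows) {
        int n = 0;
        while (rows.hasNext()) {
            rows.next();
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(drain(List.of("r1", "r2", "r3").iterator())); // 3
    }
}
```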
Tom Lane wrote:
> In the current implementation, the backend will only lock those rows
> actually returned to the client. If setFetchSize() causes not all the
> rows to be fetched, you lose ...

Any idea if there's a JDBC or other standards requirement here that is considered the desired goal for row retrieval with respect to FOR UPDATE?
Meanwhile, that's good to know. Is there any faster way to lock these rows
than by fetching ids?
i.e. something faster than:
SELECT id FROM table WHERE id IN (<list>) FOR UPDATE
If I wanted to lock 100 rows?
Sounds like I can't just do a SELECT COUNT(id) FROM table... either,
much less the documented SELECT id FROM table ... LIMIT 1 FOR UPDATE
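One way to issue the IN-list locking query above is as a prepared statement with a PostgreSQL array parameter (id = ANY(?)), which later JDBC drivers support via Connection.createArrayOf and which avoids splicing ids into the SQL text. A sketch, with table and column names assumed:

```java
/**
 * Sketch (assumed names): locking an explicit id list in one statement.
 * id = ANY(?) takes an array parameter, so the statement text stays the
 * same for any list length.
 */
public class InListLock {
    static final String SQL =
        "SELECT id FROM mytable WHERE id = ANY(?) FOR UPDATE";

    // With a live Connection conn (JDBC 4+ driver), roughly:
    //   PreparedStatement ps = conn.prepareStatement(SQL);
    //   ps.setArray(1, conn.createArrayOf("integer", ids)); // Integer[] ids
    //   ResultSet rs = ps.executeQuery();
    //   while (rs.next()) { }  // drain so every row is actually locked

    /** Count placeholders, just to sanity-check the statement text. */
    static long placeholders(String sql) {
        return sql.chars().filter(c -> c == '?').count();
    }
}
```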
Note that in my case most of my locks are nothing more than concurrency synchronizations.
I don't even update the rows in question. I'm using them as high-level database mutexes to control
updates of rows in other tables related to the ones I locked. In the case of multiple rows to be locked, as in the IN-list example above,
I'm typically locking 100 out of 10,000+ rows.
Is there some better way of doing this "database mutexing"?
It has to be at the database level; I have multiple app servers operating on the same database,
so they can't just synchronize this behavior in app-server memory.
Given that the documentation says row locks cause disk writes, I'm certainly concerned about the performance
of this technique of database synchronization.
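The "database mutex" pattern described above might be sketched as follows, with all table and variable names assumed for illustration: inside one transaction, lock the mutex rows with FOR UPDATE, update the related tables, then commit, which releases the row locks. Locking ids in a consistent (sorted) order is a common safeguard against deadlocks when two app servers lock overlapping sets of mutex rows:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/**
 * Sketch of the "database mutex" pattern (names assumed): row locks taken
 * with FOR UPDATE are held until commit/rollback, so they serialize the
 * updates to the related tables across app servers.
 */
public class DatabaseMutex {
    /** Put lock ids into a canonical order before acquiring them. */
    static List<Integer> canonicalLockOrder(List<Integer> ids) {
        List<Integer> sorted = new ArrayList<>(ids);
        Collections.sort(sorted);
        return sorted;
    }

    // With a live Connection conn, the transaction would look roughly like:
    //   conn.setAutoCommit(false);
    //   try (Statement st = conn.createStatement()) {
    //       // build the IN list from canonicalLockOrder(ids)
    //       ResultSet rs = st.executeQuery(
    //           "SELECT id FROM mutex_table WHERE id IN (...)"
    //           + " ORDER BY id FOR UPDATE");
    //       while (rs.next()) { }   // drain so all mutex rows are locked
    //       // ... update the related tables here ...
    //       conn.commit();          // releases the row locks
    //   } catch (SQLException e) {
    //       conn.rollback();
    //       throw e;
    //   }
}
```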
Thanks,
Dave
Jeff, I don't know if you are trying to support multiple databases or not, but this behavior does vary across databases. In DB2, for example, only the current row your cursor is on is locked; Oracle will lock all rows that satisfy the query (even if you never fetch them all).

--Barry

Tom Lane wrote:
> Jeffrey Tenny <jeffrey.tenny@comcast.net> writes:
>> Can I count on this behavior for PostgreSQL? Or will it fail to lock
>> all rows if I have a sufficiently large ResultSet and
>> Connection.setFetchSize() does its magic?
>
> In the current implementation, the backend will only lock those rows
> actually returned to the client. If setFetchSize() causes not all the
> rows to be fetched, you lose ...
>
> regards, tom lane