Cursors performance
From | Raimon Fernandez |
---|---|
Subject | Cursors performance |
Date | |
Msg-id | 6456736F-70E9-4896-B63F-8B0B99C099C6@montx.com |
List | pgsql-novice |
Hi folks,

We're still evaluating our transition from 4D Server to PostgreSQL. Since 4D maintains selections on the server and only sends the rows that can actually be viewed, it is fast from a user's perspective.

We want to use REALbasic as a frontend, and we are trying two options:

1: open a cursor on the server and fetch the records as they are needed
2: send a new SELECT with LIMIT and OFFSET

Option 1 is really fast, but we don't know whether PostgreSQL can achieve good performance with 100 users connected. The worst scenario would be each user having one or two cursors open, although that would never happen in a normal situation, as those cursors will only exist for SELECTs returning more than, for example, 5000 rows. Still, we don't want a setup that works now but breaks in the future...

We would start a transaction, create the cursor with the option that keeps it alive after the transaction is closed, and then close the transaction. We don't actually need the transaction; we only create it because the cursor requires one. With this option, the selection lives on the server and we fetch the data only as we need it.

Option 2 would be more direct, as we just send the SELECT and don't have to maintain cursors or transactions. With this option, the problem arises when we have a SELECT open and other users add new rows that would change its result, so some rows could be missing. But I think we would also have this problem with option 1, as those new records will never show up without issuing a new SELECT.

So, apart from asking how PostgreSQL copes with a few hundred open cursors, are there any recommendations on database design books for this kind of situation?

Thanks for your time,

regards,

raimon fernandez
barcelona
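For reference, the two options described above could be sketched in SQL roughly as follows. This is only an illustration; the table and cursor names (`customers`, `big_list`) are hypothetical, and option 1 uses PostgreSQL's `WITH HOLD` clause, which keeps a cursor open after the creating transaction commits:

```sql
-- Option 1: a server-side cursor that survives its transaction.
BEGIN;
DECLARE big_list CURSOR WITH HOLD FOR
    SELECT * FROM customers ORDER BY name;   -- hypothetical table
COMMIT;   -- the cursor stays open because it was declared WITH HOLD

FETCH 50 FROM big_list;   -- first page of rows
FETCH 50 FROM big_list;   -- next page, as the user scrolls
CLOSE big_list;           -- release the server-side result set

-- Option 2: stateless paging; each request is an independent query.
SELECT * FROM customers ORDER BY name LIMIT 50 OFFSET 100;
```

Note that a `WITH HOLD` cursor is materialized on the server when the transaction commits, so it holds resources until it is explicitly closed, whereas the LIMIT/OFFSET query re-executes (and re-plans) each time.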