Re: Large Tables(>1 Gb)
From | Tom Lane |
---|---|
Subject | Re: Large Tables(>1 Gb) |
Date | |
Msg-id | 19026.962379136@sss.pgh.pa.us |
In reply to | Re: Large Tables(>1 Gb) (Denis Perchine <dyp@perchine.com>) |
List | pgsql-general |
Denis Perchine <dyp@perchine.com> writes:
> 2. Use limit & offset capability of postgres.
> select * from big_table limit 1000 offset 0;
> select * from big_table limit 1000 offset 1000;

This is a risky way to do it --- the Postgres optimizer considers limit/offset when choosing a plan, and is quite capable of choosing different plans that yield different tuple orderings depending on the size of the offset+limit. For a plain SELECT as above you'd probably be safe enough, but in more complex cases such as having potentially-indexable WHERE clauses you'll very likely get bitten, unless you have an ORDER BY clause to guarantee a unique tuple ordering.

Another advantage of FETCH is that you get a consistent result set even if other backends are modifying the table, since it all happens within one transaction.

			regards, tom lane
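To make the two approaches concrete, here is a minimal sketch; the table name big_table comes from the quoted message, while the cursor name big_cur and the unique column id are illustrative assumptions:

```sql
-- Cursor-based batching: every FETCH runs inside the same transaction,
-- so the result set stays consistent even while other backends modify
-- the table.  ("big_cur" is an illustrative cursor name.)
BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
FETCH 1000 FROM big_cur;   -- first batch of rows
FETCH 1000 FROM big_cur;   -- next batch, continuing where we left off
CLOSE big_cur;
COMMIT;

-- If LIMIT/OFFSET must be used, an ORDER BY over a unique key
-- ("id" assumed here) pins down the tuple ordering, so successive
-- queries cannot return overlapping or skipped rows merely because
-- the planner picked a different plan:
SELECT * FROM big_table ORDER BY id LIMIT 1000 OFFSET 1000;
```

Note that the LIMIT/OFFSET form still rescans from the start of the ordering on every query, whereas the cursor simply continues from its current position.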