Re: Read table rows in chunks

From: David G. Johnston
Subject: Re: Read table rows in chunks
Date:
Msg-id: CAKFQuwbRbO3qAQis113P+WnemfGF_pPNHi9TzZTs0A3qu=_ETw@mail.gmail.com
In reply to: Read table rows in chunks  (Sushrut Shivaswamy <sushrut.shivaswamy@gmail.com>)
List: pgsql-hackers
On Sat, Apr 27, 2024 at 12:47 AM Sushrut Shivaswamy <sushrut.shivaswamy@gmail.com> wrote:

I'm trying to read the rows of a table in chunks to process them in a background worker.

This list really isn't the place for this kind of discussion.  You are doing application-level work, not working on patches for core.  General discussions and questions like this should be directed to the -general mailing list.

I want to ensure that each row is processed only once.

Is failure during processing possible?


I was thinking of using the `SELECT * ... OFFSET {offset_size} LIMIT {limit_size}` functionality for this but I'm running into issues.

The FOR UPDATE and SKIP LOCKED clauses usually come into play for this use case.
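A minimal sketch of that pattern, assuming a hypothetical `items` table with an `id` primary key and a `status` column (names are illustrative, not from the original thread):

```sql
-- Each worker claims a chunk of pending rows; FOR UPDATE locks them,
-- and SKIP LOCKED makes concurrent workers skip (not wait on) rows
-- already claimed by another transaction.
WITH batch AS (
    SELECT id
    FROM items
    WHERE status = 'pending'
    ORDER BY id
    LIMIT 100
    FOR UPDATE SKIP LOCKED
)
UPDATE items
SET status = 'processed'
FROM batch
WHERE items.id = batch.id
RETURNING items.*;
```

Because the lock is held until the transaction commits, two workers running this concurrently can never return the same row; each call drains the next available chunk of at most 100 rows.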
 
Can you please suggest an alternative way to periodically read rows from a table in chunks while processing each row exactly once?


I think you are fooling yourself if you think you can do this without writing back to the row the fact that it has been processed.  At that point, ensuring that you only retrieve and process unprocessed rows is trivial: just don't select ones with a status of processed.

If adding a column to the data is not possible, record processed row identifiers into a separate table and inspect that.
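A sketch of that separate-table approach, again using hypothetical `items` / `processed_items` names for illustration:

```sql
-- One row per processed item; the primary key makes recording idempotent.
CREATE TABLE processed_items (
    item_id      bigint PRIMARY KEY REFERENCES items (id),
    processed_at timestamptz NOT NULL DEFAULT now()
);

-- Fetch the next chunk of rows that have no processing record yet.
SELECT i.*
FROM items i
WHERE NOT EXISTS (
    SELECT 1 FROM processed_items p WHERE p.item_id = i.id
)
ORDER BY i.id
LIMIT 100;

-- After successfully processing a row, record it ($1 = the item's id);
-- ON CONFLICT makes a retried worker a no-op rather than an error.
INSERT INTO processed_items (item_id)
VALUES ($1)
ON CONFLICT DO NOTHING;
```

For exactly-once behavior, the insert into `processed_items` should happen in the same transaction as whatever side effects processing produces; otherwise a crash between the two steps can leave a row processed but unrecorded.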

David J.
