Re: UPDATE many records
From | Rob Sargent |
---|---|
Subject | Re: UPDATE many records |
Date | |
Msg-id | 9BB7DDC4-16D7-4137-8F6A-5C4F4F7AB2A0@gmail.com |
In reply to | Re: UPDATE many records (Alban Hertroys <haramrae@gmail.com>) |
Responses | Re: UPDATE many records |
List | pgsql-general |
I must emphasize: this estimate is HIGHLY dependent on hardware and the complexity of the table (number of indices, etc.). (I suspect there's a correlation between table size (business value) and the number of indices.)

On Jan 6, 2020, at 1:29 PM, Alban Hertroys <haramrae@gmail.com> wrote:

I think you're overcomplicating the matter.
I’d just do it as a single update in one transaction. It’s only 50M rows. It may take half an hour or so on decent hardware, depending on how resource-intensive your function is.
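The single-transaction approach suggested above might look like the following sketch. The table, column, and function names (`mytable`, `old_col`, `new_col`, `expensive_function`) are placeholders I've assumed for illustration; they are not from the original thread:

```sql
-- Hypothetical sketch: update all ~50M rows in one transaction.
-- Table/column/function names are placeholder assumptions.
BEGIN;

UPDATE mytable
SET new_col = expensive_function(old_col);

COMMIT;
```

Because everything happens in one transaction, a failure at any point rolls the whole update back and leaves the table unchanged.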
If that fails[1], only then would I start looking into batching things. But then you still need to figure out why it failed and what to do about that; if it fails, it will probably fail fast, and if not, then you're looking at a one-off situation that won't require more than a few workarounds - after which you can just run the update again.
Ad 1). No harm has been done, it’s a single transaction that rolled back.
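If batching does turn out to be necessary, one common pattern is to process the table in keyed chunks, committing each batch separately. This is a hedged sketch, not the thread's method; it assumes a primary key `id` and that unprocessed rows can be identified (here, by `new_col IS NULL`):

```sql
-- Hypothetical batching fallback (names are placeholder assumptions).
-- Run repeatedly, committing between runs, until 0 rows are updated.
UPDATE mytable
SET new_col = expensive_function(old_col)
WHERE id IN (
    SELECT id
    FROM mytable
    WHERE new_col IS NULL   -- rows not yet processed
    ORDER BY id
    LIMIT 100000            -- batch size: tune for your hardware
);
```

The trade-off is that a failure partway through leaves some rows updated and some not, so the "not yet processed" predicate must be reliable enough to resume from.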
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.