Re: Optimizing bulk update performance
From: Jasen Betts
Subject: Re: Optimizing bulk update performance
Date:
Msg-id: kli7rm$n6r$1@gonzo.reversiblemaps.ath.cx
In reply to: Optimizing bulk update performance (Yang Zhang <yanghatespam@gmail.com>)
List: pgsql-general
On 2013-04-27, Yang Zhang <yanghatespam@gmail.com> wrote:
> On Sat, Apr 27, 2013 at 1:55 AM, Misa Simic <misa.simic@gmail.com> wrote:
>> Optionally you can run vacuum analyze after bulk operation...
>
> But wouldn't a bulk UPDATE touch many existing pages (say, 20%
> scattered around) to mark rows as dead (per MVCC)? I guess it comes
> down to: will PG be smart enough to mark dead rows in largely
> sequential scans (rather than, say, jumping around in whatever order
> rows from foo are yielded by the above join)?

A plpgsql FOR-IN-query loop isn't going to be that smart; it's a
procedural language and does things procedurally. If you want to do set
operations, use SQL. This:

    UPDATE existing-table SET .... FROM temp-table WHERE join-condition;

will likely get you a sequential scan over the existing table and
should be reasonably performant as long as temp-table is small enough
to fit in memory.

--
⚂⚃ 100% natural
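A minimal sketch of that set-based pattern, assuming a hypothetical
existing table "accounts" and a staged temp table "updates" joined on
"id" (the names and columns are illustrative, not from the thread):

    -- Stage the new values in a temp table; it stays in memory while it
    -- fits within temp_buffers.
    CREATE TEMP TABLE updates (
        id      integer PRIMARY KEY,
        balance numeric NOT NULL
    );

    -- ... bulk-load the new values here, e.g. with COPY ...

    -- Give the planner statistics on the temp table so it can choose a
    -- sensible join plan.
    ANALYZE updates;

    -- One set-based statement instead of a row-by-row plpgsql loop:
    -- PostgreSQL can scan accounts sequentially and join against the
    -- small temp table.
    UPDATE accounts AS a
    SET    balance = u.balance
    FROM   updates AS u
    WHERE  a.id = u.id;

    -- Optionally reclaim the dead row versions the bulk update left
    -- behind, per the suggestion quoted above.
    VACUUM ANALYZE accounts;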