Re: processing large amount of rows with plpgsql
From:        Marc Mamin
Subject:     Re: processing large amount of rows with plpgsql
Date:
Msg-id:      C4DAC901169B624F933534A26ED7DF310861B65D@JENMAIL01.ad.intershop.net
In reply to: Re: processing large amount of rows with plpgsql (Geert Mak <pobox@verysmall.org>)
Responses:   Re: processing large amount of rows with plpgsql
List:        pgsql-general
>> There is (almost) no way to
>> force commit inside a function
>
> So what you are saying is that this behavior is normal and we should
> either equip ourselves with enough disk space (which I am trying now,
> it is a cloud server, which I am resizing to gain more disk space and
> see what will happen) or do it with an external (scripting) language?

Hello,

A relatively simple way to work around your performance/resource problem
is to slice the update, e.g.:

  create function myupdate(slice int)
  ...
  for statistics_row in SELECT * FROM statistics WHERE id % 16 = slice

or:

  WHERE hashtext(id::text) % 16 = slice
  ...

and then call your function with the values 0..15 (when using 16 slices).
Use a power of 2 for the number of slices. It may be faster to use many
slices, and this also allows the job to run in parallel on a few threads.

HTH,

Marc Mamin
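For illustration, a minimal sketch of such a sliced function might look like
the following. This is a hypothetical example, not the poster's actual code:
the statistics table and id column come from the thread, but the loop body
and the processed column are placeholders for whatever per-row work the real
function performs.

  CREATE OR REPLACE FUNCTION myupdate(slice int) RETURNS void AS $$
  DECLARE
      statistics_row statistics%ROWTYPE;
  BEGIN
      -- visit only the rows belonging to this slice
      FOR statistics_row IN
          SELECT * FROM statistics WHERE id % 16 = slice
      LOOP
          -- placeholder for the real per-row work
          UPDATE statistics
             SET processed = true        -- hypothetical column
           WHERE id = statistics_row.id;
      END LOOP;
  END;
  $$ LANGUAGE plpgsql;

  -- Each call runs in its own transaction, so the work is committed in
  -- 16 chunks instead of one huge one; the calls can also be issued from
  -- separate sessions to run the slices in parallel.
  SELECT myupdate(0);
  SELECT myupdate(1);
  -- ... and so on up to
  SELECT myupdate(15);

Splitting the work this way addresses the original complaint: nothing commits
inside a single plpgsql function, but each slice's call is a separate
transaction, so resources are released between chunks.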