Re: max_expr_depth

From: Joseph Shraibman
Subject: Re: max_expr_depth
Date:
Msg-id: 3B2EC615.CBE8D3B3@selectacast.net
In reply to: max_expr_depth  (Joseph Shraibman <jks@selectacast.net>)
List: pgsql-general
Doug McNaught wrote:

>
> The issue for me would be: OK, 1000 entries in an IN works fine.
> Maybe 2000 works fine.  At some point (as you've seen) you hit a
> limit, whether it's query length, recursion depth or whatever.  Then
> you have to go rewrite your code.  I like to do it right the first
> time.  ;)
>
> If you know you will never ever have more than N items in the IN
> clause, and N is demonstrably less than the limit, use IN.  "Never
> ever" is a phrase that often turns out to be false in software
> development...
>
> If you're doing the updates in batches (say, 1000 at a time using IN)
> you still might want to consider wrapping the whole thing in a
> transaction.  That way, if the client or the network craps out in the
> middle of the run, you don't have a half-complete set of updates to
> clean up.
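
A minimal sketch of the batching-inside-one-transaction approach Doug
describes, assuming a JDBC client; the items table, processed column,
and BATCH_SIZE value are illustrative, not from this thread:

import java.sql.*;
import java.util.List;

// Sketch only: wrap all the batched IN-updates in one explicit
// transaction, so a client or network failure mid-run leaves no
// half-complete set of updates behind.
public class BatchedUpdate {
    static final int BATCH_SIZE = 1000;

    static void updateAll(Connection conn, List<Integer> ids)
            throws SQLException {
        conn.setAutoCommit(false);  // open an explicit transaction
        try (Statement st = conn.createStatement()) {
            for (int i = 0; i < ids.size(); i += BATCH_SIZE) {
                List<Integer> batch =
                    ids.subList(i, Math.min(i + BATCH_SIZE, ids.size()));
                StringBuilder in = new StringBuilder();
                for (int id : batch) {
                    if (in.length() > 0) in.append(',');
                    in.append(id);
                }
                // table and column names are made up for the example
                st.executeUpdate(
                    "UPDATE items SET processed = true WHERE id IN ("
                    + in + ")");
            }
            conn.commit();    // all batches become visible atomically
        } catch (SQLException e) {
            conn.rollback();  // nothing half-applied on failure
            throw e;
        } finally {
            conn.setAutoCommit(true);
        }
    }
}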

Actually, in this case I'd prefer to have as many of the updates done as
possible. Remember, they were single updates that I'm buffering to
improve performance.

Right now I'm flushing my buffer every minute, and at our current rate
of processing there won't be more than 185 records to update in a
minute.  If I instead write out my buffer once it reaches a limit of 500
records, I should stay well below the Postgres limit of 10000 and still
get the performance benefit.
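
A minimal sketch of that buffering, again assuming a JDBC client;
FLUSH_AT, the items table, and the processed column are illustrative.
Each flush runs in autocommit mode, so batches already written are kept
even if a later one fails, matching the preference above:

import java.sql.*;
import java.util.ArrayList;
import java.util.List;

// Sketch only: buffer ids as they arrive and flush once the buffer
// reaches 500 (a timer can also call flush() once a minute).
public class UpdateBuffer {
    private static final int FLUSH_AT = 500;  // well under the IN-list limit
    private final List<Integer> pending = new ArrayList<>();
    private final Connection conn;

    UpdateBuffer(Connection conn) { this.conn = conn; }

    synchronized void add(int id) throws SQLException {
        pending.add(id);
        if (pending.size() >= FLUSH_AT) flush();
    }

    synchronized void flush() throws SQLException {
        if (pending.isEmpty()) return;
        StringBuilder in = new StringBuilder();
        for (int id : pending) {
            if (in.length() > 0) in.append(',');
            in.append(id);
        }
        try (Statement st = conn.createStatement()) {
            // table and column names are made up for the example
            st.executeUpdate(
                "UPDATE items SET processed = true WHERE id IN ("
                + in + ")");
        }
        pending.clear();
    }
}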


--
Joseph Shraibman
jks@selectacast.net
Increase signal to noise ratio.  http://www.targabot.com
