Re: TB-sized databases
From | Trevor Talbot
---|---
Subject | Re: TB-sized databases
Date |
Msg-id | 90bce5730711300215x3ff6c68ekf1c55ce799e48578@mail.gmail.com
In reply to | Re: TB-sized databases (Gregory Stark <stark@enterprisedb.com>)
Responses | Re: TB-sized databases
List | pgsql-performance
On 11/29/07, Gregory Stark <stark@enterprisedb.com> wrote:

> "Simon Riggs" <simon@2ndquadrant.com> writes:
>
> > On Wed, 2007-11-28 at 14:48 +0100, Csaba Nagy wrote:
> >
> >> In fact an even more useful option would be to ask the planner to throw
> >> error if the expected cost exceeds a certain threshold...
> >
> > Tom's previous concerns were along the lines of "How would you know what to
> > set it to?", given that the planner costs are mostly arbitrary numbers.
>
> Hm, that's only kind of true.
>
> Obviously few people know how long such a page read takes, but surely you would
> just run a few sequential reads of large tables and set the limit to some
> multiple of whatever you find.
>
> This isn't going to be precise to the level of being able to avoid executing any
> query which will take over 1000ms. But it is going to be able to catch
> unconstrained cross joins or large sequential scans or such.

Isn't that what statement_timeout is for?

Since this is entirely based on estimates, using arbitrary fuzzy numbers for this seems fine to me; precision isn't really the goal.
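A minimal psql sketch may make both ideas in this exchange concrete: calibrating the planner's arbitrary cost units against measured sequential-scan time, and the statement_timeout alternative raised above. The table name big_table and the 1000 ms value are illustrative assumptions, not from the original thread.

```sql
-- Minimal sketch, assuming a multi-GB table named big_table (hypothetical).

-- Step 1: calibrate planner cost units against wall-clock time.
\timing on
EXPLAIN SELECT count(*) FROM big_table;  -- note the planner's estimated total cost
SELECT count(*) FROM big_table;          -- note the actual elapsed time
-- cost units per millisecond ~ estimated total cost / elapsed ms;
-- the cost ceiling (the hypothetical feature discussed here) would then
-- be set at some multiple of the largest scan considered acceptable.

-- Step 2: the existing mechanism mentioned above: abort any statement
-- that actually runs longer than the limit.
SET statement_timeout = 1000;            -- value in milliseconds
SELECT count(*) FROM big_table;          -- cancelled with an error past 1 s
```

The distinction worth noting: statement_timeout acts on actual runtime, after the query has already consumed resources, while the proposed cost threshold would reject the query at plan time based on estimates.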