Re: Calculating statistic via function rather than with query is slowing my query
From | Craig Ringer
---|---
Subject | Re: Calculating statistic via function rather than with query is slowing my query
Date |
Msg-id | 4E4CA3F2.1080604@ringerc.id.au
In reply to | Re: Calculating statistic via function rather than with query is slowing my query (Anish Kejariwal <anishkej@gmail.com>)
Responses | Re: Calculating statistic via function rather than with query is slowing my query
List | pgsql-performance
On 18/08/2011 9:03 AM, Anish Kejariwal wrote:

> Thanks for the help Pavel and Craig. I really appreciate it. I'm
> going to try a couple of these different options (write a c function,
> use a sql function with case statements, and use plperl), so I can see
> which gives me the realtime performance that I need, and works best
> for clean code in my particular case.

Do you really mean "realtime"? Or just "fast"?

If you have strongly bounded latency requirements, any SQL-based, disk-based system is probably not for you. Especially not one that relies on a statistics-based query planner, caching, and periodic checkpoints. I'd be looking into in-memory databases designed for realtime environments where latency is critical.

Hard realtime: If this system fails to respond within <x> milliseconds, all the time, every time, then something will go "smash" or "boom" expensively and unrecoverably.

Soft realtime: If this system responds late, the late response is expensive or less useful. Frequent late responses are unacceptable but the occasional one might be endurable.

Just needs to be fast: If it responds late, the user gets irritated because they're sitting and waiting for a response. Regular long stalls are unacceptable, but otherwise the user can put up with it. You're more concerned with average latency than maximum latency.

--
Craig Ringer
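[Editorial sketch, not part of the original message.] The average-vs-maximum distinction above can be made concrete with a few hypothetical latency samples (the numbers are illustrative, not measurements from this thread): one long stall, say a checkpoint or a cold cache, barely moves the average but dominates the maximum.

```python
# Hypothetical per-query latencies in milliseconds: mostly fast queries
# plus one long stall (e.g. a checkpoint). Illustrative values only.
samples = [12, 11, 13, 12, 14, 11, 12, 13, 950, 12]

average = sum(samples) / len(samples)  # what a "just needs to be fast" system tracks
worst = max(samples)                   # what a realtime system must bound

print(f"average: {average:.1f} ms, worst: {worst} ms")
# average: 106.0 ms, worst: 950 ms
```

A hard-realtime system fails outright on the 950 ms outlier; a "just needs to be fast" system mostly notices the modest average.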