Re: query overhead
From        | Craig Ringer
Subject     | Re: query overhead
Date        |
Msg-id      | 5000E75A.6030108@ringerc.id.au
In reply to | query overhead (Andy Halsall <halsall_andy@hotmail.com>)
List        | pgsql-performance
On 07/11/2012 07:46 PM, Andy Halsall wrote:
> I've written an Immutable stored procedure that takes no parameters and
> returns a fixed value, to try to determine the round-trip overhead of a
> query to PostgreSQL. The call to the sp is made using libpq. We're all
> local and using UNIX domain sockets.

PL/PgSQL or SQL stored proc? There's a definite calling overhead for PL/PgSQL compared to plain SQL functions, and SQL functions in turn cost more than a direct statement.

These costs aren't big. They're massively outweighed by any kind of disk access or any non-trivial query. They start to add up if you have a lot of procs that wrap a simple "SELECT * FROM x WHERE x.id = $1", though.

> Client measures are suggesting ~150-200 microseconds to call the sp and
> get the answer back.

0.15 to 0.2 milliseconds? That's ... um ... fast. Presumably that's during a loop where your no-op is run repeatedly without connection setup costs, etc.
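To illustrate the calling-overhead difference, here's a minimal sketch of the same no-op written both ways (function names are made up for illustration). Simple SQL functions like this can often be inlined into the calling query, while a PL/pgSQL function always pays the procedural call cost:

```sql
-- Plain SQL function: cheap, and simple cases like this can be inlined.
CREATE FUNCTION noop_sql() RETURNS integer AS $$
    SELECT 1;
$$ LANGUAGE sql IMMUTABLE;

-- PL/pgSQL equivalent: carries extra per-call setup cost.
CREATE FUNCTION noop_plpgsql() RETURNS integer AS $$
BEGIN
    RETURN 1;
END;
$$ LANGUAGE plpgsql IMMUTABLE;
```

One caveat when benchmarking with \timing in psql: an IMMUTABLE function called with constant (here: no) arguments may be pre-evaluated at plan time, so for a per-call measurement you may want VOLATILE variants of the test functions.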
> ping to loopback returns in ~20 microseconds (I assume domain sockets
> are equivalent).

UNIX domain sockets are typically at least as fast, with somewhat lower overhead.
> So it looks like a local no-op overhead of at least 150 microseconds,
> which would leave us struggling.
>
> Could someone please let me know if this is usual and, if so, where the
> time's spent?
>
> Short of getting a faster server, is there anything I can do to
> influence this?
I'm not sure how much a faster server would help with single-query response time. It'll help with response to many parallel queries, but may not do much for a single query, especially a tiny, lightweight one.
The Xeon 7040:
http://ark.intel.com/products/27226/Intel-Xeon-Processor-7040-(4M-Cache-3_00-GHz-667-MHz-FSB)
is not the newest beast out there, but it's not exactly slow.
Honestly, PostgreSQL's focus is on performance with bigger units of work, not on being able to return a response to a tiny query in microseconds. If you are converting an application that has microsecond response time requirements and hammers its database with millions of tiny queries, PostgreSQL is probably not going to be your best bet.
If you're able to adapt the app to use set-oriented access patterns instead of looping, e.g. instead of (pseudocode):
customer_ids = [ ... array from somewhere ... ]
for customer_id in customer_ids:
    c = SELECT c FROM customer c WHERE c.customer_id = :customer_id
    if c.some_var:
        UPDATE customer SET some_other_var = 't' WHERE customer_id = :customer_id
you can do:
UPDATE customer SET some_other_var = [expression] WHERE [condition]
then you'll get much better results from Pg.
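As a concrete sketch of that rewrite (reusing the illustrative customer table and some_var/some_other_var columns from the pseudocode above), the whole loop collapses into a single statement, and one round trip replaces 2N of them:

```sql
UPDATE customer
SET    some_other_var = 't'
WHERE  some_var                              -- the per-row condition from the loop
AND    customer_id = ANY (:customer_ids);    -- optional: restrict to the app's ID list
```

Passing the ID array as a single `= ANY(...)` parameter keeps the per-statement overhead constant no matter how many customers are touched.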
--
Craig Ringer