Re: Performance issues when the number of records are around 10 Million

From: Guillaume Lelarge
Subject: Re: Performance issues when the number of records are around 10 Million
Date:
Msg-id: 4BE8FC8E.9090702@lelarge.info
In reply to: Performance issues when the number of records are around 10 Million  (venu madhav <venutaurus539@gmail.com>)
List: pgadmin-support
On 11/05/2010 08:34, venu madhav wrote:
> Hi all,
>       In my database application, I've a table whose records can reach 10M
> and insertions can happen at a faster rate like 100 insertions per second in
> the peak times. I configured postgres to do auto vacuum on hourly basis. I
> have frontend GUI application in CGI which displays the data from the
> database. When I try to get the last twenty records from the database, it
> takes around 10-15 mins to complete the operation. This is the query
> being used:
> 
> * select e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,
> e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,
> e.wifi_addr_2, e.view_status, bssid  FROM event e, signature s WHERE
> s.sig_id = e.signature   AND e.timestamp >= '1270449180' AND e.timestamp <
> '1273473180'  ORDER BY e.cid DESC,  e.cid DESC limit 21 offset 10539780;
> *
> Can anyone suggest a better solution to improve the performance?
> 

You're on the wrong mailing list. This mailing list deals with pgAdmin
support.

You would be better off asking this on pgsql-performance, for example.
Make sure you also send an EXPLAIN ANALYZE of the query; if you don't,
that's the first thing they will ask for. And lastly, having an offset
this big is just asking for (performance) trouble.
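[Editor's note: a minimal sketch of the keyset-pagination alternative hinted at above. With OFFSET 10539780, PostgreSQL must produce and discard all of the preceding rows before returning the 21 requested ones. The index name and the literal cid value below are placeholders, and it is assumed that e.cid is unique:]

```sql
-- Hypothetical sketch: keyset ("cursor") pagination instead of a huge OFFSET.
-- Assumes e.cid is unique and a b-tree index exists on event (cid), e.g.:
-- CREATE INDEX event_cid_idx ON event (cid);

SELECT e.cid, e.timestamp, s.sig_class, s.sig_priority, s.sig_name,
       e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,
       e.wifi_addr_2, e.view_status, e.bssid
FROM event e
JOIN signature s ON s.sig_id = e.signature
WHERE e.timestamp >= '1270449180'
  AND e.timestamp <  '1273473180'
  AND e.cid < 12345              -- last cid seen on the previous page (placeholder)
ORDER BY e.cid DESC
LIMIT 21;
```

[Each page fetch then becomes an index range scan starting at the remembered cid, so the cost per page no longer grows with how deep into the result set you are.]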


-- 
Guillaume.
http://www.postgresqlfr.org
http://dalibo.com

