Re: Performance issues when the number of records are around 10 Million
From:        A. Kretschmer
Subject:     Re: Performance issues when the number of records are around 10 Million
Date:
Msg-id:      20100511070355.GB32479@a-kretschmer.de
In reply to: Performance issues when the number of records are around 10 Million  (venu madhav <venutaurus539@gmail.com>)
List:        pgsql-general
In response to venu madhav:

> Hi all,
>      In my database application, I've a table whose records can
> reach 10M and insertions can happen at a faster rate like 100
> insertions per second in the peak times. I configured postgres to do
> auto vacuum on hourly basis. I have frontend GUI application in CGI
> which displays the data from the database.
>      When I try to get the last twenty records from the database,
> it takes around 10-15 mins to complete the operation. This is the query
> which is used:
>
> select e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,
> e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,
> e.wifi_addr_2, e.view_status, bssid FROM event e, signature s WHERE
> s.sig_id = e.signature AND e.timestamp >= '1270449180' AND
> e.timestamp < '1273473180' ORDER BY e.cid DESC, e.cid DESC limit 21
> offset 10539780;

First, show us the table definition for both tables; second, the output
generated by EXPLAIN ANALYSE <your query>.

I'm surprised about the "e.timestamp >= '1270449180'": is this a
TIMESTAMP column?

And, to retrieve the last twenty records, you should write:

    ORDER BY ts DESC LIMIT 20

With a proper index on this column, this should force an index scan.


Andreas
--
Andreas Kretschmer
Contact: Heynitz: 035242/47150, D1: 0160/7141639 (more: -> header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0  389D 1DC2 3172 0C99
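For readers finding this thread later, a minimal sketch of the fix being
suggested, assuming the schema implied by the quoted query; the index name
event_ts_idx is illustrative, not from the thread:

    -- Index the column used for filtering and sorting. "timestamp" is
    -- quoted here because it collides with the SQL type name.
    CREATE INDEX event_ts_idx ON event ("timestamp");

    -- Fetch the newest twenty matching rows directly, instead of making
    -- the server scan and discard ~10.5 million rows via OFFSET:
    SELECT e.cid, e."timestamp", s.sig_class, s.sig_priority, s.sig_name,
           e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,
           e.wifi_addr_2, e.view_status, bssid
      FROM event e
      JOIN signature s ON s.sig_id = e.signature
     WHERE e."timestamp" >= '1270449180'
       AND e."timestamp" <  '1273473180'
     ORDER BY e."timestamp" DESC
     LIMIT 20;

EXPLAIN ANALYSE on the rewritten query should then show an index scan on
event_ts_idx rather than a sort over the full result set; if it does not,
the table definitions Andreas asked for would be the next thing to check.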