Re: performance in selecting from very large recordset
| From | Dann Corbit |
|---|---|
| Subject | Re: performance in selecting from very large recordset |
| Date | |
| Msg-id | D90A5A6C612A39408103E6ECDD77B8294CE514@voyager.corporate.connx.com |
| In reply to | performance in selecting from very large recordset ("Hijax" <hijax_pl@wp.pl>) |
| Responses | Re: performance in selecting from very large recordset |
| List | pgsql-general |
> -----Original Message----- > From: Hijax [mailto:hijax_pl@wp.pl] > Sent: Thursday, December 18, 2003 5:34 AM > To: pgsql-general@postgresql.org > Subject: [GENERAL] performance in selecting from very large recordset > > > Hi, > > What about posgtesql performace in selecting from million of records? > > I have a database with about 23 miilion of touples in single > table. Data represents a analog signal values from multiples > sources... These values are gathered in every minute... > > How to create a select statement.... or how to create > indexes... for maximum performance... > > Any guidelines? The normal guideline is that for 'equal' searches or 'in list' searches, use a hashed index. That is what I do with most database systems. For comparision searches, you need a btree index. You can add both kinds, but the more indexes you add the slower the inserts will be. There were problems with hashed indexes in PostgreSQL before. I think that they have all been addressed in 7.4, but I am not an expert on their usage in that environment, so it would be good if others can supply feedback. I develop database systems under many environments, and so my advice is not PG specific. There is a performance alias that would be a better place for your inquiry than the general list.