Re: [GENERAL] identifying performance hits: how to ???
From | The Hermit Hacker
Subject | Re: [GENERAL] identifying performance hits: how to ???
Date |
Msg-id | Pine.BSF.4.21.0001121356380.46499-100000@thelab.hub.org
In reply to | Re: [GENERAL] identifying performance hits: how to ??? (Karl DeBisschop <kdebisschop@range.infoplease.com>)
List | pgsql-general
On Wed, 12 Jan 2000, Karl DeBisschop wrote:

> > Anyone know if read performance on a postgres database decreases at
> > an increasing rate, as the number of stored records increase?
> >
> > It seems as if I'm missing something fundamental... maybe I am... is
> > some kind of database cleanup necessary? With less than ten
> > records, the grid populates very quickly. Beyond that, performance
> > slows to a crawl, until it _seems_ that every new record doubles the
> > time needed to retrieve...
>
> Are you using indexes?
>
> Are you vacuuming?
>
> I may have incorrectly inferred table sizes and such, but the behavior
> you describe seems odd - we typically work with hundreds of thousands
> of entries in our tables with good results (though things do slow down
> for the one DB we use with tens of millions of entries).

An example of a large database that ppl can see in action... the search
engine we are using on PostgreSQL, when fully populated, works out to
around 6 million records... and is reasonably quick...
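[For readers of the archive: Karl's two questions point at the usual fixes for this symptom. A minimal sketch, assuming a hypothetical table `records` with an indexed-upon column `name` (names invented here for illustration, not from the thread):]

```sql
-- Without an index, every lookup is a sequential scan over the whole
-- table, so retrieval time grows with the number of rows.
CREATE INDEX records_name_idx ON records (name);

-- PostgreSQL does not reclaim space from deleted/updated rows until
-- you vacuum; VACUUM ANALYZE also refreshes the planner's statistics.
VACUUM ANALYZE records;

-- EXPLAIN shows whether the planner actually uses the index
-- (look for an index scan rather than a sequential scan).
EXPLAIN SELECT * FROM records WHERE name = 'example';
```

Heavily-updated tables need vacuuming regularly, since dead tuples accumulate and inflate the cost of every scan.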