Re: [GENERAL] performance issues
From | Peter A. Daly
---|---
Subject | Re: [GENERAL] performance issues
Date |
Msg-id | 3D4EB44C.9020503@ix.netcom.com
In reply to | Re: performance issues ("Christopher Kings-Lynne" <chriskl@familyhealth.com.au>)
List | pgsql-hackers
We have tables of over 3.1 million records. Performance is fine for most things as long as access hits an index. As already stated, count(*) takes a long time; it just took over a minute for me to check the record count. Our DB primarily serves a data warehouse role.

Creating an index on a char(43) field of that table from scratch takes a while, but I think that's expected. Under normal loads we see well under 1 second "LIKE" queries on the indexed char(43) field in that table, with a join on a table of 1.1 million records using a char(12) primary key.

The server is a Dell PowerEdge 2400, dual PIII 667s with a gig of memory, 800-something megs allocated to postgres shared buffers.

-Pete

Andrew Sullivan wrote:

> On Fri, Aug 02, 2002 at 03:48:39PM +0400, Yaroslav Dmitriev wrote:
>
>> So I am still interested in PostgreSQL's ability to deal with
>> multimillion-record tables.
>
> [x-posted and Reply-To: to -general; this isn't a development
> problem.]
>
> We have tables with multimillion records, and they are fast. But not
> fast to count(). The MVCC design of PostgreSQL will give you very
> few concurrency problems, but you pay for that in the response time
> of certain kinds of aggregates, which cannot use an index.
>
> A
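For readers of the archive: the count(*) cost is inherent to MVCC, since every row's visibility must be checked, but the planner's statistics give a cheap approximation, and a left-anchored pattern keeps a LIKE query on the indexed path. A minimal sketch in SQL, using hypothetical table and column names (warehouse_items, item_code, item_details, detail_id) since the real schema isn't given in the thread:

```sql
-- Approximate row count from planner statistics (maintained by
-- VACUUM/ANALYZE) instead of a full sequential scan.
SELECT reltuples::bigint AS approx_rows
FROM   pg_class
WHERE  relname = 'warehouse_items';   -- hypothetical table name

-- A left-anchored LIKE can use the index on the char(43) column;
-- a leading wildcard ('%ABC') would force a sequential scan instead.
SELECT w.item_code, d.description
FROM   warehouse_items w
JOIN   item_details  d ON d.detail_id = w.detail_id  -- char(12) primary key
WHERE  w.item_code LIKE 'ABC%';
```

Note that reltuples is only as fresh as the last VACUUM or ANALYZE, so it is an estimate rather than an exact count.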