Re: Scaling 10 million records in PostgreSQL table
From | Steve Crawford |
---|---|
Subject | Re: Scaling 10 million records in PostgreSQL table |
Date | |
Msg-id | 50733317.9080408@pinpointresearch.com |
In reply to | Scaling 10 million records in PostgreSQL table (Navaneethan R <nava@gridlex.com>) |
Responses | Re: Scaling 10 million records in PostgreSQL table |
List | pgsql-performance |
On 10/08/2012 08:26 AM, Navaneethan R wrote:
> Hi all,
>
> I have 10 million records in my Postgres table. I am running the database on an Amazon EC2 medium instance, and I need to access the last week of data from the table.
> Processing even this simple query takes a huge amount of time, so it throws a timeout exception.
>
> The query is:
> select count(*) from dealer_vehicle_details where modified_on between '2012-10-01' and '2012-10-08' and dealer_id=270001;
>
> After a long time it returns a count of 1184.
>
> What can I do to increase the performance of this query?
>
> Insertions also run in parallel, since the data is updated in real time every day.
>
> What exactly could be the reason for this poor performance?

What version of PostgreSQL? You can use "select version();". Note that 9.2 has index-only scans, which can give a substantial performance boost for queries of this type.

What is the structure of your table? You can use "\d+ dealer_vehicle_details" in psql.

Have you tuned PostgreSQL in any way? If so, what?

Cheers,
Steve
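
P.S. If it turns out that no index covers both dealer_id and modified_on, a composite index on those two columns is a common first step. A rough sketch, assuming only the table and column names from your query (the index name here is made up; adjust it to your schema):

    -- Confirm the server version and the current table definition:
    SELECT version();
    \d+ dealer_vehicle_details

    -- If no suitable index exists, build one without blocking the
    -- ongoing inserts (CONCURRENTLY cannot run inside a transaction block):
    CREATE INDEX CONCURRENTLY dealer_vehicle_details_dealer_modified_idx
        ON dealer_vehicle_details (dealer_id, modified_on);

    -- Then compare the plan and timing before and after:
    EXPLAIN ANALYZE
    SELECT count(*)
    FROM dealer_vehicle_details
    WHERE modified_on BETWEEN '2012-10-01' AND '2012-10-08'
      AND dealer_id = 270001;

Putting the equality column (dealer_id) first and the range column (modified_on) second lets the planner apply both conditions through the index, and on 9.2 a count like this can often be answered by an index-only scan.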