Re: Is there any limitations
From | li li
---|---
Subject | Re: Is there any limitations
Date |
Msg-id | Pine.GSO.4.33.0212031150100.6166-100000@ix.cs.uoregon.edu
In reply to | Re: Is there any limitations (Nikk Anderson <Nikk.Anderson@parallel.ltd.uk>)
List | pgsql-performance
Hi Nikk,

> I thought of doing this recently, as queries were taking so long. Instead
> of breaking the table up, we clustered the data. This physically moves all
> the data by key close to each other on disk (sounds kind of like defragging
> a disk). This boosts query responses no end - for example our table has ~
> 10 million rows, a query that was taking 45 seconds to return, now takes 7
> seconds. To keep the table tidy, we run the cluster regularly.
>

I've clustered the data on a non-key attribute. Now the query time is about
a couple of minutes, but I expected less than one minute. Is there any trick
to using cluster? I found that the primary key disappeared after clustering.
Or is it better to cluster on the primary key? My primary key is a
composite; I picked one of its attributes as the cluster key.

> > As the size of the original table is increasing fast, I want
> > to get it separated once the size grows up to a limit. So there
> > will be a large number of tables (having the same structure) in a
> > database. Is there any potential performance problem with this design?
> >
>
> I think the problems would mainly be in management, as you would have to
> keep track of the new table names, key names, and index names.
>

You are right, I have to keep track of these table names. However, I don't
see any need to track key names or index names, because, as I mentioned
above, all these tables have exactly the same structure.

Thanks for the quick response.

Li Li
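For reference, here is a minimal sketch of clustering on a composite primary key rather than on a single non-key attribute. The table, column, and index names below are hypothetical, and the CLUSTER indexname ON tablename form is the syntax of the 7.x releases under discussion (newer releases prefer CLUSTER tablename USING indexname):

    -- Hypothetical table with a composite primary key; the default
    -- name of the primary-key index is measurements_pkey.
    CREATE TABLE measurements (
        device_id  integer,
        ts         timestamp,
        reading    numeric,
        PRIMARY KEY (device_id, ts)
    );

    -- Cluster on the primary-key index so that lookups and range scans
    -- on the key benefit from the physical ordering of the rows.
    CLUSTER measurements_pkey ON measurements;

    -- Refresh planner statistics after the rows have been rewritten.
    ANALYZE measurements;

Whether secondary indexes and the primary-key constraint survive a CLUSTER depends on the PostgreSQL release in use, which may be what the "disappearing" primary key above refers to; re-checking the indexes (and re-running ANALYZE) after each clustering pass is a safe habit either way.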