Re: index speed-up and automatic tables/procedures creation
From | Tom Lane
---|---
Subject | Re: index speed-up and automatic tables/procedures creation
Date |
Msg-id | 21716.1259304579@sss.pgh.pa.us
In reply to | index speed-up and automatic tables/procedures creation ("Jean-Yves F. Barbier" <12ukwn@gmail.com>)
Responses | Re: index speed-up and automatic tables/procedures creation, Re: index speed-up and automatic tables/procedures creation
List | pgsql-novice
"Jean-Yves F. Barbier" <12ukwn@gmail.com> writes: > 1)- I'd like to keep a table in one piece, but it'll be huge (several millions rows > and growing); can a segmentation of indexes (all indexes that are used for > searching) speed-up this table scans enough to keep it as responsive to queries as > multiple tables? And what can I do about the primary key index, which is monolitic? > (I can't use inheritance as there are some integrity references into it.) I think you're wasting your time. What you are setting out to do here is manually emulate the top layer or so of a large index. Unless you have very specific (and unusual) data access patterns that you know in considerable detail, this is not a game you are going to win. Just go with the one big table and one index, you'll be happier. (Note that "several million rows" is not big, it's barely enough to notice.) You will see a lot of discussion about partitioning of tables if you look around the list archives, but this is not done with the idea that it makes access to any one row faster. The biggest motivation usually is to allow dropping ranges of data cheaply, like throwing away a month's or year's worth of old data at once. regards, tom lane