Re: Read performance on Large Table
From: Scott Ribe
Subject: Re: Read performance on Large Table
Date:
Msg-id: 580E17C5-363E-4D24-B381-80A1F4C83C81@elevated-dev.com
In reply to: Re: Read performance on Large Table (Scott Marlowe <scott.marlowe@gmail.com>)
Responses: Re: Read performance on Large Table
List: pgsql-admin
On May 21, 2015, at 9:05 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
>
> I've done a lot of partitioning of big data sets in postgresql and if
> there's some common field, like data, that makes sense to partition
> on, it can be a huge win.

Indeed. I recently did it on exactly this kind of thing, a log of activity. And the common queries weren't slow at all.

But if I wanted to upgrade via dump/restore with minimal downtime, rather than set up Slony or try my luck with pg_upgrade, I could dump the historical partitions, drop those tables, then dump/restore, then restore the historical partitions at my convenience. (In this particular db, history is unusually huge compared to the live data.)

--
Scott Ribe
scott_ribe@elevated-dev.com
http://www.elevated-dev.com/
https://www.linkedin.com/in/scottribe/
(303) 722-0567 voice
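The upgrade workflow described above (dump the historical partitions ahead of time, drop them so the live dump is small, dump/restore the live data, then restore history at leisure) could be sketched roughly as follows. This is a hypothetical plan, not Scott's actual script: the database name, partition names, host, and paths are invented examples, and each command is echoed rather than executed so the sequence reads as a dry run.

```shell
#!/bin/sh
# Dry-run sketch of a minimal-downtime dump/restore upgrade using
# per-partition dumps. All names (mydb, activity_log_*, newserver,
# /backup) are hypothetical.
DB=mydb
BACKUP=/backup
# Historical partitions: large, no longer receiving writes.
HIST="activity_log_2013 activity_log_2014"

# 1. Dump each historical partition ahead of the cutover (no downtime).
for p in $HIST; do
  echo "pg_dump -Fc -t $p -f $BACKUP/$p.dump $DB"
done

# 2. Drop the historical partitions so the main dump stays small.
for p in $HIST; do
  echo "psql -d $DB -c \"DROP TABLE $p\""
done

# 3. Dump the now-small live database and restore it on the new server.
echo "pg_dump -Fc -f $BACKUP/live.dump $DB"
echo "pg_restore -d $DB -h newserver $BACKUP/live.dump"

# 4. Restore the historical partitions at your convenience, after cutover.
for p in $HIST; do
  echo "pg_restore -d $DB -h newserver $BACKUP/$p.dump"
done
```

Because the historical partitions are read-only, their dumps taken in step 1 stay valid however long the restore in step 4 waits, which is what makes the downtime window depend only on the live data's size.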