Re: 10 TB database
From | Whit Armstrong
Subject | Re: 10 TB database
Date | |
Msg-id | 8ec76080906150601k3313b06sbd1285a09a3d1e89@mail.gmail.com
In reply to | Re: 10 TB database (Grzegorz Jaśkiewicz <gryzman@gmail.com>)
Responses | Re: 10 TB database
List | pgsql-general
I have a 300GB database, and I would like to look at partitioning as a
possible way to speed it up a bit.

I see the partitioning examples from the documentation:
http://www.postgresql.org/docs/8.3/static/ddl-partitioning.html

Is anyone aware of additional examples or tutorials on partitioning?

Thanks,
Whit

2009/6/15 Grzegorz Jaśkiewicz <gryzman@gmail.com>:
> On Mon, Jun 15, 2009 at 1:00 PM, Artur<a_wronski@gazeta.pl> wrote:
>> Hi!
>>
>> We are thinking of creating a stock-related search engine.
>> It is an experimental project, just for fun.
>>
>> The problem is that we expect to have more than 250 GB of data every month.
>> This data would be in two tables, with about 50,000,000 new rows every month.
>
> Well, obviously you need to decrease its size by doing some
> normalization then.
> If some information is repeated across the table, stick it into a separate
> table and assign an id to it.
>
> If you can send me a sample of that data, I could tell you where to cut its size.
> I have databases that big under my wing, and that's where
> normalization starts to make sense: it saves space (and hence speeds
> things up).
>
>> We want to have access to all the data, mostly for generating user-requested
>> reports (aggregating).
>> We would have about 10TB of data in three years.
>
> For that sort of database you will need partitioning for sure.
>
>
> Write to me, I can help privately, maybe for a small fee ;)
>
> --
> GJ
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
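For reference, the 8.3 documentation linked above uses inheritance-based range partitioning: one child table per range, each with a CHECK constraint so `constraint_exclusion` can skip irrelevant partitions, plus a rule or trigger to route inserts. A minimal sketch of that scheme, with a hypothetical `quotes` table and column names invented for the stock-data use case described in the thread:

```sql
-- Parent table; in the 8.3 scheme it normally holds no rows itself.
CREATE TABLE quotes (
    symbol     text      NOT NULL,
    quoted_at  timestamp NOT NULL,
    price      numeric   NOT NULL
);

-- One child per month; the CHECK constraint is what lets the planner
-- exclude partitions that cannot match a query's WHERE clause.
CREATE TABLE quotes_2009_06 (
    CHECK (quoted_at >= DATE '2009-06-01' AND quoted_at < DATE '2009-07-01')
) INHERITS (quotes);

CREATE TABLE quotes_2009_07 (
    CHECK (quoted_at >= DATE '2009-07-01' AND quoted_at < DATE '2009-08-01')
) INHERITS (quotes);

-- Indexes are not inherited; create them on each child.
CREATE INDEX quotes_2009_06_quoted_at ON quotes_2009_06 (quoted_at);
CREATE INDEX quotes_2009_07_quoted_at ON quotes_2009_07 (quoted_at);

-- Route inserts on the parent to the right child (rule variant; the
-- docs also show a trigger-based version, which scales better as the
-- number of partitions grows).
CREATE RULE quotes_insert_2009_06 AS
    ON INSERT TO quotes
    WHERE (quoted_at >= DATE '2009-06-01' AND quoted_at < DATE '2009-07-01')
    DO INSTEAD INSERT INTO quotes_2009_06 VALUES (NEW.*);

-- Enable partition pruning (per session here; usually set in postgresql.conf).
SET constraint_exclusion = on;
```

Queries then go against the parent (`SELECT ... FROM quotes WHERE quoted_at >= ...`), and with `constraint_exclusion = on` the planner scans only the matching children. Dropping a month of old data becomes a cheap `DROP TABLE quotes_2009_06` instead of a bulk DELETE.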