Re: Postgres for a "data warehouse", 5-10 TB
From | Scott Marlowe
---|---
Subject | Re: Postgres for a "data warehouse", 5-10 TB
Date |
Msg-id | CAOR=d=0zmw-EGRJyYmwCTxR5g6osbFLS5i-3kzjG05j7S=1JaA@mail.gmail.com
In response to | Postgres for a "data warehouse", 5-10 TB (Igor Chudov <ichudov@gmail.com>)
Responses | Re: Postgres for a "data warehouse", 5-10 TB
 | Re: Postgres for a "data warehouse", 5-10 TB
List | pgsql-performance
On Sun, Sep 11, 2011 at 6:35 AM, Igor Chudov <ichudov@gmail.com> wrote:
> I have a server with about 18 TB of storage and 48 GB of RAM, and 12
> CPU cores.

1 or 2 fast cores is plenty for what you're doing. But the drive array
and how it's configured etc. are very important. There's a huge
difference between 10 2TB 7200RPM SATA drives in a software RAID-5 and
36 500G 15kRPM SAS drives in a RAID-10 (SW or HW would both be OK for a
data warehouse.)

> I do not know much about Postgres, but I am very eager to learn and
> see if I can use it for my purposes more effectively than MySQL.
> I cannot shell out $47,000 per CPU for Oracle for this project.

Hopefully, if need be, you can spend some small percentage of that on a
fast IO subsystem.

> To be more specific, the batch queries that I would do, I hope,
> would either use small JOINS of a small dataset to a large dataset, or
> just SELECTS from one big table.
> So... Can Postgres support a 5-10 TB database with the use pattern
> stated above?

I use it on a ~3TB DB and it works well enough. Fast IO is the key
here. Lots of drives in RAID-10, or HW RAID-6 if you don't do a lot of
random writing.
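
To make the use pattern concrete, here is a minimal sketch of the kind of queries described above: a small table joined to one big table, plus a plain scan of the big table. The table and column names (fact_sales, dim_customer, etc.) are hypothetical, not from the thread; the point is only to show why such queries are bound by the IO subsystem rather than CPU.

```sql
-- Hypothetical schema, used only to illustrate the workload described above.
-- fact_sales is the big (multi-TB) table; dim_customer is the small one.

-- "Small JOIN of a small dataset to a large dataset":
SELECT c.region, sum(f.amount) AS total
FROM   fact_sales f
JOIN   dim_customer c ON c.customer_id = f.customer_id
WHERE  f.sale_date >= date '2011-01-01'
GROUP  BY c.region;

-- "Just SELECTS from one big table":
SELECT count(*), avg(amount)
FROM   fact_sales
WHERE  sale_date BETWEEN date '2011-06-01' AND date '2011-06-30';
```

Both queries end up reading a large fraction of the big table mostly sequentially, which is why the drive array configuration matters far more than core count for this workload.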