Re: [PERFORM] Postgres and really huge tables

From: Scott Marlowe
Subject: Re: [PERFORM] Postgres and really huge tables
Date:
Msg-id: 1169154247.9586.100.camel@state.g2switchworks.com
In response to: Postgres and really huge tables (Brian Hurt <bhurt@janestcapital.com>)
List: pgsql-advocacy
On Thu, 2007-01-18 at 14:31, Brian Hurt wrote:
> Is there any experience with Postgresql and really huge tables?  I'm
> talking about terabytes (plural) here in a single table.  Obviously the
> table will be partitioned, and probably spread among several different
> file systems.  Any other tricks I should know about?
>
> We have a problem of that form here.  When I asked why postgres wasn't
> being used, the opinion that postgres would "just <explicitive> die" was
> given.  Personally, I'd bet money postgres could handle the problem (and
> better than the ad-hoc solution we're currently using).  But I'd like a
> couple of replies of the form "yeah, we do that here- no problem" to
> wave around.

It really depends on what you're doing.

Is a single user updating every row once an hour, or are hundreds of
users updating dozens of rows at the same time?

PostgreSQL probably wouldn't die, but it may well be that for certain
batch processing operations it's a poorer choice than awk/sed or perl.

If you do want to tackle it with PostgreSQL, you'll likely want to build
a truly fast drive subsystem.  Something like dozens to hundreds of
drives in a RAID-10 setup with battery backed cache, and a main server
with lots of memory on board.

But, really, it depends on what you're doing to the data.
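For the partitioning-across-file-systems idea mentioned in the original
question, here is a minimal sketch of how that can be laid out in
PostgreSQL using inheritance-based range partitioning with per-partition
tablespaces. The table name, columns, date ranges, and mount points are
all hypothetical, invented for illustration:

```sql
-- Hypothetical "events" table, range-partitioned by month via inheritance,
-- with each child table placed in its own tablespace so the partitions
-- live on different file systems (mount points are assumptions).
CREATE TABLE events (
    event_time  timestamptz NOT NULL,
    payload     text
);

CREATE TABLESPACE disk1 LOCATION '/mnt/disk1/pgdata';
CREATE TABLESPACE disk2 LOCATION '/mnt/disk2/pgdata';

CREATE TABLE events_2007_01 (
    CHECK (event_time >= '2007-01-01' AND event_time < '2007-02-01')
) INHERITS (events) TABLESPACE disk1;

CREATE TABLE events_2007_02 (
    CHECK (event_time >= '2007-02-01' AND event_time < '2007-03-01')
) INHERITS (events) TABLESPACE disk2;

-- With constraint exclusion enabled, the planner can skip any partition
-- whose CHECK constraint rules it out for a given query:
SET constraint_exclusion = on;
SELECT count(*)
FROM events
WHERE event_time >= '2007-01-15' AND event_time < '2007-01-20';
```

Queries and inserts still target the parent table; the CHECK constraints
are what let the planner prune partitions, so they must not overlap.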
