Re: Backing up 16TB of data (was Re: > 16TB worth of
| From | scott.marlowe |
|---|---|
| Subject | Re: Backing up 16TB of data (was Re: > 16TB worth of |
| Date | |
| Msg-id | Pine.LNX.4.33.0304251641080.2484-100000@css120.ihs.com |
| In reply to | Re: Backing up 16TB of data (was Re: > 16TB worth of (Jan Wieck <JanWieck@Yahoo.com>) |
| List | pgsql-general |
On Fri, 25 Apr 2003, Jan Wieck wrote:

> Ron Johnson wrote:
> >
> > On Mon, 2003-04-21 at 13:23, Jeremiah Jahn wrote:
> > > I have a system that will store about 2TB+ of images per year in a PG
> > > database. Linux unfortunately has the 16TB limit for 32-bit systems. Not
> > > really sure what should be done here. Would life be better if we didn't store the
> > > images as BLOBs, and instead came up with some complicated way to only
> > > store the location in the database, or is there some way to have postgres
> > > handle this somehow? What are other people out there doing about this
> > > sort of thing?
> >
> > Now that the hard disk and file system issues have been hashed around,
> > have you thought about how you are going to back up this much data?
>
> Legato showed a couple of years ago that Networker can back up
> more than a terabyte per hour. They used an RS6000 with over 100 disks
> and 36 DLT 7000 drives on 16 controllers, if I recall correctly ... not
> your average backup solution, but it's possible. I doubt, though, that one can
> configure something like this with x86 hardware.

I'm sure you could, but it might well involve 12 PII-350's running a trio of DLTs each, with a RAID array for caching. :-)
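The path-in-the-database approach Jeremiah mentions need not be complicated, and it also sidesteps the per-filesystem size limit, since the image tree can span several mounts. A minimal sketch of the idea, using Python's built-in sqlite3 purely as a stand-in for PostgreSQL (the table layout, hash-based directory fan-out, and function names are illustrative assumptions, not something from this thread):

```python
import hashlib
import os
import sqlite3
import tempfile

# Stand-in database; with PostgreSQL the SQL would carry the same idea.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, sha1 TEXT, path TEXT)")

# Hypothetical root for the image files; in practice this could be a mount
# point (or several) separate from the database's own storage.
image_root = tempfile.mkdtemp()

def store_image(data: bytes) -> int:
    """Write the image bytes to the filesystem; record only its location."""
    digest = hashlib.sha1(data).hexdigest()
    # Fan out into subdirectories so no single directory grows unbounded.
    subdir = os.path.join(image_root, digest[:2])
    os.makedirs(subdir, exist_ok=True)
    path = os.path.join(subdir, digest)
    with open(path, "wb") as f:
        f.write(data)
    cur = db.execute(
        "INSERT INTO images (sha1, path) VALUES (?, ?)", (digest, path)
    )
    db.commit()
    return cur.lastrowid

def load_image(image_id: int) -> bytes:
    """Look up the path in the database, then read the file from disk."""
    (path,) = db.execute(
        "SELECT path FROM images WHERE id = ?", (image_id,)
    ).fetchone()
    with open(path, "rb") as f:
        return f.read()

img_id = store_image(b"\x89PNG fake image bytes")
assert load_image(img_id) == b"\x89PNG fake image bytes"
```

One practical consequence for the backup question raised above: with only metadata in the database, the database dumps stay small and fast, while the bulk image tree can be backed up incrementally (or spread across tape drives) on its own schedule.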