On Sat, 6 Feb 1999, Thomas Reinke wrote:
> I may be dating myself really badly here, but isn't there a hard limit
> on the file system at 2 GB? I thought the file size attribute in Unix
> is represented as a 32-bit signed long, which has a max value of
> 2147483647. If I'm right, it means the problem is fundamentally with
> the file system, not with Postgres, and you won't solve this unless
> the OS supports larger files.
PostgreSQL has internal code that is supposed to automagically break up a
table into 2 GB chunks so that this isn't a problem...
>
> gjerde@icebox.org wrote:
> >
> > Hi,
> > I saw a message a couple of weeks ago from someone having problems
> > with tables larger than 2 GB. I have similar problems.
> >
> > PostgreSQL: anon-cvs as of today (2/5/1999)
> > OS: Redhat Linux 5.2 (running 2.0.35)
> >
> > I created a database called mcrl, and a table called mcrl3_1.
> > I copied in a set of 450 MB of data twice (which comes to a pg file
> > size of 2.4 GB or so).
> >
> > When it hit 2GB I got this message:
> > mcrl=> copy mcrl3_1 FROM '/home/gjerde/mcrl/MCR3_1.txt';
> > ERROR: mcrl3_1: cannot extend
> >
> > The table file looks like this:
> > [postgres@snowman mcrl]$ ls -l mcrl*
> > -rw------- 1 postgres postgres 2147482624 Feb 5 16:49 mcrl3_1
> >
> > It did NOT create the .1 file, however, which I did see when I tried
> > this on 6.4.2 (though it still didn't work).
> >
> > I looked around in the code (specifically src/backend/storage/smgr/*.c),
> > but couldn't figure too much of it out. I'll have to figure out how
> > postgres handles the database files first..
> >
> > Hope this helps,
> > Ole Gjerde
>
> --
> ------------------------------------------------------------
> Thomas Reinke Tel: (416) 460-7021
> Director of Technology Fax: (416) 598-2319
> E-Soft Inc. http://www.e-softinc.com
>
Marc G. Fournier
Systems Administrator @ hub.org
primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org