Re: [HACKERS] Problems with >2GB tables on Linux 2.0
From: Tom Lane
Subject: Re: [HACKERS] Problems with >2GB tables on Linux 2.0
Date:
Msg-id: 16773.918490701@sss.pgh.pa.us
In reply to: Re: [HACKERS] Problems with >2GB tables on Linux 2.0 (Bruce Momjian <maillist@candle.pha.pa.us>)
Responses: Re: [HACKERS] Problems with >2GB tables on Linux 2.0
List: pgsql-hackers
Bruce Momjian <maillist@candle.pha.pa.us> writes:
>> However, I'm using John's suggestion of reducing the file size a lot more,
>> to ensure we don't hit any math errors, etc.  So the max file size is
>> about 1.6Gb.

> I can imagine people finding that strange.  Is it really needed?  Is
> there some math that could overflow with a larger value?

Well, that's the question all right --- are you sure that there's not?
I think "max - 1 blocks" is pushing it, since code that computes
something like "the byte offset of the block after next" would fail.
Even if there isn't any such code today, it seems possible that there
might be someday.

I'd be comfortable with 2 billion (2000000000) bytes as the filesize
limit, or Andreas' proposal of 1Gb.

I also like the proposals to allow the filesize limit to be configured
even lower to ease splitting huge tables across filesystems.  To make
that work easily, we really should adopt a layout where the data files
don't all go in the same directory.  Perhaps the simplest is:

* First or only segment of a table goes in the top-level data directory,
  same as now.

* First extension segment is .../data/1/tablename.1, second is
  .../data/2/tablename.2, etc.  (Using numbers for the subdirectory
  names prevents name conflicts with ordinary tables.)

Then, just configuring the filesize limit small (a few tens/hundreds of
MB) and setting up symlinks for the subdirectories data/1, data/2, etc.
gets the job done.

Starting to feel old --- I remember when a "few tens of MB" was a
monstrous hard disk, never mind a single file ...

			regards, tom lane
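To make the overflow concrete: a minimal C sketch, assuming Postgres's 8K
block size and the 32-bit signed arithmetic of a Linux 2.0 platform (the
names SEG_BLOCKS and lastblock are illustrative, not actual backend code).
With a segment capped at "max - 1 blocks", the byte offset of the block
after next lands exactly on 2^31 and wraps negative:

    #include <stdio.h>

    #define BLCKSZ 8192                              /* Postgres disk block size */
    #define SEG_BLOCKS (2147483648u / BLCKSZ - 1)    /* "max - 1 blocks" = 262143 */

    int main(void)
    {
        int lastblock = SEG_BLOCKS - 1;  /* 0-based index of the segment's last block */

        /* Byte offset of "the block after next": 262144 * 8192 = 2^31, one
         * past the largest signed 32-bit value.  Formally undefined behavior;
         * on two's-complement machines it wraps to -2147483648. */
        int offset = (lastblock + 2) * BLCKSZ;

        printf("offset of block after next = %d\n", offset);  /* negative */
        return 0;
    }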
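And a sketch of path construction under the proposed layout (segment_path
is a hypothetical helper, not the actual smgr code): segment 0 stays in the
data directory itself, while segment n goes under the numbered subdirectory
data/n/, which can be a symlink onto another filesystem:

    #include <stdio.h>

    /* Build the file name for segment `segno` of `tablename`: segment 0 in
     * the data directory, segment n in the numbered subdirectory data/n/. */
    static void
    segment_path(char *buf, size_t buflen, const char *datadir,
                 const char *tablename, int segno)
    {
        if (segno == 0)
            snprintf(buf, buflen, "%s/%s", datadir, tablename);
        else
            snprintf(buf, buflen, "%s/%d/%s.%d", datadir, segno, tablename, segno);
    }

    int main(void)
    {
        char path[1024];

        segment_path(path, sizeof path, "/usr/local/pgsql/data", "bigtable", 2);
        printf("%s\n", path);   /* /usr/local/pgsql/data/2/bigtable.2 */
        return 0;
    }

Pointing data/1, data/2, etc. at different mount points then spreads one
table across filesystems with no further backend changes.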