RE: [HACKERS] Problems with >2GB tables on Linux 2.0
From | gjerde@icebox.org
Subject | RE: [HACKERS] Problems with >2GB tables on Linux 2.0
Date |
Msg-id | Pine.LNX.4.05.9902071653340.30975-100000@snowman.icebox.org
In reply to | RE: [HACKERS] Problems with >2GB tables on Linux 2.0 (Peter T Mount <peter@retep.org.uk>)
Responses | RE: [HACKERS] Problems with >2GB tables on Linux 2.0
| Re: [HACKERS] Problems with >2GB tables on Linux 2.0
List | pgsql-hackers
On Sun, 7 Feb 1999, Peter T Mount wrote:
> Anyhow, I'm about to start the test, using RELSEG_SIZE set to 243968 which
> works out to be 1.6Gb. That should stay well away from the overflow
> problem.

Hi,

I just did a checkout of the CVS code, hardcoded RELSEG_SIZE to 243968, and it works beautifully now!

I imported about 2.2GB of data (table file size) and it looks like this:

-rw-------   1 postgres postgres 1998585856 Feb  7 16:22 mcrl3_1
-rw-------   1 postgres postgres  219611136 Feb  7 16:49 mcrl3_1.1
-rw-------   1 postgres postgres  399368192 Feb  7 16:49 mcrl3_1_partnumber_index

And it works fine. I did some selects on data that should have ended up in the .1 file, and it works great. The best thing about it is that it seems at least as fast as MSSQL on the same data, if not faster. It did take about 45 minutes to create that index, though. Isn't that a bit long (AMD K6-2 350MHz)? :)

Suggestion: How hard would it be to make COPY tablename FROM 'somefile' give some feedback? Either some kind of percentage, or just print something after every 10k rows.

Thanks,
Ole Gjerde