Problem with 6.5 and tables >2Gb
From | Peter T Mount |
---|---|
Subject | Problem with 6.5 and tables >2Gb |
Date | |
Msg-id | Pine.LNX.4.04.9901250916150.14361-100000@maidast.retep.org.uk |
List | pgsql-hackers |
Well, I thought I'd test to see how postgres handles tables larger than 2Gb, mainly because some data sets I have will end up that big, and I know of at least one project out there (www.tass-survey.org) that's using postgres and will hit this limit.

Now, I know that (in theory), when a table reaches the magical 2Gb file limit (imposed on most Unixes as the maximum file size), it should start a fresh file, and use it after that point.

So, I created a table:

    create table smallcat (gsc char(18), ra float4, dec float4, mag float4);

Then I wrote a short bash script that repeatedly ran psql, and copied from a flat file containing 26,653 rows. The first 1000 loops ran in 3h 59m. This is an improvement over 6.4.2, which, when I ran this same test, I killed after 10 hours.

This table now contains 26,653,000 rows, but is still under the 2Gb limit, so I ran the script to insert another 100 copies of the file. While the 29th copy was being inserted, the table reached the 2Gb limit. In the database directory, a new file appeared, smallcat.1, then the error:

    ERROR: cannot read block 262143 of smallcat

The smallcat.1 file is of zero length, and the backend then dies.

It looks to me like the code that splits the file does the split, but the backend still tries to append to the original file. If I get a chance, I may have a peek at the source, but I'm catching up on the JDBC driver at the moment (amongst other things), so may not get the chance.

Peter

--
Peter T Mount peter@retep.org.uk
Main Homepage: http://www.retep.org.uk
PostgreSQL JDBC Faq: http://www.retep.org.uk/postgres
Java PDF Generator: http://www.retep.org.uk/pdf
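The load procedure described above can be sketched as a small bash loop. The database name (testdb) and flat-file name (smallcat.dat) are assumptions not given in the post; DRYRUN defaults to on here so the script can be inspected without a running backend:

```shell
#!/bin/bash
# Sketch of the load loop described above. The database name (testdb) and
# flat-file name (smallcat.dat) are assumptions, not from the original post.
# DRYRUN=1 (the default here) prints each psql command instead of running it.

DRYRUN=${DRYRUN:-1}
TABLE=smallcat
DATAFILE=$PWD/smallcat.dat   # flat file holding 26,653 rows
LOOPS=${LOOPS:-1000}

run() {
    if [ "$DRYRUN" = 1 ]; then
        echo "psql -c \"$1\" testdb"
    else
        psql -c "$1" testdb
    fi
}

run "CREATE TABLE $TABLE (gsc char(18), ra float4, dec float4, mag float4);"

i=0
while [ "$i" -lt "$LOOPS" ]; do
    run "COPY $TABLE FROM '$DATAFILE';"
    i=$((i + 1))
done
```

With DRYRUN unset to 0, each iteration appends another 26,653 rows, which is how the table is pushed toward the 2Gb boundary.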
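For reference, with 8K blocks (an assumption about the build's block size) a 2Gb segment spans blocks 0 through 262143, so block 262143 is the very last block of the original smallcat file and the next block belongs at the start of smallcat.1. Note too that 262144 full blocks is exactly 2^31 bytes, one byte over a signed 32-bit file-size limit, which may be why reading block 262143 itself fails. The arithmetic:

```shell
#!/bin/bash
# Segment arithmetic sketch, assuming 8192-byte blocks and 2Gb segment files.
BLCKSZ=8192
SEG_BLOCKS=$(( 2 * 1024 * 1024 * 1024 / BLCKSZ ))   # 262144 blocks per segment

block=262144                     # first block past the 2Gb boundary
seg=$(( block / SEG_BLOCKS ))    # which segment file: 0 = smallcat, 1 = smallcat.1
off=$(( block % SEG_BLOCKS ))    # block offset within that segment

echo "block $block -> segment $seg, offset $off"
```

If the backend computed this but then read from the original file regardless of the segment number, it would match the observed symptom: smallcat.1 created but left at zero length while the read against smallcat fails.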