large duplicated files
From | Ryan D. Enos |
---|---|
Subject | large duplicated files |
Date | |
Msg-id | 46C54144.3080009@ucla.edu |
Responses | Re: large duplicated files |
List | pgsql-novice |
Hi, I am very new to PostgreSQL and am not really a programmer of any type. I use pgsql to manage very large voter databases for political science research.

My problem is that my database is creating large duplicate files, i.e.: 17398.1, 17398.2, 17398.3, etc. Each is about 1 GB in size. I understand that each of these is probably a part of a file that pgsql created because of a limit on file size, and that they may be large indexes. However, I don't know where these files came from or how to reclaim the disk space.

I have extensively searched the archives and found that I am not the first to have this problem. I have followed the suggestions made to previous posters, using a VACUUM FULL command and REINDEX, but nothing reclaims the disk space. I have tried to identify the file by using:

    select * from pg_class where relfilenode =""

but this returns 0 rows.

How can I reclaim this space and prevent these files from being created in the future? Any help would be greatly appreciated. Thanks.

Ryan
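(A note on the query above: `pg_class.relfilenode` is a numeric OID, so comparing it against an empty string will not match anything. Files named 17398, 17398.1, 17398.2, etc. are 1 GB segments of a single relation whose relfilenode is 17398, so a sketch of the lookup, using the number taken from those filenames, would be:)

```sql
-- Find which table or index owns the on-disk files 17398, 17398.1, ...
-- relkind tells you what it is: 'r' = table, 'i' = index, 't' = TOAST table.
SELECT relname, relkind, relpages
FROM pg_class
WHERE relfilenode = 17398;
```

Once the relation is identified, you can check whether it (or its indexes) is what is consuming the space before deciding how to reclaim it.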