Re: BIG files
From | Bruno Wolff III |
---|---|
Subject | Re: BIG files |
Date | |
Msg-id | 20050619124808.GC32482@wolff.to |
In reply to | BIG files (rabt@dim.uchile.cl) |
Responses | Re: BIG files |
List | pgsql-novice |
On Sat, Jun 18, 2005 at 13:45:42 -0400, rabt@dim.uchile.cl wrote:
> Hi all Postgresql users,
>
> I've been using MySQL for years and now I have decided to switch to Postgresql,
> because I needed more robust "enterprise" features like views and triggers. I
> work with VERY large datasets: 60 monthly tables with 700,000 rows and 99
> columns each, with mostly large numeric values (15 digits) (NUMERIC(15,0)
> datatypes, not all filled). So far, I've migrated 2 of my tables to a dedicated
>
> The main problem is disk space. The database files stored in postgres take 4 or
> 5 times more space than in MySQL. Just to be sure, after each bulk load, I
> performed a VACUUM FULL to reclaim any possible lost space, but nothing gets
> reclaimed. My plain text dump files with INSERTs are just 150 MB in size, while
> the files in the Postgres directory are more than 1 GB each! I've tested other
> free DBMSs like Firebird and Ingres, but Postgresql consumes far more disk
> space than the others.

From discussions I have seen here, MySQL implements NUMERIC using a floating point type. Postgres stores it using something like one base-10000 digit per 4 bytes of storage, plus some overhead for storing the precision and scale. You might be better off using bigint to store your data: that takes 8 bytes per datum and is probably the same size as was used in MySQL.
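The size comparison above can be sketched with a rough back-of-envelope calculation. This is only an estimate built on the figures stated in the reply (one base-10000 digit per 4 bytes, plus header overhead for precision/scale); the exact on-disk layout depends on the PostgreSQL version and row header details, and the `overhead` value here is an assumption, not an exact figure:

```python
import math

def approx_numeric_size(decimal_digits, bytes_per_base10000_digit=4, overhead=8):
    """Rough per-value size of a NUMERIC column.

    One base-10000 "digit" covers up to 4 decimal digits; the
    bytes-per-digit and overhead figures follow the estimates in the
    post above and are assumptions, not exact internals.
    """
    n_base10000_digits = math.ceil(decimal_digits / 4)
    return n_base10000_digits * bytes_per_base10000_digit + overhead

BIGINT_SIZE = 8  # bigint is always a fixed 8 bytes per value

# A 15-digit NUMERIC needs ceil(15/4) = 4 base-10000 digits.
print(approx_numeric_size(15))  # -> 24 (estimated bytes per value)
print(BIGINT_SIZE)              # -> 8
```

By this estimate a 15-digit NUMERIC value costs roughly three times what a bigint does, which is consistent with the advice to switch to bigint when the values always fit in 64 bits (up to about 9.2 * 10^18, so 15-digit integers fit comfortably).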