Impression of PostgreSQL 6.5.1
From: marten@feki.toppoint.de
Subject: Impression of PostgreSQL 6.5.1
Msg-id: 199908140326.UAA11632@feki.toppoint.de
List: pgsql-general
Just to share my first experiences with PostgreSQL after working several days with this tool:

a) I've run some benchmarks on how PostgreSQL behaves when working with tables with many, many rows. I created a table with two columns (varchar and integer), both indexed. The insert speed was about 533 inserts/second at the beginning, but slowed down to 15 inserts/second once the table held 600000 rows. The slowdown is not linear but degressive (fast at first, slower later). PostgreSQL has the possibility of three hash methods. Which one is suited for which cases? Just to get an impression.

b) I've noticed that "vacuumdb" can help, but it seems better not to run "vacuumdb" while another program is doing heavy work on some tables. I've seen several deadlocks between my test programs and "vacuumdb".

c) The size of the index files grows and grows and grows. "vacuumdb" does not reduce the size of the index files, though in "verbose" mode it seems to tell me that it deleted several entries in the index files. New index changes seem to be appended to the index files :-( The only way to get rid of this seems to be to drop all indexes and create them again ... which also improves the speed. This is a critical point ...

d) The library "libpq" is very nice! Small, and it does its work!

What I would like to know: has anybody got working databases using PostgreSQL with sizes of more than 500 MByte, and is happy with it??? We are considering using it for research projects starting in autumn, but we would like to know how it behaves with large databases.

Marten
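The benchmark in (a) can be reproduced with a schema along these lines; table, column, and index names are my own illustration, not taken from the original test:

```sql
-- Hypothetical reconstruction of the benchmark setup described in (a):
-- a two-column table (varchar and integer), both columns indexed.
CREATE TABLE bench (name varchar(80), val integer);
CREATE INDEX bench_name_idx ON bench (name);
CREATE INDEX bench_val_idx ON bench (val);

-- The benchmark then issues a stream of inserts such as:
INSERT INTO bench VALUES ('row-1', 1);
```

Measuring inserts/second from the client side over successive batches (say, every 10000 rows) would show the degressive curve described above, since every insert must also update both indexes.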
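The workaround described in (c), dropping and recreating the indexes to reclaim index-file space, would look like this in SQL (assuming a table `bench` with an index `bench_name_idx`; the names are illustrative):

```sql
-- Recreating an index rebuilds its file from scratch and compacts it,
-- since vacuum in this version does not shrink index files.
-- (Table and index names are illustrative.)
DROP INDEX bench_name_idx;
CREATE INDEX bench_name_idx ON bench (name);
```

Repeating this for each index on the table restores both the file sizes and, as noted above, the insert speed.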