dump/reload
From | Chris Albertson |
---|---|
Subject | dump/reload |
Date | |
Msg-id | 35992AE1.C977BD82@topdog.logicon.com |
List | pgsql-hackers |
>> >> It takes too long to reload big tables...
> >
> > I have to agree here... the one application that *I* really use
> > this for is an accounting server... any downtime is unacceptable,
> > because the whole system revolves around the database backend.
> >
> > Take a look at Michael Richards' application (a search engine)
> > where it has several *million* rows, and that isn't just one table.
> > Michael, how long would it take to dump and reload that?

I just did a dump and reload. The reload took about 18 hours on a Sun
Ultra SPARC with dual CPUs, 256MB RAM, and UW-SCSI disks. The database
was not that big, either: after gzipping, the dump file was only a few
hundred megabytes, and the reloaded database holds about 12 million rows
total.

That said, if you guys would reduce the per-tuple overhead and/or make
the thing go faster, I'd be happy to dump and reload.

Downtime is an issue for some people. My suggestion is to dump the
database but _don't drop it_: keep running the old version while the new
version is being rebuilt. This does require running both the 6.3 and 6.4
servers, either on different port numbers with separate data directories
or on a second computer. In any case, there is no need to be down for
more than a few minutes, no matter how big your database.

So as a user, my request to the development team is: performance,
performance, performance. Don't trade away performance for anything. You
can code around a missing feature, but a slow DBMS forces you back to
using flat files.

--
--Chris Albertson  chris@topdog.logicon.com  Voice: 626-351-0089 X127
  Logicon RDA, Pasadena California           Fax:   626-351-0699
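A minimal sketch of the dual-server procedure described above, assuming
a hypothetical second data directory and port 5433; the flags shown are
the modern spellings, and the exact 6.3/6.4-era options may differ:

```
# The old 6.3 postmaster keeps serving clients on the default port 5432
# (already running, shown here only for context):
#   postmaster -D /usr/local/pgsql-6.3/data -p 5432 &

# Initialize a fresh cluster for the new server in its own directory:
initdb -D /usr/local/pgsql-6.4/data

# Start the new postmaster on an alternate port so both run at once:
postmaster -D /usr/local/pgsql-6.4/data -p 5433 &

# Dump from the old server straight into the new one; the old database
# stays up and serving queries the whole time:
pg_dumpall -p 5432 | psql -p 5433 template1

# When the reload finishes, point clients at port 5433 (or stop the old
# postmaster and restart the new one on 5432). Downtime is only the
# final switchover, not the many-hour reload.
```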