Re: Accelerating INSERT/UPDATE using UPS
From | Weslee Bilodeau |
---|---|
Subject | Re: Accelerating INSERT/UPDATE using UPS |
Date | |
Msg-id | 45D0A36F.8070903@hypermediasystems.com |
Response to | Re: Accelerating INSERT/UPDATE using UPS (Christopher Browne <cbbrowne@acm.org>) |
List | pgsql-hackers |
Christopher Browne wrote:
> kawasima@cs.tsukuba.ac.jp (Hideyuki Kawashima) wrote:
>> Joshua,
>>
>> I appreciate your quick & informative reply, and I also really
>> appreciate your kind comments. Since I joined this ML three hours
>> ago, I have tried to be polite and was slightly nervous, but I was
>> relieved by your message.
>
> Your idea sounds interesting; there is likely to be considerable
> resistance, however, to mechanisms that would make PostgreSQL less
> robust.
>
> Be aware, also, that in a public forum like this, people are
> sometimes less gentle than Joshua.
>
> The fundamental trouble with this mechanism is that a power outage
> can instantly turn a database into crud.

I can think of a few places where I don't care about the data if the
power is lost:

* Web-based session data

  A lot of web sites have separate session-only databases. If the
  database goes down, we have to truncate the tables anyway when it
  comes back up.

* Reporting slaves

  We have replication slaves set up for internal (staff-only)
  reporting, often with a lot of temp and summary tables as well. If
  the data is lost, I don't care - it's a reporting database, and
  re-syncing from another slave is no big deal even after total data
  loss. Durability is even less of a concern given the speed increase
  for both the data the slave creates itself and the data coming in
  from the master.

* Front-end cache slaves

  Same type of situation as the reporting slaves: a basic front-end
  cache that replicates data to take load off the master. The slaves
  still have to do all the same inserts and updates, but this way they
  spend less time in locks. If one crashes, point the apps at the
  master or another slave while you fix it.

Weslee
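For concreteness, here is a minimal sketch (not from the original mail)
of how the durability-for-speed tradeoff described above can be
expressed in stock PostgreSQL. The fsync and full_page_writes settings
are real GUCs; the table name and columns are hypothetical, and
UNLOGGED tables are a later PostgreSQL feature (9.1+) that matches the
"truncate on restart" behaviour described for session data:

    # postgresql.conf on a disposable slave (session store, reporting,
    # or front-end cache) where total loss on power failure is acceptable:
    fsync = off              # do not force WAL writes to disk
    full_page_writes = off   # skip torn-page protection in WAL

    -- Per-table alternative (PostgreSQL 9.1+): an UNLOGGED table skips
    -- WAL entirely and is truncated automatically after a crash, which
    -- is the session-table behaviour described above.
    -- Table name and columns are hypothetical.
    CREATE UNLOGGED TABLE web_session (
        session_id  text        PRIMARY KEY,
        payload     bytea,
        expires_at  timestamptz
    );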