Re: Why we lost Uber as a user
| From | Stephen Frost |
|---|---|
| Subject | Re: Why we lost Uber as a user |
| Date | |
| Msg-id | 20160801132931.GF4028@tamriel.snowman.net |
| In reply to | Re: Why we lost Uber as a user (Hannu Krosing <hkrosing@gmail.com>) |
| Responses | Re: Why we lost Uber as a user |
| List | pgsql-hackers |
* Hannu Krosing (hkrosing@gmail.com) wrote:
> On 07/27/2016 12:07 AM, Tom Lane wrote:
> >
> >> 4. Now, update that small table 500 times per second.
> >> That's a recipe for runaway table bloat; VACUUM can't do much because
> >> there's always some minutes-old transaction hanging around (and SNAPSHOT
> >> TOO OLD doesn't really help, we're talking about minutes here), and
> >> because of all of the indexes HOT isn't effective.
> >
> > Hm, I'm not following why this is a disaster. OK, you have circa 100%
> > turnover of the table in the lifespan of the slower transactions, but I'd
> > still expect vacuuming to be able to hold the bloat to some small integer
> > multiple of the minimum possible table size. (And if the table is small,
> > that's still small.) I suppose really long transactions (pg_dump?) could
> > be pretty disastrous, but there are ways around that, like doing pg_dump
> > on a slave.
>
> Is there any theoretical obstacle which would make it impossible to
> teach VACUUM not to hold back the whole vacuum horizon, but just
> to leave a single transaction alone in case of a long-running
> REPEATABLE READ transaction?

I've looked into this a couple of times and I believe it's possible to
calculate what records have to remain available for the long-running
transaction, but it's far from trivial.

I do think that's a direction we really need to go in, however. Having a
single horizon which is dictated by the oldest running transaction isn't
a tenable solution in environments with a lot of churn.

Thanks!

Stephen
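[Editor's note: a minimal sketch of the idea under discussion, not PostgreSQL source. It simulates the "update a small table 500 times while one old snapshot is open" scenario and compares a single oldest-xmin horizon against a hypothetical per-snapshot visibility check; all names and the simplified visibility rule are illustrative assumptions.]

```python
def retained_versions(versions, snapshots, policy):
    """Count superseded row versions VACUUM would have to keep.

    versions:  list of (xmin, xmax) pairs; xmax is the txn that replaced
               the version, None means it is the live version.
    snapshots: xids of the open snapshots; here a snapshot s sees a
               version if xmin <= s and (xmax is None or xmax > s).
    (Simplified model, not PostgreSQL's actual visibility rules.)
    """
    if policy == "global":
        # Single horizon at the oldest snapshot: everything deleted at or
        # after it must be retained, regardless of actual visibility.
        horizon = min(snapshots)
        return [v for v in versions if v[1] is None or v[1] >= horizon]
    else:  # "per-snapshot"
        kept = []
        for xmin, xmax in versions:
            live = xmax is None
            visible = any(xmin <= s and (xmax is None or xmax > s)
                          for s in snapshots)
            if live or visible:
                kept.append((xmin, xmax))
        return kept

# One row updated 500 times (txns 2..501) while an old REPEATABLE READ
# snapshot (xid 1) stays open alongside a current one (xid 501).
versions = [(i, i + 1) for i in range(1, 501)] + [(501, None)]
snapshots = [1, 501]

print(len(retained_versions(versions, snapshots, "global")))        # 501
print(len(retained_versions(versions, snapshots, "per-snapshot")))  # 2
```

In this toy model the global horizon pins all 501 versions, while the per-snapshot check needs only the one version the old transaction can actually see plus the live one, which is the gain the thread is asking about.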