Re: Why we lost Uber as a user
| From | Merlin Moncure |
|---|---|
| Subject | Re: Why we lost Uber as a user |
| Date | |
| Msg-id | CAHyXU0w9V11CfPn4_Ah51TQ0OdQVS5fM9mjNZv-J2wGUqfwPZw@mail.gmail.com |
| In reply to | Re: Why we lost Uber as a user (Tom Lane <tgl@sss.pgh.pa.us>) |
| Responses | Re: Why we lost Uber as a user |
| List | pgsql-hackers |
On Tue, Jul 26, 2016 at 5:07 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Josh Berkus <josh@agliodbs.com> writes:
>> To explain this in concrete terms, which the blog post does not:
>
>> 1. Create a small table, but one with enough rows that indexes make
>> sense (say 50,000 rows).
>
>> 2. Make this table used in JOINs all over your database.
>
>> 3. To support these JOINs, index most of the columns in the small table.
>
>> 4. Now, update that small table 500 times per second.
>
>> That's a recipe for runaway table bloat; VACUUM can't do much because
>> there's always some minutes-old transaction hanging around (and SNAPSHOT
>> TOO OLD doesn't really help, we're talking about minutes here), and
>> because of all of the indexes HOT isn't effective.
>
> Hm, I'm not following why this is a disaster. OK, you have circa 100%
> turnover of the table in the lifespan of the slower transactions, but I'd
> still expect vacuuming to be able to hold the bloat to some small integer
> multiple of the minimum possible table size. (And if the table is small,
> that's still small.) I suppose really long transactions (pg_dump?) could
> be pretty disastrous, but there are ways around that, like doing pg_dump
> on a slave.
>
> Or in short, this seems like an annoyance, not a time-for-a-new-database
> kind of problem.

Well, the real annoyance as I understand it is the raw volume of bytes
of WAL traffic that a single update of a field can cause. They switched
to statement-level replication(!).

merlin
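A minimal sketch of the setup Josh describes, using a hypothetical `lookup` table (all names invented for illustration): because most columns are indexed, HOT updates are off the table, so every update leaves a dead heap tuple and adds a new entry to each index.

```sql
-- Hypothetical reproduction of the recipe above: a small,
-- heavily indexed table updated at a high rate.
CREATE TABLE lookup (
    id int PRIMARY KEY,
    a  int,
    b  int,
    c  int
);
CREATE INDEX ON lookup (a);
CREATE INDEX ON lookup (b);
CREATE INDEX ON lookup (c);

INSERT INTO lookup
SELECT g, g % 100, g % 1000, g % 10
FROM generate_series(1, 50000) g;

-- Updating an indexed column defeats HOT, so each update creates
-- a dead heap tuple plus a new entry in every index.
UPDATE lookup SET a = a + 1 WHERE id = 42;

-- Bloat shows up here while long-lived snapshots pin the dead rows:
SELECT n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'lookup';
```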
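To put a number on the WAL-amplification point, one can diff WAL positions around a single-field update from a psql session (a sketch using the 9.x function names current at the time of this thread; PostgreSQL 10 renamed them to pg_current_wal_lsn() and pg_wal_lsn_diff()):

```sql
-- Capture the WAL position, perform one small update, then measure
-- how many bytes of WAL it generated. Run in psql: \gset stores the
-- query result in a psql variable named "before".
SELECT pg_current_xlog_location() AS before \gset
UPDATE lookup SET a = a + 1 WHERE id = 42;
SELECT pg_xlog_location_diff(pg_current_xlog_location(), :'before')
       AS wal_bytes;
```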