Re: New features for pgbench
From: Tom Lane
Subject: Re: New features for pgbench
Date:
Msg-id: 10348.1171298637@sss.pgh.pa.us
In reply to: Re: New features for pgbench (Greg Smith <gsmith@gregsmith.com>)
Responses: Re: New features for pgbench
           Re: New features for pgbench
List: pgsql-patches
Greg Smith <gsmith@gregsmith.com> writes:
> Right now when you run pgbench, the results vary considerably from run to
> run even if you completely rebuild the database every time. I've found
> that a lot of that variation comes from two things:

This is a real issue, but I think your proposed patch does not fix it.
A pgbench run will still be penalized according to the number of
checkpoints or autovacuums that happen while it occurs. Guaranteeing
that there's at least one is maybe a bit more fair than allowing the
possibility of having none, but it's hardly a complete fix. Also, this
approach means that short test runs will have artificially lower TPS
results than longer ones, because the fixed part of the maintenance
overhead is amortized over fewer transactions.

I believe it's a feature, not a bug, that Postgres shoves a lot of
maintenance out of the main transaction pathways and into background
tasks. That allows us to deal with higher peak transaction rates than
we otherwise could.

Maybe the right way to think about approaching this issue is to try to
estimate a "peak TPS" (what we can achieve when no maintenance
processing is happening) and a "long-term average TPS" (net throughput
allowing for maintenance processing). I don't have a specific
suggestion about how to modify pgbench to account for this, but I do
think we need something more than a single TPS number if we want to
describe the system behavior well.

			regards, tom lane
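As a crude illustration of that two-number idea, one could post-process the
per-transaction log that pgbench -l already writes, bucket completions by
second, and report the best bucket as an approximate "peak TPS" alongside the
overall rate as the "long-term average TPS". A minimal sketch follows; the
epoch-timestamp column index is an assumption that should be checked against
the pgbench version in use, and this is only one rough way to separate the
two numbers, not a concrete proposal for pgbench itself:

    #!/usr/bin/env python3
    # Rough sketch: derive a crude "peak TPS" and "long-term average TPS"
    # from a pgbench per-transaction log (pgbench -l).  Assumes each line
    # carries the transaction's completion time in epoch seconds; the column
    # index used here (field 4, zero-based) is an assumption that may not
    # hold for every pgbench version.

    import sys
    from collections import Counter

    def tps_summary(logfile, epoch_field=4):
        per_second = Counter()
        with open(logfile) as f:
            for line in f:
                fields = line.split()
                if len(fields) <= epoch_field:
                    continue    # skip malformed or short lines
                per_second[int(fields[epoch_field])] += 1

        if not per_second:
            raise ValueError("no transactions found in %s" % logfile)

        total_xacts = sum(per_second.values())
        elapsed = max(per_second) - min(per_second) + 1
        return {
            # includes seconds dominated by checkpoints/autovacuum
            "average_tps": total_xacts / elapsed,
            # best one-second bucket, i.e. roughly maintenance-free throughput
            "peak_tps": max(per_second.values()),
            # slowest bucket that still saw any commits at all
            "worst_tps": min(per_second.values()),
        }

    if __name__ == "__main__":
        print(tps_summary(sys.argv[1]))

A large gap between average_tps and peak_tps would then point at maintenance
activity (or other stalls) rather than at the steady-state transaction path.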