David Fetter wrote:
> Folks,
>
> As we move forward, we run into increasingly complex situations under
> the general rubric of concurrency.
>
> What test frameworks are already out there that we can use in our
> regression test suite? If there aren't any, how might we build one?
Not entirely on-topic, but I thought I'd chime in:
After seeing the results from the Hungarian blogger whose name I can't
find [1], and quite a bit of discussion in the German user group before
that, I wanted to see how hard it would be to actually implement a
framework for automating performance test runs.
I mainly used it as an excuse to have a look at Moose [2], but since it
has been quite entertaining so far and I don't see any roadblocks, I
think I'll follow through.
Design Targets:
*) OO design so it's easily pluggable and extensible (& testable?)
*) Standalone CLI mode as well as daemonized "benchmark client" (think
buildfarm)
*) Automate all aspects of benchmark operation, starting with a clean
slate and (optionally) returning to same
*) Extensible benchmark system, currently thinking about:
   *) Synthetic CPU/memory/disk benchmarks (for evaluating and
      verifying the performance baseline of a given system; probably
      sysbench?)
   *) pgbench
   *) Whatever is freely available in the DBT/TPC environment
   *) As discussed, custom benchmarks testing specific subsystems with
      high concurrency/throughput; that would need a proper tool, I presume
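To make the "pluggable benchmark" target concrete, here is a minimal sketch of the plugin shape I have in mind. It's in Python purely for illustration (the actual project uses Perl/Moose, where this would be a Moose role with a required `run` method); all class and method names here are hypothetical, not part of any existing tool.

```python
from abc import ABC, abstractmethod

class Benchmark(ABC):
    """Hypothetical plugin interface: each benchmark knows how to
    set up a clean slate, run itself, and tear down afterwards."""

    @abstractmethod
    def setup(self) -> None: ...

    @abstractmethod
    def run(self) -> dict:
        """Return a dict of result metrics."""

    def teardown(self) -> None:
        pass  # optional: return the system to a clean slate

class PgBenchRun(Benchmark):
    """Illustrative plugin; a real one would shell out to pgbench."""
    def setup(self) -> None:
        self.ready = True

    def run(self) -> dict:
        return {"name": "pgbench", "tps": 0.0}  # placeholder metrics

def run_all(benchmarks):
    """The driver only sees the interface, so new benchmark types
    (sysbench, DBT/TPC, custom concurrency tests) just plug in."""
    results = []
    for b in benchmarks:
        b.setup()
        try:
            results.append(b.run())
        finally:
            b.teardown()
    return results
```

The point of the abstract base is that the CLI and daemonized modes can share one driver loop and stay ignorant of what each benchmark actually does.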
Currently it doesn't do much more than build branches and create
clusters, but I expect to have something rudimentary but usable by
mid-November.
I haven't thought about the result aggregation and rendering/UI part of
the whole thing yet, so if anyone has ideas in that direction, they'd
be very much appreciated when the time comes.
If I've duplicated someone's efforts, I'm sorry; I didn't want to
raise anybody's hopes before I was sure I really wanted to do this ;).
best regards,
Michael
[1] http://suckit.blog.hu/2009/09/26/postgresql_history
[2] http://www.iinteractive.com/moose/