Passing/failing a percentage of iterations?


JUnit - User mailing list
Hello.


I am the author of the jtensors package:


  http://io7m.github.io/jtensors/


It's a computer-graphics-focused vector/matrix algebra package. It has
a very large test suite that tries to verify properties of the
functions by hammering them with thousands of random values. For
example, a test generates a thousand random vectors, normalizes them,
and then checks that the magnitude of each resulting vector equals
1.0. A single test may look like:


  @Test public final void testMagnitudeNormal()
  {
    for (int index = 0; index < TestUtilities.TEST_RANDOM_ITERATIONS; ++index) {
      final double x = this.randomLargePositive();
      final double y = this.randomLargePositive();
      final double z = this.randomLargePositive();
      final double w = this.randomLargePositive();
      final T v = this.newVectorM4D(x, y, z, w);

      final T vr = this.newVectorM4D();
      VectorM4D.normalize(v, vr);
      Assert.assertNotSame(v, vr);

      final double m = VectorM4D.magnitude(vr);
      System.out.printf("%s → %s → %f\n", v, vr, m);
      Assert.assertEquals(1.0, m, this.delta());
    }
  }


The problem: because most of the package deals with floating point
values (and randomly generated ones at that), perhaps one value in
every ten thousand iterations will cause a test failure, and that
single failure fails the entire build.


I've recently added support for IEEE 754 binary16 vectors, which
behave as though they were vectors with double-precision elements but
use 16-bit floating point values internally. Because they act as
double-precision vectors, I run them through the test suite for
double-precision vectors. Naturally, given their very low numeric
precision, the number of precision issues that trip test failures is
unacceptable. The floating point delta is configurable, as shown
above, and raising it does eliminate a lot of failure cases, but I
feel like I'm going about this the wrong way.


I feel it would be better in this case if Assert.assertEquals() could
simply count the number of successes and failures, and then pass or
fail the test as a whole depending on whether the number of failures
exceeds a configurable threshold. I have a fairly good idea of how I
would implement this using a @Rule, but it would (apparently) mean
replacing thousands upon thousands of calls to Assert.assertEquals
across the entire test suite. The current test suite is over 100k
lines, so I'm hesitant to make intrusive changes unless I know they'll
work.
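
For reference, here's a rough sketch of the kind of rule I have in
mind. The class and method names are hypothetical, not anything that
exists in jtensors or JUnit; the idea is that each check is recorded
rather than thrown, and the rule fails the test afterwards if the
failure ratio exceeds a threshold:

  import org.junit.Assert;
  import org.junit.rules.TestRule;
  import org.junit.runner.Description;
  import org.junit.runners.model.Statement;

  public final class ApproximateEqualityRule implements TestRule
  {
    private final double maxFailureRatio;
    private int checks;
    private int failures;

    public ApproximateEqualityRule(final double maxFailureRatio)
    {
      this.maxFailureRatio = maxFailureRatio;
    }

    /** Record an approximate equality check instead of failing at once. */
    public void checkEquals(
      final double expected,
      final double actual,
      final double delta)
    {
      ++this.checks;
      try {
        Assert.assertEquals(expected, actual, delta);
      } catch (final AssertionError e) {
        ++this.failures;
      }
    }

    @Override public Statement apply(
      final Statement base,
      final Description description)
    {
      return new Statement() {
        @Override public void evaluate() throws Throwable
        {
          checks = 0;
          failures = 0;

          /* Run the test body; checkEquals() accumulates results. */
          base.evaluate();

          if (checks > 0) {
            final double ratio = (double) failures / (double) checks;
            if (ratio > maxFailureRatio) {
              Assert.fail(String.format(
                "%d of %d checks failed (ratio %f > %f)",
                failures, checks, ratio, maxFailureRatio));
            }
          }
        }
      };
    }
  }

Each test class would then declare the rule, and the loops would call
it in place of Assert.assertEquals:

  @Rule public final ApproximateEqualityRule approx =
    new ApproximateEqualityRule(0.001);

  // ... inside the loop, instead of Assert.assertEquals:
  this.approx.checkEquals(1.0, m, this.delta());

which is exactly the intrusive change to thousands of call sites that
I'd like to avoid if there's a better way.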


Does anyone have a good/better solution to this problem before I take
the plunge?


M