The Shark Fin: Pex

April 26, 2009 10:02
Imagine: you have a method that needs to be tested.

You right-click it, select "Generate Tests", and a tool generates a suite of tests that covers all code paths. If you later update the original method, you simply regenerate the tests and the missing coverage is restored.
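
To make the picture concrete, here is the kind of coverage goal such a tool works toward: a hypothetical method with three distinct paths, and one hand-sketched MSTest test per path. The method, class and test names below are mine, not from Pex, and Pex's actual generated code looks different; this is only a sketch of what "cover all code paths" means.

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public static class AgeRules
{
    // Hypothetical method under test: three execution paths.
    public static string Classify(int age)
    {
        if (age < 0)
            throw new ArgumentOutOfRangeException("age");
        if (age < 18)
            return "minor";
        return "adult";
    }
}

[TestClass]
public class AgeRulesTests
{
    // One test per path -- the coverage a path-exploring generator aims for.
    [TestMethod, ExpectedException(typeof(ArgumentOutOfRangeException))]
    public void Classify_NegativeAge_Throws()
    {
        AgeRules.Classify(-1);
    }

    [TestMethod]
    public void Classify_UnderEighteen_IsMinor()
    {
        Assert.AreEqual("minor", AgeRules.Classify(0));
    }

    [TestMethod]
    public void Classify_EighteenOrOlder_IsAdult()
    {
        Assert.AreEqual("adult", AgeRules.Classify(18));
    }
}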

Given that you are still reading, I will assume that you're ready to take the red pill.

The Shark Fin.
 
At Microsoft Research they've got something to change the world of unit testing: Pex. You run this tool from the Visual Studio context menu (!), and it determines inputs that cover (almost) all execution paths of a method in your code.

Before generating the actual tests, Pex executes the code multiple times, each time picking a logical branch that hasn't been run yet. With every run it gains a better understanding of the code and digs deeper and deeper, until it decides to stop.
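
A small example makes the process easier to picture. The method below is my illustration, not from the post; the comments trace how a branch-flipping exploration might walk it, though the concrete inputs Pex actually picks may differ.

// Illustrative only: how successive runs can flip one branch condition at a time.
public static int Magic(int x, int y)
{
    if (x > 10)            // run 1 (x = 0, y = 0): false, reaches "return 3"
    {
        if (y == x * 2)    // run 2 (x = 11, y = 0): outer flipped, inner false, "return 2"
            return 1;      // run 3 (x = 11, y = 22): inner flipped, reaches "return 1"
        return 2;
    }
    return 3;
}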

When does it stop? If there are no loops or recursion, Pex stops pretty quickly, since there is only a small, finite number of execution paths. In most cases, however, the code contains some complicated logic, so the test generator applies several exploration bounds: time, memory usage, a limit on the number of if-else branches to analyze, and so on.
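
To see why such bounds are needed, consider a loop over an input of arbitrary length: each iteration adds another branch decision, so the number of distinct paths grows without bound. (The method below is my illustration, not from the post.)

// Illustrative only: every character adds another branch to explore,
// and the input length is unbounded, so exhaustive path coverage is impossible.
public static int CountSpaces(string text)
{
    int count = 0;
    for (int i = 0; i < text.Length; i++)
    {
        if (text[i] == ' ')
            count++;
    }
    return count;
}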

Is it smart and creative enough to craft good tests? Yes.
Does it make writing tests obsolete? No.
Does it support mocks and stubs to create isolated tests? Yes.
Is it a released product? No.
Do they have any learning videos? Yes, yes and yes.

Why bother.

What's bad about writing unit tests manually (besides it being less fun)?
  • It takes considerable time and skill to write readable and maintainable tests.
  • More tests do not necessarily mean better code coverage.
  • It's pretty easy to abandon updating existing tests when the related functionality changes.

Stubs framework.

To make things even more exciting, Pex ships with its own mocking framework called Stubs. Jonathan de Halleux, one of the Pex developers, recently suggested adding Stubs to the Mocking Frameworks Compare project (and eventually implemented everything himself, having lost all hope of getting it done by me).

The performance comparison gives amazing results. Here's just one scenario:

Mocking methods.
Moq      : 100 repeats:  174,683 msec +-   8%
Rhino    : 100 repeats:  320,685 msec +- 132%
NMock2   : 100 repeats:  125,853 msec +-  25%
Isolator : 100 repeats: 1012,666 msec +-   4%
Stubs    : 100 repeats:    1,396 msec +-  54%

Can you believe that?! 1000 times faster than Isolator! The reason is that Stubs doesn't carry the overhead of a mocking API. In any other mocking framework, you arrange expectations through its API, something like

hand.Expect(x => x.TouchIron(It.IsAny<Iron>())).Throws(new BurnException());

but in Stubs your API is C#, lambda expressions and closures:

hand.TouchIron = iron => { if (iron.IsHot) throw new BurnException(); };

As fast as a virtual method call. If you like the idea, start by reading Jonathan's post about Stubs and check them out in Mocking Frameworks Compare!
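
The trick is easy to mimic by hand, which also explains the numbers above: a stub of this style is just a class whose members forward to delegate fields that the test assigns, so there is no proxy generation or expectation bookkeeping on the call path. Below is a hand-written sketch of that shape; the type and member names are mine, not the ones Stubs generates.

using System;

public class Iron
{
    public bool IsHot { get; set; }
}

public class BurnException : Exception { }

public interface IHand
{
    void TouchIron(Iron iron);
}

// Hand-rolled equivalent of a generated stub (illustrative names):
// the behavior lives in a delegate field assigned per test.
public class HandStub : IHand
{
    public Action<Iron> TouchIronBody;

    public void TouchIron(Iron iron)
    {
        if (TouchIronBody != null)
            TouchIronBody(iron);   // a plain delegate invocation -- nothing to intercept
    }
}

// In a test:
// var hand = new HandStub();
// hand.TouchIronBody = iron => { if (iron.IsHot) throw new BurnException(); };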

