Metaprogramming Unit Tests, Part 2
by Sean Cribbs
Last time I talked about DRYing up unit tests was quite a while ago. Recently, I had a need to specify a lot of unit tests for some complicated model manipulations. Every time I added a new parameter to the domain of possibilities, the number, complexity, and obscurity of the unit tests increased. Having recently seen some ‘bootlegs’ of RejectConf, I was inspired by zenspider’s matrix idea.

Now, this might be considered duplication of effort, but here’s what I came up with. Most of the patterns that need complicated parameters in the app in question are creation patterns. As such, I focused on automatically generating tests for creating valid model objects and for refusing to create invalid ones. You could do something similar for patterns that update.
First, we start with a list of the options that can be passed to the creator, also co-opting the creator method from the previous installment:
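The original listing did not survive in this copy of the post, so here is a minimal sketch of what it plausibly looked like. Every name here is an assumption: the four fields, the `create_user` helper, and the `User` stand-in (a plain `Struct` instead of the real ActiveRecord model, so the sketch runs outside Rails).

```ruby
# Hypothetical reconstruction -- all names (User, create_user, the
# four fields) are assumptions, not the original code.

# The columns of our test matrix, spaced out so each row lines up:
CREATION_PARAMS = [:first_name, :last_name,
                   :email,      :date_of_birth]

# Plain-Ruby stand-in for what would be an ActiveRecord model:
User = Struct.new(:first_name, :last_name, :email, :date_of_birth)

# The creator helper co-opted from the previous installment, sketched:
# it takes one value per matrix column and returns the new model.
def create_user(values)
  User.new(*values)
end
```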
This sets up the parameter names for the columns of our matrix. Notice how I spaced them out too so you can easily see what is happening for each parameter.
Now we need to specify the range of our tests, including which patterns produce valid models and which ones produce invalid ones.
In this case, I’ve said that essentially date_of_birth is not a required field. Let’s assume we want our user to have a first name, a last name, and an email, and that the email has to be in a valid format. We’ll test each parameter independently, like any good scientific process.
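This listing is also missing, so what follows is a hedged guess at the shape of the matrix: one known-good row, then one row per parameter with only that column broken. The `valid_create`/`invalid_create` names are assumptions, and the stub definitions at the top are included only so the fragment runs on its own; the real class methods are the magic ones given at the end of the post.

```ruby
# Stub definitions so this fragment runs standalone -- in the real
# suite these are the metaprogramming class methods shown later.
ROWS = []
def valid_create(values);   ROWS << [:valid,   values]; end
def invalid_create(values); ROWS << [:invalid, values]; end

#               first_name  last_name  email                date_of_birth
valid_create   ['Sean',     'Cribbs',  'sean@example.com',  nil]
invalid_create [nil,        'Cribbs',  'sean@example.com',  nil]  # no first name
invalid_create ['Sean',     nil,       'sean@example.com',  nil]  # no last name
invalid_create ['Sean',     'Cribbs',  nil,                 nil]  # no email
invalid_create ['Sean',     'Cribbs',  'not-an-email',      nil]  # bad format
```

Note that the valid row passes `nil` for date_of_birth, which is exactly what demonstrates that the field is optional.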
Notice how I made four independent test cases, where each parameter was manipulated independently. If you want to be extremely exhaustive, you could specify all permutations of the parameters, valid and invalid, but in most cases, and in well-designed systems, the parameters will remain independent.
One last thing I’d like to be able to do is verify that certain extra conditions hold true. For example, maybe we have something that generates a login name from the user’s email address. We want to verify that it starts with the first part of their email address, so we’ll add a block to the end of the test definition that returns a boolean, like so:
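Here is a hedged sketch of that declaration (the `valid_create` signature and the login rule are assumptions, and the stub at the top exists only so the snippet runs by itself). The trailing block receives the freshly created model and must return a boolean.

```ruby
# Stand-in so this fragment runs on its own; in the real suite,
# valid_create is one of the matrix class methods.
CHECKS = []
def valid_create(values, &check); CHECKS << [values, check]; end

valid_create ['Sean', 'Cribbs', 'sean@example.com', nil] do |user|
  # Extra condition: the generated login starts with the part of the
  # email address before the '@'.
  user.login.start_with?(user.email.split('@').first)
end
```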
These tools will handle most cases in which the domain of the model creation is widely varied. I found it especially useful when dealing with models that are ‘parents’ and are responsible for the creation of many associated models. In some cases, I had more than one way to specify the associated models in which one way might win over another, or certain combinations would produce error conditions, but not others. Specifying these interactions in a matrix fashion greatly simplified and clarified the bug-fixing process.
So, I’ve teased you long enough. Here are the three magic methods that will add this matrix capability to your unit tests. (Place them in
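The three methods themselves were lost from this copy of the post, so below is a hedged reconstruction of what they plausibly looked like; every name (`CreationMatrix`, `creation_params`, `define_matrix_test`, `creator`) is an assumption. The core idea is straightforward metaprogramming: each matrix row becomes its own `define_method`'d test, so failures are reported per row instead of dying at the first bad assertion.

```ruby
# Hedged reconstruction -- the original three methods were not
# preserved, so names and details here are assumptions. Mix this
# module in with `extend` so the methods work at the class level.
module CreationMatrix
  # 1. Declare the matrix columns: the creator's parameter names.
  def creation_params(*names)
    @creation_params = names
  end

  # 2. A row that must produce a valid model. The optional block is
  #    the extra condition described above; it must return true.
  def valid_create(values, &extra)
    define_matrix_test(:valid, values, lambda { |model|
      assert model.valid?, model.errors.inspect
      assert extra.call(model) if extra
    })
  end

  # 3. A row that must fail validation; echoes the validation errors
  #    to the console.
  def invalid_create(values)
    define_matrix_test(:invalid, values, lambda { |model|
      puts model.errors.inspect unless model.valid?
      assert !model.valid?
    })
  end

  private

  # Zip the row's values with the column names and define one fresh
  # test method for the row.
  def define_matrix_test(kind, values, check)
    attrs = Hash[@creation_params.zip(values)]
    n = (@matrix_count = (@matrix_count || 0) + 1)
    define_method("test_#{kind}_create_#{n}") do
      # `creator` builds the model (the helper from the previous
      # installment); instance_exec rebinds the check so `assert`
      # resolves against the running test instance.
      instance_exec(creator(attrs), &check)
    end
  end
end
```

In a Test::Unit suite you would `extend CreationMatrix` inside the test case and put the matrix declarations shown earlier in the class body; silencing the `puts` is a one-line change if the console chatter bothers you.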
One extra feature that I didn’t specify above is that the invalid_create tests will print the validation errors to the console while testing. I like this because I mostly use Aptana/RadRails, but you could just as easily turn it off.