SUnit is a framework to write and perform test cases in Smalltalk,
originally written by the father of Extreme Programming(1).
SUnit allows one to write the tests and check
results in Smalltalk; while this approach has the disadvantage that
testers need to be able to write simple Smalltalk programs, the
resulting tests are very stable.
What follows is a description of the philosophy of SUnit and of its
usage, excerpted from Kent Beck's paper introducing the framework.
Testing is one of those impossible tasks. You'd like to be absolutely complete, so you can be sure the software will work. On the other hand, the number of possible states of your program is so large that you can't possibly test all combinations.
If you start with a vague idea of what you'll be testing, you'll never get started. Far better to start with a single configuration whose behavior is predictable. As you get more experience with your software, you will be able to add to the list of configurations.
Such a configuration is called a fixture. An example fixture
for testing Floats could be 2.0; a fixture for
testing Arrays could be #(1 2 3).
By choosing a fixture you are saying what you will and won't test for. A complete set of tests for a community of objects will have many fixtures, each of which will be tested many ways.
To design a test fixture you have to:
1. subclass TestCase;
2. add an instance variable for each part of the fixture;
3. override #setUp to initialize the variables.
You can predict the results of sending a message to a fixture. You need to represent such a predictable situation somehow. The simplest way to represent this is interactively. You open an Inspector on your fixture and you start sending it messages. There are two drawbacks to this method. First, you keep sending messages to the same fixture. If a test happens to mess that object up, all subsequent tests will fail, even though the code may be correct.
More importantly, though, you can't easily communicate interactive tests to others. If you give someone else your objects, the only way they have of testing them is to have you come and inspect them.
By representing each predictable situation as an object, each with its own fixture, no two tests will ever interfere. Also, you can easily give tests to others to run. Represent a predictable reaction of a fixture as a method: add a method to your TestCase subclass, and stimulate the fixture in the method.
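For instance, a fixture for the Set tests used below might be declared like this (the class name SetTestCase and the instance variable empty are illustrative, not part of the framework):

TestCase subclass: SetTestCase [
    | empty |
    setUp [ empty := Set new ]
]

With #setUp overridden this way, every test method in the class starts from a fresh, empty Set, so no test can disturb another.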
If you're testing interactively, you check for expected results
directly, by printing and inspecting your objects. Since tests are in
their own objects, you need a way to programmatically look for
problems. One way to accomplish this is to use the standard error
handling mechanism (#error:) with testing logic to signal errors:
2 + 3 = 5 ifFalse: [self error: 'Wrong answer']
When you're testing, you'd like to distinguish between errors you are checking for, like getting six as the sum of two and three, and errors you didn't anticipate, like subscripts being out of bounds or messages not being understood.
There's not a lot you can do about unanticipated errors (if you did something about them, they wouldn't be unanticipated any more, would they?) When a catastrophic error occurs, the framework stops running the test case, records the error, and runs the next test case. Since each test case has its own fixture, the error in the previous case will not affect the next.
The testing framework makes checking for expected values simple by
providing a method, #should:, that takes a Block as an argument.
If the Block evaluates to true, everything is fine. Otherwise, the test
case stops running, the failure is recorded, and the next test case
runs. So, you have to turn checks into a Block evaluating to a Boolean,
and send the Block as the parameter to #should:.
In the example, after stimulating the fixture by adding an object to an empty Set, we want to check and make sure it's in there:
SetTestCase>>#testAdd
    empty add: 5.
    self should: [empty includes: 5]
There is a variant on #should:.
TestCase>>#shouldnt: causes the test
case to fail if the Block argument evaluates to true. It is there so you
don't have to negate the check by hand.
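A minimal sketch, reusing the illustrative Set fixture from above (the method name is hypothetical):

SetTestCase>>#testDoesNotInclude
    empty add: 5.
    self shouldnt: [empty includes: 6]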
Once you have a test case this far, you can run it. Create an instance
of your TestCase subclass, giving it the selector of the testing
method, and send #run to the resulting object:
(SetTestCase selector: #testAdd) run
If it runs to completion, the test worked. If you get a walkback, something went wrong.
As soon as you have two test cases running, you'll want to run them both one after the other without having to execute two do-its. You could just string together a bunch of expressions to create and run test cases. However, when you then wanted to run “this bunch of cases and that bunch of cases” you'd be stuck.
The testing framework provides an object to represent a bunch of
test cases: TestSuite runs a collection of test
cases and reports their results all at once. Taking advantage of
the Composite pattern, TestSuites can also contain other
TestSuites, so you can put Joe's tests and Tammy's tests together
by creating a higher level suite. Combine test cases into a test
suite like this:

(TestSuite named: 'Money')
    add: (MoneyTestCase selector: #testAdd);
    add: (MoneyTestCase selector: #testSubtract);
    run
The result of sending #run to a TestSuite is a
TestResult object. It records all the test cases that caused
failures or errors, and the time at which the suite was run.
All of these objects are suitable for being stored in the image and retrieved. You can easily store a suite, then bring it in and run it, comparing results with previous runs.
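For example, a result can be captured and examined like this (the exact printString format of TestResult varies between SUnit versions, so this is only a sketch):

| result |
result := (TestSuite named: 'Money')
    add: (MoneyTestCase selector: #testAdd);
    run.
Transcript showCr: result printString

Since the cascade's value is the value of its last message, result is bound to the TestResult returned by #run.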
gnu Smalltalk includes a Smalltalk script to simplify running SUnit test suites. It is called gst-sunit. The command-line to gst-sunit specifies the packages, files and classes to test:
Each method whose name starts with test constitutes a separate test case.
Variables specified on the gst-sunit command line can be retrieved from within the test suite:

TestSuitesScripter variableAt: 'mysqluser' ifAbsent: [ 'root' ]
Note that a plain #variableAt: variant does not exist, because
the test suite should pick default values in case the variables are
not specified by the user.
(1) Extreme Programming is a software engineering technique that focuses on teamwork (to the point that a programmer looks in real time at what another one is typing), frequent testing of the program, and incremental design.