Test-driven development (TDD) is a (relatively) modern software development technique, and embedded systems inevitably lag behind the cutting edge. Nevertheless, there's nothing special about embedded systems that negates the advantages this strategy can bring to a team.
A Brief Overview of TDD
This article doesn't set out to provide an in-depth treatment of TDD itself. With that said, it may be helpful to lay out an overview of the methodology involved.
Briefly, TDD can be said to follow a cycle:
1. Write a test. This provides a definition for the new functionality to add to the system.
2. Run tests. At this point, it's important to see the new test fail. If it does not, it could indicate:
   - Failure to properly integrate the test into the test harness
   - An improperly written test
   - A test for functionality that already exists
   In any of these cases, the test should be rewritten until it fails.
3. Write code to pass the test. Critically, this should be only the code required to pass the test at hand. Further functionality should be added as the tests for that functionality are added.
4. Run tests. If the tests pass, continue. If not, keep making incremental changes until they do.
5. Refactor. Clean up the code, move functionality to its correct location, and eliminate duplication.
6. Repeat. That is, write a new test for the next atom of functionality.
In this way, test cases always keep pace with production code. Additionally, it forces a developer to first think about how her software will be used, rather than how it will be implemented.
Challenges in the Embedded Environment
The primary challenge in bringing a tight, automated-testing workflow to an embedded environment is integration. Nearly every platform has its own library of software resources, of varying quality and with varying interface styles.
Furthermore, the C language is nearly ubiquitous in the embedded world. One will sometimes encounter C++, but almost never anything higher-level. There are various advantages to this—C excels at direct hardware interaction and low-level optimization—but for the purposes of testing, it's another stumbling block.
Almost by definition, an embedded application runs on an esoteric hardware platform; the binary will not run on your native development system. Even if it were compiled natively, it would likely try to read from and write to hardcoded (and probably restricted) memory addresses where peripherals exist on the actual microcontroller.
After this list of challenges (which is by no means exhaustive), you might be wondering: why bother in the first place? I don't pose this simply as a rhetorical device; it reflects my own doubts as I began to learn about TDD in particular and automated testing in general.
The primary factor that pushed me over the edge was the increasing complexity of the applications I was developing. With that complexity came subtle, difficult-to-diagnose bugs, some of which cost weeks of developer time to track down, reproduce, and repair. TDD is not a silver bullet, but I can name several occasions when a seemingly simple test would have caught an error that instead made it to a field deployment and cost time and money to diagnose.
This blog post is the first in a series. Later posts will deal with specific solutions to test harness integration, workflow, and unique challenges from the embedded world.