A collection of strategies for ensuring an efficient and effective test process
Testing is oftentimes the least glamorous part of the design process. However, it is essential to ensuring that the final product is a success. This article outlines some high-level strategies that can be employed to create a test plan that is thorough without needlessly impeding the development process.
Begin planning the test process during requirements and specifications creation
While the thought of writing a requirements document and a test plan simultaneously elicits cries of pain from most engineers, it is vitally important to begin planning test and verification activities at the very start of product design. Typically there isn't enough information in these early stages to write specific test procedures, but there is enough to make a fair determination of what hardware and software resources will be needed to carry out testing, as well as what capital purchases will be required to support the testing effort. This early insight helps create a more accurate budget and schedule for the project as a whole, and helps head off bottlenecks caused by an insufficient allocation of test resources and personnel.
Determine what level of testing is required for each milestone
Just as the design follows a set of milestones (proof of concept, prototype, mass production, etc.), so too should the accompanying test plans. A proof of concept phase probably only requires a basic set of tests to ensure product viability, while a functional prototype requires more thorough testing. A device in preparation for product release will typically undergo very comprehensive testing to ensure all product requirements and specifications are met. Meanwhile, devices coming off the production line will most often be subjected to the minimum number of tests needed to ensure they were built correctly, so that the production line is not slowed down more than necessary. Conversely, devices that utilize a new or experimental technology will typically require substantial testing in the early design phases to ensure that a viable product can be developed. There are no strict rules about how much testing is required for each milestone, and it's the responsibility of the design and test personnel to come up with a test plan that ensures all the design goals are met.
Determine if certification agency testing is required and plan accordingly
If the product requires certification from an agency such as Underwriters Laboratories (UL) or the FCC, be sure to schedule and budget for this in the overall project plan. The specific details involved in preparing a product for certification agency testing are a complex subject in their own right and will be covered in depth in future articles. It is worth noting here, however, that nearly all products sold to the general public will require some sort of certification testing, such as a product safety certification and, if the product incorporates electronics in any way, an unintentional radiator/emitter compliance certification from the FCC or an equivalent regulatory agency (e.g., Industry Canada).
Determine if and when outsourcing makes sense
By no means does the entire testing process need to happen in-house, and often it is quicker and cheaper to outsource a portion of the testing effort to a third party. For example, some environmental testing, such as lightning simulation testing, requires very specialized and expensive equipment, which makes it difficult for most companies to perform in-house. Therefore, it makes sense to outsource this testing to a vendor who specializes in it. Indeed, even the equipment required for more mundane testing can be expensive enough that it would not make sense for most companies to own it outright. Likewise, there are vendors who specialize in creating equipment for production line product verification, such as in-circuit test (ICT) machines. Oftentimes, it's cheaper and quicker to have these vendors create test fixtures for mass production rather than try to build them in-house.
Begin by using a bottom up testing approach
In modern products, each subsystem of the device can be incredibly complex and prone to issues. Therefore, it is imperative to create test procedures that verify that each subsystem works correctly in isolation. By verifying that each subsystem meets its individual requirements, future product issues can be more easily isolated to the interfaces and interactions between subsystems.
Ensure that the entire device is tested as a whole
If all the subsystems pass their respective tests, it’s tempting to connect them all up and call it a day. Unfortunately, just because each subsystem plays nicely on its own does not mean that they will play nicely together. The reasons for this are as varied as the subsystems themselves but some examples are:
- a poorly implemented communication protocol resulting in subsystems that can’t talk to each other
- electrical noise causing crosstalk between systems
- a subcontractor being given the wrong set of specifications, a real-world example of which can be found here
This doesn’t just stop at the boundary of the device however. If the device interacts with a larger system, for example a device that connects to the internet, then care must be taken to ensure that the device integrates seamlessly into the larger system and doesn’t cause issues for other components or users of that system.
Make sure to test the device as it will be utilized by the end user
Despite their best efforts, designers inevitably end up building assumptions into their test procedures. While these assumptions are typically harmless and often serve to expedite the testing process, they can be catastrophic if the designer fails to account for user behavior. Rather than try to anticipate every action that a user might take, it’s oftentimes more effective to distribute the product to a small group of beta testers and have them report back about what works and what doesn’t.
Test for specific failure conditions and verify the product can recover appropriately
Unfortunately, it is not enough to simply verify that the product doesn’t fail during normal operation. Inevitably, time and continued use will lead to system failure and it’s the responsibility of the system designer to ensure that the product recovers gracefully or at the very least does not cause harm to the user. Some real world examples of this include:
- A part on a medical ventilator breaks, causing the system to lose pressure. Does the device detect this problem and alert the user in an appropriate amount of time?
- A device undergoing a firmware update experiences memory corruption. Does the device successfully roll back to a previous version of firmware and notify the user that the update failed?
- A bearing in a vehicle transmission fails, leading to increased friction and potential damage. Do the vehicle's diagnostic systems detect this condition and notify the user that the vehicle needs service?
Given that such failures are inevitable, any comprehensive testing scheme should include tests which purposefully cause the product to fail so that the designer can be sure the product will respond in an appropriate manner. A common tool for developing failure testing is a Failure Modes and Effects Analysis (FMEA). Essentially, an FMEA is a process where potential failure modes are identified and categorized by how serious their consequences are, how likely they are to occur, and how easy they are to detect. Using this categorization, the designers and test engineers have an easier time determining which failure modes require extensive testing and which ones can be deprioritized in the testing process.
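The FMEA scoring described above can be sketched in a few lines of Python. The failure modes are the article's three examples; the 1-10 ratings and the use of their product as a Risk Priority Number (RPN) follow common FMEA practice, but the specific numbers here are invented for illustration.

```python
# FMEA sketch: rate each failure mode for severity, occurrence, and
# detection (1-10, a common convention), then rank by the product of the
# three (the Risk Priority Number). Ratings below are illustrative only.

failure_modes = [
    {"mode": "ventilator loses pressure",       "severity": 10, "occurrence": 3, "detection": 4},
    {"mode": "firmware update corrupts memory", "severity": 7,  "occurrence": 4, "detection": 5},
    {"mode": "transmission bearing wears out",  "severity": 6,  "occurrence": 5, "detection": 6},
]

def rpn(fm):
    """Risk Priority Number: severity x occurrence x detection."""
    return fm["severity"] * fm["occurrence"] * fm["detection"]

# Rank failure modes: the highest RPNs get the most test attention.
for fm in sorted(failure_modes, key=rpn, reverse=True):
    print(f"RPN {rpn(fm):3d}  {fm['mode']}")
```

Note that a high-severity mode can still rank below a mundane one if it is rare and easy to detect, which is exactly the prioritization judgment the FMEA is meant to make explicit.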
Utilize automated testing
One of the main reasons that products get released without adequate testing is that the developers lack the time and manpower required to properly vet the device. Automated testing, while requiring substantial upfront effort, can save time in the long run by reducing the effort and time required to run multiple unit tests. There are many ways to accomplish this, from simple Python scripting to full blown LabVIEW implementations with dedicated hardware. Ultimately, it will probably not be possible to automate every test required, but opportunities to reduce the labor involved with the test process should be considered whenever it is feasible.
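At the "simple Python scripting" end of that spectrum, an automated runner can be as small as the sketch below. The two checks and their stubbed measurements are hypothetical; in practice each would talk to real hardware over serial, JTAG, or similar.

```python
# Minimal test-runner sketch in plain Python. The device interface is
# stubbed out with fixed values; a real version would query hardware.

def check_power_rail():
    return 3.28  # stubbed voltage measurement, in volts

def check_comms_link():
    return True  # stubbed UART loopback result

TESTS = [
    ("3.3 V rail within 5%", lambda: abs(check_power_rail() - 3.3) <= 0.165),
    ("UART loopback",        lambda: check_comms_link() is True),
]

def run_all(tests):
    """Run every test, collecting results instead of stopping at the first failure."""
    results = {name: bool(fn()) for name, fn in tests}
    for name, passed in results.items():
        print(f"{'PASS' if passed else 'FAIL'}  {name}")
    return results

results = run_all(TESTS)
```

Running the whole suite and reporting all failures at once, rather than aborting on the first one, keeps each automated pass cheap enough to run on every build or every unit.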
The principles listed above are just some of the strategies that can be employed to construct a viable test plan. Boulder Engineering Studio (BES) utilizes these principles as much as possible in order to improve the quality and speed of the development process. Future articles will dive into some of the specific hardware testing processes that BES employs and one can read about how BES integrates testing into our embedded software development here: Embedded TDD Challenges
Interested in learning more about Boulder Engineering Studio? Let's chat!