The Pursuit of Perfection – An Effective Embedded Unit Test Process for Efficient Testing

A specialist lecture on methods, techniques and tools to achieve robust embedded testing.

It’ll never happen in the field
Optimise later
Perfection is the enemy of good
It’s good enough

Every developer has heard at least one of these phrases. I’m sure you have. You may even have used them yourself. They hold some truth for developers of enterprise or front-end software. However, if you have applied them to embedded software, you are mistaken.

Flawless embedded software is important in many industries… Embedded code underpins everything; it’s the fundamental layer of all devices. All other software - middleware logic, databases, web servers, user interfaces - everything depends on a functioning bottom layer of software.

And in the top layer of software - and with all due respect to those who attempt to get it right - it rarely matters if the pixels in the user interface are misaligned or the wrong colour. Such defects do not affect the functionality of the software.

As embedded software developers, our focus is to accelerate and improve software development in the pursuit of perfection. In this session we will look at the elements that come together to make a successful, fully tested embedded application. Important aspects discussed include requirements traceability, software metrics, testing frameworks, code coverage and automation. An outline of the topics covered is included…

Use a Defined Process
You have probably heard the unspoken truth of the software industry: a lot of code isn’t well checked. There are many reasons for this, including:

  • Sections of it are written in a rush without taking into account the rest of the system.
  • Testing is cursory, or performed without a full understanding of function.
  • Developers find that testing is boring, or they’re afraid they will discover bugs to fix.
  • Quality assurance engineers (those whose task it is to test) often work without complete requirements or deep understanding of the code (a common source of frustration).

You can address most of these concerns by sticking to a defined process in your test and verification activities. However, there are several important things to consider before you start to figure out the process to use:

  • It is important that your team supports the process you define, and that you build in iterative feedback loops to refine and improve it.
  • You can base your process on Agile, Waterfall or a DevOps variation. Whichever you choose, your process should always feature test design as early as possible.
  • The test plan, in which we document the system test design, must always be kept as a “living document”. It will undergo regular updates and must therefore be under version control.

Linking Requirements to Test Cases
Test cases designed in the early stages of a project cannot always be implemented as intended. If we don’t adjust the specification accordingly, the actual tests and their documentation drift out of sync and technical debt creeps into the test set. In the worst case, the tests end up in an unpredictable state.

A revision to the requirements specification is the most common reason for updating tests. To appreciate which of the existing system tests have to be rethought during such a revision of the requirements, an up-to-date traceability table is vital. This table keeps track of which requirement is being tested in which test case. Create and maintain these tables manually or use requirements engineering / management tools.

Efficient Test Design
The same basic engineering concepts that apply to the design of good software are needed when designing tests. Efficient test design comprises a variety of steps in which you progressively increase the testing depth.

The specification of the program must drive the nature of the tests. For unit testing, we design the tests to check that the individual unit meets all design decisions taken in the design specification of the unit.

Non-trivial software can process a practically infinite number of different inputs. This is exacerbated in situations where the order and timing of data entry matter. Testers have the difficult task of developing a few discrete cases to test a system that must accommodate an infinite number of scenarios. The results of testing activities need to be fed back to the development team: tests do not improve quality; developers do.

First Test
The object of the first test case in any unit test process should be to execute the test unit in the simplest way possible. Not only does this perform a useful check of the unit under test, it also verifies the functionality of the build and test tool chain.

The confidence gained from knowing that you can execute a simple unit test isolated from the complete system is valuable. It not only provides the tester a foundation upon which to build but also a route to debug any failures.

Positive vs Negative Testing

A comprehensive unit test specification should include positive testing (that the unit does what it is supposed to do) and negative testing (that the unit does nothing it is not meant to do).
In the pursuit of perfection, we’re looking for mistakes everywhere, even in cases that will “never” happen.

Initial testing should show that the software unit under test does what it needs to do. The test model should obey the specifications; each test case will test one or more specification statements. If multiple requirements are involved, it is best to ensure that the sequence of test cases corresponds to the sequence of statements in the unit’s primary specification.

You should first look to improve existing test cases and then add further test cases to show that the program does nothing that is not specified. This depends on error guessing and the expertise of the tester to predict problem areas.

We should also design functional tests to address issues such as performance, safety and security requirements.

Estimate and Measure the Test Progress (Coverage and Metrics)
Plan the testing aspects as precisely as you would the entire project. An important part of this plan is defining target metrics. For example, how many undetected errors, of which category, can the software under test have at delivery? The type and extent of testing will depend on these figures.

Code metrics are measures taken automatically from source code, for example the number of linearly independent paths through a function (its cyclomatic complexity). If we know from previous experience how long it takes, on average, to test one path, we can estimate the time to test the complete application: a codebase with 400 independent paths at an average of 15 minutes per path implies roughly 100 hours of testing effort.

By analysing code coverage information, extracted from running the test suite as we build it up, we can refine the initial estimates and monitor the progress of the testing process. Beware, however, that as you approach high levels of code coverage (90% and above), the testing effort can increase exponentially. The complex, difficult-to-test parts of the application are often hiding in the final 10%.

Run the Tests
After we have selected/designed suitable test cases, we can execute them. Running the tests can be manual, semi-automatic or fully automatic. The level of automation depends on two factors: the liability for software errors and the repetition rate of the tests.

  • For safety- or security-related applications, the preference should always be automated testing. Well-designed test scripts allow exact repeatability of tests.
  • In DevOps environments, the gold standard is to execute a complete suite of automated tests on each code modification. This allows for rapid feedback to developers and the confidence that the code is always in a tested and ‘ready-to-release’ state.
  • Outside of DevOps, test automation allows for easy regression testing. This allows developers to ensure that added functionality does not compromise previous testing effort.

An upfront investment in a suitable test automation framework saves effort throughout the duration of the project.

Despite their determinism and repeatability, fully automated tests do not remove the need for complete documentation, nor for tuning the test design as the requirements specification evolves.

Without documentation, the test suite becomes a box of mysteries: no one dares to delete a test because nobody understands its function; all anyone knows is that passing it is essential. In this scenario, new requirements force us to add new tests, and the test collection becomes increasingly impenetrable.


  • Experience has shown that a conscientious approach to unit testing will detect many bugs at a stage of development where we can correct them economically.
  • Be humble about what your unit tests can achieve: unless you have extensive requirements documentation for the unit under test, the testing phase will be iterative and exploratory.
  • Fix your embedded software; take the time. It doesn’t have to be perfect — but it helps to be close.
Adam Mackay

Adam is an engineer with over 20 years’ experience in managing, designing, developing and deploying software for safety and business critical...

45-minute talk


22 July


Embedded Testing






Copyright © 2021 HLMC Events GmbH