Tutorial: How to perform regression tests in embedded systems


What good practices and strategies are there for running regression tests in embedded environments, or in other situations where the ability to automate tests is very limited?

In my experience, a lot of the testing has to be performed manually, i.e. a tester needs to push a sequence of buttons and verify that the machine behaves correctly. As a developer, it is really hard to assure yourself that your changes don't break something else.

Without proper regression tests the situation gets even worse during big refactorings and such.

Does anyone recognize the problem? Did you find a good solution or process to deal with this kind of problem?


Personally, I'm a big fan of having my embedded code compile on both the target hardware and my own computer. For example, when targeting an 8086, I included both an entry point that maps to reset on the 8086 hardware and a DOS entry point. The hardware was designed so all IO was memory mapped. I then conditionally compiled in a hardware simulator and conditionally changed the hardware memory locations to simulated hardware memory.
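The dual-target idea above can be sketched in C with conditional compilation. This is an illustrative sketch, not the original poster's code: the register names, addresses, and the `TARGET_HW` macro are all assumptions. The driver code itself is identical in both builds; only the definition of the "registers" changes.

```c
/* Sketch of dual-target compilation: compile with -DTARGET_HW for
   the real board, or without it for a host build where the
   memory-mapped registers live in an ordinary array. All names and
   addresses here are hypothetical. */
#include <stdint.h>

#ifdef TARGET_HW
/* Target build: registers at their real memory-mapped addresses. */
#define STATUS_REG (*(volatile uint8_t *)0xF010)
#define LED_REG    (*(volatile uint8_t *)0xF011)
#else
/* Host build: "hardware" registers are simulated memory. */
static uint8_t sim_regs[256];
#define STATUS_REG (sim_regs[0x10])
#define LED_REG    (sim_regs[0x11])
#endif

/* Driver code is the same source for both builds. */
void led_set(uint8_t on) { LED_REG = on ? 1 : 0; }
uint8_t led_get(void)    { return LED_REG; }
```

The key design choice is that the driver never knows which build it is in, so the regression tests exercise exactly the code that ships.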

If I were to work on a non-x86 platform, I'd probably write an emulator instead.

Another approach is to create a test rig where all the inputs and outputs for the hardware are controlled through software. We use this a lot in factory testing.

One time we built a simulator into the IO hardware. That way the rest of the system could be tested by sending a few commands over CAN to put the hardware into simulated mode. Similarly, well-factored software could have a "simulated mode" where the IO is simulated in response to software commands.
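A "simulated mode" IO layer like the one described might look like the following sketch. The function names and the ADC example are invented for illustration; in the real system the mode-switch commands would arrive over CAN.

```c
/* Sketch of a "simulated mode" IO layer: a command (e.g. received
   over CAN) flips the layer into simulation, after which reads
   return scripted values instead of touching hardware. All names
   are hypothetical. */
#include <stdint.h>

static int sim_mode = 0;
static uint16_t sim_adc_value = 0;

/* Handlers for mode-control commands arriving from the bus. */
void io_enter_sim_mode(void)    { sim_mode = 1; }
void io_exit_sim_mode(void)     { sim_mode = 0; }
void io_sim_set_adc(uint16_t v) { sim_adc_value = v; }

static uint16_t read_adc_hw(void) {
    /* Real register access would go here; stubbed for the sketch. */
    return 0;
}

/* The rest of the system calls this and never knows the difference. */
uint16_t io_read_adc(void) {
    return sim_mode ? sim_adc_value : read_adc_hw();
}
```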


For embedded testing, I would suggest that you design your way out of this very early in the development process. Sandboxing your embedded code to run on a PC platform helps a lot, and then do mocking afterwards :)

This will ensure integrity for most of it, but you will still need to do system and acceptance testing manually later on.


Does anyone recognize the problem?

Most definitely.

Did you find a good solution or process to deal with this kind of problem?

A combination of techniques:

  • Automated tests;
  • Brute-force tests, i.e. ones which aren't as intelligent as automated tests, but which repeatedly test a feature over a long period (hours or days), and can be left to run without human intervention;
  • Manual tests (often hard to avoid);
  • Testing on a software emulator on a PC (or as a last resort, a hardware emulator).
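The brute-force category above can be as simple as the following sketch: exercise one feature in a loop for a very large number of iterations (hours or days in practice) and count failures, so the rig can run unattended. The button-toggle feature here is a made-up stand-in.

```c
/* Sketch of a brute-force soak test: repeatedly exercise a feature
   and count failures so the test can run unattended for days. The
   "feature" here (a toggling state that should round-trip after two
   presses) is a hypothetical stand-in. */
#include <stdint.h>

static int device_state = 0;
static void press_button(void) { device_state = !device_state; }

/* Returns the number of iterations on which the invariant failed. */
uint32_t soak_test(uint32_t iterations) {
    uint32_t failures = 0;
    for (uint32_t i = 0; i < iterations; i++) {
        int before = device_state;
        press_button();
        press_button();          /* two presses should round-trip */
        if (device_state != before) failures++;
    }
    return failures;
}
```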

With regard to compiling on a PC compiler: that would certainly make sense for high-level modules, and for low-level modules with a suitable test harness.

When it comes to, for example, parts of the code which have to deal with real-time signals from multiple sources, emulation is a good place to start, but I don't think it is sufficient. There is often no substitute for testing the code on the actual hardware, in as realistic an environment as possible.


Unlike most responders so far, I work with embedded environments that do not resemble desktop systems at all, and therefore cannot emulate the embedded system on the desktop.

In order to write good testing systems, you need your test system to have feed-forward and feedback. JTAG is the most common feed-forward way to control the device: you can set the complete state of the device (perhaps even the entire board, if you're lucky) and then set the test code running, at which point you collect your feedback. JTAG can also serve as a feedback device, but a logic analyzer with a software API is the best option in this situation: you can look for certain levels on pins, count pulses, and even parse data streams from streaming peripherals.


Provide test harnesses / sandboxes / mockups for individual subsystems, and for the entire project, that emulate the target environment.

This does not remove the need for tests in the real environment, but it greatly reduces their number: the simulation will catch most problems, so by the time it all passes and you perform the expensive human-driven test, you are reasonably confident you will pass on the first try.


Apart from the suggestions so far about ensuring your app can build, and at least partially run its tests, on normal PCs (which also lets you use tools like Valgrind), I would think about your software design.

One project I worked on had one component for driving the hardware, one for dealing with management tasks, and another for network management. Network management was handled over SNMP, so it was easy to write scripts that ran remotely to drive the hardware to do something.

To run the low-level hardware tests, I wrote a simple script reader that parsed test scripts and injected commands into the IPC of my driver. As the output was video based, it was hard to automate verification of the handling other than by eye, but it certainly saved me from RSI. It was also very useful for generating scripts that stress-tested or simulated known failure conditions, to ensure bugs didn't re-occur.
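A minimal script reader of that kind might look like the sketch below. The command names, the command enum, and the `ipc_send` stub are all invented for illustration; the real reader would dispatch into the driver's actual IPC mechanism.

```c
/* Sketch of a simple test-script reader: each line is a command
   name plus an integer argument, parsed and dispatched to the
   driver's IPC entry point (stubbed here). Command names and the
   IPC interface are hypothetical. */
#include <stdio.h>
#include <string.h>

enum { CMD_SET_INPUT, CMD_WAIT_MS };

static int last_cmd = -1, last_arg = 0;

static void ipc_send(int cmd, int arg) {  /* stub for the real IPC */
    last_cmd = cmd;
    last_arg = arg;
}

/* Parse one script line; returns 0 on success, -1 on a bad line. */
int run_script_line(const char *line) {
    char name[32];
    int arg;
    if (sscanf(line, "%31s %d", name, &arg) != 2) return -1;
    if (strcmp(name, "set_input") == 0) { ipc_send(CMD_SET_INPUT, arg); return 0; }
    if (strcmp(name, "wait_ms") == 0)   { ipc_send(CMD_WAIT_MS, arg);   return 0; }
    return -1;
}
```

A test script is then just a text file of lines like `set_input 5`, fed through this reader one line at a time.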

If I were doing it all over again, I would probably implement a shared library, used by both the test harness and the real code, to send the core messages. I would then wrap the lib in Python (or something similar) so my testing could be slightly more "scriptable".


I agree with everyone who says automated hardware is a must - we're using that approach to test embedded software with some of our units. We have built up large two-rack test stations full of hardware simulators, and we use NI TestStand with a mix of LabVIEW VIs, C# code, vendor DLLs, etc. to manage all of it. We have to test a lot of hardware - that's why we have all of that crap. If you're just testing software, then you can scale it back to the bare essentials. Testing a serial interface? Just build a device to simulate the serial traffic and exercise all of the messages (and a few non-valid messages) to ensure the software responds correctly. Testing DIO? That's easy - there are plenty of USB peripherals or embedded devices to simulate DIO. If timing is important, you'll have to use another embedded device to get the tight tolerances you're looking for; otherwise a PC will do just fine.
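The serial-interface idea - exercise valid messages plus a few deliberately invalid ones - can be sketched as below. The frame layout (start byte, length, payload, XOR checksum) is an assumption made up for the example; substitute your protocol's real framing.

```c
/* Sketch of exercising a serial protocol: build both valid and
   deliberately corrupted frames and check that the parser accepts
   the former and rejects the latter. The frame layout used here
   (start byte, length, payload, XOR checksum) is hypothetical. */
#include <stddef.h>
#include <stdint.h>

#define START_BYTE 0x7E

static uint8_t xor_sum(const uint8_t *p, size_t n) {
    uint8_t s = 0;
    while (n--) s ^= *p++;
    return s;
}

/* Encode: [START][len][payload...][checksum over len+payload]. */
size_t frame_encode(const uint8_t *payload, uint8_t len, uint8_t *out) {
    out[0] = START_BYTE;
    out[1] = len;
    for (uint8_t i = 0; i < len; i++) out[2 + i] = payload[i];
    out[2 + len] = xor_sum(out + 1, (size_t)len + 1);
    return (size_t)len + 3;
}

/* Decode check: returns 0 if the frame is well-formed, -1 otherwise. */
int frame_check(const uint8_t *buf, size_t n) {
    if (n < 3 || buf[0] != START_BYTE) return -1;
    uint8_t len = buf[1];
    if (n != (size_t)len + 3) return -1;
    return xor_sum(buf + 1, (size_t)len + 1) == buf[2 + len] ? 0 : -1;
}
```

The test device then replays encoded frames - and corrupted copies of them - over the wire and checks the unit's responses.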

The important part is to always know what you're testing and not to test anything other than that. If it's software, make sure the test is independent of the hardware to the largest degree possible. If you're testing waveform generation or something with a D/A, separate out the tasks - test the D/A hardware with a special build of software on the embedded device that doesn't do anything fancy except spit out a prearranged sequence of voltage levels. Then you can see if your references are off, if your filters are set to the wrong frequency, etc. Then you should be able to test the software independent of the hardware - use a development board to test the software and verify behavior at the processor pins is correct.
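The "special build that spits out a prearranged sequence of voltage levels" might be as simple as the following sketch. The DAC codes, the table, and the `dac_write` stub are illustrative, not from the original post.

```c
/* Sketch of the special D/A test build: the device does nothing but
   step through a prearranged table of DAC codes so the analog chain
   can be measured against known inputs. Table values and dac_write
   are hypothetical. */
#include <stddef.h>
#include <stdint.h>

/* Evenly spaced codes across a 12-bit DAC's range. */
static const uint16_t dac_sequence[] = { 0, 1024, 2048, 3072, 4095 };
#define SEQ_LEN (sizeof dac_sequence / sizeof dac_sequence[0])

static uint16_t last_code;
static void dac_write(uint16_t code) { last_code = code; }  /* stub */

/* Output step i of the sequence (wrapping); returns the code written. */
uint16_t dac_step(size_t i) {
    uint16_t code = dac_sequence[i % SEQ_LEN];
    dac_write(code);
    return code;
}
```

With known codes going out, a meter on the output reveals reference errors, filter settings, and gain problems without any application logic in the way.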


A solution in use where I work is automated nightly build and test procedure.

  1. Check out trunk head code from source control.
  2. Build project and load onto target.
  3. Run PC controlled automated test scripts.

The test scripts are easy to run if you are using some sort of communication protocol. That's good for internal unit tests. What makes the situation more interesting (and thorough) is to make a wiring harness that plugs into the board to simulate external IO.
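The PC-controlled side of step 3 can be reduced to a sketch like this: run a list of test functions against the freshly loaded target and report pass/fail counts for the nightly log. The test names are invented stand-ins; real ones would talk to the board over the communication protocol.

```c
/* Sketch of a nightly suite runner: execute each test function and
   tally failures, as the PC-side controller might after loading the
   build onto the target. The stand-in tests are hypothetical. */
#include <stddef.h>

typedef int (*test_fn)(void);   /* each test returns 0 on pass */

static int test_comms_echo(void)  { return 0; }  /* stand-in */
static int test_io_loopback(void) { return 0; }  /* stand-in */

/* Returns 0 if everything passed; writes the failure count out. */
int run_suite(const test_fn *tests, size_t n, size_t *failed) {
    size_t f = 0;
    for (size_t i = 0; i < n; i++)
        if (tests[i]() != 0) f++;
    if (failed) *failed = f;
    return f == 0 ? 0 : -1;
}
```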

Emulating is good for development and basic initial testing, but real physical operating time is the only reliable method for system validation. Physical operation can ferret out issues beyond pure logic errors, such as voltage sags, noise, glitches, debounce problems, race conditions, etc.

Prolonged system testing is important as well. Setting up an automated test to abuse a system continuously for days/weeks straight is a good way to force out issues that may not crop up until several months later in the field. Telling a customer to just cycle power whenever things start acting funny is not a luxury that all industries can entertain.


In my experience, automated hardware testing has been critical. Investing in dual compilation for both PC and target is a nice-to-have, but given the choice, I'd much rather invest in automated hardware testing. It'll be the more cost-effective solution in the end, since the manufacturing arm will want/need the capability anyway for failure analysis.
