Computing stuff tied to the physical world

Masking interrupts – or not

As pointed out in the previous article:

  • interrupts can happen between any two instructions

That’s a problem – consider the following pseudo-code:

if (some condition is true, i.e. data has been received)
  get the data
  clear the data (or flag)

This would seem to be the obvious and most logical flow of control, but it suffers from a potential “race condition”: it can fail (very) occasionally. The reason is that between the get and the clear, another interrupt might occur and deliver fresh data. The clear then wipes the flag for data which has never been picked up.

We need a way to prevent interrupts from happening, briefly!

Even a single statement in C/C++ can fail, the classical example being “a = a + 1”, or even “a++”. That’s because such a statement is likely to be compiled to multiple machine instructions (load a, inc a, save a). Again, an interrupt which changes “a” in between the load and the save will cause trouble.

But only rarely: perhaps a few seconds from now, perhaps a few weeks. There is no way to debug this, or even find out about it. Such intermittent bugs can be a nightmare, or worse!

The simple solution is to disable and re-enable interrupts around each critical section:

disable interrupts
if (some condition is true, i.e. data has been received)
  get the data
  clear the data (or flag)
re-enable interrupts

We’ve avoided the race, but we’re also occasionally postponing interrupts a bit. Whether that matters depends entirely on the application. If you’re toggling an I/O pin on each interrupt, driven by a fixed timer, then you’ll see that exact timing is impossible: every so often, the toggling will be a little late, perhaps 1..10 µs. On a 10 kHz toggle, this will be very noticeable.

So by disabling interrupts, however briefly, we’ve introduced something else: jitter.

There are a number of ways to reduce this problem:

  • only disable the interrupt as briefly as possible
  • disable just the one interrupt source that interferes
  • use a spinlock-like approach to overcome a race condition
  • only use test-and-set type instructions, which are atomic

The first two are simplest to implement. The other two are a bit more specialised and may not always be applicable. They won’t be covered further here.

But that’s not all. Jitter is everywhere. Here’s another reason:

  • interrupts happen only between instructions

Every instruction takes some time. Some instructions (such as pushing multiple registers onto the stack) take much more time – several clock cycles.

And even this is not the full story. On an ARM processor, there can be several bus masters, such as a DMA controller, or even a second processor in multi-core architectures. They all need access to the internal bus, be it for flash or RAM memory accesses, or for access to hardware peripherals. As a result, there will be some contention, which is automatically resolved in hardware by very briefly delaying competing masters. There goes another clock cycle. Or two, or three (flash memory access, for example, may need more than one cycle).

This means that predicting precisely when an interrupt will be handled (to within one cycle of the µC’s master clock, at best) is next to impossible.


But the LPC8xx has one more very clever trick up its sleeve: the “IRQLATENCY” register in its SYSCON block. It sets the minimum number of clock cycles the processor takes to respond to an interrupt request.

What this means, for the default value of 16 (on LPC8xx), is that the processor waits at least 16 clock cycles before calling the ISR.

In most cases, the last instruction will have been completed, so the rest of the time will be cycle-by-cycle idling until that minimum is reached. In other words, each interrupt will be handled precisely 16 clock cycles after it occurs, allowing us to write very precise I/O pin toggling code, for example. Except when the instruction takes more than 16 cycles, that is.

If we really want to push it, we could raise that value further, up to 255. Then, even a brief interrupt disable/enable pause could fit into that time period. We’d end up with a more precise way to time interrupts, at the cost of twiddling (the processor’s) thumbs even when there is no need.

Or you could set the value to zero – then the processor will call your ISRs as soon as it possibly can at all times. Maximal responsiveness, but slightly more jitter. You decide.
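As a sketch (using NXP’s LPC8xx vendor header; the register name comes from the LPC81x user manual, the function name is made up), choosing this trade-off is a single register write:

```c
#include "LPC8xx.h"   // NXP vendor header for the LPC8xx family

// Pick the latency trade-off: 0 = respond as soon as possible (minimum
// latency, more jitter), up to 255 = slower but fully deterministic ISR
// entry timing. The reset default on the LPC8xx is 16.
void setIrqLatency (uint32_t cycles) {
    LPC_SYSCON->IRQLATENCY = cycles & 0xFF;   // only the low 8 bits are used
}
```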


One last trick we can play with the NVIC hardware on ARM is to single out one specific interrupt type and turn it into a “Non-Maskable Interrupt” (NMI).

As the name says, this will cause the interrupt to always be serviced, regardless of whether interrupts are masked or not. In combination with a high IRQLATENCY setting, you’ll get the most predictable / deterministic behaviour ever.
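On the LPC8xx, routing an interrupt source to the NMI is done through the SYSCON “NMISRC” register: the low bits select the IRQ number, and bit 31 enables the routing (register and bit layout taken from the LPC81x user manual; the function name below is made up):

```c
#include "LPC8xx.h"   // NXP vendor header for the LPC8xx family

#define NMI_ENABLE (1UL << 31)      // bit 31 of NMISRC turns the routing on

// Promote one interrupt source (by IRQ number) to the non-maskable interrupt.
void promoteToNmi (uint32_t irqNum) {
    LPC_SYSCON->NMISRC = NMI_ENABLE | (irqNum & 0x1F);
}

void NMI_Handler (void) {
    // services the chosen interrupt, even while regular IRQs are masked
}
```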

It’s rare to have to go this far – there are simply limits to what a µC can do in terms of responsiveness. Traditionally, non-maskable interrupts have been used to respond to power-loss triggers from a UPS, forcing the processor to save its state as soon as possible and then shut down cleanly, before power loss causes it to do so in a totally unpredictable way.

Let’s also keep the time scale in mind: on a lowly LPC8xx, the clock usually runs at 12 MHz or 30 MHz. At 12 MHz, a clock cycle is just over 80 nanoseconds, so 16 of those is still less than 1.5 µs.

[Back to article index]