Interrupt service times

Jitter is bad, but it often stays under a few microseconds and is usually not something to worry about. Interrupt latency can be more serious: it builds up when interrupts stay disabled for long stretches, and when multiple interrupts pile up behind each other.

Take an example with both a fast SPI chip and a fast serial connection: say we'd like to offer serial access to a large dataflash memory chip. Let's assume that both the serial and the SPI hardware are interrupt-driven, and that the drivers briefly disable interrupts as part of their processing, to avoid potential race conditions.
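
As a concrete illustration, here is a minimal sketch of such a critical section, using the standard CMSIS interrupt intrinsics (the buffer and function names are invented for this example):

    #include "LPC8xx.h"     // NXP's device header, pulls in the CMSIS intrinsics

    // Hypothetical receive buffer, shared between an ISR and the main code.
    static volatile uint8_t rxBuf [32];
    static volatile uint8_t rxFill;

    // Take one byte out of the buffer, or return -1 if it is empty. Interrupts
    // are masked briefly so the ISR cannot change rxFill while we use it.
    int takeByte () {
        __disable_irq();                // start of the critical section
        int result = -1;
        if (rxFill > 0) {
            rxFill = rxFill - 1;        // consume the newest byte (LIFO, for brevity)
            result = rxBuf[rxFill];
        }
        __enable_irq();                 // end of the critical section
        return result;
    }

The crucial detail is to keep that interrupts-disabled stretch as short as possible, for reasons which will become clear shortly.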

Both devices may need to respond quickly to incoming data, to avoid losing bytes and causing “overruns”. For a serial port running at 115,200 baud, there is only about 87 µs between incoming characters in the worst case. The SPI bus may well run at similar rates.
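
As a quick sanity check on that number, assuming the common “8N1” framing of one start bit, eight data bits, and one stop bit per character:

    // Worst-case spacing of back-to-back characters at 115,200 baud:
    constexpr float usPerBit  = 1e6f / 115200;    // about 8.68 µs per bit
    constexpr float usPerChar = 10 * usPerBit;    // about 86.8 µs per character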

Let’s make things still worse, and assume that we also need to drive an I/O pin in software, from a timer running at 10 kHz. That means one interrupt every 100 µs.
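
In code, this could look roughly as follows, using the SysTick timer and the GPIO register names from the LPC8xx user manual (the pin choice is arbitrary):

    #include "LPC8xx.h"

    constexpr int PIN = 7;                  // hypothetical output pin, PIO0_7

    extern "C" void SysTick_Handler () {
        LPC_GPIO_PORT->NOT0 = 1 << PIN;     // toggle the pin with a single write
    }

    int main () {
        LPC_SYSCON->SYSAHBCLKCTRL |= 1 << 6;        // GPIO clock on (the default, actually)
        LPC_GPIO_PORT->DIR0 |= 1 << PIN;            // make the pin an output
        SysTick_Config(SystemCoreClock / 10000);    // 10 kHz, i.e. every 100 µs
        while (true) {}                             // all the work happens in the ISR
    }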

None of these specs exceeds what an LPC8xx can handle, probably even when running at just 12 MHz. But the problem is that interrupts cause overhead, and that these interrupts can happen independently, in any order.

One source of overhead comes from the nature of interrupts: no matter what the processor is doing, it needs to save its state and turn its attention to the interrupt. As described previously, this requires pushing a number of register values onto the stack, i.e. to RAM (on the LPC8xx’s ARM Cortex-M0+ core, the hardware stacks eight registers on every interrupt entry).

Similarly, returning to the interrupted code requires popping all those registers off the stack again. So for a single interrupt, we end up with the following overhead:

  • completion of the current machine instruction
  • perhaps overlapping with it: the minimum latency set in the LPC8xx’s IRQLATENCY register
  • push the current state onto the stack
  • fetch the address of the ISR and jump to it
  • do whatever the ISR is supposed to be doing…
  • return from the ISR
  • pop the previous state from the stack
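
One way to get a feel for these overhead figures is to bracket the ISR body with a pin toggle and look at the result on an oscilloscope. A rough sketch, with a made-up debug pin:

    #include "LPC8xx.h"

    constexpr int DEBUG_PIN = 1;                // hypothetical scope probe pin

    extern "C" void SPI0_IRQHandler () {
        LPC_GPIO_PORT->SET0 = 1 << DEBUG_PIN;   // pin high: we are inside the ISR
        // ... the real SPI handling would go here ...
        LPC_GPIO_PORT->CLR0 = 1 << DEBUG_PIN;   // pin low: about to return
    }

The delay between the interrupt trigger and the rising edge exposes the entry overhead, and the width of the high pulse shows how long the ISR body takes.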

The whole sequence could easily take 5..50 µs for a single interrupt, once we include the time each ISR spends doing real work. Now consider that there are several different interrupt sources, all firing at almost the same time, and that by unfortunate coincidence each next interrupt has a higher priority than the last:

  • ISR1 gets started
  • oops, ISR2 happens, suspends ISR1, and gets started
  • dang, we just got the ISR3 interrupt, it starts, does its thing, and returns
  • now ISR2 runs to completion
  • and finally, ISR1 gets a chance to complete

But what if that ISR1 code hasn’t yet had a chance to save its incoming data? Maybe a second value has already arrived by the time it gets around to it! We’re in trouble…
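
The standard defence is to have the ISR grab its incoming data first, before doing anything else, and hand it off through a small buffer. A minimal sketch, again with register names as in the LPC8xx user manual and a made-up ring buffer:

    #include "LPC8xx.h"

    // A tiny ring buffer: the ISR fills it, the main loop drains it.
    static volatile uint8_t ring [16];
    static volatile uint8_t head, tail;

    extern "C" void UART0_IRQHandler () {
        uint8_t b = LPC_USART0->RXDAT;          // read the byte right away, so
                                                // the next one can come in
        uint8_t next = (head + 1) % sizeof ring;
        if (next != tail) {                     // when full, drop the byte
            ring[head] = b;
            head = next;
        }
    }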

Obviously, this is somewhat contrived, and the chance of this happening is extremely low. But it can happen, and depending on various external conditions, it might crash our application, lose incoming information, write incomplete data, or mess up a calculation.

If we now add a few interrupt disable/enable sections to our code, the worst-case timings get longer still. That leaves us in even more trouble: our app worked “flawlessly” until now, and all of a sudden a small and unrelated change causes it to fail? Occasionally? Yikes!

Conclusions we can draw from this:

  • keep all ISR code as short and fast as possible
  • don’t even think of adding any delay loops inside ISRs
  • be careful how you assign the different interrupt priorities (see the sketch after this list)
  • make good estimates of worst-case and compound interrupt service times
  • also do the same for stack use (nested interrupts need more stack space!)
  • build and test interrupt-based features in isolation
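
To make the priority point concrete: the Cortex-M0+ core in the LPC8xx supports four priority levels, 0 (highest) to 3, assigned through the standard CMSIS calls. The ordering below is just one plausible choice for the scenario above:

    #include "LPC8xx.h"

    void setupPriorities () {
        NVIC_SetPriority(UART0_IRQn, 0);    // serial first, overruns hurt most
        NVIC_SetPriority(SPI0_IRQn, 1);     // then the SPI transfers
        NVIC_SetPriority(SysTick_IRQn, 2);  // the 10 kHz pin toggle can wait a bit
    }

Whether the serial port really should outrank SPI depends on the actual data rates, which is exactly why those worst-case estimates matter.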

And perhaps the easiest way to benefit from interrupts without losing any sleep over them: keep your interrupt use limited and, whenever possible, based on proven library code. The reasoning behind this last guideline is that solid interrupt code can be written, and that (given enough time and wide use) its race condition bugs will eventually be found and fixed.
