Computing stuff tied to the physical world

Interrupts on ARM

Most processor families have a mechanism to handle “hardware interrupts”. The ARM Cortex series is no exception – it has surprisingly sophisticated support for them, in fact.

But first: what is an interrupt?

You can think of a µC / CPU as performing the following task, millions of times per second:

  • fetch the next instruction from memory
  • advance the instruction pointer
  • perform whatever that instruction says
  • rinse and repeat

This is all nice and great, but it completely ignores the outside world. If this were all we had, then we’d have to constantly check every hardware peripheral to see whether there is anything we need to take care of: serial input or output, some amount of clock time having passed, a digital or analog pin change, etc. Given the many possible sources of occasional work, we’d waste most of our time on all that checking. And with fast I/O, we’d have to check very often!

Here’s what an interrupt-aware CPU does instead:

  • if there’s an interrupt event, process it
  • fetch the next instruction from memory
  • advance the instruction pointer
  • perform whatever that instruction says
  • rinse and repeat

So in a way, interrupts are nothing but lots of checking, done in hardware. At the start of every new instruction, the processor does the checking for us. Except that now, it’s virtually instant (and free): all the interrupt signals are OR-ed together to generate a single logic level – when set, there’s an interrupt “pending”, and the processor will divert its attention.

A couple of key points here:

  • interrupts happen only between instructions
  • interrupts can happen between any two instructions
  • interrupts add overhead, but only when they actually happen

It seems so simple, but as you will see later on, this can cause all sorts of trouble.

An important note in the context of embedded microcontrollers: hardware interrupts are also essential for “waking up” a µC when it’s in some low-power / power-down mode.

The stack

What a processor needs to do to process – or “service” – an interrupt is not trivial: it was doing something, with all sorts of context in its registers. And now it needs to somehow suspend that work, take care of the interrupt, and then resume what it was doing before.

To further complicate this: we’d like to be able to write the interrupt code (Interrupt Service Routine, or ISR) in a higher-level language such as C or C++, not just assembly.

This is where the hardware stack plays an essential role: when an interrupt is about to be serviced, all of the processor’s state (i.e. its registers, including the instruction pointer) is pushed onto the stack, the instruction pointer is changed to the address of the ISR code, and then… the above loop is simply resumed!

The effect is that all of a sudden, the CPU starts running the ISR code. At the end of that code is a special “return from interrupt” instruction, which pops all the saved registers from the stack, and then again resumes the above loop. We’re back where we were before!

There are several clever optimisations to this important mechanism on ARM, such as saving only a subset of the registers in hardware and restoring them automatically on return. This allows ARM chips to efficiently support C/C++ code without any special interrupt entry/exit instructions. But the essential model and mechanism remain unaffected.

Each interrupt routine eats up some stack space when started and gives it back when done.

Interrupt vectors

Hardware interrupts can be used for lots of different purposes. Some interrupts might occur extremely often (many tens of thousands of times per second) – in the case of fast peripherals, such as SPI, or a serial port set to a very high baud rate. This in itself is fine, but we need to be careful with overhead. Interrupts “eat away” processor time from the main processing task – if we’re not careful, we could end up consuming more time than the processor has (and cause it to totally lock up).

Then again, some interrupts are very infrequent, at least on a µC’s MHz performance scale.

The solution to get high performance is called “interrupt vectoring” and causes each type of interrupt to jump to a different ISR. The first words in flash memory are reserved precisely for this purpose, and are normally set up to hold the addresses of each of the ISRs.

This means that when any specific ISR is called, it will know exactly what just happened and can do whatever is needed (e.g. copy some data in or out), and then return quickly.

Actual use

The result of all this is truly magnificent. All we need to do is define one or more interrupt handlers in our own code – there are a few dozen different ones, with pre-defined names.

As an example, here is a “SysTick” ISR, which gets called when the dedicated “system tick” hardware timer fires – this is usually set up to happen every 1, 10, or 100 milliseconds and can then be used to keep track of (real) time:

    extern "C" void SysTick_Handler () {
        ++ticks;    // e.g. increment a volatile counter variable
    }

The extern "C" is needed here, because these special functions are declared in C, not C++, as part of the runtime setup. The name needs to match exactly to override the default handler.

To actually cause the interrupt to happen requires some additional setup, which is different for each type of hardware. In the case of SysTick, it’s all wrapped up in some library code:
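With the standard CMSIS core functions, for instance, the whole setup might be a single call (a sketch, assuming a CMSIS-style environment):

```c
// SysTick_Config() is part of the standard CMSIS core headers: it sets
// the reload value, enables the counter, and enables its interrupt.
SysTick_Config(12000);      // at 12 MHz: 12,000 counts = 1000 Hz
```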


The argument value should be the SysTick count. In this example, when running the µC clock at 12 MHz, the above will fire every 12,000 counts, which is 1000 times per second.


Every ARM Cortex chip has a “Nested Vectored Interrupt Controller” (NVIC), which does even more than what has been described so far: you can also set interrupt priorities (i.e. when an interrupt happens while another one is being serviced), mask/unmask interrupts, and define a non-maskable interrupt. More about this later.
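A taste of the NVIC, again via standard CMSIS calls – “UART0_IRQn” is a hypothetical interrupt number, the real name depends on the chip:

```c
NVIC_SetPriority(UART0_IRQn, 2);  // lower value = higher priority
NVIC_EnableIRQ(UART0_IRQn);       // unmask this interrupt source
```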

As a result, an ARM-based µC can be set up to handle a wide variety of interrupt sources and service each of them differently, with a minimal amount of overhead. Dealing with 50,000 hardware interrupts per second is no big deal, if the work for each one is limited.

The key is to keep in mind that an ISR runs in “interrupt context” and in borrowed time. Normally, you should only do the minimum needed to take care of time-critical tasks, such as extracting received data or sending out new data, to keep communication going, etc. Everything else can be done in the main application loop, in the order which makes sense.
