
Level interrupts

In Hardware, Software on Jun 26, 2012 at 00:01

The ATmega’s pin-change interrupt has been nagging at me for some time. It’s a tricky beast, and I’d like to understand it well to try and figure out an issue I’m having with it in the RF12 library.

Interrupts are the interface between real world events and software. The idea is simple: instead of constantly having to poll whether an input signal changes, or some other real-world event occurs (such as a hardware count-down timer reaching zero), we want the processor to “somehow” detect that event and run some code for us.

Such code is called an Interrupt Service Routine (ISR).
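
With avr-gcc, for example, an ISR is declared with the ISR() macro and the name of the interrupt vector it services. A minimal sketch – the timer overflow vector and the tick counter are just illustrations, not code from the RF12 library:

    #include <avr/interrupt.h>

    volatile uint16_t ticks;    // shared with non-interrupt code

    // Runs each time Timer 1 overflows - the compiler generates the
    // register save/restore code and the final RETI instruction for us.
    ISR(TIMER1_OVF_vect) {
        ++ticks;
    }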

The mechanism is very useful, for two reasons: it’s an effective way to reduce power consumption – go to sleep, and let an interrupt wake the processor up again – and it means we don’t have to keep checking for the event all the time.
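
The sleep side of this looks roughly as follows with avr-libc’s sleep routines (a sketch – the helper name is made up, and deeper sleep modes restrict which interrupts can wake the chip):

    #include <avr/sleep.h>
    #include <avr/interrupt.h>

    void snoozeUntilInterrupt () {
        set_sleep_mode(SLEEP_MODE_IDLE);    // deeper modes save more power
        sleep_enable();
        sei();              // the wake-up interrupt must be allowed to fire
        sleep_cpu();        // execution stops here until an interrupt occurs
        sleep_disable();    // back to normal, full-power operation
    }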

It’s also extremely hard to do these things right, because – again – the ISR can be triggered at any time. Sometimes, we really don’t want interrupts to get in our way – think of timing loops, based on the execution of a carefully chosen number of instructions. Or when we’re messing with data which is also used by the ISR – for example: if the ISR adds an element to a software queue, and we want to remove that element later on.

The solution is to “disable” interrupts, briefly. This is what “cli()” and “sei()” do: clear the “interrupt enable” and set it again – note the double negation: cli() prevents interrupts from being serviced, i.e. an ISR from being run.
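
Here’s a sketch of the queue example above – the queue itself is made up, but the cli() .. sei() bracketing is the essential part:

    #include <avr/interrupt.h>

    volatile uint8_t queue[16];     // elements get added by an ISR
    volatile uint8_t queueFill;     // number of elements in the queue

    // Remove the oldest element (assumes queueFill > 0) - this must not
    // be interrupted halfway, or the ISR could see a mangled queue.
    uint8_t pullFromQueue () {
        cli();                      // the ISR cannot run now
        uint8_t value = queue[0];
        for (uint8_t i = 1; i < queueFill; ++i)
            queue[i-1] = queue[i];  // shift the rest down one slot
        --queueFill;
        sei();                      // the ISR may run again, perhaps at once
        return value;
    }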

But this is where it starts to get hairy. Usually we just want to prevent an interrupt from happening right now – we still want it to happen, just a bit later. And this is where level interrupts and edge interrupts differ.

A level interrupt triggers as long as an I/O signal has a certain level (0 or 1), and works as follows:

[Hand-drawn timing diagram, from JC’s Grid, page 22: a level interrupt signal annotated with points (1) to (4)]

Here’s what happens at each of those 4 points in time:

  1. an external event triggers the interrupt by changing a signal (it’s usually pulled low, by convention)
  2. the processor detects this and starts the ISR, as soon as the currently executing instruction finishes
  3. the ISR must clear the source of the interrupt in some way, which causes the signal to go high again
  4. finally, the ISR returns, after which the processor resumes what it had been doing before
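
To make this a bit more concrete, here’s a hypothetical AVR example: INT0 set up as a level interrupt, with the ISR responsible for performing whatever device access makes the signal go high again (step 3):

    #include <avr/io.h>
    #include <avr/interrupt.h>

    // Trigger INT0 as long as its pin is held low (ISC01:ISC00 = 00).
    void setupLevelInterrupt () {
        EICRA &= ~ (_BV(ISC01) | _BV(ISC00));
        EIMSK |= _BV(INT0);     // enable INT0
        sei();                  // enable interrupts globally
    }

    ISR(INT0_vect) {
        // step 3: clear the source of the interrupt, e.g. by reading the
        // device's status register, so it releases the line before we return
    }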

The delay from (1) to (2) is called the interrupt latency. This value can be extremely important, because the worst case determines how quickly our system responds to external interrupts. In the case of the RFM12B wireless module, for example, and the way it is normally set up by the RF12 code, we need to make sure that the latency remains under 160 µs – roughly the time it takes to transfer one byte at the standard 49.2 kbps data rate. The ISR must be called within 160 µs – always! – or else we lose data being sent or received.

The beauty of level interrupts is that they can deal with occasional cli() .. sei() interrupt-disabled intervals. If interrupts are disabled when (1) happens, then (2) will not be started. Instead, (2) will be started the moment we call sei() to enable interrupts again. It’s quite normal to see interrupts being serviced right after they are enabled!

The thing about these external events is that they can happen at the most awkward time. In fact, take it from me that such events will happen at the worst possible time – occasionally. It’s essential to think all the cases through.

For example: what happens if an interrupt were to occur while an ISR is currently running?

There are many tricky details. For one, an ISR tends to require quite a bit of stack space, because that’s where it saves the state of the running system when it starts, and from where that state is restored when it returns. If we supported nested interrupts, then stack space requirements would at least double, and could easily grow beyond the small amount of RAM available in a microcontroller such as an ATmega or ATtiny.

This is one reason why the processor logic which starts an ISR also disables further interrupts, and re-enables them again after returning (on the AVR, the RETI instruction at the end of each ISR takes care of that). So normally, no other ISRs can run while an ISR is active: there is no nested interrupt handling.

Tomorrow I’ll describe how multiple triggers can mess things up for the other type of hardware interrupt, called an edge interrupt – this is the type used by the ATmega’s (and ATtiny’s) “pin-change interrupt” mechanism.

  1. You are not kidding about the way interrupts happen at the most awkward times – occasionally. One of the most complex (yet small) bits of code I ever worked on was replacing a shared-memory 6502 coprocessor which did asynchronous intelligent floppy disk IO (and a custom network link which sorta looked like a floppy disk) for the main system 6800 which did word processing – with a single 6801 doing both and running twice as fast (2 MHz, whee!). It required using both regular interrupts and NMI – non-maskable interrupts – on the 6801, and especially considering the latter, getting all the possible interactions handled was seriously tricky. It was partly lots of examination and logical thinking about the code – and partly lots of testing with injected network errors and processing loads to find the unanticipated cracks that only happened “occasionally”.

    It was the kind of code that even with some design documentation deserves the infamous “don’t change anything without studying this for a week” caution.

    Obviously it violates any modern programming paradigm, but we did some amazing things with tiny processors and slow floppy disks, because that was what we had. (All in tight assembly of course).

    Sorry, just reminiscing. I look forward to your articles on this in the modern uC context, now that I’m getting back to some low level programming again after a long time floating in the clouds of abstraction. (As you can guess from the processors, that was a long time ago).

  2. Abated breath till it be morrow.

  3. Hi,

    isn’t it a bit more complicated? If I understand the ATmega’s (and ATtiny’s) interrupts correctly – and I believe I do – you missed one step in your timeline.

    The interrupt is not fired when the event happens, but when the interrupt flag is set and interrupts (SREG.I and the corresponding interrupt mask) are enabled. Look at the EIFR register for the level interrupts (PCIFR for pin-change interrupts).

    The hardware automatically clears the flag when the ISR is entered. But the flag is set by the proper event even in cases when interrupts are disabled (and you can even manipulate the flags from your program, to use interrupt vectors for watching something else if you want).

    So even if interrupts were disabled when the event happened, the ISR will still get fired afterwards, because the flag got set.

    I suspect the problem with PCINT you encountered is that the pin-change flag gets set, but you do not know the value it changed to… and when the delayed interrupt is finally executed, you read the wrong value.

    • Hi MarSik,

      An interesting post, thank you. I take it that during the delay of the interrupt only one pin change event can be held pending. If multiple pin change events occur, is the first or the latest handled when the delay ends? Is there a flag indicating there was an overflow?

    • Actually with Pin Changes there is no additional information stored with the interrupt.

      So it does not matter how many changes happened on the configured set of pins: it will just set a flag signaling an interrupt request, to notify you that some pins have changed state (and it does not matter which way or how many times).

      In the ATmega328 you have three different sets of PCINTs, one for each port. So you can distinguish three groups of pins easily, but still without knowing the change direction.

  4. Just to expand on “(INTn) is usually pulled low, by convention”….

    This mechanism is in place to allow multiple interrupt sources to share the same interrupt request line – with a pull-up resistor and negative logic, source A or source B or source C etc. can each pull the signal low. This is referred to as ‘wired-OR’ logic.

    This provides a cheap mechanism to expand the number of interrupt sources visible, but it does assume there is a latched status register (or similar) available to find out who shouted for attention.

    The interrupt mechanism is expensive on resources, especially if much of the state save required is done by the interrupt service code. Some architectures aimed at more deterministic, real-time processing have clearly-defined interrupt levels with a full set of registers at each level – context switching then becomes comparable to single instruction execution times.
