Computing stuff tied to the physical world

Seeing glitches

In Hardware on Jan 20, 2012 at 00:01

There’s one dirty little secret about most digital storage oscilloscopes (DSOs) which puts them way behind the capabilities of their analog brethren: screen update rate.

For a scope to present a trace on screen, it has to do a lot of signal processing. After all, 1 GSa/s means it's acquiring 1 billion samples per second. And although we're not able to really see that with our eyes, we do see things that stay on the screen for a few milliseconds.

One trick is to turn on persistence. On analog scopes, that's a pretty nifty feature whereby the image generated by the beam hitting the screen is made to stay visible for a while: either via a phosphor coating which keeps on glowing, or by constantly re-sending the same beam pattern to keep the visual display going.

Digital scopes can simulate this. All they need to do is leave the pixels on their LCD display as is, right?

Unfortunately, it's a lot more complicated than that. If you just refresh the screen a few times a second, then you're still going to miss a huge amount of detail. Let's take a periodic 1 MHz signal: to display everything measured, the display needs to be updated 1,000,000 times per second, which means each “sweep” would only remain visible for 1 µs. That won't do – our slow eyes wouldn't see a darn thing, especially glitches which occur very infrequently. So what a DSO does is simulate persistence by merging new sweeps into what's being shown, and then taking old sweeps out of the picture a bit later (try implementing that – efficiently!).
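Just to illustrate the idea, here is a minimal sketch in plain C++ – not the scope's actual algorithm; the per-column intensity buffer and the halving decay step are assumptions:

```cpp
#include <array>
#include <cstdint>

// Emulated persistence, sketched: each screen column keeps an 8-bit
// intensity. New sweeps are merged in at full brightness, and a periodic
// decay step fades older sweeps out, mimicking phosphor afterglow.
constexpr int WIDTH = 8;  // a tiny "screen", just for demonstration

struct PersistenceBuffer {
    std::array<std::uint8_t, WIDTH> intensity{};  // 0 = dark, 255 = fresh

    // Merge one sweep: every column the trace touched goes to full intensity.
    void mergeSweep(const std::array<bool, WIDTH>& sweep) {
        for (int x = 0; x < WIDTH; ++x)
            if (sweep[x])
                intensity[x] = 255;
    }

    // Called at the display refresh rate: halve all intensities, so old
    // sweeps fade gradually instead of vanishing after their 1 µs of fame.
    void decay() {
        for (auto& v : intensity)
            v /= 2;
    }
};
```

Doing this thousands of times per second, over a full screen and with proper fading curves, is exactly the kind of processing load that limits the update rate.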

To test this stuff, I used the following sketch (which is based on this pin flipping trick):

[Screenshot of the sketch]

It generates pulses at ≈ 1 MHz, but once every 10,000 pulses it messes up the timing. Note that interrupts are turned off, so this thing is as stable as the clock it's running on (a JeeNode with a 16 MHz resonator in this case).
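Since the sketch only appears as a screenshot, here is a guess at its logic, written as a host-side C++ simulation – on the JeeNode the toggles would be direct PIND writes (the pin flipping trick), while here a plain bool stands in for the pin:

```cpp
#include <vector>

// Hypothetical reconstruction of the glitch generator's logic: toggle an
// output pin as fast as possible, but disturb the timing of every
// 10,000th pulse. Returns the pulse indices where glitches occur.
std::vector<long> glitchPositions(long pulses) {
    bool pin = false;   // stands in for the PD4 output pin
    long count = 0;
    std::vector<long> glitches;
    for (long i = 0; i < pulses; ++i) {
        pin = !pin;     // one edge - back to back, this runs at ~1 MHz
        pin = !pin;     // second edge completes the pulse
        if (++count >= 10000) {   // every 10,000th pulse: mess up the
            count = 0;            // timing, producing the visible glitch
            glitches.push_back(i);
        }
    }
    return glitches;
}
```

At ≈ 1 MHz that works out to 100 disturbed pulses per second, which is what the rest of this post is chasing.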

One glitch every 10,000 pulses at ≈ 1 MHz means there are 100 glitches per second. That should be constantly visible on the screen, right? Well… not so on my scope. I enabled 10s persistence, i.e. sweeps will fade after 10s:


That’s two glitches (the actual display is a bit erratic, but the position of the glitch pulses is absolutely repeatable).

IOW, this scope is showing only one out of every 500 pulses on the screen, on average!

This is why DSO manufacturers report a “waveform/sec” value in their specs. In this case, we are getting 2,000 waveforms per second, even though a scope could theoretically sweep 500,000 times per second on this signal.
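The ratios involved are easy to check, using the figures from the text:

```cpp
// 2,000 captured waveforms/s out of a 1,000,000 pulses/s train means the
// scope displays only 1 pulse in 500. Of the 100 glitches generated each
// second, a mere 0.2 are then expected to make it onto the screen:
// one every five seconds, on average.
constexpr long PULSES_PER_SEC    = 1000000;  // the ~1 MHz pulse train
constexpr long WAVEFORMS_PER_SEC = 2000;     // what the scope achieves
constexpr long GLITCHES_PER_SEC  = 100;      // 1 glitch per 10,000 pulses

constexpr long pulsesPerCapture = PULSES_PER_SEC / WAVEFORMS_PER_SEC;  // 500
constexpr double glitchesShownPerSec =
    double(GLITCHES_PER_SEC) * WAVEFORMS_PER_SEC / PULSES_PER_SEC;     // 0.2
```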

An analog scope would have no problem whatsoever with this. To show the same two pulses in each sweep, it could trigger a new sweep 500,000 times per second, or maybe even “only” 100,000 but that would still show 10 glitches per second, i.e. a nearly constant visual display of the glitches. Fifty times more often than my DSO, anyway.

That 2,000 wf/s figure is an optimistic one, BTW. With more channels enabled, or things like signal stats calculated or the math function applied, this value can drop substantially. Meaning you'll see even fewer glitches.

What this tells me is that the Hameg needs about 500 µs of processing time to get 1 channel of acquired data onto the screen in its most basic form.

Don’t get me wrong: technically that’s an astonishing achievement. It’s not just copying a few bytes and setting a few pixels. It’s (always!) performing the emulated-persistence trick – because with persistence turned off, I still see a glitch every few seconds, which means that the LCD display is showing the glitch for a substantial fraction of a second – much longer than the 1 µs (or even 500 µs) sweep rate would suggest. Besides, you can see that fading is also implemented, so it’s not just drawing pixels but actually fooling around with color intensities.

There are some scopes which get much higher waveform/sec rates, such as the Agilent 2000X (50,000 wf/s) and the Agilent 3000X (1,000,000 wf/s) series. But they are twice the price, and even have lower specs in other areas.

Does it matter? Not for me. Being aware of this is good, though – far more important than fixing it. Note also that IF – that’s a very big if – you know what you’re looking for, then there are usually other ways to find these things. In this case, triggering on a negative pulse width under 900 ns captures all those glitchy pulses, for example.

As with so much in life, it's all about trade-offs: analog vs. digital, brands, and of course… your needs and budget.

  1. This is a good example of “specmanship”. It is not actually possible to determine the probability of capturing rare, low-rate repetitive events such as glitches simply from the update rate. The key parameter is the hold-off time: after a trigger event, the scope must process the most recent data and then set up for the next trigger – during this time, the scope is blind to any input activity.

    In this area, though analog scopes also have a hold-off delay, clever use of persistence and trace brightness can make visible events that DSOs will miss. I recall my surprise at hearing a hoot of triumph from under a large fur parka in the corner of the lab – the hot but triumphant CPU designer emerged from his improvised light tent around the best (and ancient) Tektronix persistence scope, having found an elusive microcode-induced glitch.

  2. I don’t understand the sketch at all. From what I can tell, you’re constantly trying to set bit 4 of PIND – which according to the datasheet is read-only?

    • Ah, sorry – yes, I can see how that might be deeply confusing. Setting PIND is a way to toggle an output pin – in fact it’s the fastest way to do so. See this weblog post.
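To make that reply concrete: on the ATmega used in a JeeNode, writing a 1 to a bit of a PINx register toggles the corresponding PORTx output bit (documented in the datasheet's I/O-ports section). Modeled here with plain variables, since the real registers only exist on the chip:

```cpp
#include <cstdint>

// Simulation of the AVR pin-toggle trick: a write to PINx XORs the written
// mask into PORTx, so repeatedly writing bit 4 flips the pin as fast as
// the CPU can issue the store - no read-modify-write needed.
struct AvrPort {
    std::uint8_t PORT = 0;                // the output latch (e.g. PORTD)
    void writePIN(std::uint8_t mask) {    // "PIND = mask" on real hardware
        PORT ^= mask;                     // each 1 bit toggles an output
    }
};
```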

Comments are closed.