Computing stuff tied to the physical world

Archive for May 2013

Yes, we CAN bus

In Hardware on May 31, 2013 at 00:01

The CAN Bus is a very interesting wired bus design, coming from the automobile industry (and probably built into every European car made today). It’s a bus with an ingenious design, avoiding bus collisions and supporting a good level of real-time responsiveness.

I’ve been intrigued by this for quite some time, and decided to dive in a bit.

There are several interesting design choices in CAN bus:

  • it’s all low-voltage: 0..5V (or even 0..3.3V) is all it takes on each connected node
  • the bus is linear, reaching from 40 m @ 1 Mbit/s to 500 m @ 125 kbit/s, or even longer
  • signalling is based on voltage between two wires, and terminated by 120 Ω on each end
  • signals are self-clocked, with bit-stuffing to insert bit-transitions when needed

But the three most surprising aspects of the CAN bus design are probably the following:

  • the design is such that collisions cannot happen: one of the two senders always wins
  • each CAN bus packet can have at most 8 bytes of data (and is CRC-checked)
  • as described recently, messages have no destination, but only a message ID (type)

What’s also interesting is that – like I2C – this protocol tends to be fully implemented in hardware, and is included in all sorts of (usually ARM-based) microcontrollers. But unlike UARTs, RS485, I2C, and SPI, you simply get complete and valid packets in and out of the peripheral. No need to deal with framing, CRC checking, or timing decisions.

You can almost feel the car-like real-time nature of these design trade-offs:

  • short packets – always! – so the bus is released very quickly, and very often
  • no collisions, i.e. no degradation in bus use and wasted retransmits as it gets busier
  • built-in prioritisation, so specific streams can be sent across with controlled latencies
  • with a 15-bit CRC on each 0..8 byte packet, chances of an undetected error are slim

Since my scope includes hardware CAN bus decoding, I decided to try it out:

SCR37

The message has an ID of 0x101 (message IDs are either 11 or 29 bits), eight bytes of data (0xAA55AA55FF00FF00), and a CRC checksum of 0x1E32. I’m using a 500 kHz bit clock.

If you look closely, you can see that there are never more than 5 identical bits in a row. That’s what bit-stuffing does: insert an opposite bit to avoid longer stretches of identical bits, as this greatly helps deduce exact timings from an incoming bit-stream.
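
Just to illustrate the rule (this is a sketch of the mechanism only, not code from any CAN driver): the transmitter inserts a bit of opposite polarity after every run of five identical bits, and the receiver strips those stuff bits out again before interpreting the frame.

// Sketch of CAN-style bit-stuffing: writes the stuffed bit-stream into "out"
// and returns the number of bits produced (out must hold up to len + len/4 bits).
int stuffBits (const bool* in, int len, bool* out) {
  int n = 0, run = 0;
  bool last = false;
  for (int i = 0; i < len; ++i) {
    bool b = in[i];
    if (run > 0 && b == last)
      ++run;
    else {
      last = b;
      run = 1;
    }
    out[n++] = b;
    if (run == 5) {     // five identical bits in a row...
      out[n++] = !b;    // ...so insert one bit of opposite polarity
      last = !b;        // the stuff bit starts the next run
      run = 1;
    }
  }
  return n;
}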

It seems crazy to limit packets to just 8 bytes – what could possibly be done with that, without wasting it all on counters and offsets to send perhaps 4 bytes of real data in each packet? As it turns out, it really isn’t so limiting – it just takes a somewhat different mindset. And the big gain is that multiple information streams end up getting interleaved very naturally. As long as each of them is reasonable, that is: don’t expect to get more than 2 or 3 data streams across a 1 Mb/s bus, each perhaps no more than 100 kb/s. Then again, you can expect these to arrive within a very consistent and predictable time, regardless of what other lower-priority burst traffic is going on.
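
As a concrete (and entirely made-up) illustration of that mindset: a complete sensor readout fits in a single 8-byte frame, because the message ID already says what the data means – no addresses, counters, or offsets required. Something along these lines:

#include <stdint.h>

// Hypothetical sensor report, packed to fit exactly into one 8-byte CAN frame.
// The 11-bit message ID (say 0x101) identifies what this is - no addresses needed.
struct SensorReport {
  uint32_t uptimeMs;    // 4 bytes: sender uptime, handy for ordering and rate checks
  int16_t  tempTenths;  // 2 bytes: temperature in 0.1 °C steps
  uint8_t  humidity;    // 1 byte : relative humidity, 0..100 %
  uint8_t  battery;     // 1 byte : battery level, in 0.05 V steps
};                      // total: 8 bytes, i.e. one full CAN payload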

Neat stuff…

Energy Hack June 15 in Berlin

In News on May 30, 2013 at 00:01

For the German readers… I got this announcement, which might be of interest to you:

[…] It will be a day of hacking the open energy data by Berlin’s main energy provider and exploring hardware hacks to manage energy consumption in one’s own household. You can find all the necessary info on www.energyhack.de.

The day will be mostly in German, but since the topic is global and all Germans speak English, we hope to make everyone feel at home.


Some more details, translated from the original German announcement:

On Saturday, June 15th, the Open Knowledge Foundation Deutschland e.V. is organising a hackday in Berlin on the theme “Energy of the Future”, to which you are all warmly invited.

The aim of the energy hackday is to make the abstract topics of electricity supply and consumption more tangible, and to encourage consumers to use energy more efficiently. Together with programmers, software developers, and open data enthusiasts, we want to experiment with the data of the Berlin electricity grid and build applications of all kinds. We are especially keen on hardware hacks which encourage saving electricity, or which engage with the topic of energy in some other way.

Available data includes real-time load and generation figures for Berlin, as well as average household consumption data, which you can find as open data in the data catalogues of the grid operator and the city of Berlin. We also have electronic meters for reading out yourself. If you have suggestions or ideas for other interesting datasets, or need materials for hardware tinkering which you can’t bring along yourself, contact us!

I won’t have time for it, alas, but if you’re interested, go to www.energyhack.de.

What if you’re lost on this site?

In Musings on May 29, 2013 at 00:01

Welcome to the weekly What-If series, also available via the Café wiki.

With over 1300 posts on this weblog, it’s easy to get lost. Maybe you stumbled onto one of the posts after a web search, and then kept reading. Some people told me they just started reading it all from start to finish (gulp!).

It’s not always easy to follow the brain dump of a quirky Franco-Dutch maverick :)

Let me start by listing the resources related to all this JeeStuff:

  • This daily weblog is my train-of-thought, year-in, year-out. Some projects get started and never end (low-power tweaking?), others get started and never get finished (sad, but true), yet others are me going off on some tangent, and finally there are the occasional series and mini-series – diving a bit deeper into some topic, or trying to explain something (electronics, usually).
  • There’s a chronological index, which I update from time to time using a little script. It lists just the titles and the tags. It’s a quick way to see what sort of topics get covered.
  • Most posts are tagged – see the “tag cloud” at the bottom of each page. Clicking on a term leads to the corresponding posts, as one (large) page. This is probably a good way to read about certain topics when you come to this web site for the first time.
  • At the bottom of each weblog page is a list of posts, grouped by month. Frankly, I don’t think it’s that useful – it’s mostly there because WordPress makes it easy to add.
  • And there’s the search field, again at the bottom of each page. It works quite well, but if your search term is too vague, you’ll get a page with a huge list of weblog posts.

Apart from this weblog, which is at jeelabs.org, there is also the community site at jeelabs.net, and the shop at jeelabs.com. It’s a bit unfortunate that they all look different, and that they all use different software packages, but that’s the way it is.

The community site contains a number of areas:

  • The café is a public wiki, with reference materials, projects, and pointers to other related pages and sites. Note that although it’s a wiki, it is not open for editing without signing up – that’s just to keep out spammers. Everyone is welcome, cordially invited even, to participate.

  • The software I write – with the help and contributions of others – ends up on GitHub so that anyone can browse the code, see what is being added / changed / fixed over time, and also create a fork to hack on. My code tends to be MIT-licensed wherever possible, so it’s all yours to look at, learn from, re-use, whatever.

  • There is documentation at jeelabs.net/pub/docs/ for several of the more important packages and libraries on GitHub. Updating is a manual step here, so it can lag, occasionally. These pages are generated by Doxygen.

  • The hardware area lists all the products which have escaped from JeeLabs, and are ending up all over the world. It’s a reference area, which should be the final word on what each product is and isn’t.

  • There are several forums for discussion, making suggestions, asking questions, and posting notes about anything somehow related (or at least relevant) to JeeLabs.

  • For real-time discussion, there’s a #jeelabs IRC channel, though I rarely leave my IRC client running very long. Doesn’t seem to be used much, but it’s there to be used.

If you’re new to electronics, you could go through the series called Easy electrons. For a write-up about setting up a sensor network at home, see the Dive Into JeeNodes series.

What else? Let me know, please. I find it very hard to get in the mindset of someone reaching this site for the first time. If you are lost, chances are that others will be too – so any tips and suggestions on how to improve this site for new visitors would be a big help.

You can always reach me via the info listed on the “About” page.

Idling in low-power mode

In AVR, Software on May 28, 2013 at 00:01

With a real-time operating system, going into low-power mode is easy. Continuing the recent ChibiOS example, here is a powerUse.ino sketch which illustrates the mechanism:

#include <ChibiOS_AVR.h>
#include <JeeLib.h>

const bool LOWPOWER = true; // set to true to enable low-power sleeping

// must be defined in case we're using the watchdog for low-power waiting
ISR(WDT_vect) { Sleepy::watchdogEvent(); }

static WORKING_AREA(waThread1, 50);

void Thread1 () {
  while (true)
    chThdSleepMilliseconds(1000);
}

void setup () {
  rf12_initialize(1, RF12_868MHZ);
  rf12_sleep(RF12_SLEEP);

  chBegin(mainThread);
}

void mainThread () {
  chThdCreateStatic(waThread1, sizeof (waThread1),
                    NORMALPRIO + 2, (tfunc_t) Thread1, 0);

  while (true)
    loop();
}

void loop () {
  if (LOWPOWER)
    Sleepy::loseSomeTime(16); // minimum watchdog granularity is 16 ms
  else
    delay(16);
}

There’s a separate thread which runs at slightly higher priority than the main thread (NORMALPRIO + 2), but is idle most of the time, and there’s the main thread, which in this case takes the role of the idling thread.

When LOWPOWER is set to false, this sketch runs at full power all the time, drawing about 9 mA. With LOWPOWER set to true, the power consumption drops dramatically, with just an occasional short blip – as seen in this current-consumption scope capture:

SCR35

Once every 16..17 ms, the watchdog wakes the ATmega out of its power-down mode, and a brief amount of activity takes place. As you can see, most of these “blips” take just 18 µs, with a few excursions to 24 and 30 µs. I’ve left the setup running for over 15 minutes with the scope background persistence turned on, and there are no other glitches – ever. Those 6 µs extensions are probably the milliseconds clock timer.

For real-world uses, the idea is that you put all your own code in threads, such as Thread1() above, and call chThdSleepMilliseconds() to wait and re-schedule as needed. There can be a number of these threads, each with their own timing. The lowest-priority thread (the main thread in the example above) then goes into a low-power sleep mode – briefly and repeatedly, thus “soaking” up all unused µC processor cycles in the most energy-efficient manner, yet able to re-activate pending threads quickly.
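
Here is what that pattern might look like with two application threads on top of the same low-power idle loop – a minimal sketch along the lines of powerUse.ino above, with the actual work left out (the commented-out calls are hypothetical placeholders):

#include <ChibiOS_AVR.h>
#include <JeeLib.h>

ISR(WDT_vect) { Sleepy::watchdogEvent(); }

static WORKING_AREA(waSensor, 50);
static WORKING_AREA(waReport, 50);

void SensorThread () {
  while (true) {
    // readSensor();              // hypothetical: sample once a second
    chThdSleepMilliseconds(1000);
  }
}

void ReportThread () {
  while (true) {
    // sendReportPacket();        // hypothetical: report once every 10 s
    chThdSleepMilliseconds(10000);
  }
}

void mainThread () {
  chThdCreateStatic(waSensor, sizeof (waSensor),
                    NORMALPRIO + 2, (tfunc_t) SensorThread, 0);
  chThdCreateStatic(waReport, sizeof (waReport),
                    NORMALPRIO + 1, (tfunc_t) ReportThread, 0);
  while (true)
    Sleepy::loseSomeTime(16);     // lowest priority: sleep whenever nothing is pending
}

void setup () {
  chBegin(mainThread);
}

void loop () {}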

What I don’t quite understand yet in the above scope capture is the repetition frequency of these pulses. Many pulses are 17 ms apart, i.e. the time Sleepy::loseSomeTime() goes to sleep, but there are also more frequent pulses, spread only 4..9 ms apart at times. I can only guess that this has something to do with the ChibiOS scheduler. That’s the thing with an RTOS: reasoning about the repetitive behavior of such code becomes a lot trickier.

Still… not bad: a little idle-loop code and we get low-power behaviour almost for free!

Wireless, the CAN bus, and enzymes

In Musings on May 27, 2013 at 00:01

How’s that for a title to get your attention, eh?

There’s an interesting mechanism in communication, which has kept me intrigued for quite some time now:

  • with JeeNode and RF12-based sensors, wireless packets are often broadcast with only a sender node ID (most network protocols use both a source and a destination)
  • the CAN bus is a wired bus protocol from the car industry; its messages do not contain a destination address, just an 11- or 29-bit “message ID”

What both these systems do (most of the time, but not exclusively), is to tag transmitted packets with where they came from (or what their “meaning” is) and then just send this out to whoever happens to be interested. No acknowledgements: in the case of wireless, some messages might get lost – with CAN bus, the reliability is considerably higher.

It’s a bit like hormones and other chemicals in our blood stream, added for a specific purpose, but not really addressed to an area in the body. That’s up to various enzymes and other receptors to pick up (I know next to nothing about biology, pardon my ignorance).

Couple of points to note about this:

  • Communicating 1-to-N (i.e. broadcasting) is just as easy as communicating 1-to-1, in fact there is no such thing as privacy in this context – anyone / anything can listen-in on any conversation. The senders won’t know.
  • There is no guaranteed delivery, since the intended targets may not even be around or listening. The best you can do, is look for the effects of the communication, which could be an echo from the receiving end, or some observable side-effect.
  • You can still set up focused interactions, by agreeing on a code / channel to use for a specific purpose: A can say “let’s discuss X”, and B can say “I’ll be listening to topic X on channel C”. Then both A and B could agree to tag all their messages with “C”, and they’ll be off on their own (public) discussion.
  • This mode of communicating via “channels” or “topics” is quite common, once you start looking for it. The MQTT messaging system uses “channels” to support generic data exchange. Or take the human-centric IRC, for example. Or UDP’s multicast.
  • Note that everything which has to do with discovery on a network also must rely on such a “sender-id-centric” approach, since by definition it will be about finding a path to some sender which doesn’t know about us.

Having no one-to-one communication might seem limiting, but it’s not. First of all, the nature of both wireless and busses is such that everything reaches everyone anyway. It’s more about filtering out what we’re not interested in. The transmissions are the same, it’s just the receivers which apply different filtering rules.

But perhaps far more importantly, is that this intrinsic broadcasting behaviour leads to a different way of designing systems. I can add a new wireless sensor node to my setup without having to decide what to do with the measurements yet. Also, I will often set up a second listen-only node for testing, and it just picks up all the packets without affecting my “production” setup. For tests which might interfere, I pick a different net group, since the RF12 driver (and the RFM12B hardware itself) has implicit “origin-id-filtering” built in. When initialised for a certain net group, all other packets automatically get ignored.
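
For reference, such a listen-only test node really is tiny with the RF12 driver – a minimal sketch, with the node ID, frequency band, and net group picked arbitrarily for this illustration:

#include <JeeLib.h>

void setup () {
  Serial.begin(57600);
  // node 30, 868 MHz, net group 100 - only packets sent in group 100 get through
  rf12_initialize(30, RF12_868MHZ, 100);
}

void loop () {
  if (rf12_recvDone() && rf12_crc == 0) {   // a valid packet has arrived
    Serial.print("OK ");
    Serial.print((int) rf12_hdr);           // the header includes the origin node ID
    for (byte i = 0; i < rf12_len; ++i) {
      Serial.print(' ');
      Serial.print((int) rf12_data[i]);
    }
    Serial.println();
  }
}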

Even N-to-1 communication is possible by having multiple nodes send out messages with the same ID (and their distinguishing details elsewhere in the payload). This is not allowed on the CAN bus, btw – there, each sender has to stick to unique IDs.

The approach changes from “hey YOU, let me tell you THIS”, to “I am saying THIS”. If no one is listening, then so be it. If we need to make sure it was received, we could extend the conventions so that B nods by saying “got THIS” and then we just wait for that message (with timeouts and retries, it’s very similar to a traditional ACK mechanism).
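
In RF12 terms, such a convention could look something like this – just a sketch of the idea, with the two message tags being arbitrary values invented for this example, and assuming rf12_initialize() was called in setup():

#include <JeeLib.h>

const byte MSG_THIS = 42;       // arbitrary tag for "I am saying THIS"
const byte MSG_GOT_THIS = 43;   // arbitrary tag for "got THIS"

MilliTimer replyTimer;

// Broadcast a tagged value, then wait up to 100 ms for anyone to echo it back.
// Returns true if a "got THIS" message with the same value was heard.
bool sayAndWait (byte value) {
  byte msg[] = { MSG_THIS, value };
  rf12_sendNow(0, msg, sizeof msg);         // header 0: plain broadcast, no ACK bits
  replyTimer.set(100);
  while (!replyTimer.poll())
    if (rf12_recvDone() && rf12_crc == 0 && rf12_len == 2 &&
        rf12_data[0] == MSG_GOT_THIS && rf12_data[1] == value)
      return true;                          // someone nodded "got THIS"
  return false;                             // no reply - the caller can retry
}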

It’s a flexible and natural model – normal speech works the same, if you think about it…

PS. The reason this is coming up, is that I’m looking for a robust way to implement JeeBoot auto-discovery.

90 days on a coin cell

In Hardware on May 26, 2013 at 00:01

Just saw that my JeeNode Micro test setup has been running and “blipping” for 90 days:

Screen Shot 2013-05-25 at 11.08.00

The voltage is starting to drop a bit, and the voltage drop before and after using the radio has increased from 0.08 to 0.16 V (reported with a granularity of 0.02 V), but everything seems to be fine. It has pumped out over 120,000 packets so far.

The other test is a JeeNode Micro with boost regulator, running off one Eneloop AA battery. That battery voltage has also dropped a bit, but as you can see, the boost regulator is doing its thing and still providing the exact same 3.04 V as it did when the test was started.

It’ll be interesting to see how long each of these setups holds out. I have no idea, really. It’s not just a matter of capacity – with the coin cell, it’ll also depend on how long the battery can continue to provide these brief ~20 mA power bursts for each transmission.

Onwards!

ChibiOS for the Arduino IDE

In AVR, Software on May 25, 2013 at 00:01

A real-time operating system is a fairly tricky piece of software – even a small one – because of the way it messes with several low-level details of the running code, such as stacks and interrupts. It’s therefore no small feat when everything can be done as a standard add-on library for the Arduino IDE.

But that’s exactly what has been done by Bill Greiman with ChibiOS, in the form of a library called “ChibiOS_AVR” (there’s also an ARM version for the Due & Teensy).

So let’s continue where I left off yesterday and install this thing for use with JeeNodes, eh?

  • download a copy of ChibiOS20130208.zip from this page on Google Code
  • unpack it and inside you’ll find a folder called ChibiOS_AVR
  • move it inside the libraries folder in your IDE sketches folder (next to JeeLib, etc)
  • you might also want to move ChibiOS_ARM and SdFat next to it, for use later
  • other things in that ZIP file are a README file and the HTML documentation
  • that’s it, now re-launch the Arduino IDE to make it recognise the new libraries

That’s really all there is to it. The ChibiOS_AVR folder also contains a dozen examples, each of which is worth looking into and trying out. Keep in mind that there is no LED on a standard JeeNode, and that the blue LED on the JeeNode SMD and JeeNode USB is on pin 9 and has a reverse polarity (“0” will turn it on, “1” will turn it off).

Note: I’m using this with Arduino IDE 1.5.2, but it should also work with IDE 1.0.x

Simple things are still relatively simple with an RTOS, but be prepared to face a whole slew of new concepts and techniques when you really start to dive in. Lots of ways to make tasks and interrupts work together – mutexes, semaphores, events, queues, mailboxes…

Luckily, ChibiOS comes with a lot of documentation, including some general guides and how-to’s. The AVR-specific documentation can be found here (as well as in that ZIP file you just downloaded).

Not sure this is the best place for it, but I’ve put yesterday’s example in JeeLib for now.

I’d like to go into RTOS’s and ChibiOS some more in the weeks ahead, if only to see how wireless communication and low-power sleep modes can be fitted in there.

Just one statistic for now: the context switch latency of ChibiOS on an ATmega328 @ 16 MHz appears to be around 15 µs. Or to put it differently: you can switch between multiple tasks over sixty thousand times a second. Gulp.

Blinking in real-time

In AVR, Software on May 24, 2013 at 00:01

As promised yesterday, here’s an example sketch which uses the ChibiOS RTOS to create a separate task for keeping an LED blinking at 2 Hz, no matter what else the code is doing:

#include <ChibiOS_AVR.h>

static WORKING_AREA(waThread1, 50);

void Thread1 () {
  const uint8_t LED_PIN = 9;
  pinMode(LED_PIN, OUTPUT);
  
  while (1) {
    digitalWrite(LED_PIN, LOW);
    chThdSleepMilliseconds(100);
    digitalWrite(LED_PIN, HIGH);
    chThdSleepMilliseconds(400);
  }
}

void setup () {
  chBegin(mainThread);
}

void mainThread () {
  chThdCreateStatic(waThread1, sizeof (waThread1),
                    NORMALPRIO + 2, (tfunc_t) Thread1, 0);
  while (true)
    loop();
}

void loop () {
  delay(1000);
}

There are several things to note about this approach:

  • there’s now a “Thread1” task, which does all the LED blinking, even the LED pin setup
  • each task needs a working area for its stack, this will consume a bit of memory
  • calls to delay() are forbidden inside threads, they need to play nice and go to sleep
  • only a few changes are needed, compared to the original setup() and loop() code
  • chBegin() is what starts the RTOS going, and mainThread() takes over control
  • to keep things similar to what Arduino does, I decided to call loop() when idling

Note that inside loop() there is a call to delay(), but that’s ok: at some point, the RTOS runs out of other things to do, so we might as well make the main thread similar to what the Arduino does. There is also an idle task – it runs (but does nothing) whenever no other tasks are asking for processor time.

Note that despite the delay call, the LED still blinks in the proper rate. You’re looking at a real multitasking “kernel” running inside the ATmega328 here, and it’s preemptive, which simply means that the RTOS can (and will) decide to break off any current activity, if there is something more important that needs to be done first. This includes suddenly disrupting that delay() call, and letting Thread1 run to keep the LEDs blinking.

In case you’re wondering: this compiles to 3,120 bytes of code – ChibiOS is really tiny.

Stay tuned for details on how to get this working in your projects… it’s very easy!

It’s time for real-time

In Software on May 23, 2013 at 00:01

For some time, I’ve been doodling around with various open-source real-time operating system (RTOS) options. There are quite a few out there to get lost in…

But first, what is an RTOS, and why would you want one?

The RTOS is code which can manage multiple tasks in a computer. You can see what it does by considering what sort of code you’d write if you wanted to periodically read out some sensors, not necessarily all at the same time or equally often. Then, perhaps you want to respond to external events such as a button press or a PIR sensor firing, and let’s also try and report this on the serial port and throw in a command-line configuration interface on that same serial port…

Oh, and in between, let’s go into a low-power mode to save energy.

Such code can be written without an RTOS, in fact that’s what I did with a (simpler) example for the roomNode sketch. But it gets tricky, and everything can become a huge tangle of variables, loops, conditions, and before you know it … you end up with spaghetti!

In short, the problem is blocking code – when you write something like this, for example:


const int LED = 9; // the blue LED on a JeeNode SMD / USB happens to be on pin 9

void setup () {
  pinMode(LED, OUTPUT);
}

void loop () {
  digitalWrite(LED, HIGH);
  delay(100);
  digitalWrite(LED, LOW);
  delay(400);
}

The delay() calls will put the processor into a busy loop for as long as needed to make the requested number of milliseconds pass. And while this is the case, nothing else can be done by the processor, other than handling hardware interrupts (such as timer ticks).

What if you wanted to respond to button presses? Or make a second LED blink at a different rate at the same time? Or respond to commands on the serial port?

This is why I added a MilliTimer class to JeeLib early on. Let’s rewrite the code:


#include <JeeLib.h>   // for the MilliTimer class

const int LED = 9;

MilliTimer ledTimer;
bool ledOn;

void setup () {
  pinMode(LED, OUTPUT);
  ledTimer.set(1); // start the timer
}

void loop () {
  if (ledTimer.poll()) {
    if (ledOn) {
      digitalWrite(LED, LOW);
      ledTimer.set(400);
    } else {
      digitalWrite(LED, HIGH);
      ledTimer.set(100);
    }
    ledOn = ! ledOn;
  }
  // anything else can run here - this code never blocks
}

It’s a bit more code, but the point is that this implementation is no longer blocking: instead of stopping on a delay() call, we now track the progress of time through the MilliTimer, we keep track of the LED state, and we adjust the time to wait for the next change.

As a result, the comment line at the end gets “executed” all the time, and this is where we can now perform other tasks while the LED is blinking in the background, so to speak.

You can get a lot done this way, but things do tend to become more complicated. The simple flow of each separate activity starts to become a mix of convoluted flows.

With an RTOS, you can create several tasks which appear to all run in parallel. You don’t call delay(), but you tell the RTOS to suspend your task for a certain amount of time (or until a certain event happens, which is the real magic sauce of RTOS’es, actually).

So in pseudo code, we can now write our app as:

  TASK 1:
    turn LED on
    wait 100 ms
    turn LED off
    wait 400 ms
    repeat

  MAIN:
    start task 1
    do other stuff (including starting more tasks)

All the logic related to making the LED blink has been moved “out of the way”.

Tomorrow I’ll expand on this, using an RTOS which works fine in the Arduino IDE.

What if the sun doesn’t shine?

In Musings on May 22, 2013 at 00:01

Welcome to the weekly What-If series, also available via the Café wiki.

Slightly different question this time – not so much about investigating, but about coming up with some ideas. Because, now that solar energy is being collected here at JeeLabs and winter is over, there’s a fairly obvious pattern appearing:

Screen Shot 2013-05-14 at 12.47.42

Surplus solar energy during the day, but none in the evenings and at night for cooking + lighting (it looks like the heater is still kicking in at the end of the day, BTW).

This particular example shows that the amount of surplus energy would be more or less what’s needed in the evening – if only there were a way to store this energy for 6 hours…

Looking at some counters over that same period, I can see that the amount of energy is about 2.5 kWh. The challenge is to store this amount of energy locally. Some thoughts:

  • A 12 V lead-acid battery could be used, with 2.5 kWh corresponding to some 208 Ah.
  • But that’s a lower bound: let’s assume 90% conversion efficiency in both directions, i.e. 81% for charge + discharge (i.e. 19% losses) – we’ll now need a 257 Ah battery.
  • But the lifetime of lead-acid batteries is only good if you don’t discharge them too far. So-called deep cycle batteries are designed specifically for cases like these, where the charge/discharge is going to happen day in day out. To use them optimally, you shouldn’t discharge them over 50%, so we’ll need a battery twice as large: 514 Ah.
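
The same sizing arithmetic in code form – nothing new, just the three steps from the list above, with the efficiency and depth-of-discharge figures as stated assumptions:

// Back-of-the-envelope lead-acid sizing for one evening's worth of energy.
const float surplusWh    = 2500;       // 2.5 kWh of surplus solar energy
const float batteryVolts = 12;         // nominal lead-acid battery voltage
const float roundTripEff = 0.9 * 0.9;  // 90% in, 90% out -> 81% round trip
const float maxDischarge = 0.5;        // deep-cycle cells: don't go below 50%

float idealAh    = surplusWh / batteryVolts;   // ~208 Ah
float withLosses = idealAh / roundTripEff;     // ~257 Ah
float requiredAh = withLosses / maxDischarge;  // ~514 Ah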

Let’s see… three of these 12V 230 Ah units could easily do the job:

Screen Shot 2013-05-14 at 13.14.23

Note that the cost of the batteries alone will be €2,000 and their total weight 200 kg!

There’s an interesting article about the energy shortage after the Fukushima disaster, including a good diagram about a somewhat similar issue (lowering evening peak use):

2-large-fighting-blackouts-japan-residential-pv-and-energy-storage-market-flourishing

Although driven by a much harsher reality in that article, I wouldn’t be surprised to see new “one-day storage” solutions come out of all this, usable in the rest of the world as well.

For winter-time, I suppose one could heat up a large water tank, and then re-use it for heating in the evening. Except, ehm, that there’s a lot less surplus energy in winter.

Are there any other viable “semi off-grid” options out there? A flywheel in the basement?

PS. New milestone reached yesterday: total solar production so far has caught up with the consumption here at JeeLabs during that same period (since end October, that is).

MPPT hunting

In Hardware on May 21, 2013 at 00:01

Solar panels are funny power sources: for each panel, if you draw no power, the voltage will rise to 15..40 V (depending on the type of panel), and when you short it out, a current of 5..12 A will flow (again, depending on type). My panels will do 30V @ 8A.

Note that in both cases just described, the power output will be zero: power = volts x amps, so when either one is zero, there’s no energy transfer! – to get power out of a solar panel, you have to adjust those parameters somewhere in between. And what’s even worse, that optimal point depends on the amount of sunlight hitting the panels…

That’s where MPPT comes in, which stands for Maximum Power Point Tracking. Here’s a graph, copied from www.kitenergie.com, with thanks:

MPPT_knee_diagram

As you draw more current, there’s a “knee” at which the predominantly voltage-controlled output drops, until the panel is asked to supply more than it has, after which the output voltage drops very rapidly.

Power is the product of V and A, which is equivalent to the area to the left of and under the current output point on the curve.

But how do you adjust the power demand to match that optimal point in the first place?

The trick is to vary the demand a bit (i.e. the current drawn) and then to closely watch what the voltage is doing. What we’re after is the slope of the line – or in mathematical terms, its derivative. If it’s too flat, we should increase the load current, if it’s too steep, we should back off a bit. By oscillating, we can estimate the slope – and that’s exactly what my inverter seems to be doing here (but only on down-slopes, as far as I can tell):

Screen Shot 2013-05-14 at 15.35.03

As the PV output changes due to the sun intensity and incidence angle changing, the SMA SB5000TL inverter adjusts the load it places on the panels to get the most juice out of ’em.
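
That “oscillate and watch the slope” approach is the classic perturb-and-observe algorithm. Here’s a minimal sketch of the bare mechanism – certainly not the SMA’s actual firmware; the measured volts and amps would come from some hypothetical power stage:

// Classic perturb & observe MPPT: keep nudging the load current in the same
// direction while the output power rises, reverse direction when it drops.
float lastPower = 0;
float step = 0.05;                   // perturbation of the load current, in amps

float nextCurrentSetpoint (float volts, float amps) {
  float power = volts * amps;        // the quantity we're trying to maximise
  if (power < lastPower)             // the previous nudge made things worse...
    step = -step;                    // ...so perturb the other way from now on
  lastPower = power;
  return amps + step;                // new load current to ask of the panel
}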

Neat, eh?

Update – I just came across a related post at Dangerous Prototypes – synchronicity!

Tricky graphs – part 2

In Software on May 20, 2013 at 00:01

As noted in a recent post, graphs can be very tricky – “intuitive” plots are a funny thing!

Screen Shot 2013-05-14 at 10.35.43

I’ve started implementing the display of multiple series in one graph (which is surprisingly complex with Dygraphs, because of the way it wants the data points to be organised). And as you can see, it works – here with solar power generation measured two different ways:

  • using the pulse counter downstairs, reported up to every 3 seconds (green)
  • using the values read out from the SB5000TL inverter via Bluetooth (blue)

In principle, these represent the same information, and they do match more-or-less, but the problem mentioned before makes the rectangles show up after each point in time:

Screen Shot 2013-05-14 at 10.40.11

Using a tip from a recent comment, I hacked in a way to shift each rectangle to the left:

Screen Shot 2013-05-14 at 10.40.34

Now the intuition should be that the area in the coarser green rectangles should be the same as the blue area they overlap. But as you can see, that’s not the case. Note that the latter is the proper way to represent what was measured: a measurement corresponds to the area preceding it, i.e. that hack is an improvement.

So what’s going on, eh?

Nice puzzle. The explanation is that the second set of readings is incomplete: the smaRelay sketch only obtains a reading from the inverter every 5 minutes or so, but that’s merely an instantaneous power estimate, not an averaged reading since the previous readout!

So there’s not really enough data in the second series to produce an estimate over each 5-minute period. All we have is a sample of the power generation, taken once every so often.

Conclusion: there’s no way to draw the green rectangles to match our intuition – not with the data as it is being collected right now, anyway. In fact, it’s probably more appropriate to draw an interpolated line graph for that second series, or better still: dots.

Update – Oops, the graphs are coloured differently, my apologies for the extra confusion.

Supply noise sensitivity

In Hardware on May 19, 2013 at 00:01

Yesterday’s post showed how with 3 resistors, one capacitor, and a P-MOSFET, you can set up a circuit to measure battery voltage with a voltage divider, even for voltages above VCC.

The whole point of this is that it can be switched off completely, drawing no current between measurements.

While trying this out, I started with a 1 MΩ pull-up on the P-MOSFET gate, and got this:

SCR27

A very odd switch-off pattern – it looked like an oscillation of some kind. Even with the 100x faster switch-off using a 10 kΩ pull-up instead, the problem persisted:

SCR26

This turned out to be a problem with the power supply. I was using a little USB plug with a switching regulator. These tend to work fine, but they do create a bit of “ripple voltage”, i.e. the 5V output is not exactly 5V DC. Here are the fluctuations, typical of units like these:

SCR28

In other words: that little ripple was greatly amplified near the point where the P-MOSFET was starting to turn off, thus creating a regular but highly exaggerated turn-off pattern. Because – in a certain range – MOSFETs act like amplifiers, just like regular transistors.

It all went away when I switched to the lab supply, but it sure took some head-scratching…

Anyway, in real use this won’t matter, since the whole point is to use this with batteries.

Zero-power measurement – part 2

In Hardware on May 18, 2013 at 00:01

After a great suggestion by Max on yesterday’s post, here’s another circuit to try:

JC's Grid, page 73

It adds a capacitor and a resistor, but it allows using a P-MOSFET and a divider ratio which can now use the entire ADC range, not just 1 V or so as in yesterday’s circuit. Note however that if VCC is not fixed to the same value under all conditions, then the ADC’s reference voltage can float, and use of the 1.1V bandgap may still be needed.

Here’s the voltage at the top of the divider, showing how it switches on and off:

SCR24

That’s with the pull-up resistor value R set to 1 MΩ, which takes 208 ms to turn the MOSFET back off. We don’t need that long, a 10 kΩ resistor for R will do fine:

SCR25

That still gives us 2 ms to measure the supply level. Note that turn-off is automatic. DIO needs to be turned high again, but that can happen later. In my test code, I left it low for 1 s, then high for 7 s.

Here’s a neat set of superimposed measurements (using persistence), while varying the high voltage from 3.5 to 12.0 V in 0.5 V steps:

SCR31

Warning: for 12V, the divider ratio must be changed so the centre tap stays under VCC.

Note that with higher voltages, the MOSFET will turn off sooner – this is because there is now more current flowing through the pull-up resistor. But still plenty of time left to measure: 1 ms is more than enough for an ADC.

Tomorrow, an example of how these measurements can sometimes go awry…

Zero-power battery measurement

In Hardware on May 17, 2013 at 00:01

As promised, here’s a circuit which can be used to measure a voltage higher than VCC without drawing any current while not measuring:

Screen Shot 2013-05-15 at 13.40.54

Besides the fact that this needs an N-FET + I/O pin, there are several finicky details.

First of all, note that the following circuit will not drop the power consumption to zero:

Screen Shot 2013-05-15 at 14.38.42

The idea in itself is great: set DIO to logic “0” before performing a measurement, acting as GND level for the resistor divider (10 + 10 kΩ would be fine here). Then, to switch it off, set DIO to an input, so that the pin becomes high-impedance.

The problem is that the pin divider is still connected and that the AIO pin cannot float any higher than VCC + 0.6 (the drop over the internal ESD protection diode). The top resistor remains connected between PWR and VCC + 0.6, therefore it’s still leaking some current.

That also explains why the first circuit does better: the MOSFET disconnects all I/O pins from that PWR line, so that there is just a resistor from AIO to ground (which is harmless).

But there’s a catch: we need to be able to turn the N-channel MOSFET on and off, which means we need to be able to apply a voltage to its gate which is a few volts above the drain pin (the bottom one, attached to AIO). With a resistive divider of 10 + 10 kΩ on a 6V PWR line, that voltage will immediately rise to 3V, and there’s no way the DIO pin can keep the MOSFET on (it can only go up to logic “1”, i.e. 3.3V).

The solution is to use a different divider ratio: say 50 + 10 kΩ. Then, a 6V PWR level leads to a 1V level on the AIO pin, i.e. on the drain of the MOSFET. With DIO set to “1”, that means the MOSFET’s gate will be 2.3V above the drain – enough to keep it turned on.

BTW, all this tinkering over the past few days has left me with a bunch of funky headers :)

DSC_4454

Anyway, to summarise the zero-power battery monitor:

  • to work with 6V PWR, use a 50 (or 47) kΩ top resistor and 10 kΩ for the bottom one
  • use an N-channel MOSFET with low turn-on voltage (called a “logic level MOSFET”)
  • to measure the voltage, set DIO to “1”
  • measure the voltage on the AIO pin, where 0..1V will correspond to 0..6V on PWR
  • to turn off the divider, set DIO to “0”
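
In sketch form, that recipe might look as follows – a minimal example, assuming the 50 + 10 kΩ divider described above sits on port 1 of a JeeNode (DIO drives the MOSFET gate, AIO reads the centre tap), with a stable 3.3V supply as ADC reference:

#include <JeeLib.h>

Port divider (1);   // DIO1 drives the MOSFET gate, AIO1 reads the centre tap

// Returns the PWR voltage in millivolts, assuming a 6:1 divider (50 + 10 kOhm).
word readBatteryMillivolts () {
  divider.mode(OUTPUT);
  divider.digiWrite(1);                   // "1" turns the divider on
  delay(1);                               // give the centre tap a moment to settle
  divider.anaRead();                      // throw away the first ADC reading
  word adc = divider.anaRead();           // 0..1023 for 0..3.3V on AIO
  divider.digiWrite(0);                   // "0" switches the divider off again
  return map(adc, 0, 1023, 0, 3300) * 6;  // scale the tap voltage back up to PWR
}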

As you can see, this approach requires an active component to switch things and an extra I/O pin, but then you do end up with a circuit which can completely switch off.

For simple uses, I’d just use yesterday’s setup – sub-microamp is usually good enough!

Measuring the battery without draining it

In Hardware on May 16, 2013 at 00:01

In yesterday’s post, a resistive voltage divider was used to measure the battery voltage – any voltage for that matter, as long as the divider resistor values are chosen properly.

With a 6V battery, a 10 + 10 kΩ divider draws 0.3 mA, i.e. 300 µA. Can we do better?

Sure: 100+100 kΩ draws 30 µA, 1+1 MΩ draws 3 µA, and 10+10 MΩ draws just 0.3 µA.

Unfortunately there are limits, preventing the use of really high resistor divider values.

The ATmega328 datasheet recommends that the output impedance of the circuit connected to the ADC input pin be 10 kΩ or less for good results. With higher values, there is less current available to charge the ADC’s sample-and-hold capacitor, meaning that it will take longer for the ADC to report a stable value (reading it out more than once may be needed). And then there’s the leakage current which every pin has – it’s specified in the datasheet as ± 1 µA max in or out of any I/O pin. This means that a 1+1 MΩ divider may not only take longer to read out, but also that the actual value read may not be accurate – no matter how long we wait or how often we repeat the measurement.

So let’s find out!

The divider I’m going to use is the same as yesterday, but with higher resistor values.

Let’s go all out and try 10 + 10 MΩ. I’ll use the following sketch, which reads out AIO1..4, and sends out a 4-byte packet with the top 8 bits of each ADC value every 8 seconds:

#include <JeeLib.h>

byte payload[4];

void setup () {
  rf12_initialize(22, RF12_868MHZ, 5);
  DIDR0 = 0x0F; // disable the digital inputs on analog 0..3
}

void loop () {
  for (byte i = 0; i < 4; ++i) {
    analogRead(i);                    // ignore first reading
    payload[i] = analogRead(i) >> 2;  // report upper 8 bits
  }

  rf12_sendNow(0, payload, sizeof payload);
  delay(8000);
}

This means that a reported value N corresponds to N / 255 * 3.3V.

With 5V as supply, this is what comes out:

L 10:18:14.311 usb-A40117UK OK 22 193 220 206 196
L 10:18:22.675 usb-A40117UK OK 22 193 189 186 187
L 10:18:31.026 usb-A40117UK OK 22 193 141 149 162
L 10:18:39.382 usb-A40117UK OK 22 193 174 167 164
L 10:18:47.741 usb-A40117UK OK 22 193 209 185 175

The 193 comes from AIO1, which has the 10 + 10 kΩ divider, and reports 2.50V – spot on.

But as you can see, the second value is all over the map (ignore the 3rd and 4th, they are floating). The reason for this is that the 10 MΩ resistors are so high that all sorts of noise gets picked up and “measured”.

With a 1 + 1 MΩ divider, things do improve, but the current draw increases to 2.5 µA:

L 09:21:25.557 usb-A40117UK OK 22 198 200 192 186
L 09:21:33.907 usb-A40117UK OK 22 198 192 182 177
L 09:21:42.256 usb-A40117UK OK 22 197 199 188 183
L 09:21:50.606 usb-A40117UK OK 22 197 195 187 183
L 09:21:58.965 usb-A40117UK OK 22 197 197 186 181
L 09:22:07.315 usb-A40117UK OK 22 198 198 190 184

Can we do better? Sure. The trick is to add a small capacitor in parallel with the lower resistor. Here’s a test using 10 + 10 MΩ again, with a 0.1 µF cap between AIO2 and GND:

DSC_4453

Results – at 5V we get 196, i.e. 2.54V:

L 10:30:27.768 usb-A40117UK OK 22 198 196 189 186
L 10:30:36.118 usb-A40117UK OK 22 198 196 188 183
L 10:30:44.478 usb-A40117UK OK 22 198 196 186 182
L 10:30:52.842 usb-A40117UK OK 22 198 196 189 185
L 10:31:01.186 usb-A40117UK OK 22 197 196 186 181

At 4V we get 157, i.e. 2.03V:

L 10:33:31.552 usb-A40117UK OK 22 158 157 158 161
L 10:33:39.902 usb-A40117UK OK 22 158 157 156 157
L 10:33:48.246 usb-A40117UK OK 22 158 157 159 161
L 10:33:56.611 usb-A40117UK OK 22 158 157 157 159
L 10:34:04.959 usb-A40117UK OK 22 159 157 158 161

At 6V we get 235, i.e. 3.04V:

L 10:47:26.658 usb-A40117UK OK 22 237 235 222 210
L 10:47:35.023 usb-A40117UK OK 22 237 235 210 199
L 10:47:43.373 usb-A40117UK OK 22 236 235 222 210
L 10:47:51.755 usb-A40117UK OK 22 237 235 208 194
L 10:48:00.080 usb-A40117UK OK 22 236 235 220 209

Perfect!

Note how the floating AIO3 and AIO4 pins tend to follow the levels on AIO1 and AIO2. My hunch is that the ADC’s sample-and-hold circuit is now working in reverse: when AIO3 is read, the S&H switches on, and levels the charge on the unconnected pin (which still has a tiny amount of parasitic capacitance) and the internal capacitance.

The current draw through this permanently-connected resistor divider with charge cap will be very low indeed: 0.3 µA at 6V (Ohm’s law: 6V / 20 MΩ). This sort of leakage current is probably fine in most cases, and gives us the ability to check the battery level in a wireless node, even with battery voltages above VCC.

Tomorrow I’ll explore a setup which draws no current in sleep mode. Just for kicks…

What if we want to know the battery state?

In Hardware on May 15, 2013 at 00:01

Welcome to the weekly What-If series, also available via the Café wiki.

One useful task for wireless sensor nodes is to be able to determine the state of the battery: is it full? is it nearly depleted? how much life is left in it?

With a boost converter such as the AA Power Board, things are fairly easy because the battery voltage is below the supply voltage – just hook it up to an analog input pin, and use the built-in ADC with a call such as:

word millivolts = map(analogRead(0), 0, 1023, 0, 3300);

This assumes that the ATmega is running on a stable 3.3V supply, which acts as reference for the ADC.

If that isn’t the case, i.e. if the ATmega is running directly off 2 AA batteries or a coin cell, then the ADC cannot use the supply voltage as reference. Reading out VCC through the ADC will always return 1023, i.e. the maximum value, since its reference is also VCC – so this can not tell us anything about the absolute voltage level.

There’s a trick around this, as described in a previous post: measure a known voltage with the ADC and then deduce the reference voltage from it. As it so happens, the ATmega has a 1.1V “bandgap” voltage which is accurate enough for this purpose.
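
For completeness, here’s the usual form of that bandgap trick on an ATmega328 – a minimal sketch which selects the internal 1.1V reference as ADC input, measures it against VCC, and works backwards (the bandgap has a few percent tolerance, so calibrate the constant if you need real accuracy):

// Estimate VCC in millivolts: measure the 1.1V bandgap against VCC,
// then VCC = 1.1 * 1023 * 1000 / reading.
word vccMillivolts () {
  ADMUX = bit(REFS0) | 0x0E;     // AVcc as reference, channel 14 = 1.1V bandgap
  delay(2);                      // give the reference time to settle
  ADCSRA |= bit(ADSC);           // start a single conversion
  while (ADCSRA & bit(ADSC))
    ;                            // wait for it to complete
  return 1125300L / ADC;         // i.e. 1.1 * 1023 * 1000 / 10-bit result
}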

The third scenario is that we’re running off a voltage higher than 3.3V, and that the ATmega is powered by it through a voltage regulator, providing a stable 3.3V. So now, the ADC has a stable reference voltage, but we end up with a new problem: the voltage we want to measure is higher than 3.3V!

Let’s say we have a rechargeable 6V lead-acid battery and we want to get a warning before it runs down completely (which is very bad for battery life). So let’s assume we want to measure the voltage and trigger on that voltage dropping to 5.4V.

We can’t just hook up the battery voltage to an analog input pin, but we could use a voltage divider made up of two equal resistors. I used two 10 kΩ resistors and mounted them on a 6-pin header – very convenient for use with a JeeNode:

DSC_4452

Now, only half the battery voltage will be present on the analog input pin (because both resistor values are the same in this example). So the battery voltage calculation now becomes a variant of the previous formula:

word millivolts = map(analogRead(0), 0, 1023, 0, 3300) * 2;

But there is a drawback with this approach: it draws some current, and it draws it all the time. In the case of 2x 10 kΩ resistors on a 6V battery, the current draw is (Ohm’s law kicking in!): 6 V / 20,000 Ω = 0.0003 A = 0.3 mA. On a lead-acid battery, that’s probably no problem at all, but on smaller batteries and when you’re trying to conserve as much energy as possible, 0.3 mA is huge!

Can we raise the resistor values and lower the current consumption of this voltage divider that way? Yes, but not indefinitely – more on that tomorrow…

Energy, power, current, charge

In Hardware on May 14, 2013 at 00:01

The International System of Units, or SI from the French Système International, is a wonderfully clever refinement of the original metric system.

Took me a while to get all this clear, but it really helps to understand electrical “units”:

  • power says something about intensity: volts times amperes, the unit is watt
  • energy says something about effort: power times duration, the unit is watt-second
  • current says something about rate: charge per time unit, the unit is ampere
  • charge says something about pressure: more charge raises volts, the unit is coulomb

Of course, some units get expressed differently – that’s just to scale things for practical use:

  • a kilowatt (kW) is 1000 watts
  • a watt-hour (Wh) is 3600 watt-seconds
  • a kilowatt-hour (kWh) is 1000 watt-hour
  • a milli-ampere (mA) is 1/1000 of an ampere
  • a micro-coulomb (µC) is 1/1000000 of a coulomb

But there are several more useful equivalences:

  • When a 1.5 V battery is specified as 2000 mAh (i.e. 2 Ah), then it can deliver 1.5 x 2 = 3 Wh of energy – why? because you can multiply and divide units just like you can with their quantities, so V x Ah = V x A x h = W x h = Wh
  • Another unit of energy is the “joule” – which is just another name for watt-second. Or to put it differently: a watt is one joule per second, which shows that a watt is a rate.
  • A joule is also tied to mechanical energy: one joule is one newton-meter, where the “newton” is the unit of force. A newton is what it takes to accelerate 1 kg of mass by 1 m/s² (i.e. increase the velocity by 1 m/s in one second – are you still with me?).
  • So the watt also represents a mechanical intensity (i.e. strength). Just like one horsepower, which is defined as 746 W, presumably the strength of a single horse…
  • Got a car with a 100 Hp engine? It can generate 74.6 kW of power. Pour all of that into the kinetic energy (½ × m × v²) of a 1000 kg car and you gain, on average, ≈ 20 km/h of speed every second – or in more popular terms: 0..100 km/h in about 5 seconds (assuming no losses). But I digress…
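
To make the unit-juggling explicit, here’s the same arithmetic spelled out in code – nothing new, just the figures from the bullets above:

// Battery energy: volts x amp-hours = watt-hours
float batteryWh = 1.5 * 2.0;              // 1.5 V x 2 Ah = 3 Wh
float batteryJoules = batteryWh * 3600;   // 1 Wh = 3600 watt-seconds = 10,800 J

// 100 Hp poured into the kinetic energy (0.5 * m * v^2) of a 1000 kg car
float powerW = 100 * 746;                 // 100 Hp = 74,600 W
float v100 = 100 / 3.6;                   // 100 km/h expressed in m/s (~27.8 m/s)
float energyTo100 = 0.5 * 1000 * v100 * v100;  // ~386 kJ of kinetic energy
float secondsTo100 = energyTo100 / powerW;     // ~5.2 s, ignoring all losses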

The point is that all those SI units really make life easy. And they’re 100% logical…

Measurement intervals and graphs

In Software on May 13, 2013 at 00:01

One more post about graphs in this little mini-series … have a look at this example:

Screen Shot 2013-05-10 at 19.33.51

Apart from some minor glitches, it’s in fact an accurate representation of the measurement data. Yet there’s something odd: some parts of the graph are more granular than others!

The reason for this is actually completely unrelated to the issues described yesterday.

What is happening is that the measurement is based on measuring the time between pulses in my pulse counters, which generate 2000 pulses per kWh. A single pulse represents 0.5 Wh of energy. So if we get one pulse per hour, then we know that the average consumption during that period was 0.5 W. And when we get one pulse per second, then the average (but near-instantaneous) power consumption over that second must have been 1800 W.

So the proper way to calculate the actual power consumption, is to use this formula:

    power = 1,800,000 / time-since-last-pulse-in-milliseconds

Which is exactly how I’ve been measuring power for the past few years here at JeeLabs. The pulses represent energy consumption (i.e. kWh), whereas the time between pulses represents estimated actual power use (i.e. W).
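
In sketch form, that boils down to just a timestamp per pulse – a minimal illustration, assuming a 2000 pulse/kWh counter wired up to external interrupt 1 (the pin choice is arbitrary here):

volatile unsigned long lastPulse, pulseGap;  // all times in milliseconds

void onPulse () {
  unsigned long now = millis();
  pulseGap = now - lastPulse;    // time between this pulse and the previous one
  lastPulse = now;
}

void setup () {
  Serial.begin(57600);
  attachInterrupt(1, onPulse, FALLING);   // counter output on INT1 (digital pin 3)
}

void loop () {
  noInterrupts();
  unsigned long gap = pulseGap;  // copy atomically, the ISR may update it any time
  interrupts();
  if (gap > 0)
    // 2000 pulses/kWh -> 0.5 Wh = 1800 J per pulse -> watts = 1,800,000 / ms
    Serial.println(1800000UL / gap);
  delay(3000);
}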

This type of measurement has the nice benefit of being more accurate at lower power levels (because we then divide by a larger and relatively more accurate number of milliseconds).

But this is at the same time also somewhat of a drawback: at low power levels, pulses are not coming in very often. In fact, at 100 W, we can expect one pulse every 18 seconds. And that’s exactly what the above graph is showing: less frequent pulses at low power levels.

Still, the graph is absolutely correct: the shaded area corresponds exactly to the energy consumption (within the counter’s measurement tolerances, evidently). And the line drawn as the boundary at the top of that area represents the best estimate we have of instantaneous power consumption across the entire time period.

Odd, but accurate. This effect goes away once the data is aggregated over longer periods of time.

Tricky graphs

In Software on May 12, 2013 at 00:01

Yesterday’s post introduced the new graph style I’ve added to HouseMon. As mentioned, I am using step-style graphs for the power (i.e. Watt) based displays. Zooming in a bit on the one shown yesterday, we get:

Screen Shot 2013-05-10 at 19.21.18

There’s something really odd going on here, as can be seen in the blocky lines up until about 15:00. The blocks indicate that there are many missing data points in the measurement data (normally there should be several measurements per minute). But in themselves these rectangular shapes are very accurate: the area is Watt times duration, which is the amount of energy (usually expressed in Joules, Wh, or kWh).

So rectangles are in fact exactly right for graphing power levels: we want to represent each measurement as an area which matches the amount of energy used. If there are fewer measurements, the rectangles will be wider, but the area will still represent our best estimate of the energy consumption.

But not all is well – here’s a more detailed example:

Screen Shot 2013-05-10 at 19.17.03

The cursor in the screenshot indicates that the power usage is 100.1 W at 19:16:18. The problem is that the rectangle drawn is wrong: it should precede the cursor position, not follow it. The measurement is the result of sampling power and reporting the accumulated level at the end of that measurement period. So the rectangle which is 100.1 W high should really end at 19:16:18, i.e. start at the time of the previous measurement.

I have yet to find a graphing system which gets this right.

Tomorrow: another oddity, not in the graph, but related to how power is measured…

New (Dy)graphs in HouseMon

In Software on May 11, 2013 at 00:01

I’ve switched to the Dygraphs package in HouseMon (see the development branch). Here’s what it now looks like:

Screen Shot 2013-05-10 at 19.15.08

(click for the full-scale resolution images)

Several reasons to move away from Flotr2, actually – it supports more line-chart modes, it’s totally geared towards time-series data (no need to adjust/format anything), and it has splendid cursor and zooming support out of the box. Oh, and swipe support on tablets!

Zooming in both X and Y directions can be done by dragging the mouse inside the graph:

Screen Shot 2013-05-10 at 19.16.19

You’re looking at step-style line drawing, by the way. There are some quirks in this dataset which I need to figure out, but this is a much better representation for energy consumption than connecting dots by straight lines (or what would be even worse: bezier curves!).

Temperatures and other “non-rate” entities are better represented by interpolation:

Screen Shot 2013-05-10 at 19.17.40

BTW, it’s surprisingly tricky to graphically present measurement data – more tomorrow…

Maxing out the Hameg scope

In AVR, Hardware on May 10, 2013 at 00:01

Yesterday’s post was about how test equipment can differ not only in terms of hardware, but also the software/firmware that comes with it (anyone hacking on the Owons or Rigols yet to make the software more feature-full?).

Here’s another example, where I’m using just about all the bells and whistles of the Hameg HMO series scopes – not for the heck of it, but because it really can help gain more insight about the circuit being examined.

This is my second attempt at understanding what sort of start up currents need to be available for the new JeeNode Micro v3 to properly power up:

SCR12

I’m applying a 0..2V power-up ramp (yellow line) as power supply, using a 1 Hz sawtooth signal. This again simulates an energy harvesting setup where the power supply slowly ramps up (the real thing would actually rise far more slowly, e.g. when using a solar cell + supercap). The current consumed by the JNµ v3 (blue line) is obtained by measuring the voltage drop across a 10 Ω resistor – as usual.

The current consumption starts at about 0.85V and rises until the power supply reaches about 1.4V. At that point, the current consumption is about 77 µA. Then the ATtiny84A comes out of reset, enters a very brief high-current mode (much higher than the peak shown, but this is averaged out), and then goes into ultra low-power sleep mode. The sketch running on the JNµ is the latest power_down.ino, here in simplified form:

#include <JeeLib.h>

void setup () {
    cli();
    Sleepy::powerDown();
}

void loop () {}

Note that since this is the new JNµ v3, the RFM12B module never even gets powered up, so there’s no need to initialise the radio and then put it in sleep mode.

The red line uses the Hameg’s advanced math features to perform digital filtering on top of the averaging already performed during acquisition: the averaging keeps the power-up spike visible (albeit distorted), at the cost of leaving some residual noise in the blue trace, while the IIR digital low-pass filter applied to that result then makes it possible to estimate the 77 µA current draw just before the ATtiny84A starts running.

Here’s the zoomed-in view, showing the interesting segment in even more detail:

SCR14

The IIR filtering seen here is slightly different, with a little spike due to the following power-up spike, so the 86 µA reported here is slightly on the high side.

Note how the Hameg’s storage, high sensitivity, averaging, adjustable units display, variable vertical scale, math functions, on-screen measurements, on-screen cursors, and zooming all come together to produce a pretty informative screen shot of what is going on. Frankly, I wouldn’t know how to obtain this level of info in any other way.

So what’s all this fuss about measuring that 77 µA level?

Well, this is how much current the JNµ draws before it starts running its code. There’s nothing we can do to reduce this power consumption further until it reaches this point. In the case of energy harvesting, the supply – no matter how it’s generated – will have to be able to briefly deliver at least 77 µA to overcome the startup requirements. If it doesn’t, then the supply voltage (presumably a supercap or rechargeable battery) will never rise further, even though a JNµ can easily draw less than a tenth of that current once it has started up and goes into ultra-low power with brief occasional wake-ups to do the real work.

What I’m taking away from this, is that a solar energy setup will need to provide at least 0.1 mA somewhere during the day to get a JNµ started up. Once that happens, and as long as there is enough power to supply the average current needed, the node can then run at lower power levels. But whenever the supercap or battery runs out, another period with enough light is needed to generate that 0.1 mA again.

It all sounds ridiculously low, and in a way it is: 0.1 mA could be supplied for over two years by 3 AA batteries. The reason for going through all this trouble, is that I’d really like to find a way to run off indoor solar energy light levels. Even if it only works near a window, it would completely remove the need for batteries. It would allow me to just sprinkle such nodes where needed and collect data … forever.

Oscilloscope firmware

In Hardware on May 9, 2013 at 00:01

Oscilloscopes are very complex instruments. The “front end” is all about being able to capture a huge range of signals over a huge range of speeds. This is what lets you hook up the same probe to AC mains one day and pick up millivolt signals another day, or collect many minutes of data on a single screen vs. displaying the shape of a multi-MHz wave. This isn’t just about capture: at least as important is the triggering part – how to decide what to pick up for display on the screen.

For low sampling rates, it’s very easy to use an ADC and just collect some data points – as shown in this older weblog post, this even works for AC mains, although triggering can be an issue.

With the Xminilab presented recently, a lot of this has been solved in software, supporting a pretty impressive range of options, even for triggering. The Xminilab is particularly interesting because the full source code is available.

But for serious work, you’ll need an Owon or Rigol scope. These can sample at up to 1 Gsa/s, i.e. one billion samples per second. Truly, truly capable front-ends, able to handle very wide voltage and acquisition speed ranges.

The Hameg HMO2024 is more expensive, and many of its specs are not much better than the Owon (worse even, in some cases: a smaller display size and less sample memory).

The devil is in the details. Here’s a recent screen from the HMO2024 (borders cropped):

SCR11

And here’s my first cut at acquiring the same info on the Owon (click for full size):

20130216_364130

Let me add that I now have lots of experience with the Hameg, and only just started using the Owon, so there might be relevant features I’ve failed to set up in an optimal fashion.

A couple of quick observations:

  • This is not a “typical” measurement setup: a very slow, low-amplitude signal is nothing like the usual measurements one would come across, with higher signal levels, and faster sampling rates. Then again, that’s part of the whole point of an oscilloscope: it’s so versatile that you end up using it in lots of situations!
  • As you can see, the Owon has a lot more pixels to display a signal on, so I was able to increase the voltage sensitivity one notch to get more detail, and capture a bit longer.
  • Some differences are obvious but not that important: the Owon provides less information on-screen about the current settings, and it does not use anti-aliasing for the traces (i.e. intensity variations which smooth the appearance of steep lines).

The two major differences are that: 1) the Hameg lets me apply additional digital signal processing to effectively reduce the random variations and smooth out the signal (this is done after capture, but before drawing things on-screen, i.e. all in software/firmware), and 2) the Hameg includes support for a “reference trace”, i.e. storing a previous trace in its built-in memory, and displaying it in white next to a new capture – to compare power consumption with and without WiFi, in this case.

Note that the Owon capture depth was set to 1,000 samples instead of the maximum 10 Msa, otherwise the screen would have shown a very wide red trace, almost completely swamping out the signal shown on screen. With this reduced setting, the current consumption is still fairly easy to estimate, despite the lack of low-pass filtering.

Is this a show-stopper for the Owon? Not really. It still gives a pretty good impression of the current consumption pattern during startup of the Carambola 2. If you really wanted to improve on this, you could insert an analog filter (a trivial RC filter with just 2 passive components would do). With a bit of extra work, I’m sure you can get at least as good a current consumption graph on the Owon.
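
To give an idea of the sort of RC filter that would do (the component values here are just an assumption for illustration): 1 kΩ in series with the scope input and 1 µF to ground gives a cutoff of 1 / (2π · 1 kΩ · 1 µF) ≈ 160 Hz – low enough to smooth out the fast switching noise, yet fast enough to leave the millisecond-scale startup profile intact.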

The trade-off is (recurring) convenience and setup time vs. (up-front) equipment cost.

PS. The Rigol DS1052E does have a low-pass filter – every scope has different trade-offs!

PPS. For a great view into oscilloscope development over the past 5 years, see Dave Jones’ comparison video of the Rigol DS1052E and the new – phenomenal! – DS2000 series.

What if I turn the chip around?

In Hardware on May 8, 2013 at 00:01

Welcome to the weekly What-If series, also available via the Café wiki.

Ok, you’re all excited, you’ve built some electronic circuit – either by assembling a kit, or all on a breadboard, or perhaps you’ve even gone so far as to design and create a custom PCB.

Any non-trivial circuit will have polarised components on it, whether capacitors, diodes, transistors, regulators, or… the most common one in oh so many varieties: a “chip”, with 6..40 pins, or sometimes even more.

Mr. Murphy loves chips. Because sooner or later, you’ll connect one the wrong way around. Even if you know what you’re doing, sometimes the orientation marker on a chip is fairly hard to see, especially on the smaller SMD types.

So what happens if we put things in the wrong way around?

Obvious answer: it depends (on the chip).

Comforting answer: more often than not, nothing permanent will happen – the thing will get hot, but it’ll still work fine once you fix the problem, i.e. turn the chip around and reconnect it.

The good news is that it’s not so easy to really damage most chips, with a few precautions:

  • use a “weak” power supply, i.e. one which can’t put out too much current, as current leads to heat, and heat is usually the cause of component damage – a lab power supply with adjustable “current limiting” set to a low value is a very good idea
  • keep your hands near the ON/OFF switch when powering up a circuit for the first time, keep your eyes open, and … use your nose: bad stuff due to heat often shows itself as smoke (by then, it’s often too late), and as components getting far too hot, and starting to smell a bit
  • for low-voltage circuits, and this includes almost all digital circuits: place your fingers on several of the key components right after turning power on: if you sense anything getting hot, turn off the power – NOW!
  • sensing heat is an excellent way to save a project from serious damage: we can easily tell if something heats up to 50 °C or more, yet most silicon-based chips will be able to heat up way beyond that before actually getting damaged (125..175 °C) – so as long as you turn the power off quickly enough, chances are that nothing really will break down, and chips and resistors will often start to smell – a useful warning sign!

Note that analog circuits tend to get damaged much more easily. Put a transistor the wrong way around, and it’ll probably go to never-never land the moment power is applied.

One reason digital chips are so resilient, is the fact that they are full of ESD diodes. These tend to be on each of the I/O pins of a chip, as protection against Electrostatic Discharge. Here’s what a typical I/O pin circuit on a digital chip looks like:

JC's Grid, page 72

Nothing happens under normal conditions, since the diodes are all in blocking mode. When the I/O pin voltage rises above VCC or drops below GND, however, the diodes start to conduct, while trying to remove the charge, so that the voltage levels never reach values which might damage the sensitive oxide isolation (that’s the “O” in CMOS and MOSFET).

Now have a look at what happens when a chip gets powered up with bad voltages on two of the I/O pins (the light-blue parts are not conducting and can be ignored):

JC's Grid, page 72 copy

The way to look at this is that the pin(s) with the highest voltage will start feeding into the (internal) VCC connections, and the pins with the lowest voltage will start drawing current from the (internal) GND connections. Or, to put it a different way – some I/O pins will act as VCC and GND supply lines, albeit with some internal ESD diodes in between:

JC's Grid, page 72 copy 2

In this diagram, VCC and GND are fed from pins which were not intended as such!

As you can see, the diodes now start conducting as well, drawing a certain amount of current. If these currents are not higher than the diodes can handle (usually at least a few mA per diode), then the chip will act more or less like a short to the rest of the circuit. With a bit of luck, your power supply will decide to lower its output voltage and enter “current limiting” mode. The result: nothing works, but nothing truly dramatic happens either. It just gets hot and all the voltages end up being completely wrong.

Sooo… next time you power up your new project for the first time: stay alert, use your fingers, be ready to cut power, and… relax. If it doesn’t work right away (it hardly ever does!), you’ll usually have time to figure out the problems, fix them, and get going after all.

Note that there are no guarantees (things do occasionally break), but usually it’s fixable.

Carambola 2 power consumption

In Hardware, Linux on May 7, 2013 at 00:01

The Carambola 2 mentioned yesterday is based on a SoC design which uses amazingly little power – considering that it’s running a full Linux-based OpenWrt setup.

There are a couple of ways to measure power consumption. If all you’re after is the average power on idle, then all you need to do is insert a multimeter in the power supply line and set it in the appropriate milliamp range. Wait a minute or so for the system to start up, and you’ll see that the Carambola 2 draws about 72 mA @ 5V, i.e. roughly a third of a watt.

If you have a lab power supply, you can simply read the power consumption on its display.

But given an oscilloscope, it’s actually much more informative to see what the power consumption graph is, i.e. over time. This will show the startup power use and also allows seeing more detail, since these systems often periodically cycle through different activities.

The setup for “seeing” power consumption is always the same: just insert a small resistor in series with the “Device Under Test”, and measure the voltage drop over that resistor:

JC's Grid, page 51

Except that in this case, we need to use a smaller resistor to keep the voltage drop within bounds. Given that the expected currents will be over 100 mA, a 100 Ω resistor would completely mess up the setup. I found a 0.1 Ω SMD resistor in my lab supplies, so that’s what I used – mounting it on a 2-pin header for convenience:

DSC_4448

With 0.1 Ω, a 100 mA current produces a voltage of 10 mV. This should have a negligible effect on the power supplied to the Carambola 2 (a 1 Ω resistor should also work fine).
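
As a quick sanity check on the shunt value (my own back-of-the-envelope, not from the original setup): at 100 mA the 0.1 Ω resistor dissipates I² · R = 0.1² × 0.1 = 1 mW and drops 10 mV, while a 1 Ω shunt would dissipate 10 mW and drop 100 mV – still negligible compared to the 5V supply, which is why either value works at these current levels.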

Here’s the result on the scope – white is the default setup, yellow is with WiFi enabled:

SCR11

Sure takes all the guesswork out of what the power consumption is doing on startup, eh?

Embedded Linux – Carambola 2

In Hardware, Linux on May 6, 2013 at 00:01

This has got to be one of the lowest-cost and simplest embedded Linux boards out there:

DSC_4447

It’s the Carambola 2 by 8devices.com:

Screen Shot 2013-05-04 at 11.02.06

The 28 x 38 mm (!) bare board is €19 excl VAT and shipping, and the development bundle (as shown above) is €33. The latter has a Carambola2 permanently soldered onto it, with 2 Ethernet ports, a slave USB / console / power port, a USB host port, a WiFi chip antenna (which is no longer on the base board, unlike the original Carambola), and a switching power supply to generate 3.3V from the USB’s 5V.

The processor is a MIPS-based Atheros chip, and with 64 MB RAM and 11 MB of available flash space, there is ample room to pre-populate this board with a lot of files and software.

The convenience of the development setup is that it includes an FTDI chip, so it comes up as a USB serial connection – you just need to find out what port it’s on, connect to it at 115200 baud via a terminal utility such as “screen” on Mac or Linux, and you’ll be hacking around in OpenWrt Linux in no time.
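
On Linux, for example, the FTDI interface will typically show up as something like /dev/ttyUSB0 (the exact device name is a guess – it varies per machine and operating system), so the whole connection step boils down to:

screen /dev/ttyUSB0 115200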

Note that this setup is very different from a Raspberry Pi: MIPS ≠ ARM, for one. The RPi has a lot more performance and RAM, has hardware floating point, and is more like a complete (portable) computer with its HDMI video out port. The benefit of the Carambola 2 is its built-in WiFi, built-in flash, and its low power – more on that tomorrow.

Meet the Owon SDS 7102V – part 2

In Hardware on May 5, 2013 at 00:01

Today’s post continues where we left off yesterday. Here are the front-panel controls:

DSC_4443

Nice and tidy. Absolutely effective, as far as I could establish in my first impressions. As with all modern scopes, there are lots of features behind all those buttons, and many of them lead to “soft menus”, i.e. menus shown on screen (on three sides sometimes: right, left, and bottom). That’s what the right and bottom buttons next to the screen are for. There’s one “multipurpose” rotary encoder knob, which is used when selecting from the occasional menu popping up on the left.

The only downside is that you can end up moving your hands around a lot while setting things up and while making adjustments. Coming from a different brand, I had some trouble remembering where averaging, FFT, trigger settings, etc. were, but that’s bound to get easier over time as muscle memory sets in. Because operating any complex instrument with lots of knobs and features really is about motions and muscle memory. It just takes a bit of time and practice.

One remarkable feature of this scope is its very deep 10 megasamples acquisition depth (it’s adjustable, from 1,000 samples up). This makes it very easy to take a single snapshot of an event, and then to zoom in to see specific events in full detail.

One use would be to decode serial communication signals such as UARTs and I2C data packets. There is no built-in decoding, so this needs to be done manually. Then again, you can save all 10 million samples to a USB stick, so with some software it would be possible to perform such decoding automatically on a standard PC or Mac, albeit after-the-fact.
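
As an illustration of what such after-the-fact decoding could look like, here is a rough sketch – not code from this weblog, and it assumes the saved capture has already been turned into an array of 0/1 logic samples taken at a known sample rate:

#include <cstdio>
#include <vector>

// Decode 8N1 UART frames from a stream of 0/1 samples.
// sampleRate and baud must match the actual capture settings.
std::vector<unsigned char> decodeUart (const std::vector<int>& samples,
                                       double sampleRate, double baud) {
    std::vector<unsigned char> bytes;
    double spb = sampleRate / baud;                   // samples per bit
    size_t i = 1;
    while (i < samples.size()) {
        if (samples[i-1] == 1 && samples[i] == 0) {   // falling edge: start bit
            double mid = i + 0.5 * spb;               // centre of the start bit
            unsigned char value = 0;
            for (int bit = 0; bit < 8; ++bit) {       // 8 data bits, LSB first
                size_t pos = (size_t) (mid + (bit + 1) * spb);
                if (pos >= samples.size()) return bytes;
                if (samples[pos]) value |= 1 << bit;
            }
            size_t stop = (size_t) (mid + 9 * spb);
            if (stop < samples.size() && samples[stop] == 1)
                bytes.push_back(value);               // stop bit ok, keep the byte
            i = (size_t) (mid + 9.5 * spb);           // skip past this frame
        } else
            ++i;
    }
    return bytes;
}

int main () {
    // synthetic test: one 8N1 frame carrying 0x55, at 10 samples per bit
    std::vector<int> s (20, 1);                       // idle high
    auto emit = [&] (int level) { s.insert(s.end(), 10, level); };
    emit(0);                                          // start bit
    for (int bit = 0; bit < 8; ++bit)                 // data bits, LSB first
        emit((0x55 >> bit) & 1);
    emit(1);                                          // stop bit
    for (unsigned char b : decodeUart(s, 10.0, 1.0))
        std::printf("decoded: 0x%02X\n", b);          // prints: decoded: 0x55
}

The same edge-hunting approach extends to I2C, except that there you follow the SCL transitions instead of relying on a fixed baud rate.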

Power consumption is very low: 0.77W standby, 18W when turned on.

You might be wondering how this oscilloscope compares to the Xminilab and the Hameg HMO series – which are about a fifth and five times as expensive, respectively. But with such an extreme price range, it’s impossible to answer this question other than: the more you pay, the more you get. Pretty obvious, and also pretty useless as guideline, I’m afraid.

Would I buy the Xminilab if I had no more than $100 to spend? Yes. While it’s limited and does require a lot more ingenuity and patience, it can still help to understand what’s going on, and to address problems that couldn’t be solved without a scope.

Would I recommend the Owon for serious electronics use? Definitely. It lets you capture all the info you need, and “see” what’s going on – both analog and digitally – for frequencies up to dozens of MHz. With much larger display & more memory than the Rigol DS1052E.

Would I purchase a Hameg HMO series again, even though it’s so darn expensive? Yes. The software, the math features, the logic analyser, and the serial decoding – it all adds up, yet it’s still half the price of the “low end” Agilent models. And, not to be ignored: its (cropped but informative) screenshots are perfect for the 604 pixel width of this weblog!

I’ll explore the capabilities of the Owon SDS 7102V scope in more practical scenarios in the weeks to come. Stay tuned…

Meet the Owon SDS 7102V

In Hardware on May 4, 2013 at 00:01

Here’s another “loaner” from David Menting, this time it’s his scope, the Owon SDS 7102V – which is sales-speak for a dual-channel 100 MHz digital storage oscilloscope:

DSC_4440

This unit is available in the Netherlands from EleShop, for € 450 incl VAT, which makes it only marginally more expensive than the ubiquitous Rigol DS1052E with 320×240 display and 50 MHz bandwidth.

This thing is amazingly thin (total size is 7 x 34 x 16 cm), yet packs an 800 x 600 pixel color LCD screen to present a really detailed display (click to see the full size image):

20130206_765416

In a way, more is better. But keep in mind that the 8-bit ADCs typically used in modern “affordable” scopes can only measure 256 different voltage levels full-scale. To really benefit from 512 or more pixels of vertical resolution, you either need a 9-bit ADC or some sort of oversampling and averaging. Having said that, I would definitely consider 320×240 as low end nowadays – this screen is a huge improvement, in displaying much finer detail as well as in helping estimate voltage levels at a glance.
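
As a rule of thumb (a general ADC property, not something specific to this scope): every factor of four in oversampling, combined with averaging, buys roughly one extra bit of effective resolution – so getting 9 useful bits out of an 8-bit converter costs a 4× reduction in usable bandwidth.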

Here’s an example of just how much screen real-estate this scope has:

20130206_765722

You might recognise the two waveforms above as the 10 MHz and 25 MHz signals generated by my signal generator – same as used in this recent weblog post.

Tomorrow, I’ll show you the front panel and I’ll add some comparative notes…

Instrument limits

In Hardware on May 3, 2013 at 00:01

Last week’s post illustrated some limitations of electronic measuring equipment. In this case, I was using the TTi TG2511 Arbitrary Waveform Generator (which I have yet to use for “arbitrary” waveforms) and the Hameg HMO2024 Digital Storage Oscilloscope.

The TG2511’s rise and fall times are specified in the neighbourhood of 10 ns, which has a fairly atrocious effect on a 25 MHz “square wave” signal:

SCR01

(the scope’s own rise time is under 2 ns)

Both are excellent instruments, but already fairly high-end for hobbyist use. To put it in perspective: the total cost of this sort of equipment is more than a hundred JeeNodes with sensors! Add to that the fact that you only need the higher specs of these instruments once in a while (how often depends of course on your level and depth of interest), and it’s pretty obvious that it can be very hard to justify such expenses.

I’ve always been annoyed by this. And I’ve always been on the lookout for alternatives:

DSC_4438

DSC_2780.jpg

That’s the Xminilab, mentioned recently, and a sine-wave generator from eBay. The total cost for both is around €100.

Unfortunately, lower-end equipment really does have lower-end specifications. The measurements made yesterday could not have been done with the above, for example: sine waves are not square waves, and the 2 megasamples/second of the Xminilab scope is not fast enough to analyse rise times at 1 MHz, let alone 10 MHz.

Tomorrow, I’ll explore (“review” is too big a word for it) a more affordable modern oscilloscope, to show what can and cannot be done with it.

Autotransformer

In Hardware on May 2, 2013 at 00:01

The other day, someone gave me an autotransformer – a hefty 10 kg of metal and wires:

DSC_4444

Made by Philips, probably well over half a century ago (even before Philips had a logo?):

DSC_4445

AC mains did not include grounding at the time, just 2 banana jacks spaced 19 mm apart:

DSC_4446

So what does it do? Well, an autotransformer (a.k.a. Variac) allows you to generate an adjustable AC voltage from the fixed AC mains voltage. At the time, AC mains was 220V – nowadays, it’s 230V in Europe, so the output should now reach 260/220*230 ≈ 272 VAC.

Here’s the schematic, similar to the one printed on the side of this device:

300px-Tapped_autotransformer.svg

(this isn’t fully variable, like the unit above, but the taps are a first approximation)

One way to explain what’s going on – at least as a first approximation – is that it works like a transformer, but with a variable number of turns on the secondary side. Think of the incoming voltage as generating an alternating magnetic field of a certain strength, with X volts per turn. The “tap” (which is a mechanical wiper) makes contact with one of the turns, all of which are laid out in a circular fashion, creating a circuit with a variable number of turns. The more turns, the higher the output voltage.

The intriguing bit is that the output voltage can actually exceed the input voltage, by adding a few more spare turns at the top – or equivalently: by placing the input voltage on a tap and not entirely at the end of the coil.
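
To put some illustrative numbers on that (the turn counts are made up, purely to show the ratio at work): suppose the full 230 VAC input is applied across 500 turns, i.e. 0.46 V per turn – a tap at turn 250 then yields about 115 VAC, while letting the wiper travel onto 90 extra turns beyond the input connection yields roughly 590 × 0.46 ≈ 272 VAC, which matches the maximum output mentioned above.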

Note that the output of such an autotransformer is not isolated from the input, unlike regular transformers with separate primary and secondary coils.

The other difference is that part of the energy is not transferred as magnetic flux, but directly through the shared windings. It merely acts “more or less” like a regular transformer, in practical use.

I’m very pleased with this gift, which will allow me to explore the effects of a varying AC mains voltage on all sorts of appliances, power supplies, etc. – from very low voltages to somewhat over the normal 230 VAC.

What if the supply is under 3.3V?

In AVR, Hardware on May 1, 2013 at 00:01

Welcome to the weekly What-If series, also available via the Café wiki.

To follow up on a great suggestion from Martyn, here’s a post about the different trade-offs and implications of running an ATmega at lower voltages.

The standard Arduino Uno, and all models before it, have always operated the ATmega at 5.0V – which used to be the standard TTL level of the 7400 series of chips from the 1960’s and 1970’s. The key benefit of a single standardised voltage level is that it made it possible to combine different chips from different vendors.

To this day, even though most semiconductor logic has evolved from bipolar junction transistors to CMOS, the voltage level has often been kept at 5V, with slightly adjusted – but compatible – voltage levels for “0” and “1”, respectively.

Nowadays, chips operate at lower voltages because it leads to lower power consumption and because it is a better fit for batteries and LiPo cells. In fact, lots of new chips operate at 3.3V and will not even tolerate 5.0V.

The ATmega328p is specified to run over a very wide range, from 1.8V all the way up to 5.5V. Which is great for ultra low-power use, supporting different battery options, and even for energy harvesting scenarios such as a solar panel charging a supercap, for example.

But there are still many trade-offs to be aware of!

The first one is the system clock rate, which is limited (see also this older post):

Screen Shot 2013-04-30 at 21.59.07

If you look closely, you’ll see that 16 MHz is out of spec at 3.3V – which is how the JeeNode runs. In practice, this has never caused any known problems, but lowering the voltage further might just be too much.

The good news is that it’s not really the crystal oscillator which is causing problems, but the main circuitry of the ATmega, and that there’s a very easy fix for it: when running at voltages below 3.3V, you should set the ATmega’s pre-scaler to 2, causing the system clock to run at 8 MHz. When running even lower, perhaps under 2.4 V or so, set the pre-scaler to 4, i.e. run the system clock at 4 MHz. This also explains the presence of the divide-by-8 fuse bit: when you need to start up at low voltages, you can force the ATmega to always power up with a clock pre-scaler of 8, and then adjust the pre-scaler under software control after power-up, once the voltage has been verified to be sufficient. Without this setting, an ATmega would not be able to reliably start up at 1.8V, even if it’s meant to run much faster most of the time.
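
Here’s a minimal sketch of what that run-time adjustment could look like – just an illustration of the clock_prescale_set() call from avr-libc, not code taken from a specific JeeLib example:

#include <avr/power.h>

void setup () {
    // assumption: the board was compiled for a 16 MHz crystal
    clock_prescale_set(clock_div_2);    // divide the clock by 2, i.e. run at 8 MHz
    // at even lower supply voltages, clock_div_4 (4 MHz) would be the safer choice
    // note: millis() and delay() now run at half speed, since the code was
    // compiled with F_CPU still set to 16 MHz
}

void loop () {}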

Note that the RF12 driver will still work at 4 MHz, but not less: the interrupt service time will be too slow for proper operation at any slower rate.

Another important issue to be aware of when running a JeeNode at voltages under 3.3V is that the MCP1702 voltage regulator will no longer be able to regulate the incoming voltage. It can only reduce the input voltage for regulation, so when there is no “headroom” left, the regulator will just pass on whatever is left, minus a small “dropout” voltage difference. Hence its name, LDO: Low-DropOut.

The problem is that all LDO regulators start consuming (i.e. wasting) more idle current in this situation. See this weblog post for the measured values – which can be substantial!

To avoid this, you should really disconnect the LDO altogether, or at least its ground pin.

A third aspect of running at lower voltages, is that you need to verify that all the parts of the circuit continue to work. This applies to sensors as well as to the RFM12B radio – which should only be operated between 2.2V and 3.8V.

Actually, some experiments a while back showed that the radio could work down to 1.85V, but I suspect that things like transmit power will be greatly reduced at such supply levels.

Lastly, when the supply voltage is lowered, you need to keep some secondary effects in mind: the ADC will operate against a lower reference voltage as well, so its scale will change. One ADC step is roughly VCC/1024, i.e. about 3.2 mV with a 3.3V supply, dropping to about 2.0 mV per step with a 2.0V supply.

To summarise: yes, an ATmega/ATtiny can run at voltages below 3.3V, and even an entire JeeNode can, but you need to reduce the system clock by switching the pre-scaler to 2 or 4, and you need to make sure that all parts of your setup can handle these lower voltages.