Computing stuff tied to the physical world

Archive for June 2012

New HYT131 sensor

In Hardware, Software on Jun 30, 2012 at 00:01

The SHT11 sensor used on the room board is a bit pricey, and the bulk source I used in the past no longer offers them. The good news is that there’s a HYT131 sensor with the same accuracy and range. And the same pinout:

DSC 3353

This sensor will be included in all Room Board kits from now on.

It requires a different bit of software to read out, but the good news is that this is now standard I2C and that the code no longer needs to do all sorts of floating point calculations. Here’s a test sketch I wrote to access it:

Screen Shot 2012 06 30 at 00 23 28

And here’s some sample output (measurements increase when I touch the sensor):

Screen Shot 2012 06 28 at 15 39 55

As you can see, this reports both temperature and humidity with a resolution of 0.1 unit (i.e. 0.1 °C and 0.1 %RH). The output gets converted to a float for display purposes, but apart from that no floating point code is needed to use this sensor. This means that the HYT131 sensor is also much better suited for use with the memory-limited JeeNode Micro.
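For reference, the raw-to-physical conversion the sketch performs can be captured in a few lines of integer-only C. The 14-bit scaling follows the HYT131 datasheet; the "tenths of a unit" representation is a choice made here purely to illustrate why no floating point is needed:

```c
#include <stdint.h>

// Convert raw HYT131 readings to tenths of a unit, using integer math only.
// Humidity: 14 bits in the first two bytes (the top 2 bits are status flags).
// Temperature: 14 bits, left-aligned in the last two bytes.
static int16_t hyt131_humidity_tenths(uint16_t raw) {
    raw &= 0x3FFF;                              // strip the status bits
    return (int16_t)((1000UL * raw) >> 14);     // 0..16383 -> 0.0..100.0 %RH
}

static int16_t hyt131_temp_tenths(uint16_t raw) {
    raw >>= 2;                                  // drop the two unused low bits
    return (int16_t)((1650UL * raw) >> 14) - 400;  // -40.0 .. +125.0 degC
}
```

So a mid-scale raw humidity reading of 8192 comes out as 500, i.e. 50.0 %RH – floats only enter the picture at display time.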

It’ll take a little extra work to integrate this into the roomNode sketch, but as far as I’m concerned that’ll have to wait until after the summer break. Which – as far as this weblog goes – will start tomorrow.

One more post tomorrow, and then we “Northern Hemispherians” all get to play outside for a change :)

Update – I’ve uploaded the datasheet here.

Assembling the EmonTX

In Hardware on Jun 29, 2012 at 00:01

The guys at OpenEnergyMonitor – hi Glyn and Trystan! – have been working on a number of open source energy monitoring kits for some time now. With solar panels coming here soon, I thought it’d be nice to try out their EmonTX unit – which is partly derived from a bunch of stuff here at JeeLabs. Here’s the kit I got recently:

DSC 3351

Following these excellent instructions, assembly was a snap (I added the 868 MHz RFM12B wireless module):

DSC 3352

Whee, assembling kits is fun! :)

I had some 30A current clamps from SeeedStudio lying around anyway, so that’s what I’ll be using.

The transformer is a 9 VAC type, to help the system detect zero crossings, so that real power and the power factor can be calculated. Unfortunately, this transformer doesn’t (yet) power the system (but it now looks like it might in a future version), so this thing also needs either FTDI or USB to power it.

Here are my first updated measurement results, using the voltage_and_current.ino sample sketch in EmonLib:

    0.27 4.17 260.39 0.02 0.06 
    -0.02 0.71 260.77 0.00 -0.03 
    0.03 0.71 260.74 0.00 0.04 
    -0.02 0.71 260.56 0.00 -0.03 
    0.02 0.71 260.63 0.00 0.03 
    -100.93 105.81 260.88 0.41 -0.95 
    -97.68 102.16 260.94 0.39 -0.96 
    -99.07 104.42 260.98 0.40 -0.95 
    -97.15 102.57 260.74 0.39 -0.95 
    -97.69 102.29 260.91 0.39 -0.95 

The values printed out are:

  • realPower
  • apparentPower
  • Vrms
  • Irms
  • powerFactor

These readings were made with a clamp on one wire of a 25W lightbulb load – first off, then on. The mains voltage estimated from the 9V transformer is a bit high – it’s usually about 230V around here. My plan is to measure and report two independent power consumers and one producer (the solar panel inverter), so I’ll dive into this in more detail anyway. But that’ll have to wait until after the summer break.

Speaking of which: the June discount ends tomorrow, just so you know…

Update – I have disconnected the burden resistors, since the SCT-013-030 has one built in. See comments.

TK – Measuring milli-ohms

In Hardware on Jun 28, 2012 at 00:01

Welcome to the Thursday Toolkit series, about tools for building Physical Computing projects.

Today, I’d like to present a nifty new arrival here at JeeLabs, called the Half Ohm:

DSC 3348

It’s a brilliant little tool (with a cute name). What it does is convert milliohms to millivolts, and it works roughly in the range 0 .. 500 mΩ. Hence the name. On the back is a coin cell, and there’s a tiny on-off switch. Luckily, it’s hard to forget to turn the thing off, because a bright red LED stays lit while it’s powered up.

Milliohms are tricky. Fortunately, Jaanus Kalde, who created this tool, has it all explained on his website.

So what can you do with a milliohm meter? Well, measure the resistance of your test leads, for one:

DSC 3349

That’s 22.8 mΩ, i.e. 0.0228 Ω. Not surprising, because (almost) every conductor has some resistance.

Being able to measure such low resistances can be extremely useful to find shorts. For example, when using longer test leads, I can see that their own resistance is 74 mΩ. And with that knowledge, we can measure PCB traces:

DSC 3350

Turns out that in this case I got about 112 mΩ, which means that this little 5 cm stretch of copper on the PCB has 37 mΩ resistance. And sure enough, the other ground pins have less resistance when the path is shorter, and more when the path is longer. This is very logical – note also that between the GND pins I’ll be measuring relatively low values because of the relatively “fat” ground plane, which reduces overall DC resistance.

To find shorts, we simply measure the resistance between any two points (with no power connected, of course). If the measured value is close to 75 mΩ, then it’s a short. If it’s well above that – say 150 mΩ or more – then it’s definitely not a short.

To locate a short circuit, we can now simply move probes towards the direction of lowest resistance.
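The lead-resistance bookkeeping from the last few paragraphs fits in two tiny helpers. The exact cut-off for “close to” is a judgment call – the 75 and 150 mΩ figures are the ones from above:

```c
// All values in milliohms. LEADS is the resistance of the test leads,
// measured by simply touching the probes together (75 mOhm here).
enum { LEADS = 75 };

// Resistance of the thing actually being probed.
static int trace_milliohms(int measured) {
    return measured - LEADS;
}

// Classify a raw reading: 1 = short, 0 = not a short, -1 = inconclusive.
static int is_short(int measured) {
    if (measured < LEADS + 25)  return 1;   // close to the lead resistance
    if (measured >= 150)        return 0;   // well above it: not a short
    return -1;                              // grey zone: keep probing
}
```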

Looks like the Half Ohm will be a great help for this type of hard-to-isolate problem!

PS. No need to do the subtraction in your head if you have a multimeter which supports relative measurements.

Edge interrupts

In AVR, Hardware, Software on Jun 27, 2012 at 00:01

To continue yesterday’s discussion about level interrupts, the other variant is “edge-triggered” interrupts:

JC s Grid page 22

In this case, each change of the input signal will trigger an interrupt.

One problem with this is that it’s possible to miss input signal changes. Imagine applying a 1 MHz signal, for example: there is no way for an ATmega to keep up with such a rate of interrupts – each one will take several µs.

The underlying problem is a different one: with level interrupts, there is sort of a handshake taking place: the external device generates an interrupt and “latches” that state. The ISR then “somehow” informs the device that it has seen the interrupt, at which point the device releases the interrupt signal. The effect of this is that things always happen in lock step, even if it takes a lot longer before the ISR gets called (as with interrupts disabled), or if the ISR itself takes some time to process the information.

With edge-triggered interrupts, there’s no handshake. All we know is that at least one edge triggered an interrupt.

With “pin-change interrupts” in microcontrollers such as the ATmega and the ATtiny, things get even more complicated, because several pins can all generate the same interrupt (any of the PD0..PD7 pins, for example). And we also don’t get told whether the pin changed from 0 -> 1 or from 1 -> 0.

The ATmega328 datasheet has this to say about interrupt handling (page 14):

Screen Shot 2012 06 26 at 23 08 46

(note that the “Global Interrupt Enable” is what’s being controlled by the cli() and sei() instructions)

Here are some details about pin-change interrupts (from page 74 of the datasheet):

Screen Shot 2012 06 26 at 23 13 01

The way I read all the above is that a pin change interrupt gets cleared the moment its associated ISR is called.

What is not clear, however, is what happens when another pin change occurs before the ISR returns. Does this get latched and generate a second interrupt later on, or is the whole thing lost? (that would seem to be a design flaw)

For the RF12 driver, to be able to use pin-change interrupts instead of the standard “INT0” interrupt (used as level interrupt), the following is needed:

  • every 1 -> 0 pin change needs to generate an interrupt so the RFM12B can be serviced
  • every 0 -> 1 pin change can be ignored

The current code in the RF12 library is as follows:

Screen Shot 2012 06 26 at 23 19 59

I made that change from an “if” to a “while” recently, but I’m not convinced it is correct (or that it even matters here). The reasoning is that servicing the RFM12B will clear the interrupt, and hence immediately cause the pin to go back high. This happens even before rf12_interrupt() returns, so the while loop will not run a second time.

The above code is definitely flawed in the general case when more I/O pins could generate the same pin change interrupt, but for now I’ve ruled that out (I think), by initializing the pin change interrupts as follows:

Screen Shot 2012 06 26 at 23 23 28

In prose:

  • make the RFM12B interrupt pin an input
  • enable the pull-up resistor
  • allow only that pin to trigger pin-change interrupts
  • as last step, enable that pin change interrupt
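Stripped of the AVR specifics, the handshake in that “while” loop can be followed (and exercised) off-chip with plain variables – the pin behavior and rf12_interrupt() below are stand-ins for the real hardware and driver, just to show why the loop terminates: servicing the module releases the line before the ISR returns:

```c
#include <stdint.h>

// Stand-ins for the AVR's interrupt pin and the RF12 driver: on real
// hardware, pin_state would be a read of the RFM12B IRQ pin in PIND.
static uint8_t pin_state = 1;     // 1 = line idle (high), 0 = request pending
static int service_count = 0;

static void rf12_interrupt(void) {
    ++service_count;
    pin_state = 1;   // servicing the RFM12B clears its interrupt condition,
                     // which releases the line back to high
}

// Body of the pin-change ISR: keep servicing as long as the line is low,
// so a request that arrives mid-service is not lost.
static void pin_change_isr(void) {
    while (pin_state == 0)
        rf12_interrupt();
}
```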

Anyway – I haven’t yet figured out why the RF12 driver doesn’t work reliably with pin-change interrupts. It’s a bit odd, because things do seem to work most of the time, at least in the setup I tried here. But that’s the whole thing with interrupts: they may well work exactly as intended 99.999% of the time. Until the interrupt happens in some particular spot where the code cannot safely be interrupted and things get messed up … very tricky stuff!

Level interrupts

In Hardware, Software on Jun 26, 2012 at 00:01

The ATmega’s pin-change interrupt has been nagging at me for some time. It’s a tricky beast, and I’d like to understand it well to try and figure out an issue I’m having with it in the RF12 library.

Interrupts are the interface between real world events and software. The idea is simple: instead of constantly having to poll whether an input signal changes, or some other real-world event occurs (such as a hardware count-down timer reaching zero), we want the processor to “somehow” detect that event and run some code for us.

Such code is called an Interrupt Service Routine (ISR).

The mechanism is very useful, for two reasons: it’s an effective way to reduce power consumption – go to sleep, and let an interrupt wake up the processor again – and it means we don’t have to keep checking for the event all the time.

It’s also extremely hard to do these things right, because – again – the ISR can be triggered any time. Sometimes, we really don’t want interrupts to get in our way – think of timing loops, based on the execution of a carefully chosen number of instructions. Or when we’re messing with data which is also used by the ISR – for example: if the ISR adds an element to a software queue, and we want to remove that element later on.

The solution is to “disable” interrupts, briefly. This is what “cli()” and “sei()” do: clear the “interrupt enable” and set it again – note the double negation: cli() prevents interrupts from being serviced, i.e. an ISR from being run.
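The queue case is the canonical example. Here’s a minimal single-producer ring buffer in which the consumer needs exactly such a cli()/sei() bracket on a real ATmega – shown as comments here, since the race itself can’t be demonstrated on a PC:

```c
#include <stdint.h>

#define QSIZE 16
static volatile uint8_t slots[QSIZE];
static volatile uint8_t qhead, qtail;

// Called from the ISR: add one element (single producer).
static void isr_push(uint8_t v) {
    slots[qtail] = v;
    qtail = (uint8_t)((qtail + 1) % QSIZE);
}

// Called from the main loop: remove one element. On a real ATmega the
// cli()/sei() pair keeps the ISR from touching the queue between the
// emptiness test and the head update.
static int main_pop(uint8_t *v) {
    /* cli(); */
    if (qhead == qtail) { /* sei(); */ return 0; }   // queue is empty
    *v = slots[qhead];
    qhead = (uint8_t)((qhead + 1) % QSIZE);
    /* sei(); */
    return 1;
}
```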

But this is where it starts to get hairy. Usually we just want to prevent an interrupt from happening right now – but we still want it to happen. And this is where level interrupts and edge interrupts differ.

A level interrupt triggers as long as an I/O signal has a certain level (0 or 1) and works as follows:

JC s Grid page 22

Here’s what happens at each of those 4 points in time:

  1. an external event triggers the interrupt by changing a signal (it’s usually pulled low, by convention)
  2. the processor detects this and starts the ISR, as soon as its last instruction finishes
  3. the ISR must clear the source of the interrupt in some way, which causes the signal to go high again
  4. finally, the ISR returns, after which the processor resumes what it had been doing before

The delay from (1) to (2) is called the interrupt latency. This value can be extremely important, because the worst case determines how quickly our system responds to external interrupts. In the case of the RFM12B wireless module, for example, and the way it is normally set up by the RF12 code, we need to make sure that the latency remains under 160 µs. The ISR must be called within 160 µs – always! – else we lose data being sent or received.

The beauty of level interrupts is that they can deal with occasional cli() .. sei() interrupt disabling intervals. If interrupts are disabled when (1) happens, then (2) will not be started. Instead, (2) will be started the moment we call sei() to enable interrupts again. It’s quite normal to see interrupts being serviced right after they are enabled!

The thing about these external events is that they can happen at the most awkward time. In fact, take it from me that such events will happen at the worst possible time – occasionally. It’s essential to think all the cases through.

For example: what happens if an interrupt were to occur while an ISR is currently running?

There are many tricky details. For one, an ISR tends to require quite a bit of stack space, because that’s where it saves the state of the running system when it starts, and then restores that state when it returns. If we supported nested interrupts, then stack space would at least double, and could easily grow beyond the limited amount of RAM available in a microcontroller such as an ATmega or ATtiny.

This is one reason why the processor logic which starts an ISR also disables further interrupts. And re-enables interrupts after returning. So normally, during an ISR no other ISRs can run: no nested interrupt handling.

Tomorrow I’ll describe how multiple triggers can mess things up for the other type of hardware interrupt, called an edge interrupt – this is the type used by the ATmega’s (and ATtiny’s) “pin-change interrupt” mechanism.

… and pies

In Hardware on Jun 25, 2012 at 00:01

This post follows up on yesterday’s first dive into the Raspberry Pi computer.

I switched to a beta of the next Debian release, called Wheezy. Full screen detection works out of the box now:

DSC 3343

Those stripes are artifacts – that’s Moiré kicking in when using CCD pixels to take a picture of TFT LCD pixels.

The idea is to turn this thing into a permanent fixture on the back of this panel – right above my workbench:

DSC 3346

This is using a separate 5V and a 12V supply for now, but I’d like to explore a range of other options:

  • no keyboard or mouse normally attached
  • using a USB WiFi dongle instead of wired Ethernet
  • powered from a single 12V @ 2A DC power brick
  • add a step-down switching regulator to generate 5V
  • maybe: relays to control power to display and computer
  • a PIR motion sensor plus light sensor to detect presence?
  • a JeeNode USB, perhaps used as central node or as OOK relay
  • and maybe also a rechargeable 6V or 12V battery as UPS

The latter depends on whether this setup will become part of the permanent home automation system at JeeLabs.

As the switcher I’ll use a no-name brand from eBay – it delivers 5V at over 1 A and draws about 8 mA without load:

DSC 3345

Lots of pesky little details need to be worked out, such as how to get a Sitecom WLA-1000 USB WiFi dongle working, how to set up what is essentially “kiosk mode”, and how to control the display backlight. I’d also like to hook up an RFM12B directly to the main board, to see how convenient this is and what can be done with it.

There’s a nice article about setting up something quite similar. Long live the sharing culture!

Total system cost should be roughly €100..150, since I recovered the screen from an old Dell laptop.


Raspberries …

In Hardware on Jun 24, 2012 at 00:01

Recently got a Raspberry Pi here. Add keyboard, mouse, display, and USB power adapter and there it goes:

DSC 3341

Oh yes, I did have to put Debian 6 (Squeeze) on a 2 GB SD memory card first, using these instructions.

Note that my 1280×720 screen size isn’t auto-detected, but it’s good enough for now. This is the whole system:

DSC 3342

Next step – switch over to Ethernet:

  • plug in an Ethernet cable, it automatically registers via DHCP
  • log in via SSH, get rid of the greeting (simply create ~/.hushlogin)
  • set up password-less SSH access, yada, yada, yada

Done. Now I can unplug the keyboard and mouse, and insert a JeeLink. It’s recognized right out of the box:

Screen Shot 2012 06 23 at 15 13 48

Does it work? Let’s check with JeeMon (using a runtime built for ARM, of course):

Here’s a quick look at some stats and then a check that JeeMon works:

Screen Shot 2012 06 23 at 15 09 54

Looks good. Now let’s see if the JeeLink is accessible:

Screen Shot 2012 06 23 at 15 15 54

Yup. Done. Total setup time: 5 minutes (plus another 10 for the SD card). Well done, Raspberry Pi!

This draws 14 W in total, including 11 W for the display (when on), plus another 0.4 W when Ethernet is plugged in.

Low power – µA’s in perspective

In AVR, Hardware on Jun 23, 2012 at 00:01

Ultra-low power computing is a recurring topic on this weblog. Hey – it’s useful, it’s non-trivial, and it’s fun!

So far all the experiments, projects, and products have been with the ATmega from Atmel. It all started with the ATmega168, but for some time now everything has been centered around the ATmega328P, where “P” stands for pico power.

There’s a good reason to use the ATmega, of course: it’s compatible with the Arduino and with the Arduino IDE.

With an ATmega328 powered by 3.3V, the lowest practical current consumption is about 4 µA – that’s with the watchdog enabled to get us back out of sleep mode. Without the internal watchdog, i.e. if we were to rely on the RFM12B’s wake-up timer, that power-down current consumption would drop considerably – to about 0.1 µA:

Screen Shot 2012 06 22 at 22 03 30

Whoa, that’s a factor of 40 less! Looks like a major battery-life improvement could be achieved that way!

Ahem… not so fast, please.

As always, the answer is a resounding “that depends” – because there are other power consumers involved, and you have to look at the whole picture to understand the impact of all these specs and behaviors.

First of all, let’s assume that this is a transmit-only sensor node, and that it needs to transmit once a minute. Let’s also assume that sending a packet takes at most 6 ms. The transmitter power consumption is 25 mA, so we have a 10,000:1 sleep/send ratio, meaning that the average current consumption of the transmitter will be 2.5 µA.

Then there’s the voltage regulator. In some scenarios, it could be omitted – but the MCP1702 and MCP1703 used on JeeNodes were selected specifically for their extremely low quiescent current draw of 2 µA.

The RFM12B wireless radio module itself will draw between 0.3 µA and 2.3 µA when powered down, depending on whether the wake-up timer and/or the low-battery detector are enabled.

That’s about 5 to 7 µA so far. So you can see that a 0.1 µA vs 4 µA difference does matter, but not dramatically.

I’ve been looking at some other chips, such as ATXmega, PIC, MSP430, and Energy Micro’s ARM. It’s undeniable that those ATmega328’s are really not the lowest power option out there. The 8-bit PIC18LF25K22 can keep its watchdog running with only 0.3 µA, and the 16-bit MSP430G2453 can do the same at 0.5 µA. Even the 32-bit ARM EFM32TG110 only needs 1 µA to keep an RTC going. And they add lots of other really neat extra features.

In terms of low power, there are at least two more considerations: other peripherals, and battery size / self-discharge.

In a Room Node, there’s normally a PIR sensor to detect and report motion. By its very nature, such a sensor cannot be shut off. It cannot even be pulsed, because a PIR needs a substantial amount of time to stabilize (half a minute or more). So there’s really no other option than to keep it powered on at all times. Well, perhaps you could turn it off at night, but only if you really don’t care what happens then :)

Trouble is: most PIR sensors draw a “lot” of current. Some over 1 mA, but the most common ones draw more like 150..200 µA. The PIR sensor I’ve found for JeeLabs is particularly economical, but it still draws 50..60 µA.

This means that the power consumption of the ATmega really becomes almost irrelevant. Even in watchdog mode.

The other variable in the equation is battery self-discharge. A modern rechargeable Eneloop cell is quoted as retaining 85% of its charge over 2 years. Let’s assume its full charge is 2000 mAh, then that’s 300 mAh loss over 2 years, which is equivalent to about 17 µA of continuous self-discharge.

Again, the 0.1 µA vs 4 µA won’t really make such a dramatic difference, given this figure. Definitely not 40-fold!
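Both back-of-the-envelope figures – the 2.5 µA transmit average and the roughly 17 µA self-discharge – are easy to double-check:

```c
// Average current of a transmitter that is on for on_ms out of every
// period_ms, converted from mA to uA.
static double avg_current_uA(double on_mA, double on_ms, double period_ms) {
    return on_mA * 1000.0 * on_ms / period_ms;
}

// Continuous current equivalent to losing loss_mAh of charge over a
// number of years (taken as 365 days each).
static double self_discharge_uA(double loss_mAh, double years) {
    return loss_mAh * 1000.0 / (years * 365.0 * 24.0);
}
```

25 mA for 6 ms out of every minute gives 2.5 µA, and a 300 mAh loss over 2 years works out to a hair over 17 µA – matching the figures above.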

As you can see, every microamp saved will make a difference, but in the grand scheme of things, it won’t double a battery’s lifetime. There’s no silver bullet, and that Atmel ATmega328 remains a neat Arduino-compatible option.

That doesn’t mean I’m not peeking at other processors – even those that don’t have a multi-platform IDE :)

Structured data

In Software, Musings on Jun 22, 2012 at 00:01

As hinted at yesterday, I intend to use the ZeroMQ library as foundation for building stuff on. ZeroMQ bills itself as “The Intelligent Transport Layer”, and frankly, I’m inclined to agree. Platform and vendor agnostic. Small. Fast.

So now we’ve got ourselves a pipe. What do we push through it? Water? Gas? Electrons?

Heh – none of the above. I’m going to push data / messages through it, structured data that is.

The next can of worms: how does a sender encode structured data, and how does a receiver interpret those bytes? Have a look at this Comparison of data serialization formats for a comprehensive overview (thanks, Wikipedia!).

Yikes, too many options! This is almost the dreaded language debate all over again…

Ok, I’ve travelled the world, I’ve looked around, I’ve pondered on all the options, and I’ve weighed the ins and outs of ’em all. In the name of choosing a practical and durable solution, and to create an infrastructure I can build upon. In the end, I’ve picked a serialization format which most people may have never heard of: Bencode.

Not XML, not JSON, not ASN.1, not, well… not anything “common”, “standard”, or “popular” – sorry.

Let me explain, by describing the process I went through:

  • While the JeeBus project ran, two years ago, everything was based on Tcl, which has implicit and automatic serialization built-in. So evidently, this was selected as mechanism at the time (using Tequila).

  • But that more or less constrains all inter-operability to Tcl (similar to using pickling in Python, or even – to some extent – JSON in JavaScript). All other languages would be second-rate citizens. Not good enough.

  • XML and ASN.1 were rejected outright. Way too much complexity, serving no clear purpose in this context.

  • Also on the horizon: JSON, a simple serialization format which happens to be just about the native source code format for data structures in JavaScript. It is rapidly displacing XML in various scenarios.

  • But JSON is too complex for really low-end use, and requires a relatively large amount of effort and memory to parse. It’s based on reserved characters and an escape character mechanism. And it doesn’t support binary data.

  • Next in the line-up: Bernstein’s netstrings. Very elegant in its simplicity, and requiring no escape convention to get arbitrary binary data across. It supports pre-allocation of memory in the receiver, so datasets of truly arbitrary size can safely be transferred.

  • But netstrings are too limited: only strings, no structure. Zed Shaw extended the concept and came up with tagged netstrings, with sufficient richness to represent a few basic datatypes, as well as lists (arrays) and dictionaries (associative arrays). Still very clean, and now also with exactly the necessary functionality.

  • (Tagged) netstrings are delightfully simple to construct and to parse. Even an ATmega could do it.

  • But netstrings suffer from memory buffering problems when used with nested data structures. Everything sent needs to be prefixed with a byte count. That means you have to either buffer or generate the resulting byte sequence twice when transmitting data. And when parsed on the receiver end, nested data structures require either a lot of temporary buffer space or a lot of cleverness in the reconstruction algorithm.

  • Which brings me to Bencode, as used in the – gasp! – BitTorrent protocol. It does not suffer from netstring’s nested size-prefix problems or nested decoding memory use. It has the interesting property that any structured data has exactly one representation in Bencode. And it’s trivially easy to generate and parse.

Bencode can easily be used with any programming language (there are lots of implementations of it, new ones are easy to add), and with any storage or communication mechanism. As for the BitTorrent tie-in… who cares?
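To give an idea of just how trivial the format is to generate, here’s a toy encoder for the two scalar types – integers become i…e, byte strings get a length prefix, and lists and dictionaries merely wrap their encoded elements in l…e and d…e. A sketch, not one of the existing implementations:

```c
#include <stdio.h>
#include <string.h>

// Append the Bencode form of an integer to buf: i<number>e
static void benc_int(char *buf, long v) {
    sprintf(buf + strlen(buf), "i%lde", v);
}

// Append the Bencode form of a string to buf: <length>:<bytes>
static void benc_str(char *buf, const char *s) {
    sprintf(buf + strlen(buf), "%zu:%s", strlen(s), s);
}
```

Encoding the list [“spam”, 42] is then just a matter of emitting “l”, the two elements, and “e” – producing l4:spami42ee, with no escaping anywhere.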

So there you have it. I haven’t written a single line of code yet (first time ever, but it’s the truth!), and already some major choices have been set in stone. This is what I meant when I said that programming language choice needs to be put in perspective: the language is not the essence, the data is. Data is the center of our information universe – programming languages still come and go. I’ve had it with stifling programming language choices.

Does that mean everybody will have to deal with ZeroMQ and Bencode? Luckily: no. We – you, me, anyone – can create bridges and interfaces to the rest of the world in any way we like. I think HouseAgent is an interesting development (hi Maarten, hi Marco :) – and it now uses ZeroMQ, so that might be easy to tie into. Others will be using Homeseer, or XTension, or Domotiga, or MisterHouse, or even… JeeMon? But the point is, I’m not going to make a decision that way – the center of my universe will be structured data. With ZeroMQ and Bencode as glue.

And from there, anything is possible. Including all of the above. Or anything else. Freedom of choice!

Update – if the Bencode format were relaxed to allow whitespace between all elements, then it could actually be pretty-printed in an indented fashion and become very readable. Might be a useful option for debugging.

TK – Programming

In Software on Jun 21, 2012 at 00:01

Welcome to the Thursday Toolkit series, about tools for building Physical Computing projects.

Time for a different subject. All the tools discussed so far have been about electronics and computing hardware.

But what about the flip side of that computing coin – software?

As mentioned recently, software has no “fixed point”. There’s no single center of its universe. Everything is split across the dimension of programming language choice. We’re operating in a computing world divided by language barriers – just like in the real world.

Here’s the language divide, as seen on GitHub (graph extracted from this site):

Screen Shot 2012 06 20 at 19 51 36

Here’s another one, from the TIOBE Programming Community Index:

Screen Shot 2012 06 20 at 22 28 27

(note the complete lack of correlation between these two examples)

It’s easy to get carried away by this. Is “my” language up? Or is it down? How does language X compare to Y?


Programming language choice (as in real life, with natural languages) has huge implications, because to get to know a language really well, you have to spend 10,000 hours working with it. Maybe 9,863 if you try really hard.

As we learn something, we get better at it. As we get better at something, we become more productive with it.

So… everyone picks one (or perhaps a few) of the above languages and goes through the same process. We learn, we evolve, and we gain new competences. And then we find out that it’s a rabbit hole: languages do not inter-operate at a very low level. One of the best ways to inter-operate with other software these days is probably something called ZeroMQ: a carefully designed low-fat interface at the network-communication level.

The analogy with real-world spoken languages is intriguing: we all eat bread, no matter what our nationality is or which language we speak (bear with me, I’m simplifying a bit). We can walk into a shop in another country, and we’ll figure out a way to obtain some bread, because the goods and the monetary exchange structure are both bound to be very similar. Language will be a stumbling block, but not a show stopper. We won’t starve.

In the same way, you can think of information exchanges as bread. If we define appropriate data structures and clear mappings to bits and bytes, then we can get them from one system to the other via libraries such as ZeroMQ.

Which brings me to the point I’m trying to make here: programming language choice is no longer a key issue!

What matters, are the high-level data structures we come up with and the protocols (in a loosely defined manner) we use for the interactions. The bread is what it’s about (data). Money is needed to make things happen (e.g. ZeroMQ), and programming languages are going to differ and change over time anyway – so who cares.

We should stop trying to convince each other that everything needs to be written in one programming language. Humanity has had plenty of time to deal with similar natural language hurdles, and look where we stand today…

I feel reasonably qualified to make statements about languages. I speak four natural languages more or less fluently, and I’ve programmed in at least half a dozen programming languages for over two years each (some for over a decade, and with three I think I may have passed that 10,000 hour mark). In both contexts, I tend to favor the less widespread languages. It’s a personal choice and it works really well for me. I get stuff done.

Then again, this weblog is written in English, and I spend quite a bit of my time and energy writing in C. That more or less says it all, really: English is the lingua franca of the (Western) internet, and C is the universal language used to implement just about everything else on top of it. That’s what de facto standards are about!

So what will I pick to program in for Physical Computing, embedded micros, laptops, and the web? The jury is still out on that, but chances are that it will not be any of the first 12 languages in either of those two lists above.

But no worries. We’ll still be able to talk to each other and both have fun, and the software I write will be usable regardless of your mother’s tongue – or your father’s programming language :)

Let’s focus on our software design structures, and our data exchange formats. The rest is too ephemeral.


In News on Jun 20, 2012 at 00:01

As you may have seen in a number of discussions on the forum (such as this one), things are still in flux w.r.t. documentation of software / libraries / hardware coming out of JeeLabs.

I’ve been agonizing for ages about this. It’s a recurring theme and it drives me up the wall about once a year.

Like with so many things, ya’ can’t get very high (or far) if you keep changing shoulders to stand on…

The good news is that the wait is over. All documentation and collaborative editing is going to be done with Redmine, the same system which has been driving the Café for some time now. But a few things will change:

  • Redmine has now evolved to version 2.0.3, and I’ll be using subversion to easily track updates
  • page formatting will switch from Markdown to Textile plus Redmine’s own extensions
  • anyone can register to participate, but the process involves an administrator (goodbye, spammers)
  • the system supports generating PDFs, so we can have good web docs and good paper-like docs
  • better support for page hierarchies and automatic page lists, i.e. more high-level structure

Also, I found an excellent theme for the wiki, which gives the whole thing a clean look and nice layout. Like so:

Screen Shot 2012 06 19 at 22 30 35

Clean, minimal, and compatible with all modern browsers, as far as I can tell.

But this isn’t about good looks at all, really. That’s just an enabler, to finally make it worthwhile to pour tons and tons of my time into this.

And now that I have your attention…

The above has been set up, but it’s very, very early days. Things may get (slightly) worse before they get better – i.e. I’m not going to do much more maintenance on the current Café pages – neither the hardware pages, nor the software documentation, nor any of the other wiki pages or user-contributed info.

The reason to announce this here anyway, is that I want to make a really serious effort to get it right. I would like Physical Computing software and hardware such as from JeeLabs to be maximally fun to explore, easy to get acquainted with, fully open to adopt and tweak, and truly, truly, truly effective and practical. No fluff, no nonsense, but a rich resource which improves over time.

I love writing (heck, 1000+ posts ought to have made that clear by now) and as I said, I’m willing to pour lots of time and effort into this. But I can’t do it alone. You can help by telling me what sort of info you need, where you’re coming from, what style and structure would work well for you, and you can help point out the errors, the gaps, the omissions, the mistakes… or anything you don’t agree with and consider substantial enough to bring up.

You can of course also help a lot more than that, by participating in this initiative to get a really good collaborative documentation site going (I’m willing to beg on my knees, bribe you in some innocent way, or pile up the compliments if that helps). Everybody is busy, but I think there is value in trying to coordinate efforts like this.

To put it all in perspective: this new documentation site is not “my” site (other than providing the infrastructure). Even though I’ll probably be one of the main contributors, it’s not anyone’s site, in that nothing on it should be written in first-person form. No I’s and me’s to wonder about who said what. It needs to become everyone’s site, a live knowledge base, with full and honest attribution to everyone who volunteers to get involved.

Let’s get the focus on audience right. Let’s get the structure right. And let’s get the content right. In that order.

Here’s the bad news (yeah, I know, should normally have started off with this) …

No spotlights on this endeavor for the next three months. No fame. No riches. Only blood, sweat, and tears.

This weblog post will remain the only one to draw attention to this documentation challenge. I’m inviting you to participate and help shape things. I hope some of you will find a suitable amount of time, right now or later on.

Note that I haven’t mentioned “code” until now. That’s not because code is irrelevant. On the contrary – part of this work will be to write new code, redo things done so far, and if possible even to “lead by example”. The same goes for technical documentation and for tutorials which can go far beyond just telling what the code does. And for automatically generated documentation from comments or other text files. It all has a place.

But it’s easy to get swamped by it all – as I’ve been for so long – and never reach a practical point. Best thing for me to do now, is to try and pick a single direction for documentation and run with it. You’re welcome to tie your own interests and efforts into this – I’m sure we can figure out ways to make things work nicely together.

One last point – this isn’t really limited to software or hardware from JeeLabs. To me, this whole Jeelabs thing is just an umbrella to go off and play with “Computing Stuff tied to the Physical World”, wherever that leads to. I use this as basis to try and stay focused (hah!) and keep aiming in a somewhat coherent (hah again!) direction.

Wanna help make the above happen? Email me some thoughts and I’ll set up editor access for you.

Disk storage

In Hardware on Jun 19, 2012 at 00:01

I recently stumbled across this ad in Byte Magazine of August 1980:

What a bargain! – Now compare it to this one at NewEgg (yeah, no enclosure or power supply, I know):

Screen Shot 2012 06 17 at 02 23 12

(note how this drive includes more RAM as cache than the total storage capacity of that 1980’s disk!)

Let’s ignore inflation and try to compare storage prices across this 32-year stretch:

  • $4995 for 26 MB is $192 per megabyte
  • $170 for 3 TB is $57 per terabyte – six extra zeros
  • in other words: storage has become ≈ 3.37 million times cheaper
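
These figures are easy to double-check – a quick sanity check in Python, using the prices quoted above:

```python
# 1980: $4995 for 26 MB -- 2012: $170 for 3 TB (3,000,000 MB)
cost_1980_per_mb = 4995 / 26          # ≈ $192 per megabyte
cost_2012_per_tb = 170 / 3            # ≈ $57 per terabyte

# ratio, using the rounded per-unit figures from the list above
ratio = 192 / (57 / 1e6)
print(f"storage became {ratio / 1e6:.2f} million times cheaper")
```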

Then again, hard drives are so passé … it’s all SSD and cloud storage, nowadays.

The amazing bit is not merely the staggering size increase and price reduction, but the fact that this happened within less than a lifetime. Bills, Steves – anyone over 50 will have witnessed this, basically.

Might be useful to think about this when putting our work in context of… a few years down the road from now.

Simple digital oscillator

In Hardware on Jun 18, 2012 at 00:01

As described in this recent post, it should be possible to create a simple fixed frequency oscillator using just a few low-cost components. This could then be used as interrupt source to wake up an ATmega every millisecond or so.

Here’s a first attempt, based on a widely-used circuit, as described in Fairchild’s Application Note 118:

Screen Shot 2012 06 16 at 13 08 02

I used a CD4049 hex inverter, since I had them within easy reach:

DSC 3336

The two resistors are 10 kΩ, the capacitor is 0.1 µF – and here’s what it does:


The yellow trace is VOUT, the blue trace is V1. Pretty stable oscillation at 456 Hz.
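
That value is close to what the usual first-order approximation for this type of two-inverter RC oscillator predicts – f ≈ 1/(2.2·R·C), as given in the app note mentioned above:

```python
R = 10e3      # 10 kΩ resistors
C = 0.1e-6    # 0.1 µF capacitor

f = 1 / (2.2 * R * C)     # classic astable approximation
print(f"{f:.0f} Hz")      # ≈ 455 Hz, vs. the measured 456 Hz
```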

Unfortunately, the current draw is a bit high with these components: 140 µA idle, and 450 µA when oscillating! There would be no point: yesterday’s approach takes half as much current, using just a single 0.1 µF cap.

If someone has a tip for a simple 0.5 .. 1 KHz oscillator which consumes much less power, please let me know…

Low-power waiting – interrupts

In AVR, Hardware on Jun 17, 2012 at 00:01

Following yesterday’s trial, here is the code which uses the pin-change interrupt to run in a continuous cycle:

Screen Shot 2012 06 15 at 17 57 02

The main loop is empty, since everything now runs on interrupts. The output signal is the same, so it’s working.

Could we somehow increase the cycle frequency, to get a higher time resolution? Sure we can!

The above code discharges the cap to 0V, but if we were to discharge it a bit less, it’d reach that 1.66V “1” level more quickly. And sure enough, changing the “50” loop constant to “10” increases the rate to 500 Hz, a 2 ms cycle:


As you can see, the cap is no longer being discharged all the way to 0V. A shorter discharge cycle than this is not practical however, since the voltage does need to drop to a definite “0” level for this whole “cap trick” to work.
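
Assuming the same values as in yesterday’s setup (≈ 49 kΩ internal pull-up, 0.1 µF cap, and a 1.66 V logic threshold at VCC = 3.3 V), we can estimate how far down the cap still gets discharged in this shorter cycle – a back-of-the-envelope figure, not a measurement:

```python
import math

Vcc, Vth = 3.3, 1.66     # supply voltage and observed "1" threshold
RC = 49e3 * 0.1e-6       # ≈ 4.9 ms time constant
t = 2e-3                 # the 2 ms cycle seen above

# charging from V0 up to Vth takes t = RC * ln((Vcc - V0) / (Vcc - Vth))
V0 = Vcc - (Vcc - Vth) * math.exp(t / RC)
print(f"discharged down to ≈ {V0:.2f} V")   # well above 0 V
```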

So how do we make this consume less power? Piece of cake: turn the radio off and go to sleep in the main loop!

Screen Shot 2012 06 15 at 19 50 10

The reason this works, is that the whole setup continuously generates (and processes) pin-change interrupts. As a result, this JeeNode SMD now draws about 0.23 mA and wakes up every 2 ms using nothing more than a single 0.1 µF cap tied to an I/O pin. Mission accomplished – let’s declare (a small) victory!

PS. Exercise for the reader: you could also use this trick to create a simple capacitance meter :)

Low-power waiting

In AVR, Hardware on Jun 16, 2012 at 00:01

This continues where yesterday left off, trying to wait less than 16 milliseconds using as little power as possible.

First off, I’m seeing a lot of variation, which I can’t explain yet. I decided to use a standard JeeNode SMD, including regulator and RFM12B radio, since that will be closer to the most common configuration anyway.

Strangely enough, this sketch now generates a 704 µs toggle time instead of 224 µs, i.e. 44 processor cycles per loop() iteration. I don’t know what changed since yesterday, and that alone is a bit worrying…

The other surprise is that power consumption varies quite a bit between different units. On one JN SMD, I see 1.35 mA, on another it’s only 0.86 mA. Again: I have no idea (yet) what causes this fairly substantial variation.

How do we reduce power consumption further? The watchdog timer is not an option for sleep times under 16 ms.

The key point is to find some suitable interrupt source, and then go into a low-power mode with all the clock/timing circuitry turned off (in CMOS chips, most of the energy is consumed during signal transitions!).

Couple of options:

  1. run the ATmega off its internal 8 MHz RC clock and add a 32 KHz crystal
  2. add extra circuitry to generate a low-frequency pulse and use pin-change interrupts
  3. connect the RFM12B’s clock pin to the IRQ pin and use Timer 2 as divider
  4. add a simple RC to an I/O pin and interrupt on its charge cycle
  5. use the RFM12B’s built-in wake-up timer – to be explored in a separate weblog post

Option 1) has as drawback that you can’t run with standard fuse settings anymore: the system clock will have to be the not-so-accurate 8 MHz RC clock, with the fuses set appropriately for the 32 KHz crystal oscillator. It does seem like this would allow short-duration low-power waiting with a granularity of ≈ 30 µs.

Option 2) needs some external components, such as perhaps a low-power 7555 CMOS timer. This would probably still consume about 0.1 mA – pretty low, but not super-low. Or maybe a 74HC4060, for perhaps 0.01 mA (10 µA) power consumption.

Option 3) may sound like a good idea, since Timer 2 can run while the rest of the ATmega’s clock circuits are switched off. But now you have to keep the RFM12B’s 10 MHz crystal running, which draws 0.6 mA … not a great improvement.

Option 4) seems like an option worth trying. The idea is to connect these components to a spare I/O pin:

JC s Grid page 17

By pulling the I/O pin low as an output, the capacitor gets discharged. When turning the pin into a high-impedance input, the cap starts charging until it triggers a transition from a “0” to a “1” input, which can be used to generate a pin-change interrupt. The resistor value R needs to be chosen such that the charge time is near to what we’re after for our sleep time, say 1 millisecond. Longer times can then be achieved by repeating the process.

It might seem odd to do all this for what is just one thousandth of a second, but keep in mind that during this time the ATmega can be placed in deep-sleep mode, consuming only a few µA’s. It will wake up quickly, and can then either restart another sleep cycle or resume its work. This is basically the same as a watchdog interrupt.

Let’s first try this using the internal pull-up resistor instead, and find out what sort of time delays we get:

Screen Shot 2012 06 15 at 17 22 53

(several typos: the “80 µs” comment in the above screen shot should be “15 µs” – see explanation below)

This code continuously discharges the 0.1 µF capacitor connected to DIO1, then waits until it charges up again:


With the internal pull-up we get a 3.4 ms cycle time. By adding an extra external resistor, this could be shortened. The benefit of using only the internal pull-up, is that it also allows us to completely switch off this circuit.

We can see that this ATmega switches to “1” when the voltage rises to 1.66V, and that its internal pull-up resistor turns out to be about 49 kΩ (I determined this the lazy way, by tweaking values on this RC calculator page).
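
Those two values are consistent with the measured 3.4 ms cycle: a cap charging from 0 V through a pull-up follows V(t) = VCC·(1 − e^(−t/RC)), so the time to reach the threshold works out as:

```python
import math

Vcc, Vth = 3.3, 1.66    # supply and the "1" switching level
R, C = 49e3, 0.1e-6     # estimated internal pull-up and the cap on DIO1

t = R * C * math.log(Vcc / (Vcc - Vth))
print(f"{t * 1e3:.1f} ms")   # ≈ 3.4 ms, matching the measurement
```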

Note that discharge also takes a certain amount of time, i.e. when the output pin is set to “0”, we have to keep it there a bit. Looks like discharge takes about 15 µs, so I adjusted the asm volatile (“nop”) loop to cycle 50 times:


In other words, this sketch is pulling the IO pin low for ≈ 15 µs, then releases it and waits until the internal pull-up resistor charges the 0.1 µF capacitor back to a “1” level. Then this cycle starts all over again. Easy Peasy!

So there you have it: a single 0.1 µF capacitor is all it takes to measure time in 3.4 ms increments, roughly. Current consumption should be somewhat under 67 µA, i.e. the current flowing through a 49 kΩ resistor @ 3.3V.

Tomorrow, I’ll rewrite the sketch to use pin-change interrupts and go into low-power mode… stay tuned!

Waiting without the watchdog

In AVR, Hardware on Jun 15, 2012 at 00:01

The shortest interval the ATmega’s watchdog timer supports is about 16..18 milliseconds. Using the watchdog is one of the best ways to “wait” without consuming much power: a fully powered ATmega running at 16 MHz (and 3.3V) draws about 7 mA, whereas consumption drops a thousandfold when it is put to sleep, waiting for the watchdog to wake it up again.

The trouble is: you can’t wait less than that minimum watchdog timer cycle.

What if we wanted to wait say 3 ms between transmitting a packet and turning on the receiver to pick up an ACK?

Short time delays may also be needed when doing time-controlled transmissions. If low power consumption is essential, then it becomes important to turn the transmitter and receiver on at exactly the right time, since these are the major power consumers. And to wait with minimal power consumption in between…

One approach is to slow down the clock by enabling the built-in pre-scaler, but this has only a limited effect:

Screen Shot 2012 06 14 at 16 39 58

The loop toggles an I/O bit to allow verification of the reduced clock rate. The above code draws about 1.6 mA, whereas the same code running at full speed (16 MHz) draws about 8.8 mA. Note that these measurements ended up a bit lower, but that was at 3.3V – I’m running from a battery pack in these tests, i.e. at about 4.0V.

The I/O pin toggles at 2.23 KHz in slow mode, vs 573 KHz at full speed, which indicates that the system clock was indeed running 256 times slower. A 2.23 KHz rate is equivalent to a 224 µs toggle cycle, which means the system needs 14 processor cycles (16 µs each) to run through this loop() code. Most of it is the call/return stack overhead.
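
The arithmetic behind these numbers:

```python
f_cpu = 16e6                     # full-speed clock
f_slow = f_cpu / 256             # maximum prescaler -> 62.5 kHz
cycle = 1 / f_slow               # 16 µs per processor cycle

half_period = (1 / 2.23e3) / 2   # 2.23 kHz rate -> 224 µs per toggle
print(round(half_period / cycle), "cycles per loop() pass")
```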

So basically, we could now wait a multiple of 16 µs while consuming about 1.6 mA – that’s still a “lot” of current!

Tomorrow, I’ll explore ways to reduce this power consumption further…

TK – Measuring distance

In Musings on Jun 14, 2012 at 00:01

Welcome to the Thursday Toolkit series, about tools for building Physical Computing projects.

Ehm, well, not quite :) … here’s how people defined and measured distances some 35 centuries ago:


It’s a stone, roughly 1 meter long, which can be found in the Istanbul Archaeology Museum. In more detail:

P5043851  Version 2

P5043851  Version 3

Not terribly convenient. I prefer something like this – have had one of them around here at JeeLabs for ages:

Screen Shot 2012 06 13 at 22 54 28

Then again, both of these measuring devices are quite a long shot (heh) from today’s laser rangefinders:

Screen Shot 2012 06 13 at 22 45 39

For about €82 at Conrad – no, I don’t have stock options, they are privately owned :) – you get these specs:

Screen Shot 2012 06 13 at 22 49 29

That’s 2 mm accuracy from 0.5 to 50 meters, i.e. one part in 25,000 (0.004%). Pretty amazing technology, considering that it’s based on measuring the time it takes a brief pulse to travel with (almost) the speed of light!
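
To put that in perspective, assuming the device really does time a single round-trip pulse, a 2 mm error corresponds to an astonishingly small timing error:

```python
c = 3.0e8            # speed of light, m/s
d_err = 2e-3         # 2 mm specified accuracy

t_err = 2 * d_err / c    # factor 2: the pulse travels there and back
print(f"{t_err * 1e12:.0f} ps")   # ≈ 13 picoseconds
```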

But you’ll need a 9V battery to make this thing work – everything needs electricity in today’s “modern” world.

Code vs. power consumption

In AVR, Hardware on Jun 13, 2012 at 00:01

I’ve been wondering for some time whether the power consumption of an ATmega varies depending on the code it is running. Obviously, sleep modes and clock rate changes have a major impact – but how about plain loops?

To test this, I uploaded the following sketch into a JeeNode:

Screen Shot 2012 06 12 at 21 34 50

Interrupts were turned off to prevent the normal 1024 µs timer tick from firing and running in between. And I’m using “volatile” variables to make sure the compiler doesn’t optimize these calculations away (as they’re not used).

The result is that the code pattern does indeed show up in the chip’s total current consumption:


The value displayed is the voltage measured over a 10 Ω resistor in series with VCC (the JeeNode I used had no regulator, and was running directly off a 3x AA battery pack @ 3.95V).

What you can see is that the power consumption cycles between about 8.4 mA and 8.8 mA, just by being in different parts of the code. Surprising perhaps, but it’s clearly visible!

The shifts in the second loop are very slow – due to the fact that the ATmega has no barrel shifter. It has to use a little loop to shift by N bits. To get a nice picture, those shifts are performed only 5,000 times instead of 50,000.

The high power consumption is during the multiplication loop, the low consumption is during the shift loop.

In the end, I had to use a lot of tricks to create the above oscilloscope capture, because there was a substantial amount of 50 Hz hum on the measured input signal. Since the repetition rate of the signal I was interested in was not locked to that 50 Hz AC mains signal, most of the “noise” went away by averaging the signal over 128 triggers.

The other trick was to use the scope’s “DC offset” facility to lower the signal by 80 mV. This allows bumping the input sensitivity all the way up to 2 mV/div without the trace running off the screen. An alternative would be to use AC coupling on the input, but then I’d lose the information about the actual DC levels being measured.

What else can we deduce from the above screen shot?

  • loop 1 takes 93.48 ms for 50,000 iterations, so one cycle runs in ≈ 1.8 µs
  • loop 2 takes 108.52 ms for 5,000 iterations, so one cycle runs in ≈ 21.8 µs
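
The per-iteration times follow directly from those two measurements:

```python
t_mul = 93.48e-3 / 50_000     # multiply loop, per iteration
t_shift = 108.52e-3 / 5_000   # shift loop, per iteration

print(f"{t_mul * 1e6:.2f} µs vs {t_shift * 1e6:.2f} µs "
      f"(≈ {t_shift / t_mul:.0f}x slower)")
```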

As you can see, shifts by a variable number of bits do take quite a lot of time on an ATmega, relatively speaking!

Update – As noted in the comments, a shift by “321” ends up being done modulo 256, i.e. 65 times. If I change the shift to 3, the run times drop to being comparable to a multiply. The power consumption effect remains.

Jumping to conclusions

In Hardware on Jun 12, 2012 at 00:01

Yesterday, I made an effort to remove some glitches which I thought were due to the switching regulator used inside the ±15V DC-DC converter. To be honest: it didn’t really make any sense when I saw this on the scope…

It’s always easy to draw some conclusion – it’s a bit harder to avoid uttering complete nonsense…

Here’s the signal with the power supply turned off, and only the ground cable still connected:


Clearly the same signal – it appears even when the 1 meter ground cable isn’t tied to anything: it’s an antenna!

In other words: I’m probably just picking up one of the FM transmitters in this region. I should of course have turned on the scope’s built-in 20 MHz bandwidth filter. There was no reason to look for frequencies that high with a switcher in the 60..100 KHz region. And the fact that neither a bypass capacitor nor various inductors made much difference should have been a clue. How embarrassing.

Onwards, quickly!

Switching glitches

In Hardware on Jun 11, 2012 at 00:01

The dual power supply described yesterday had a nasty spike every 5 µs. I tried damping them with one of these:


(they are called “varkensneus” – pig’s nose – in Dutch, ’cause that’s what they look like, seen from the end)

But the improvement was not very substantial with just one added at the supply output. When I added one on both the input and the output of the 7812 regulator, things did improve a bit further:


The yellow trace is the output with ferrite core between the DC-DC converter output and the 7812 linear regulator input, and the blue trace is from a second ferrite core added at the end, i.e. the linear regulator’s output pin.

Note the scale of this oscilloscope capture: 10 nsec/div, so this is a 100 MHz high-frequency signal of about ± 200 mV. The second ferrite core almost halves these spikes’ amplitudes.

In conclusion: these are very brief ± 100 mV glitches, superimposed on the +12V supply output voltage – i.e. about ±1% of the regulated supply output voltage. The glitches don’t change much with a 1 kΩ load, i.e. 12 mA.

It’s an artifact of the switching inside the DC-DC converter – looks like there’s not much more I can do about it!

Dual power supply

In Hardware on Jun 10, 2012 at 00:01

To generate a sine wave of ±10V for the Component Tester project, I’m going to need a suitable power supply.

The first option would be to take a dual-windings 12 VAC transformer, add a bridge rectifier, two beefy electrolytic capacitors, and voilà: ±12V, right?

Not so fast… this is called an unregulated supply. It has a couple of drawbacks: 1) the voltages will actually be considerably larger than ±12V, 2) the voltages will change depending on the current drawn, and 3) the voltages can have a lot of residual ripple voltage. Let’s go through each of these:

  1. Voltage levels – a 12 VAC transformer generates a 50 Hz alternating current (60 Hz in the US) with an RMS voltage of about 12 VAC. For a sine wave, this corresponds to a peak voltage which is 1.414 times as high, i.e. about 17 Volts at the peak. With a bridge rectifier, you end up topping each of the two caps to 17V DC.
  2. Regulation – or rather: lack thereof. Since the input is a sine wave which only peaks at 17V, the caps will be charged up to this value only a couple of dozen times per second. In between, current drawn will simply discharge them, causing the voltage to drop. Large current = much lower voltage.
  3. Ripple voltage – this variation on the power supply is called ripple. It’ll be either the same frequency as AC mains, or double that value – depending on the rectification circuit used. So that’s a 50..120 Hz signal on top of what was supposed to be a fixed supply voltage (that’s why bad audio amplifiers can “hum”).
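
Point 1 in numbers (ignoring the bridge rectifier’s own diode drops, which would shave off another volt or so):

```python
import math

v_rms = 12.0
v_peak = v_rms * math.sqrt(2)   # sine peak = RMS × √2
print(f"{v_peak:.1f} V")        # ≈ 17 V on each cap
```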

There’s a very simple solution to all these issues: add 2 linear regulators to generate a far more stable supply voltage (one for the positive and one for the negative supply). The most widely used regulator chips are the 78xx series (+) and the 79xx series (-). You give them a few more Volts than what they are designed to deliver, add a few caps for electrical stability, and that’s it. In this case, we need one 7812 and one 7912 to get ±12V.

But I’m not so fond of power line transformers in my circuits, because you have to hook them up to AC mains on one side – that’s 230 VAC, needing lots of care to prevent accidents. Besides, we only need a few dozen milliamps for this Component Tester anyway.

So instead, I decided to use a DC-to-DC converter – a nifty little device which takes DC in and transforms it to another level of DC. The nice thing is that there are “dual” variants which can generate both positive and negative voltages at the same time.

I picked the Traco TMA0515D, which generates up to 30 mA @ ±15V, using just 5V as input. Its efficiency is specified as about 80%, so the 900 mW it supplies will need about 1.125W of input power. At 5V, that translates to 225 mA, well within range of a USB port – how convenient!
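
That power budget checks out:

```python
p_out = 30 * 0.030        # 30 mA across the full ±15 V (30 V) span = 0.9 W
p_in = p_out / 0.80       # 80% conversion efficiency
i_in = p_in / 5.0         # current drawn from the 5 V input

print(f"{p_out * 1e3:.0f} mW out, {p_in:.3f} W in, {i_in * 1e3:.0f} mA @ 5V")
```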

Here is the circuit I’ve built up:

JC s Grid page 20

As you can see, it uses very few components. And the output is galvanically isolated from the input supply – nice!

Such DC-DC converters are surprisingly small, at least for low-power units like this one (black block on the left):

DSC 3304

With a bit of forethought, almost everything can be connected together with its own wires:

DSC 3305

It worked as expected (caveat: the 78xx and 79xx pinouts are different!), but there were two small surprises:

  • the unloaded DC-DC converter output was about ±25V – these units are clearly not internally regulated!
  • the outputs from this assembled unit are indeed + and – 12V, but with some residual switching noise:


That DC-DC converter appears to be based on a 100 KHz switching regulator (5 µs between on and off transitions), and these spikes are making it all the way to the output pins, straight through those linear regulators!

It probably won’t matter for a component tester operating at 50..1000 Hz, but this too should be fairly easy to fix – by inserting a couple of ferrite beads for example: small inductors which filter out such high frequency “spikes”.

With analog circuitry, stable and smooth power supplies tend to be a lot more important!

Solar top-up, full sun

In Hardware on Jun 9, 2012 at 00:01

Yesterday’s setup described a circuit with the JeeNode running on an AA Power Board and a little solar cell to top up the charge when there was sufficient light.

Since today was a warm day here with lots of sunlight, I thought it’d be nice to establish an outdoor baseline:

DSC 3302

From left to right (see how useful it is to have a whole bunch of multimeters?):

  • the voltage of the solar cell is 2.7V right now
  • the current supplied by it is 0.76 mA
  • the current drawn from the AA cell is 20 µA
  • IOW: less than an hour of sunlight is enough for a day
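
That last bullet follows from the first three:

```python
i_draw = 20e-6        # continuous draw from the AA cell
i_charge = 0.76e-3    # charge current into the cell in full sun

daily_need = i_draw * 24                       # Ah needed per day
sun_hours = daily_need / (i_charge - i_draw)   # net charge while the sun shines
print(f"{sun_hours:.2f} h of full sun covers a whole day")
```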

This is running the radioBlip2 sketch (including the recent survival tweaks), with the same modified JeeNode (no regulator, 100 µF cap) as used in many recent experiments here at JeeLabs.

Note that these values add up reasonably well:

  • the 2.7V from the solar cell ends up being distributed as follows: 0.65V forward drop over the 1N4148 diode, 0.75V voltage drop over the 1 kΩ resistor, and 1.3V over the (almost fully-charged) Eneloop
  • battery draw is 20 µA, and I’ve independently measured about 4.8 µA idle current draw from the JeeNode (i.e. w/o the MCP1702 regulator), so losses and efficiencies are actually quite good

Here’s the same setup in the shade (still bright sunlight outside) – sorry for the bad readouts:

DSC 3303

The cell voltage now drops to 2.0V and the current it supplies is down to ≈ 0.14 mA.

For comparison, some indoor charge currents from the solar cell:

  • near the white LED strips at my workbench: 60 µA
  • behind the window, but not in the sun: 80 µA
  • behind the window, in (modest) sunlight: 400 µA

With a Room Board attached and a permanent indoor setup, these figures will change, but it all looks promising!

Eneloop with solar top-up

In Hardware on Jun 8, 2012 at 00:01

Here’s another idea in the continuing search for long autonomous JeeNode run times:

JC s Grid page 19

The basic circuit is an Eneloop AA(A) cell, driving the AA Power Board to boost its voltage to 3.3V. There’s a 1 kΩ resistor in series with the battery, as well as a Schottky diode to limit the voltage drop to about 0.3V during times of “high” current consumption. I’ll explain why later on.

On the input side is a really simple circuit: a solar cell with a series diode, simply feeding the Eneloop battery when there is solar energy available.

The solar cell I’m using is that same 4.5V @ 1 mA cell I’ve been using all the time in my recent experiments. It is surprisingly good at generating some electricity indoor, even behind the coated double-glazing we have here.

The 1 kΩ resistor in series will let me measure the actual current flowing through it – 1 µA will read out as a 1 mV drop (that’s Ohm’s law again, of course!). So with a charging current of up to say 200 µA, this conveniently matches the 200 mV lowest range of most multimeters. And 0.2V is not a dramatic voltage drop, so the circuit should continue to work – almost the same as without those measurement resistors included.

A similar 1 kΩ resistor has also been inserted between the battery and the AA Power Board, but in this case we have to be more careful: a JeeNode will briefly pull 25 mA while in transmit mode, and the 1 kΩ resistor would effectively shut off input power with such currents. So I added a diode with minimal forward drop in parallel – it’ll interfere with my readings, but I’m really only interested in the ultra-low power consumption phases.

Here’s my “flying circus” concoction:

DSC 3301

I’ve added some wires to easily allow clipping various meters on.

Now, clearly, 4V is way over the 1.3V nominal of an Eneloop battery. But here’s why this setup should still work:

  • this solar cell is so feeble that its voltage will collapse when drawing more than a fraction of a milliamp
  • solar cells may be shorted out – doing so switches them from constant-voltage to constant-current mode

As for the Eneloop, my assumption is that it doesn’t really care much about being overcharged at these very low power levels. In the worst case of continuous sunshine for days on end, it’ll be fed at most 1 mA, even when full. That will probably just lead to a tiny amount of internal heating.

So let’s try and predict how this will work out, in terms of battery lifetimes…

I’ll take a JeeNode + Room Board as reference point, which draws about 60 µA continuous, on average (50 µA for the PIR, which needs to remain always-on). That’s on the 3.3V side of the AA Power board. So with a (somewhat depleted) AA battery @ 1.1V, that means the battery would have to supply 180 µA with a perfect boost regulator.

Unfortunately, perfect boost regulators are a bit hard to get. The chip on the AA Power Board does reasonably well, with about 20 µA idle and about 60..70% conversion efficiency. Let’s just batch those together as 50% efficiency, then the continuous power draw for a Room Node would be about 360 µA. Let’s round that up to 400 µA.

An Eneloop AA battery has about 1900 mAh capacity, but it loses some energy due to self-discharge. The claim is that it retains 85% over 2 years, so this battery can effectively give us about 1600 mAh of power.

The outcome of this little exercise, is that we ought to get some 4000 hours run-time out of one fully-charged AA cell, i.e. 166 days, almost six months. Not bad, but a little lower than I would have liked to see.

If the solar cell were to generate 4 hours per day @ 0.5 mA, when averaged over an entire year (that might be optimistic), then that’s 4 x 365 x 0.5 = 730 mAh. That comes down to an average current of 83 µA.

IOW, roughly one fifth of the total power needs could be supplied by the solar cell. Not enough for total autonomy, but still, it’s a start. Note that most of these last figures were pulled out of thin air at this stage: I don’t know yet!
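
Here is the whole estimate in one place (all inputs are the rough figures used above):

```python
i_node = 60e-6                  # average Room Node draw @ 3.3 V
i_ideal = i_node * 3.3 / 1.1    # 180 µA from a 1.1 V cell, perfect boost
i_real = i_ideal / 0.50         # ≈ 50% overall efficiency -> 360 µA
i_budget = 400e-6               # rounded up, as above

capacity_mah = 1600             # Eneloop: 1900 mAh × 85% retention, rounded
hours = capacity_mah / (i_budget * 1e3)
print(round(hours), "h =", int(hours // 24), "full days")

solar_mah = 4 * 365 * 0.5       # 4 h/day @ 0.5 mA, over a whole year
i_solar = solar_mah / (365 * 24)       # ≈ 0.083 mA average
print(f"solar share ≈ {i_solar / (i_budget * 1e3):.0%}")
```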

Yet another idea would be to add an extra diode from the solar cell straight to the JeeNode +3V pin. IOW, when there is sufficient sunlight, we off-load the boost circuit altogether and charge up a capacitor of say 100..1000 µF on the JeeNode itself. No more losses, other than the AA Power Board’s quiescent current consumption.

TK – Cheap power supply

In Hardware on Jun 7, 2012 at 00:01

Welcome to the Thursday Toolkit series, about tools for building Physical Computing projects.

(I won’t call this a “lab power supply”, for reasons explained below)

In a weblog post a while ago, I took apart a standard computer power supply unit (PSU). Now, instead, let’s do the opposite and turn it into a useful tool for experimentation with electrical circuits:

DSC 3330

What you see here is a neat little way to repurpose any standard ATX power supply. Just snip off most of the wires, except for that 20-pin connector, and assemble this neat little ATX adapter board by Benjamin Jordan:

DSC 3331

It’s available as a simple kit with a few basic components and all the connectors and binding posts.

The reason this is convenient is that it makes it somewhat easier to work with an ATX power supply (especially if it gets mounted on or near that power supply). There are push-buttons to toggle the supply on and off (except for the 5V standby voltage on the rightmost blue post, which is always on). There’s a LED to indicate whether the power supply is on (red) or in standby mode (green), and there’s an orange LED to indicate that power is OK.

All the main voltages are nicely arranged on binding posts, with matching ground return posts (all tied together internally), and there are holes to get to those same voltages via alligator clips – this is clearly for experimentation!

There is no high voltage anywhere, so the stuff is completely safe in terms of voltage. But there is still a risk:

DSC 3332

The currents available in most PC power supplies are phenomenal: 25 and 35 Amps on the 3.3V and 5V voltage rails, respectively. That’s what modern CPU’s and memory chips and all the supporting logic need, nowadays. The +12V supply is also pretty powerful, and normally used for all those terabyte disk drives people seem to be using.

This means that no matter how we touch it, it won’t hurt us – anything under 40V is considered safe, since our skin resistance prevents any serious amount of current from flowing. But an electrical short circuit can (and will!) still easily vaporize thin copper traces on a low-power PCB. In other words: this is totally safe in terms of voltage, but the currents caused by shorts can generate sparks and enough heat to destroy components, wires, and PCBs.
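
To get a feel for why shorts are so destructive at these currents, here’s a quick back-of-the-envelope sketch – the 0.05 Ω trace resistance is just an illustrative assumption, real values depend on trace width, thickness, and length:

```python
# Rough estimate of the power dissipated in a thin PCB trace during a short.
# The 0.05 ohm figure is an assumed resistance for a thin signal trace.
def dissipation(current_a, resistance_ohm):
    """Power in watts, from P = I^2 * R."""
    return current_a ** 2 * resistance_ohm

# The 5V rail, rated for 35A, shorted through a 0.05 ohm trace:
print(dissipation(35, 0.05))  # -> 61.25 W, in a trace meant for milliwatts
```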

The above PCB itself is ok – its wide and thick gold-plated copper traces were designed to carry heavy currents.

What this means is that this setup is indeed a very cheap way to get lots of useful voltages for experimentation, but that it’s not the same thing as a “laboratory power supply” which also needs to have adjustable current limits.

Here are the voltages I measured coming out of this thing:

  • +5V standby, actual value, unloaded: 5.16 V
  • +3.3V, actual value, unloaded: 3.39 V
  • +5V, actual value, unloaded: 5.19 V
  • +12V, actual value, unloaded: 12.01 V
  • -12V, actual value, unloaded: -11.35 V

Close enough, and more importantly: most are slightly high. That means we could add very precise low-dropout regulators to get the voltages exactly right, or we could add a current sensing circuit and limiter to get that extra feature needed to turn this into a cheap yet beefy lab power supply.
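
As a quick sanity check, those measured values can be compared against the usual ATX tolerance figures (±5% on the positive rails, ±10% on -12V – check your own supply’s spec to be sure):

```python
# Check the measured (unloaded) rail voltages against common ATX tolerances.
# Tolerance fractions below are the usual ATX spec figures, not from this PSU.
rails = {            # nominal V : (measured V, tolerance fraction)
    3.3:   (3.39,   0.05),
    5.0:   (5.19,   0.05),
    12.0:  (12.01,  0.05),
    -12.0: (-11.35, 0.10),
}

for nominal, (measured, tol) in rails.items():
    deviation = (measured - nominal) / abs(nominal)
    status = "ok" if abs(deviation) <= tol else "OUT OF SPEC"
    print(f"{nominal:+.1f}V rail: {measured:+.2f}V ({deviation:+.1%}) {status}")
```

All four rails come out within those tolerances – the -12V rail is the furthest off, at about -5.4%.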

Goodbye JeeMon

In Software, Musings on Jun 6, 2012 at 00:01

As long-time readers will know, I’ve been working on and off on a project called JeeMon, which bills itself as:

JeeMon is a portable runtime for Physical Computing and Home Automation.

This also includes a couple of related projects, called JeeRev and JeeBus.

JeeMon packs a lot of functionality: first of all a programming language (Tcl) with built-in networking, event framework, internationalization, unlimited precision arithmetic, thread support, regular expressions, state triggers, introspection, coroutines, and more. But also a full GUI (Tk) and database (Metakit). It’s cross-platform, and it requires no installation, due to the fact that it’s based on a mechanism called Starkits.

Screen Shot 2012 05 23 at 19 26 10

I’ve built several versions of this thing over the years, also for small ARM Linux boards, and due to its size, this thing really can go where most other scripting languages simply don’t fit – well under 1 MB if you leave out Tk.

One of the (many) things which never escaped into the wild: a complete Mac application which runs out of the box:


JeeMon was designed to be the substrate of a fairly generic event-based / networked “switchboard”. Middleware that sits between, well… everything really. With the platform-independent JeeRev being the collection of code to make the platform-dependent JeeMon core fly.

Many man-years have gone into this project, which included a group of students working together to create a first iteration of what is now called JeeBus 2010.

And now, I’m pulling the plug – development of JeeMon, JeeRev, and JeeBus has ended.

There are two reasons, both related to the Tcl programming language on which these projects were based:

  • Tcl is not keeping up with what’s happening in the software world
  • the general perception of what Tcl is about simply doesn’t match reality

The first issue is shared with a language such as Lisp, e.g. SBCL: brilliant concepts, implemented incredibly well, but so far ahead of the curve at the time that somehow, somewhere along the line, its curators stopped looking out the window to see the sweeping changes taking place out there. Things started off really well, at the cutting edge of what software was about – and then the center of the universe moved. To mobile and embedded systems, for one.

The second issue is that to this day, many people with programming experience have essentially no clue what Tcl is about. Some say it has no datatypes, has no standard OO system, is inefficient, is hard to read, and is not being used anymore. All of it is refutable, but it’s clearly a lost battle when the debate is about lack of drawbacks instead of advantages and trade-offs. The mix of functional programming with side-effects, automatic copy-on-write data sharing, cycle-free reference counting, implicit dual internal data representations, integrated event handling and async I/O, threads without race conditions, the Lisp’ish code-is-data equivalence… it all works together to hide a huge amount of detail from the programmer, yet I doubt that many people have ever heard about any of this. See also Paul Graham’s essay, in particular about what he calls the “Blub paradox”.

I don’t want to elaborate much further on all this, because it would frustrate me even more than it already does after my agonizing decision to move away from JeeMon. And I’d probably just step on other people’s toes anyway.

Because of all this, JeeMon never did get much traction, let alone evolve much via contributions from others.

Note that this isn’t about popularity but about momentum and relevance. And JeeMon now has neither.

If I had the time, I’d again try to design a new programming environment from scratch and have yet another go at databases. I’d really love to spend another decade on that – these topics are fascinating, and so far from “done”. Rattling the cage, combining existing ideas and adding new ones into the mix is such an addictive game to play.

But I don’t. You can’t build a Physical Computing house if you keep redesigning the hammer (or the nails!).

So, adieu JeeMon – you’ve been a fantastic learning experience (which I get to keep). I’ll fondly remember you.

TD – New Solid State Disk

In Hardware on Jun 5, 2012 at 00:01

Welcome to the Tuesday Teardown series, about looking inside the technology around us.

After the recent server troubles (scroll down a bit), I had to replace one of the 500 GB Hitachi drives in the Mini.

I decided to switch to a 128 GB SSD for the system disk, with up to 6x faster transfer rates:

DSC 3233

It came with an interesting USB-to-SATA adapter included. Which looks like this inside:

DSC 3231

And on the bottom:

DSC 3232

(sorry: no teardown of the SSD, it’s probably just a bunch of black squares anyway!)

The scary part was replacing the disk in the Mac Mini’s “unibody” Aluminium case – as explained on YouTube.

But I definitely wanted to keep the server setup in a single enclosure. First lots of disk formatting, re-shuffling, and copying and then I just went ahead and did it. The good news: it worked. The system disk is now solid state!

DSC 3235

I had hoped that the most accessible drive would have to be replaced, but unfortunately it was the top one (when the Mini is placed on its feet) – so a full dismantling was required – look at all those custom-shaped parts:

DSC 3236

The other thing I did was to add an external 2 TB 2.5″ USB drive, to hold all Time Machine backups for both these server disks as well as two other Macs here at JeeLabs. This drive will spin up once an hour, as TM does its thing.

Summary: the JeeLabs server now maintains a good, up-to-date image of the entire system disk at all times, ready to switch to, and everything gets backed up to an external USB drive once an hour (these backups usually only take a minute or so, due to the way Time Machine works). All four VM’s get daily backups to the cloud, as well as now being included in Time Machine (Parallels takes care to avoid huge amounts of disk file copying).

That means all the essentials will be stored in at least three places. I think I can go back to the real work, at last.

There’s plenty of room for growth: 8 GB of RAM and less than half of the system disk space used so far.


Boost revisited

In Hardware on Jun 4, 2012 at 00:01

The AS1323 boost converter mentioned a while back claims an extraordinarily low 1.65 µA idle current when unloaded. At the time, I wasn’t able to actually verify that, so I’ve decided to dive in again:

Screen Shot 2012 05 17 at 15 28 15

A very simple circuit, but still quite awkward to test, due to its size:

DSC 3224

Bottom right is incoming power, bottom left is boosted 3.3V output voltage. Input voltage is 1.65V for easy math.

The good news is that it works, and it shows me an average current draw of 4.29 µA:


The yellow line is the output voltage, with its characteristic boost-decay cycle. For reference: the top of the graph is at 3.45V, so the output voltage is roughly between 3.30 and 3.36V (it rises a bit with rising supply voltage).

The blue line is the voltage over a resistor inserted between supply ground and booster ground. I’m using 10 Ω, 100Ω, or 1 kΩ, depending on expected current draw (to avoid a large burden voltage). So this is the input current.
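
In other words, plain Ohm’s law: the current is the measured shunt voltage divided by the shunt resistance, with the shunt chosen so the burden voltage stays negligible next to the supply voltage. A small sketch of that trade-off:

```python
# Convert a scope reading across a shunt resistor into a current (Ohm's law),
# and show the burden voltage each shunt introduces at the measured current.
def shunt_current(v_measured, r_shunt):
    """Current through the shunt in amps: I = V / R."""
    return v_measured / r_shunt

print(shunt_current(4.29e-3, 1000))    # 4.29 mV across 1 kOhm -> 4.29 uA

for r in (10, 100, 1000):              # the three shunt values used above
    burden_mv = 4.29e-6 * r * 1e3      # V = I * R, at the 4.29 uA average
    print(f"{r:>4} ohm shunt: burden {burden_mv:.4f} mV")
```

Even the 1 kΩ shunt only drops about 4.3 mV at these idle currents – well under 1% of the 1.65V supply.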

The red line is the integrated current (i.e. accumulated charge), but it’s not so important, since the scope also calculates the mean value.

Note that there’s some 50 Hz hum in the current measurement, and hence also in its integral (red line).

Aha! – and here’s the dirty little secret: the idle current is specified on the output side, not the input side! So in the case of a 1.65V -> 3.3V idle setup, you need to double the current (since we’re generating it from an input voltage half as large as the 3.3V out), and you need to account for conversion losses!

IOW, for 100% efficiency, you’d expect 1.6 µA * (3.3V / 1.65V) = 3.2 µA idle current. Since the above shows an average current draw of 4.29 µA, this is about 75% efficient.
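
Spelled out in code, using the figures above:

```python
# Idle efficiency of the boost converter: compare the ideal input current
# (output-side idle current scaled by the voltage ratio) with the measured one.
v_in, v_out = 1.65, 3.3
i_idle_out = 1.6e-6        # idle current, specified at the output side
i_in_measured = 4.29e-6    # average input current from the scope

i_in_ideal = i_idle_out * (v_out / v_in)    # 3.2 uA at 100% efficiency
efficiency = i_in_ideal / i_in_measured
print(f"ideal input: {i_in_ideal*1e6:.1f} uA, efficiency: {efficiency:.0%}")
```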

Not bad. But not that much better than the LTC3525 used on the AA Power Board, which was ≈ 20 µA, IIRC.

More worrying is the current draw when loaded with 10 µA, which is more similar to what a sleeping JeeNode would draw, with its wireless radio and some sensors attached:


Couple of points to note before we jump to conclusions: the boost regulator is now cycling at a somewhat higher frequency of 50 Hz. Also, I’ve dropped the incoming voltage to a more realistic 1.1V, i.e. 1/3rd of the output.

With a perfect circuit, this means the input current should be around 30 µA, but it ends up being about 52 µA, i.e. 57% efficiency. I have no idea why the efficiency is so low, would have expected about 70% from the datasheet.

Further tests with 1.65V in show that 1 µA out draws 6.72 µA, 10 µA out draws 29.6 µA, 100 µA out draws 261 µA, 1 mA out draws 2.51 mA, and 10 mA out draws 30.9 mA. Not quite the 80..90% efficiency from the datasheet.
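
Working out the efficiency for each of these points (output power divided by input power, at 1.65V in and 3.3V out) confirms that impression:

```python
# Efficiency = output power / input power, for the load measurements above.
v_in, v_out = 1.65, 3.3
measurements = [     # (output current A, measured input current A)
    (1e-6,   6.72e-6),
    (10e-6,  29.6e-6),
    (100e-6, 261e-6),
    (1e-3,   2.51e-3),
    (10e-3,  30.9e-3),
]

for i_out, i_in in measurements:
    eff = (v_out * i_out) / (v_in * i_in)
    print(f"{i_out*1e6:>8.0f} uA out: {eff:.0%}")
```

Efficiency peaks at about 80% around 1 mA and drops off at both extremes – nowhere near 90%.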

My hunch is that the construction is affecting optimal operation, and that better component choices may need to be made – I just grabbed some SMD caps and a 10 µH SMD inductor I had lying around. More testing needed…

For maximum battery life, the one thing which really matters is the current draw while the JeeNode is asleep, since this is the state it spends most of its time in. So minimal consumption with 5..10 µA out is what I’m after.

To keep things in perspective: 50 µA average current drawn from one 2000 mAh AA cell should last over 4 years. A JeeNode with Room Board & PIR (drawing 50 µA, i.e. 200 µA from the battery) should still last almost a year.
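
These estimates are simple capacity-over-current arithmetic – a quick sketch (ignoring battery self-discharge, which will shave some time off in practice):

```python
# Battery life from capacity and average current draw, ignoring self-discharge.
def battery_life_years(capacity_mah, avg_current_ma):
    hours = capacity_mah / avg_current_ma
    return hours / (24 * 365.25)

print(battery_life_years(2000, 0.050))  # sleeping node: ~4.6 years
print(battery_life_years(2000, 0.200))  # Room Board + PIR: ~1.1 years
```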

Update – when revisiting the AA Power Board, I now see that it uses 25 µA from 1.1V with no load, and 59 µA with a 10 µA load (down to 44 µA @ 1.5V in). The above circuit works (but does not start) down to 0.4V, whereas the AA Power Board works down to 0.7V – such low voltages are not really that useful anyway, since they increase the current draw and the battery dies quickly thereafter. Another difference is that the above circuit will work up to 2.3V (officially only 2.0V), and the AA Power Board up to at least 6V (which is out of spec), switching into step-down mode in this case.

Component Tester quality

In Hardware on Jun 3, 2012 at 00:01

With all this effort to create a good sine wave for use as a Component Tester, and all the testing on it, I might as well put the CT to the test which started it all, i.e. the one built into my Hameg HMO2024 oscilloscope.

Nothing easier than that – just hook up a probe to the signal it generates on the front panel, and do an FFT on it:


(yeah, a green trace for a change – the other channels were tied up for another experiment…)

Ok, but not stellar: 1.6% harmonic distortion on the 3rd harmonic – more than with the Phase Shift Oscillator.

The frequency is also about 10% high, although that’s usually not so important for a Component Tester.

PS. The ∆L units should be “dB”, not “dBm” – Hameg says they’ll fix this oversight in the next firmware update.
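
For reference, the relationship between the FFT’s relative dB readout and that 1.6% figure is the standard amplitude-ratio conversion – the -36 dB value below is inferred from the 1.6%, not read off the screen:

```python
import math

# Convert a harmonic's level relative to the fundamental (in dB) to a
# distortion percentage, and back: amplitude ratio = 10^(dB/20).
def db_to_percent(db):
    return 100 * 10 ** (db / 20)

def percent_to_db(pct):
    return 20 * math.log10(pct / 100)

# A 3rd harmonic 36 dB below the fundamental is about 1.6% distortion:
print(db_to_percent(-36))   # ~1.58
print(percent_to_db(1.6))   # ~-35.9
```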

Gravity + Pressure + GLCD

In Software on Jun 2, 2012 at 00:01

For those interested in the above combination, I’ve added a little demo combining the Gravity Plug, Pressure Plug, and Graphics Board – oh and a JeeNode of course (it’s actually an experimental JeeNode USB v4 …):

DSC 3310

The display shows the current X, Y, and Z accelerometer values as raw readings, plus temperature and pressure:

DSC 3313

Both plugs use I2C, and could have been combined on one port. Note that the backlight is off in this example.

The sketch is called – somewhat presumptuously – glcd_imu.ino. And it’s now on GitHub, as part of GLCDlib:

Screen Shot 2012 05 29 at 13 33 02

The GraphicsBoard class extends the GLCD_ST7565 class with print(…) calls and could be useful in general.

Get the GitHub version to see the details – omitted here for brevity.

Discount month

In News on Jun 1, 2012 at 00:01

It’s that warm summer time of year again – at least in this hemisphere…

Time to kick off a new discount month in the JeeLabs web shop

Screen Shot 2012 05 31 at 08 43 09

During the month of June, everyone who has previously purchased something from the JeeLabs shop (i.e. before June 1st 2012) can use discount code “JEE2012GO” to get 12% off all items.

And while I’m at it, let me list some other important dates for this period, here at JeeLabs:

  • June 1st – summer sale kicks off with this post

  • June 30th – sale ends at midnight, 0:00 CEST time

  • July 1st – weblog is suspended during the summer

  • July 15th – shop enters “summer fulfillment mode”

  • August 16th – shop resumes normal operation

  • August 31st – daily weblog resumes

Most of these are more or less the same as in the past years. The 2-month summer hiatus on the daily weblog gives me time to recharge and put things in perspective, and gives you some time off to enjoy other activities :)

The summer fulfillment mode is new. I’m currently making arrangements to keep the shop open this time, by having new orders and support handled by someone else. More details will follow once everything is in place.

Anyway – now you know what’s coming as far as JeeLabs and yours truly is concerned for the summer period.