Computing stuff tied to the physical world


Flashback – RFM12B wireless

In AVR, Hardware, Software on Sep 29, 2013 at 00:01

After the ATmega µC, the second fascinating discovery in 2008 was the availability of very low-cost wireless modules, powerful enough to get some information across the house:


It would take another few months until I settled on the RFM12B wireless module by HopeRF, but the uses were quickly falling into place – I had always wanted to track the energy consumption in the house, to try and identify the main energy consumers. That knowledge might then help reduce our yearly energy consumption – either by making changes to the house, or – as it turned out – by simply adjusting our behaviour a bit.

Here is the mouse trap which collected energy metering data at JeeLabs for several years:


This is also when I found Modern Device's Really Bare Bones Board (RBBB) Arduino clone by Paul Badger – all the good stuff of an Arduino, without the per-board FTDI interface, and with a much smaller form factor.

Yet another month would pass to get a decent interrupt-driven driver working, and some more tweaks to make transmission interrupt-based as well. The advantage of such a design is that you get the benefits of a multi-tasking setup without all the overhead: the RF12 driver does all its time-critical work in the background, while the main loop() can continue to use delay() calls and blocking I/O (including serial port transmission).
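
To give an idea of what that means in practice, here is a minimal receiving sketch (the node ID, frequency band, and net group are placeholder values) – the driver captures each packet in the background, even while loop() sits in delay():

    #include <JeeLib.h>

    void setup () {
        Serial.begin(57600);
        rf12_initialize(1, RF12_868MHZ, 5); // placeholder node ID, band, and group
    }

    void loop () {
        if (rf12_recvDone() && rf12_crc == 0) {
            Serial.print("got ");
            Serial.print((int) rf12_len);
            Serial.println(" bytes");
        }
        delay(10); // blocking calls are fine, reception keeps going on interrupts
    }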

In February 2009, I started installing the RF12demo code on each ATmega, as a quick way to test and try out wireless. As it turned out, that sketch has become quite useful as a central receiving node, even “in production” – and I still use it as the interface for HouseMon.

In April 2009, a small but important change was made to the packet format, allowing more robust use of multiple netgroups. Without this change, a bit error in the netgroup byte will alter the packet in a way which is not caught by the CRC checksum, making it a valid packet in another netgroup. This is no big deal if you only use a single netgroup, but will make a difference when multiple netgroups are in use in the same area.

Apart from this change, the RF12 driver and the RFM12B modules have been remarkably reliable, with many nodes here communicating for years on end without a single hiccup.

I still find it pretty amazing that simple low-power wireless networking is available at such a low cost, with very limited software involvement, and suitable for so many low-speed data collection and signalling uses in and around the house. To me, wireless continues to feel like magic after all these years: things happening instantly and invisibly across a distance, using physical properties which we cannot sense or detect…

What if you’re out of wireless range?

In Hardware on Jun 5, 2013 at 00:01

Welcome to the weekly What-If series, also available via the Café wiki.

Ok, so you’ve got some JeeNodes up and running, all talking to each other or to a central node via the wireless RFM12B module. Or… maybe not: the signal is too weak! Now what?

There are several approaches you can try to improve wireless range:

  • optimise your existing antenna(s)
  • lower the data rate and reduce the bandwidth
  • use a more advanced type of antenna
  • use a directional antenna
  • install a repeater of some kind

Let’s go through each of these in turn.

First thing to try is to optimise the little wire “whip” antennas that come standard with a JeeNode. Make sure the antenna wire is 82 mm long (that's for 868 MHz), is sticking up (or sideways) perpendicular to the board, and check that both antennas are pointing more or less in the same direction (but not in the direction of the other node: the RF field is circular around the wire, not on top or below).

One thing to keep in mind with these weak signals is that salty bags of water (us people, that is) tend to absorb RF energy, so these radios work better with us out of the way. Be sure to take a step back while tweaking and hunting for the best orientation!

If that doesn't help enough, you can do one more thing without messing with electronics or hardware: reduce the data rate of the transmitter and receiver (they have to match). See the RFM12B Command Calculator for settings you can change. To reduce the data rate by two thirds, call rf12_control(0xC614) after the call to rf12_initialize(), for example. The bad news is that you have to do this in all the nodes which communicate with each other – all the data rates have to match!

This in itself won't extend the range by that much, but with lower data rates you can also reduce the bandwidth in the receiver (with rf12_control(0x94C2)). You can think of this approach as: speaking more slowly and listening more closely. The effects should be quite noticeable. Radio amateurs have been using this technique to get halfway around the world on mere milliwatts, using a system called QRSS.
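
For reference, here is where those two calls would go in a sketch (the node ID, band, and net group below are just placeholders – only the two rf12_control() values matter here):

    #include <JeeLib.h>

    void setup () {
        rf12_initialize(1, RF12_868MHZ, 5); // placeholder node settings
        rf12_control(0xC614); // drop the data rate to roughly a third of the default
        rf12_control(0x94C2); // and narrow the receiver bandwidth to match
    }

    void loop () {
        if (rf12_recvDone() && rf12_crc == 0) {
            // handle incoming packets as usual
        }
    }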

If that doesn’t give you the desired range – here are a few more tricks, but they all require extra hardware: improve the antenna, use “directional” antennas, or use a repeater.

Here’s an example of an improved omni-directional antenna design, as seen on eBay:


And here’s a directional “Yagi” antenna, which needs to be aimed fairly accurately:


I haven’t tried either of these (you can build them yourself), but the omni-directional one was mentioned and described in Frank Benschop’s presentation on JeeDay. He reported getting quite good results, once all the antenna + cabling quirks were resolved.

If neither of these are an option, then the last trick you can try is to add a relay / repeater node to your network, as described in this weblog post some time ago. This will double the range if you place that node in the middle of the two nodes which can’t reach each other, but it adds some complexity to the packet addressing mechanism.

Wireless, the CAN bus, and enzymes

In Musings on May 27, 2013 at 00:01

How’s that for a title to get your attention, eh?

There’s an interesting mechanism in communication, which has kept me intrigued for quite some time now:

  • with JeeNode and RF12-based sensors, wireless packets are often broadcast with only a sender node ID (most network protocols use both a source and a destination)
  • CAN bus is a wired bus protocol from the car industry; its messages do not contain a destination address, just an 11- or 29-bit “message ID”

What both these systems do (most of the time, but not exclusively), is to tag transmitted packets with where they came from (or what their “meaning” is) and then just send this out to whoever happens to be interested. No acknowledgements: in the case of wireless, some messages might get lost – with CAN bus, the reliability is considerably higher.

It’s a bit like hormones and other chemicals in our blood stream, added for a specific purpose, but not really addressed to an area in the body. That’s up to various enzymes and other receptors to pick up (I know next to nothing about biology, pardon my ignorance).

Couple of points to note about this:

  • Communicating 1-to-N (i.e. broadcasting) is just as easy as communicating 1-to-1, in fact there is no such thing as privacy in this context – anyone / anything can listen in on any conversation. The senders won't know.
  • There is no guaranteed delivery, since the intended targets may not even be around or listening. The best you can do, is look for the effects of the communication, which could be an echo from the receiving end, or some observable side-effect.
  • You can still set up focused interactions, by agreeing on a code / channel to use for a specific purpose: A can say “let’s discuss X”, and B can say “I’ll be listening to topic X on channel C”. Then both A and B could agree to tag all their messages with “C”, and they’ll be off on their own (public) discussion.
  • This mode of communicating via “channels” or “topics” is quite common, once you start looking for it. The MQTT messaging system uses “channels” to support generic data exchange. Or take the human-centric IRC, for example. Or UDP’s multicast.
  • Note that everything which has to do with discovery on a network also must rely on such a “sender-id-centric” approach, since by definition it will be about finding a path to some sender which doesn’t know about us.

Having no one-to-one communication might seem limiting, but it’s not. First of all, the nature of both wireless and busses is such that everything reaches everyone anyway. It’s more about filtering out what we’re not interested in. The transmissions are the same, it’s just the receivers which apply different filtering rules.

But perhaps far more importantly, is that this intrinsic broadcasting behaviour leads to a different way of designing systems. I can add a new wireless sensor node to my setup without having to decide what to do with the measurements yet. Also, I will often set up a second listen-only node for testing, and it just picks up all the packets without affecting my “production” setup. For tests which might interfere, I pick a different net group, since the RF12 driver (and the RFM12B hardware itself) has implicit “origin-id-filtering” built in. When initialised for a certain net group, all other packets automatically get ignored.

Even N-to-1 communication is possible by having multiple nodes send out messages with the same ID (and their distinguishing details elsewhere in the payload). This is not allowed on the CAN bus, btw – there, each sender has to stick to unique IDs.

The approach changes from “hey YOU, let me tell you THIS”, to “I am saying THIS”. If no one is listening, then so be it. If we need to make sure it was received, we could extend the conventions so that B nods by saying “got THIS” and then we just wait for that message (with timeouts and retries, it’s very similar to a traditional ACK mechanism).

It’s a flexible and natural model – normal speech works the same, if you think about it…

PS. The reason this is coming up, is that I'm looking for a robust way to implement JeeBoot auto-discovery.

Meet the wireless JeeBoot

In Software on Oct 31, 2012 at 00:01

This has been a long time coming, and the recent Elektro:camp meet-up has finally pushed me to figure out the remaining details and get it all working (on foam board!):

(photo of the three-node test setup)

Bottom middle is a JeeLink, which acts as a “boot server” for the other two nodes. The JeeNode USB on the left and the JeeNode SMD on the right (with AA Power Board) both now have a new boot loader installed, called JeeBoot, which supports over-the-air uploading via the RFM12B wireless module.

The check for new firmware happens when pressing reset on a remote node (not on power-up!). This mechanism is already quite secure, since you need physical access to the node to re-flash it. Real authentication could be added later.

The whole JeeBoot loader is currently a mere 1.5 KB, including a custom version of the RF12 driver code. Like every ATmega boot loader, it is stored in the upper part of flash memory and cannot be damaged by sketches running amok. The code does not yet include proper retry logic and better low-power modes while waiting for incoming data, but that should fit in the remaining 0.5 KB. The boot loader could be expanded to 4 KB if need be, but right now this thing is small enough to fit even in an ATmega168, with plenty of room left for a decent sketch.

The boot algorithm is a bit unconventional. The mechanism is driven entirely from the remote nodes, with the central server merely listening and responding to incoming requests in a state-less fashion. This approach should offer better support for low-power scenarios. If no new code is available, or if the server does not respond quickly, the remote node continues by launching the current sketch. If the current sketch does not match its stored size and CRC (perhaps because of an incomplete or failed previous upload attempt), then the node retries until it has a valid sketch. That last part hasn’t been fully implemented yet.
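
In heavily simplified pseudo-C, that decision logic looks something like this – the helpers are made-up stubs to illustrate the flow, not the actual JeeBoot code:

    // illustration only: these helpers are hypothetical, not the real JeeBoot API
    static bool serverHasUpdate () { return false; }             // ask over RF, short timeout
    static void fetchNewSketch () {}                             // may fail or stop halfway
    static bool sketchMatchesStoredSizeAndCrc () { return true; }
    static void startSketch () { for (;;) ; }                    // jump to the user code

    static void bootLoader () {
        for (;;) {
            if (serverHasUpdate())                // no reply or no new code? then skip
                fetchNewSketch();
            if (sketchMatchesStoredSizeAndCrc())  // only launch complete, undamaged code
                startSketch();                    // never returns
            // otherwise keep retrying until a valid sketch is in flash
        }
    }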

The boot server node can be a JeeLink, which has enough memory on board to store different sketches for different remote nodes (not all of them need to be running the same code). But it could also be another RFM12B-based setup, such as a small Linux box or PC.

This first test server has just two fixed tiny sketches built in: fast blink and slow blink. It alternately sends either one or the other, which is enough to verify that the process works. Each time any node is reset, it’ll be updated with one of these two sketches. A far more elaborate server sketch will be needed for a full-fledged over-the-air updatable WSN.

But hey, it’s a start, and the hardest part is now done!

Maximum speed wireless transfers

In Software on Nov 26, 2011 at 00:01

Prompted by a question on the forum, I wanted to go a bit deeper into how you can collect data from multiple JeeNodes as quickly as possible.

Warning: I’m completely disregarding the “1% rule” on 868 MHz, which says that a device should not be sending more than 1% of the time, so that other devices have a good chance of getting through as well (even if they are used for completely unrelated tasks). This rule is what keeps the 868 MHz band relatively clean – no one is allowed to “flood”. Which is exactly what I’m going to do in this test…

Ok, first of all note that all devices on the 868 MHz wireless ISM band have to share that frequency. It only works if at most one device is transmitting at a time. Many simple OOK transmitters, such as weather sensor nodes, don’t do that: they just send out their packet when they feel like it. Fortunately, most of them do so relatively infrequently, once every few minutes or so. And due to the 1% rule, most transmissions will be ok – since the 868 MHz band is available most of the time.

This changes when you start to try and push as much information across as you can. With one sender, it’s easy: just ignore the rule and send as much as you can. With the default RF12 settings, you should be able to get a few hundred small packets per second across. With occasional loss due to a collision with another sender.

But how do you get the maximum amount of data across from say three different nodes?

It won’t work to let them all send at will. It’s also a bit complicated to make them work in perfect sync, with each of them keeping accurate track of time and taking turns in the right order.

Here’s a simpler idea to “arbitrate media access”, as this is called: let the central node poll each of the remote nodes, and let each remote node then send out an ACK with the “requested” data only when asked.

I decided to give it a go with two simple sketches. One is the poller, which sits at the center and tries to obtain as many packets as it can:

(screenshot of the poller sketch)

It cycles over each of the remote node IDs, sends them a packet, and waits briefly for a reply to come in. Note that the packet sent out is empty – it just needs to trigger the remote node to send an ACK with the actual payload.
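
In rough outline, the poller does something like this (a simplified sketch with placeholder settings, not the original code):

    #include <JeeLib.h>

    const byte firstNode = 2, lastNode = 4; // the pollee node IDs (example values)

    void setup () {
        Serial.begin(57600);
        rf12_initialize(1, RF12_868MHZ, 5); // placeholder band and net group
    }

    void loop () {
        for (byte id = firstNode; id <= lastNode; ++id) {
            // send an empty packet to this node, with the ack-request bit set
            while (!rf12_canSend())
                rf12_recvDone();
            rf12_sendStart(RF12_HDR_ACK | RF12_HDR_DST | id, 0, 0);
            // then wait briefly for its reply to come back
            MilliTimer timeout;
            while (!timeout.poll(10))
                if (rf12_recvDone() && rf12_crc == 0 && rf12_len == 5) {
                    Serial.print((int) rf12_data[0]);
                    Serial.print(": ");
                    Serial.println(*(long*) (rf12_data + 1));
                    break;
                }
        }
    }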

The remote nodes each run a copy of the pollee sketch, which is even simpler:

(screenshot of the pollee sketch)

They just wait for an empty incoming packet addressed to them, and reply with the data they want to get across. I just send the node ID and the current time in milliseconds.
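
And the matching pollee, in the same simplified form (again not the original code):

    #include <JeeLib.h>

    struct { byte id; long now; } payload;

    void setup () {
        rf12_initialize(2, RF12_868MHZ, 5); // this pollee is node 2 (placeholder)
    }

    void loop () {
        // wait for an empty packet addressed to us which asks for an ack
        if (rf12_recvDone() && rf12_crc == 0 && rf12_len == 0 && RF12_WANTS_ACK) {
            payload.id = 2;
            payload.now = millis();
            rf12_sendStart(RF12_ACK_REPLY, &payload, sizeof payload);
        }
    }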

Here is the result, with one poller and three pollees:

    1: 36974
    2: 269401
    3: 10128
    1: 36992
    2: 269417
    3: 10145
    1: 37009
    2: 269434
    3: 10163

As you can see, each node gets one packet across about once every 17 ms (this will slow down if more data needs to be sent). So that’s 6 short packets flying through the air every 17 ms, i.e. ≈ 350 packets per second.

There are ways to take this further, at the cost of extra complexity. One idea (called TDMA) is to send out one poll packet to line up the remotes' clocks, and then have them send their payload a specific amount of time later. IOW, each node gets its own “time slot”. This reduces the 6 packets to 4, in the case of 3 remote nodes.

No more collisions, but again: this will block every other transmission attempted on the 868 MHz band!

Wireless mousetrap

In Hardware on Oct 22, 2010 at 00:01

Mathias Johansson recently sent me a description of his project which is just too neat to pass up. So here goes – photos by him and most of the text is also adapted from what he told me in a few emails.

I’ll let Mathias introduce his project:

It is late autumn here in Sweden and the mice start to search for a winter home. They do normally stay outside modern constructions, but I have a croft that is over 100 years old and they tend to like my attic. Mice are pretty cute, and I wish them no harm, but they damage my ceilings! Therefore I have to catch them in traps and transport them deep into the forest and release them near the brink of a stream.

Some properties of each of the three traps built so far:

  • Does not harm the mouse
  • Immediately reports which trap is closed on a webpage
  • Disables itself if the alarm system is turned on for longer periods of absence

Here’s the mousetrap, ready to go:

Mouse Trap

With a guest…

Mouse in Trap

And here the status page indicating which traps have been sprung:


The schematic was implemented on what looks like a mini breadboard; here's the Fritzing version of it:

(Fritzing diagram of the mousetrap circuit)

The infrared receiver was salvaged from a BoeBot and is used to detect the presence of a guest to spring the trap. Note that by the time it detects something, the tail of the mouse will be mostly inside the trap, so this is as gentle as it gets – well, if you're a mouse…

Mathias concludes with:

Feel free to publish the pictures (and text) in your “success story” forum or elsewhere on your web-page if you think they are of interest to others or inspire new uses for the JeeNode.

Thanks, Mathias, for sharing your ideas and your delightful rodent-friendly project!

Switching from direct serial to wireless

In Software on Jun 5, 2010 at 00:01

The sketch in the recent weblog post about the BMP085 sensor on the Pressure Plug sends its readings out to the serial port once a second. That post also included a few extra lines of code to send the results wirelessly via the RF12 driver – I added them to illustrate how easy it is to go from a wired hookup to a wireless hookup with JeeNodes:

(screenshot of the sending code)
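
The added code boiled down to something like this – a sketch of the idea, with two stand-in functions taking the place of the BMP085 readout code from that post:

    #include <JeeLib.h>

    struct { int16_t temp; int32_t pres; } payload; // 6 bytes, as seen by the receiver

    static int16_t readTemperature () { return 0; } // stand-in for the BMP085 readout
    static int32_t readPressure () { return 0; }    // stand-in for the BMP085 readout

    void setup () {
        rf12_initialize(23, RF12_868MHZ, 5); // placeholder node settings
    }

    void loop () {
        payload.temp = readTemperature();
        payload.pres = readPressure();
        while (!rf12_canSend())
            rf12_recvDone();
        rf12_sendStart(0, &payload, sizeof payload); // broadcast the 6-byte packet
        delay(1000); // once a second, as in the original sketch
    }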

Sending it is only half the story – now I need to pluck it out of the air again. I could have used the RF12demo sketch with – say – a JeeLink to pick up these transmissions, but then I'd get something like this back:

(screenshot of the raw RF12demo output)

I.e. 6 bytes of raw data. One way to deal with this is to write some code on the other end of the serial port, i.e. on the receiving workstation, to decode the reported temperature and pressure values. That’s what I’ve been doing with JeeMon on several occasions. Then again, not everyone wants to use JeeMon, probably.

Another way is to simply create a special-purpose sketch to report the proper values. Such as this one:

(screenshot of the receiving sketch)

Sample output:

(screenshot of the sample output)

I used the same format for the output as the “bmp085demo.pde” sketch, but since the two raw data values are not included in the packets, I just report 0's for them. As you can see, the result is more or less the same as having the Pressure Plug attached directly.

The ACK logic in this sketch looks a bit complicated due to the bit masking that’s going on. Basically, the goal is to only send an ACK back if the sending node requests one. And since we assume that the sending node used a broadcast, we can then extract the sending node ID from the received packet, and send an ACK just to that node.
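
In code, that logic is just a few lines – here is the gist of it, with placeholder settings and the actual payload decoding left out:

    #include <JeeLib.h>

    void setup () {
        Serial.begin(57600);
        rf12_initialize(31, RF12_868MHZ, 5); // placeholder receiver settings
    }

    void loop () {
        if (rf12_recvDone() && rf12_crc == 0) {
            // ... decode and report the temperature / pressure payload here ...
            if (RF12_WANTS_ACK)
                // RF12_ACK_REPLY picks up the sender's node ID from the header,
                // so the ack goes back to just that node
                rf12_sendStart(RF12_ACK_REPLY, 0, 0);
        }
    }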

Tomorrow, I’ll describe how all this can be streamlined and simplified.

Wireless works, sort of…

In Software on Feb 10, 2010 at 00:01

Ah, now we’re getting somewhere!

This is my current test setup:

(photo of the test setup)

The JeeNode on the printer side implements a packet pass-through system, receiving command packets from the JeeLink and sending back response packets. Here is the sketch which does all the work:

(screenshot of the pass-through sketch)

On the Mac/PC side, some Tcl code was added to go through the RF12demo text-mode protocol to send and receive arbitrary data, using the “a” command. Still work in progress, but the basic transport encapsulation works.

The tricky part is timing … it always is with this sort of real-time control stuff. Unfortunately, the current G3 software on the CupCake isn’t quite as responsive as defined in the specs. Some responses take way over 80 milliseconds to come back from the motherboard. This is the case when scanning the SD card, as well as when stopping the extruder motor.

So what this sketch does is wait up to 500 ms for a reply to come in. Even if there isn’t one, an acknowledgement packet will be sent back. The new code on the Mac in turn waits up to 1 second for that ack to come back.

If no ack came back, then there was an error in the wireless connection (this can happen either during the request or during the ack, there is no way to tell!). Probably the best thing to do would be to resend the command.

If an empty ack came back, then the response packet did not arrive within 500 ms. In this case, we could send an empty command and wait for its ack. This hasn’t been implemented yet, but it will allow dealing with even the slowest responses, simply by polling a few more times with an “empty command”.

But hey – it works, and the output is the same as before:

(screenshot of the output)

This is probably the first wirelessly controllable CupCake in the world :)

What I should mention though is that this doesn’t yet work reliably due to those very loose timing behaviors and the fact that packet errors are not yet dealt with. Test runs fail occasionally – mostly in the SD card access code, i.e. while grabbing all the filenames with NEXT_FILENAME.

Wireless CupCake

In Hardware on Feb 8, 2010 at 00:01

JeeCake, the CupCake 3D printer here, is an interesting mix of machine, electronics, and software.

The RepRap Motherboard is the main on-board controller, based on an ATmega644. It drives 3 stepper motors, communicates with the Extruder board, and talks to a desktop computer via its FTDI interface and a USB cable.

On the PC side (which can be Windows, Mac, or Linux), there is a Processing-/Arduino-like package called ReplicatorG to control the machine. It takes G-code, which is not related to Google in any way, but rather an ancient CNC control language from the 60’s.

ReplicatorG then converts this to a binary RepRap 3G protocol, as documented here and then uses that to drive the CupCake. Everything from moving the axes, adjusting the nozzle temperature, controlling the extruder motor, to writing settings in the machine’s EEPROM memory.

The neat part is that the v1.2 motherboard has an SD card adapter on board, and that the CupCake can run unattended by getting its detailed build instructions from files stored on an SD card.

Except for one little detail: there are no controls or displays on the CupCake, other than a few LEDs. This is not enough to select which file to print, adjust the zero position, or start a build. It looks like a new “Generation 4” design is on the way, including an interface board with an LCD screen and some push buttons.

Right now, you have to connect the machine via USB, start everything up via ReplicatorG, and then you can yank the USB plug and it’ll happily continue printing what it started, right to the end (well, except that on Mac OS X 10.6, yanking USB cables can lead to kernel panics – looks like a serious bug in the FTDI USB driver).

Anyway, I don’t really care for controls, or even displays on the CupCake. All I want is some way to control a unit sitting on the other side of the room, or in a nearby room. It can be fairly noisy due to some sort of occasional wood panel resonance, and having it printing right next to me is not really my idea of fun.

Which is where JeeNodes come in: wouldn’t it be nice to be able to control the CupCake via wireless? The protocol is already perfectly suited for it, since it uses packets of max 32 bytes – well within the 66-byte limit of the RF12 driver. And if the object being printed is already on the SD card, then only a few packets need to be exchanged to get going. During printing, some status info could be sent back – again very low rate stuff, easily within the JeeNode’s wireless constraints.

Here’s the idea:


Hook up a JeeNode to take the place of the USB cable, and let it behave as a ReplicatorG control program.

The interface needs to swap RX and TX for this, and because the CupCake’s signal levels are at 5V, two 1 kΩ resistors need to be inserted to prevent excessive current. One more detail is that the power pin on the FTDI connector is not connected, since the motherboard is powered off its own PC supply. So a separate wire needs to be added to power the JeeNode off the ISP connector right under the FTDI connector:


The one wire not shown here is the ground wire, running from top right to bottom left on the other side of this little custom interface board.

It turns out that the motherboard is actually powered from the 5V standby supply pin, so it’s always powered, even when the power supply is in standby mode.

I’ve started writing some code on the PC/Mac side to control the CupCake without using ReplicatorG. Here’s some sample code in Tcl which works when connected directly through USB:

(screenshot of the Tcl sample code)

And here’s the corresponding output:

(screenshot of the corresponding output)

As you can see, it can access all sorts of status info, read the file names on the SD card, and control the machine. This is part of a larger project, the beginnings of which are now in the subversion code repository.

It takes a lot of work to make hookups like these work, because there are so many different bits and pieces (literally) involved. The next step is to see if the JeeNode can indeed communicate with – and control – the JeeCake, and then code needs to be written to replace the current direct-USB connection by a packet-based wireless hookup through a JeeLink + JeeNode.

Oh, and then I need to create some sort of little on-screen control panel to adjust the nozzle temperature, jog the Z axis up and down, pick a file to print, and start the print job! Not to mention making it robust and secure…

Wireless Light Sensor – POF 71

In AVR, Hardware, Software on Dec 8, 2009 at 00:01

After last week’s Hello World POF to get started, here is a new Project On Foam:


A battery-powered wireless light sensor node. This is POF 71, and it’s fully documented on the wiki.

This project goes through setting up the Ports and RF12 libraries, setting up a central JeeNode or JeeLink, and constructing the light sensor node.

It also describes how to keep the node configuration in EEPROM, how to make a sensor node more responsive, and how to get power consumption down for battery use.

The POF includes code examples and uses the easy transmission mechanism, with the final responsive / low-power sketch requiring just a few dozen lines of code, including comments. The sketch compiles to under 5 Kbyte, leaving lots and lots of room to extend it for your own use.
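
The full code is on the wiki, but the heart of the easy transmission mechanism is just three calls – roughly like this (a stripped-down example, not the low-power POF sketch; the port and node settings are placeholders):

    #include <JeeLib.h>

    Port ldr (1); // light sensor on port 1 (placeholder)

    void setup () {
        rf12_initialize(22, RF12_868MHZ, 5); // placeholder node settings
        rf12_easyInit(30);                   // report changes at most every 30 seconds
    }

    void loop () {
        rf12_easyPoll();                     // keep the "easy" machinery going
        byte light = ldr.anaRead() >> 2;     // scale the 0..1023 reading down to 0..255
        rf12_easySend(&light, sizeof light); // only actually sent when the value changes
    }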

All suggestions welcome. Anyone who wants to participate in these POFs, or in the wiki in general, just send me an email with the user name you’d like to use. I’m only restricting edit access to the wiki to prevent spamming.

Wireless power monitoring

In Hardware on Jan 25, 2009 at 16:47

Looks like lots of hobbyists are starting to explore the world of tracking energy usage in the home in real time.

Here's a recent weblog entry by LadyAda, called Wattcher. It uses off-the-shelf components and modifies them (could that be called a hardware mash-up?).

Great, very similar to what I’m after – but I hope to get there at a lower cost…

Wireless RFM12B Module

In Hardware on Dec 10, 2008 at 15:36

A new 868 MHz module came in today, which uses FSK as modulation method:


This is a tiny 16×16 mm RFM12B module. It contains a transceiver and can be connected with a 4-pin SPI bus plus an IRQ pin to send/receive bytes using interrupts. Documentation is a bit sketchy, but there are a few different code samples on the net.

This version is for SMD mounting (with 2mm pins, again). But it’s easy to add a few wires and use this in a 0.1” grid breadboard. The antenna used here is an 82 mm straight wire.

Wireless at 433 MHz

In AVR, Hardware on Nov 29, 2008 at 01:03

This was an experiment to learn about low-power / low-range wireless communication using a 433 MHz transmitter / receiver set from Conrad. The transmitter was tied to an RBBB board and accessed via an FTDI-USB cable:


The receiver was mounted on a proto shield with breadboard, on top of a standard Arduino:


The software for this requires some attention due to the crude communication system.

Basically, the transmitter is turned on and off by a serial bit stream, i.e. this is not FM or even AM, just the presence and absence of a signal. To make this work properly and get say 20 bytes of data across, you have to go through the following sequence:

  • turn on the signal for a few milliseconds so the receiver can adjust its AFC (assuming it has one)
  • send a unique bit pattern so the receiver can synchronize to receive data as individual bytes
  • send the data bytes, i.e. the “payload”
  • send a 2-byte CRC so the receiver can verify proper reception
  • turn off the signal for at least a few dozen milliseconds to avoid hogging the radio channel
  • re-send the whole packet one or more times to deal with interference and collisions

Even then, correct reception is not guaranteed – that would require a transceiver setup with two-way acknowledgement.

Data is sent using Manchester code, a phase change trick which keeps the on and off times equal, on average. The signalling rate is only 1000 baud (i.e. 1 millisecond per bit) and even then the error rate is quite substantial.

The CRC is calculated via standard code from the avr-libc library. A little trick is used to simplify the code: when the two CRC bytes are appended in little-endian format, the receiver can calculate its CRC including these bytes and verify that the result is zero.
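
Here is that trick in a nutshell, using avr-libc's _crc16_update() (a sketch of the idea, with a made-up buffer layout):

    #include <util/crc16.h>

    // sender side: append the CRC to "len" payload bytes, low byte first
    static void appendCrc (uint8_t* buf, uint8_t len) {
        uint16_t crc = ~0;
        for (uint8_t i = 0; i < len; ++i)
            crc = _crc16_update(crc, buf[i]);
        buf[len] = crc;        // low byte
        buf[len+1] = crc >> 8; // high byte
    }

    // receiver side: run the CRC over the payload *plus* those two extra bytes
    static bool checkCrc (const uint8_t* buf, uint8_t len) {
        uint16_t crc = ~0;
        for (uint8_t i = 0; i < len + 2; ++i)
            crc = _crc16_update(crc, buf[i]);
        return crc == 0; // zero means the data arrived intact
    }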

The C test code is in the download area – for both the transmission and the reception version. It uses busy loops for (rough) timing. An interrupt-driven version using one of the 168’s hardware counters would have been preferable, to generate a jitter-free signal and to be able to handle other tasks during reception.

The results indicate that the transmitter really needs an external antenna wire to cross more than a meter (!) or so of air. With antenna, the signal barely makes it through one (reinforced) concrete wall in the house, which is insufficient for my purposes, but an 868 MHz version will probably overcome this limitation.

Most bare µCs are clueless

On Nov 11, 2015 at 00:00

Small low-cost embedded micro-controllers are amazing devices. Programmable in C/C++, 32 bits of power, and a huge step up from how old micro- and mini-computers worked!

But whenever you power them up, some of these chips are ridiculously limited. Sure, there is a lot of flash memory, and they’ll start running the code in there the moment power is applied – but what about that very first time, before there is code at all in there?

How do we get our great new code into that empty chip?

Laptop to chip

There are two common solutions:

  1. The µC has hardware support for ISP, allowing you (or your supplier) to use an ISP-programmer to store some code in flash for you. This is how ATmega’s work.

  2. The µC includes a ROM, containing a “boot loader”, i.e. code which lets you add your own code into its flash memory. This is what all ARM µC’s support.

Either way, bare µC chips fresh from the factory each have their own specific way of allowing us to put our own code into flash memory.

With STM32 µCs, which is what we're interested in here, there is always a boot loader present in ROM. The simplest chips only support a serial port in this mode, the more elaborate ones also support USB, I2C, SPI, CAN bus, and even Ethernet – depending on available hardware features.

In the case of the STM32F103 series, things get a little hairier on the low end: even though the 64K and 128K models support USB and CAN bus, and have several serial ports, the built-in ROM boot loader still only supports one (specific!) serial port. To get these chips to do anything for us, we need to first bring ourselves down to their level and… talk TTL 3.3V serial, using STM’s USART boot loader protocol – as defined in this document (PDF).

It would be awful to have to always connect those serial I/O pins to a serial port on our development computer, though. After all, these chips have USB, and who knows, they might even be used remotely in a Wireless Sensor Network.

That’s where “boot loaders” come in, once again: we can store a small amount of our code inside flash memory, which always runs on power up, and which terminates by jumping to a different area in flash memory. This secondary boot loader can be whatever we like. The most common ones talk to us in the way we prefer, will respond to requests to erase certain parts of flash memory for us, and will store whatever new code we send them into the just-erased parts of flash memory. As long as the boot loader takes care to never overwrite itself, it’ll always be the first thing that starts up after power up or after a reset.

All the ATmega-based Arduino’s are shipped with a boot loader, which talks to the (main) serial port, and reprograms the chip as needed. If no upload requests come in, the boot loader ends by jumping to “user code”. If that code crashes, we can reset the chip, and re-upload new code to fix the problem. Or to add more features. We control it via serial I/O.

For the STM32 chips, we really want to have a similar mechanism, using the USB port. That way, a single USB cable will be all we need to power our board, upload new code into it as needed, and communicate with our “sketch” once it starts running. Very convenient.

So far so good. But what about those initially-empty chips? And what if the boot loader gets overwritten by accident, because of some mistake or malice in the user code? Could we lock ourselves out, so the chip can’t be re-programmed? Can we read the code, once uploaded?

And what about that very impressive-sounding technique called “hardware debugging”? What’s JTAG and why does this term keep popping up? And what’s SWD? Should we care?

One step at a time, please! The bootstrap process has only just started…


Meet the RF Node Watcher

On Nov 9, 2015 at 00:00

All of the tinkering so far is nice, but let’s raise the stakes a bit and create a little setup to test all this, so we can end up with this funny geeky gadget called the RF Node Watcher:

(photo of the RF Node Watcher)

That’s the main µC board, an RFM69 radio for the 868 MHz ISM band (connected over SPI), a push button, and a fun little 128×64 OLED graphics display (connected over I2C).

The back side has a bit of wiring soldered on to connect it all together:

(photo of the wiring on the back)

Here’s a demo of the OLED in action (full code in the Embello repository on GitHub):

    #include <SPI.h>
    #include <Wire.h>
    #include <Adafruit_GFX.h>
    #include <Adafruit_SSD1306.h>

    Adafruit_SSD1306 oled;

    static const uint8_t logo_64x64[] = { ... }; // bitmap data omitted here

    void setup () {
        // generate the OLED panel supply internally from 3.3V
        oled.begin(SSD1306_SWITCHCAPVCC, 0x3C);

        oled.drawBitmap(32, 0, logo_64x64, 64, 64, 1);
        oled.display(); // push the frame buffer out to the screen
    }

    void loop () {}

As you can see, it uses an existing library to drive the OLED. Here is the result:

(photo of the OLED showing the logo)

Running a little demo for the RFM69 wireless radio is deceptively simple, because most of the hard work has already been done, and this code was simply ported from the LPC8xx:

    #include <SPI.h>
    #include "spi.h"
    #include "rf69.h"

    RF69<SpiDev> rf;

    void setup () {
        Serial.begin(115200);
        rf.init(1, 42, 8686); // node 1, net group 42, 868.6 MHz
    }

    void loop () {
        uint8_t buffer [70];
        int n = rf.receive(buffer, sizeof buffer);
        if (n >= 0) {
            Serial.print("got #");
            Serial.print(n);
            Serial.print(':');
            for (int i = 0; i < n; ++i) {
                Serial.print(' ');
                Serial.print(buffer[i]);
            }
            Serial.println();
        }
    }
The output is reported over the USB serial port.

And as final “pièce de résistance” for now, there’s a sketch called watcher, which combines everything into one big demo, using the components on this board. It listens for incoming packets and shows the last four of them in a tiny compact list on that OLED display:

(photo of the OLED listing the last four packets)

Only useful for geeks who need to “see” all those packets flying through the air, but hey!


A few simple sketches for ARM

On Nov 8, 2015 at 00:00

Let’s look at some code, to see what the IDE w/ Arduino-STM add-on is capable of.

First, the fabulously famous “Hello world” of Physical Computing, i.e. blinking an LED:

    const int LED = PA1; // inverted logic

    void setup () {
        pinMode(LED, OUTPUT);
    }

    void loop () {
        digitalWrite(LED, LOW);     // on!
        delay(500);
        digitalWrite(LED, HIGH);    // off!
        delay(500);
    }

This is virtually identical to the code on Arduino hardware. The only difference is that pin numbers are more easily identified by the names (“PA1”) normally used with ARM µCs.

Here’s how to send a periodic message to the serial port (that’s the USB port in this case!):

    void setup () {
        Serial.begin(115200);
    }

    void loop () {
        Serial.println("hello!"); // any message will do
        delay(1000);
    }
Again, no surprises. Want to turn an LED on with a button? No sweat:

    const int LED = PA1;    // inverted logic
    const int BUTTON = PB0; // inverted logic

    void setup () {
        pinMode(LED, OUTPUT);
        pinMode(BUTTON, INPUT_PULLUP);
    }

    void loop () {
        digitalWrite(LED, digitalRead(BUTTON));
    }

Making the LED fade using PWM and the Arduino’s analogWrite() function:

    const int LED = PA1; // inverted logic

    void setup () {
        pinMode(LED, OUTPUT);
    }

    void loop () {
        // 255 is fully off, 0 is maximally on
        for (int i = 255; i >= 0; --i) {
            analogWrite(LED, i);
            delay(10);
        }
        for (int i = 0; i <= 255; ++i) {
            analogWrite(LED, i);
            delay(10);
        }
    }

Read out an analog value with the 12-bit ADC and report it to the serial port? Peanuts:

    void setup () {
        Serial.begin(115200);
    }

    void loop () {
        Serial.println(analogRead(PA3)); // any ADC-capable pin will do
        delay(500);
    }
As you can see, things work in virtually the same way on STM32 boards as what you have been used to on ATmega’s, such as Arduino and JeeNode boards. Easy sailing!

Access to the built-in I2C and SPI peripherals is equally simple, and so is the connection of an RFM69 wireless radio module. To be described next in a neat little project…


Getting back in the groove

In Musings on Sep 30, 2015 at 00:01

This will be the last post in “summer mode”. Next week, I’ll start posting again with articles that will end up in the Jee Book, as before – i.e. trying to create a coherent story again.

The first step has just been completed: clearing up my workspace at JeeLabs. Two days ago, every flat surface in this area was covered with piles of “stuff”. Now it’s cleaned up:

(photo of the cleaned-up workspace)

On the menu for the rest of this year: new products, and lots of explorations / experiments in Physical Computing, I hope. I have an idea of where to go, but no definitive plans. There is a lot going on, and there’s a lot of duplication when you surf around on the web. But this weblog will always be about trying out new things, not just repeating what others are doing.

My focus will remain aimed at “Computing stuff tied to the physical world” as the JeeLabs byline says, in essentially two ways: 1) to improve our living environment in and around the house, and 2) to have fun and tinker with low-cost hardware and open source software.

For one, I’d like to replace the wireless sensor network I’ve been running here, or at least gradually evolve all of the nodes to new ARM-based designs. Not for the sake of change but to introduce new ideas and features, get even better battery lifetimes, and help me further in my quest to reduce energy consumption. I’d also like to replace my HouseMon 0.6 setup which has been running here for years now, but with virtually no change or evolution.

An idea I’d love to work on is to sprinkle lots of new room-node like sensors around the house, to find out where the heat is going – then correlate it to outside temperature and wind direction, for example. Is there some window we can replace, or some other measure we could take to reduce our (still substantial) gas consumption during the cold months? Perhaps the heat loss is caused by the cold rising from our garage, below the living room?

Another long-overdue topic, is to start controlling some appliances over wireless, not just collecting the data from what are essentially send-only nodes. Very different, since usually there is power nearby for these nodes, and they need good security against replay-attacks.

I’ll want to be able to see the basic “health” indicators of the house at a glance, perhaps shown inconspicuously on a screen on the wall somewhere (as well as on a mobile device).

As always, all my work at JeeLabs will be fully open source for anyone to inspect, adopt, re-use, extend, modify, whatever. You do what you like with it. If you learn from it and enjoy, that’d be wonderful. And if you share and give back your ideas, time, or code: better still!

Stay tuned. Lots of fun with bits, electrons, and molecules ahead :)

Bandwagons and islands

In Musings on Sep 16, 2015 at 00:01

I’ve always been a fan of the Arduino ecosystem, hook, line, and sinker: that little board, with its AVR microcontroller, the extensibility, through those headers and shields, and the multi-platform IDE, with its simple runtime library and access to all its essential hardware.

So much so, that the complete range of JeeNode products has been derived from it.

But I wanted a remote node, a small size, a wireless radio, flexible sensor options, and better battery lifetimes, which is why several trade-offs came out differently: the much smaller physical dimension, the RFM radio, the JeePort headers, and the FTDI interface as alternative for a built-in USB bridge. JeeNodes owe a lot to the Arduino ecosystem.

That’s the thing with big (even at the time) “standards”: they create a common ground, around which lots of people can flock, form a community, and extend it all in often quite surprising and innovative ways. Being able to acquire and re-use knowledge is wonderful.

The Arduino “platform” has a bandwagon effect, whereby synergy and cross-pollination of ideas lead to a huge explosion of projects and add-ons, both on the hardware as on the software side. Just google for “Arduino” … need I say more?

Yet sometimes, being part of the mainstream and building on what has become the “baseline” can be limiting: the 5V convention of the early Arduinos doesn't play well with most of the newer sensor chips these days, nor is it optimal for ultra low-power uses. Furthermore, the Wiring library on which the Arduino IDE's runtime is based is not terribly modular or suitable for today's newer µCs. And to be honest, the Arduino IDE itself is really quite limited compared to many other editors and IDEs. Last but definitely not least, C++ support in the IDE is severely crippled by the pre-processing applied to turn .ino files into normal .cpp files before compilation.

It’s easy to look back and claim 20-20 vision in hindsight, so in a way most of these issues are simply the result of a platform which has evolved far beyond the original designer’s wildest dreams. No one could have predicted today’s needs at that point in time.

There is also another aspect to point out: there is in fact a conflict w.r.t. what this ecosystem is for. Should it be aimed at the non-techie creative artist, who just wants to get some project going without becoming an embedded microelectronics engineer? Or is it a playground for the tech geek, exploring the world of physical computing, diving in to learn how it works, tinkering with every aspect of this playground, and tracing / extending the boundaries of the technology to expand the user’s horizon?

I have decades of software development experience under my belt (and by now probably another decade of physical computing), so for me the Arduino and JeeNode ecosystem has always been about the latter. I don’t want a setup which has been “dumbed down” to hide the details. Sure, I crave for abstraction to not always have to think about all the low-level stuff, but the fascination for me is that it’s truly open all the way down. I want to be able to understand what’s under the hood, and if necessary tinker with it.

The Arduino technology doesn’t have that many secrets any more for me, I suspect. I think I understand how the chips work, how the entire circuit works, how the IDE is set up, how the runtime library is structured, how all the interrupts work together, yada, yada, yada.

And some of it I'm no longer keen to stick to: the basic editing + compilation setup (“any editor + makefiles” would be far more flexible), the choice of µC (there are so many more fascinating ARM variants out there than what Atmel is offering), and in fact the whole edit-compile-upload-run cycle seems limiting (over-the-air uploads or visual system construction, anyone?).

Which is why for the past year or so, I’ve started bypassing that oh-so-comfy Arduino ecosystem for my new explorations, starting from scratch with an ARM gcc “toolchain”, simple “makefiles”, and using the command-line to drive everything.

Jettisoning everything on the software side has a number of implications. First of all, things become simpler and faster: less tools to use, (much) lower startup delays, and a new runtime library which is small enough to show the essence of what a runtime is. No more.

A nice benefit is that the resulting builds are considerably smaller. Which was an important issue when writing code for that lovely small LPC810 ARM chip, all in an 8-pin DIP.

Another aspect I very much liked, is that this has allowed me to learn and subsequently write about how the inside of a runtime library really works and how you actually set up a serial port, or a timer, or a PWM output. Even just setting up an I/O pin is closer to the silicon than the digitalWrite(...) abstraction provided by the Arduino runtime.

… but that’s also the flip side of this whole coin: ya gotta dive very deep!

By starting from scratch, I’ve had to figure out all the nitty gritty details of how to control the hardware peripherals inside the µC, tweaking bit settings in some very specific way before it all started to work. Which was often quite a trial-and-error ordeal, since there is nothing you can do other than to (re-) read the datasheet and look at proven example code. Tinker till your hair falls out, and then (if you’re lucky) all of a sudden it starts to work.

The reward for me, was a better understanding, which is indeed what I was after. And for you: working examples, with minimal code, and explained in various weblog posts.

Most of all this deep-diving and tinkering can now be found in the embello repository on GitHub, and this will grow and extend further over time, as I learn more tricks.

Embello is also a bit of an island, though. It’s not used or known widely, and it’s likely to stay that way for some time to come. It’s not intended to be an alternative to the Arduino runtime, it’s not even intended to become the ARM equivalent of JeeLib – the library which makes it easy to use the ATMega-based JeeNodes with the Arduino IDE.

As I see it, Embello is a good source of fairly independent examples for the LPC8xx series of ARM µC’s, small enough to be explored in full detail when you want to understand how such things are implemented at the lowest level – and guess what: it all includes a simple Makefile-based build system, plus all the ready-to-upload firmware.bin binary images. With the weblog posts and the Jee Book as “all-in-one” PDF/ePub documentation.

Which leaves me at a bit of a bifurcation point as to where to go from here. I may have to row back from this “Embello island” approach to the “Arduino mainland” world. It’s no doubt a lot easier for others to “just fire up the Arduino IDE” and load a library for the new developments here at JeeLabs planned for later this year. Not everyone is willing to learn how to use the command line, just to be able to power up a node and send out wireless radio packets as part of a sensor network. Even if that means making the code a bit bulkier.

At the same time, I really want to work without having to use the Arduino IDE + runtime. And I suspect there are others who do too. Once you’ve developed other software for a while, you probably have adopted a certain work style and work environment which makes you productive (I know I have!). Being able to stick to it for new embedded projects as well makes it possible to retain that investment (in routine, knowledge, and muscle memory).

Which is why I’m now looking for a way to get the best of both worlds: retain my own personal development preferences (which a few of you might also prefer), while making it easy for everyone else to re-use my code and projects in that mainstream roller coaster fashion called “the Arduino ecosystem”. The good news is that the Arduino IDE has finally evolved to the point where it can actually support alternate platforms, including ARM.

We’ll see how it goes… all suggestions and pointers welcome!

Could a coin cell be enough?

In Musings on Jul 29, 2015 at 00:01

To state the obvious: small wireless sensor nodes should be small and wireless. Doh.

That means battery-powered. But batteries run out. So we also want these nodes to last a while. How long? Well, if every node lasts a year, and there are a bunch of them around the house, we’ll need to replace (or recharge) some battery somewhere several times a year.

Not good.

The easy way out is a fat battery: either a decent-capacity LiPo battery pack or say three AA cells in series to provide us with a 3.6 .. 4.5V supply (depending on battery type).

But large batteries can be ugly and distracting – even a single AA battery is large when placed in plain sight on a wall in the living room, for example.

So… how far could we go on a coin cell?

Let's define the arena a bit first: there are many types of coin cells. The smallest ones, a few mm in diameter and meant for hearing aids, have at most a few dozen mAh of energy, which is not enough as you will see shortly. Here are some coin cell examples, from Wikipedia:

Coin cells

The most common coin cell is the CR2032 – 20 mm diameter, 3.2 mm thick. It is listed here as having a capacity of about 200 mAh:

A really fat one is the CR2477 – 24 mm diameter, 7.7 mm thick – and has a whopping 1000 mAh of capacity. It’s far less common than the CR2032, though.

These coin cells supply about 3.0V, but that voltage varies: it can be up to 3.6V unloaded (i.e. when the µC is asleep), down to 2.0V when nearly discharged. This is usually fine with today’s µCs, but we need to be careful with all the other components, and if we’re doing analog stuff then these variations can in some cases really throw a wrench into our project.

Then there are the AAA and AA batteries of 1.2 .. 1.5V each, so we’ll need at least two and sometimes even three of them to make our circuits work across their lifetimes. An AAA cell of 10.5×44.5 mm has about 800..1200 mAh, whereas an AA cell of 14.5×50.5 mm has 1800..2700 mAh of energy. Note that this value doesn’t increase when placed in series!


Let’s see how far we could get with a CR2032 coin cell powering a µC + radio + sensors:

  • one year is 365 x 24 = 8,760 hours
  • one CR2032 coin cell can supply 200 mAh of energy
  • it will last one year if we draw under 23 µA on average
  • it will last two years if we draw under 11 µA on average
  • it will last four years if we draw under 5 µA on average
  • it will last ten years if we draw under 2 µA on average
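
The same arithmetic in code form, with the numbers from the list above:

    // years of battery life, given cell capacity and average current draw
    static float batteryYears (float capacity_mAh, float avgCurrent_uA) {
        float hours = capacity_mAh * 1000 / avgCurrent_uA; // mAh -> µAh, then divide
        return hours / 8760;                               // 365 x 24 hours in a year
    }
    // e.g. batteryYears(200, 23) ≈ 1.0 and batteryYears(200, 2) ≈ 11.4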

An LPC8xx in deep sleep mode with its low-power wake-up timer kept running will draw about 1.1 µA when properly set up. The RFM69 draws 0.1 µA in sleep mode. That leaves us roughly a 10 µA margin for all attached sensors if we want to achieve a 2-year battery life.

This is doable. Many simple sensors for temperature, humidity, and pressure can be made to consume no more than a few µA in sleep mode. Or if they consume too much, we could tie their power supply pin to an output pin on the µC and completely remove power from them. This requires an extra I/O pin, and we’ll probably need to wait a bit longer for the chip to be ready if we have to power it up every time. No big deal – usually.
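
Switching a sensor's supply from an I/O pin can be as simple as this (the pin number and start-up delay are placeholders – check the sensor's datasheet):

    const int SENSOR_POWER = 9; // any free output pin with enough drive (placeholder)

    void setup () {
        pinMode(SENSOR_POWER, OUTPUT);
        digitalWrite(SENSOR_POWER, LOW);  // sensor completely off while idle
    }

    void loop () {
        digitalWrite(SENSOR_POWER, HIGH); // power the sensor up
        delay(5);                         // give it time to become ready
        // ... take the reading and send it off here ...
        digitalWrite(SENSOR_POWER, LOW);  // then cut its power again
        delay(60000); // placeholder for the real low-power sleep until the next reading
    }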

A motion sensor based on passive infrared detection (PIR) draws 60..300 µA however, so that would severely reduce the battery lifetime. Turning it off is not an option, since these sensors need about a minute to stabilise before they can be used.

Note that even a 1 MΩ resistor has a non-negligible 3 µA of constant current consumption. With ultra low-power sensor nodes, every part of the circuit needs to be carefully designed! Sometimes, unexpected consequences can have a substantial impact on battery life, such as grease, dust, or dirt accumulating on an openly exposed PCB over the years…

Door switch

What about sensing the closure of a mechanical switch?

In that case, we can in fact put the µC into deep power down without running the wake-up timer, and let the wake-up pin bring it back to life. Now, power consumption will drop to a fraction of a microamp, and battery life of the coin cell can be increased to over a decade.

Alternately, we could use a contact-less solution, in the form of a Hall effect sensor and a small magnet. No wear, and probably easier to install and hide out of sight somewhere.

The Seiko S-5712 series, for example, draws 1..4 µA when operated at low duty cycle (measuring 5 times per second should be more than enough for a door/window sensor). Its output could be used to wake up the µC, just as with a mechanical switch. Now we’re in the 5 µA ballpark, i.e. about 4 years on a CR2032 coin cell. Quite usable!

It can pay off to carefully review all possible options – for example, if we were to instead use a reed relay as door sensor, we might well end up with the best of both worlds: total shut-off via mechanical switching, yet reliable contact-less activation via a small magnet.

What about the radio

The RFM69 draws from 15 to 45 mA when transmitting a packet. Yet I’m not including this in the above calculations, for good reason:

  • it’s only transmitting for a few milliseconds
  • … and probably less than once every few minutes, on average
  • this means its duty cycle can stay well under 0.001%
  • which translates to less than 0.5 µA – again: on average

Transmitting a short packet only every so often is virtually free in terms of energy requirements. It’s a hefty burst, but it simply doesn’t amount to much – literally!


Aiming for wireless sensor nodes which never need to listen to incoming RF packets, and only send out brief ones very rarely, we can see that a coin cell such as the common CR2032 will be able to support nodes for several years. Assuming that the design of both hardware and software was properly done, of course.

And if the CR2032 doesn’t cut it – there’s always the CR2477 option to help us further.

RFM69s, OOK, and antennas

In Musings on Jul 15, 2015 at 00:01

Recently, Frank @ SevenWatt has been doing a lot of very interesting work on getting the most out of the RFM69 wireless radio modules.

His main interest is in figuring out how to receive weak OOK signals from a variety of sensors in and around the house. So first, you’ll need to extract the OOK information – it turns out that there are several ways to do this, and when you get it right, the bit patterns that come out snap into very clear-cut 0/1 groups – which can then be decoded:

FS20 histo 32768bps

Another interesting bit of research went into comparing different boards and builds to see how the setups affect reception. The good news is that the RFM69 is fairly consistent (no extreme variations between different modules).

Then, with plenty of data collection skills and tools at hand, Frank has been investigating the effect of different antennas on reception quality – which is a combination of getting the strongest signal and the lowest “noise floor”, i.e. the level of background noise that every receiver has to deal with. Here are the different antenna setups being evaluated:

(figure: the three antenna setups being evaluated)

Last but not least, there’s an article about decoding packets from the ELV Cost Control with an RFM69 and some clever tricks. These units report power consumption every 5 seconds.


Each of these articles is worth a good read, and yes… the choice of antenna geometry, its build accuracy, the quality of cabling, and the distance to the µC … they all do matter!

A µC is just a small computer

In on Jun 18, 2015 at 00:00

Ok, so let’s take the position that an embedded µC is “just another computer” for now, albeit dramatically more limited than the ones driving today’s laptops and desktops.

Now we want to write software for it. We can’t load a compiler onto the µC, but we can use a “cross-compiler”, which runs on our “big” computer and generates code for our little µC.

For decades, C/C++ has been the language of choice for this task. For decades, this meant purchasing a proprietary compiler which “targets” our specific brand of µC. But no more. Now, we can use “gcc” as an open source toolchain for just about any µC under the sun.

Integrated Development Environments

The Arduino IDE needs no introduction (well, perhaps this one). It wraps a simple editor, the gcc tools, an uploader (avrdude), and a serial console into a convenient executable.

For TI’s ARM chips, there’s a derivative of the Arduino IDE, called Energia.

For NXP’s LPC ARM chips, there’s LPCxpresso, based on the Eclipse IDE.

The above environments are available for Windows, Mac OSX, and Linux. Many more editor/compiler solutions exist for specific host-platform + target-chip combinations.

The MBED online compiler offers another path, and is portable since it’s browser based.


Reusing other people’s code

Each of these, in particular the Arduino-centric IDEs, comes with a large set of libraries and an even larger range of open source libraries, contributed by their respective communities. This is a mixed blessing, as they don’t always mix-and-match, and sometimes their quality or level of use & support cannot be determined easily. Trial-and-error may be needed.

Libraries vary greatly in purpose and generality. They are, after all, just that: collections of code, organised in such a way that they can be reused in other projects, and by others. There’s an ever-recurring trade-off between finding existing (solid!) code and extending it where needed, or starting from scratch and having to figure out everything ourselves.

As a virtually unlimited resource of open-source projects, there’s always GitHub. Here are a few libraries which cover a lot of ground in many different ways: PlatformIO (portable), stm32plus (STM32, C++ templates), and Cosa (AVR, OO). The list goes on and on, really.

RTOS for concurrency

With physical computing, it’s not uncommon to have to deal with many different tasks at (nearly) the same time: LEDs blinking, buttons getting pressed, data coming in and going out, high-speed ADC acquisition, wired and wireless communication, responding to critical conditions – depending on the project, there can be a lot of things going on.

That poor instruction-by-instruction µC may have a hard time keeping up. Or rather: we may, when trying to program all the logic and make sure all the combinations work as intended!

This is where the RTOS comes in: it’s like a mini operating system, aimed at switching the CPU around to perform multiple tasks. Which is in fact only half the problem: with multiple tasks there comes the need for robust ways to make these tasks work together – semaphores, mutex’es, critical regions, interrupt hierarchies. A slew of (tricky!) techniques.

MBED has an RTOS available as part of its runtime library (and a new one in progress).

ChibiOS is another example, with a “Hardware Abstraction Layer” (HAL) as well as all the aforementioned inter-task communication and synchronisation primitives. An interesting development is Nil, which shares a lot of code with ChibiOS, but aims for a very low code overhead by leaving out some of the more advanced and dynamic features.

The most widespread OSS RTOS is probably FreeRTOS. It’s mature and extensive.
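To give an idea of what task-based code looks like, here is a minimal FreeRTOS sketch with two independent tasks – toggleLed() and readSensor() are hypothetical stand-ins for real hardware access:

#include "FreeRTOS.h"
#include "task.h"

extern void toggleLed (void);   // board-specific helpers, assumed to be
extern void readSensor (void);  // defined elsewhere in the project

static void blinkTask (void* arg) {
  (void) arg;
  for (;;) {
    toggleLed();
    vTaskDelay(pdMS_TO_TICKS(500));  // give up the CPU for 500 ms
  }
}

static void pollTask (void* arg) {
  (void) arg;
  for (;;) {
    readSensor();
    vTaskDelay(pdMS_TO_TICKS(100));
  }
}

int main (void) {
  xTaskCreate(blinkTask, "blink", configMINIMAL_STACK_SIZE, 0, 1, 0);
  xTaskCreate(pollTask, "poll", configMINIMAL_STACK_SIZE, 0, 2, 0);
  vTaskStartScheduler();  // hands control to the RTOS, never returns
  for (;;) ;              // only reached if the scheduler could not start
}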

One benefit of an RTOS is that it also creates a lot of conventions and an API, not just for the tasking side of things but also for the way devices are integrated into the application. This in turn encourages writers of larger libraries to tie into them, for example libraries for accessing and modifying (µ)SD cards as a (V)FAT file system, or for a full Ethernet TCP/IP stack.

Such more extensive features do need more flash and RAM memory, though. It’ll be hard to fit many features into a low-end ARM chip, alongside the RTOS itself (usually 2..10 KB).

Note how a complete RTOS with several advanced libraries really does start to make an embedded µC feel like a “big” computer! Megabytes of flash and RAM, gigabytes of permanent storage – it’s all feasible with larger µCs when combined with some external memory chips. Here is an example, with 8 MB flash, 32 MB RAM, and Ethernet, on a 28 x 104 mm board.

But the common theme with all of the above is really: cross-compilation (mostly C/C++) with uploading of the build result. The µC as a full-blown programmable computer, with the major tasks of editing, compiling, linking, and even debugging off-loaded to a “host”.

[Back to article index]

Using RFM12’s with RFM69 native

In on May 30, 2015 at 00:00

So far so good – we now have the RFM69 running in native packet mode using the RF69 driver, for LPC8xx ARM µC’s, ATmega328 JeeNodes, and Raspberry Pi’s w/ RasPi RF.

But let’s not leave the RFM12 modules behind, and the many Arduino IDE projects using the RF12 driver in JeeLib. If only it could send and receive native RF69-type packets!

Well, it turns out that the RFM12B wireless module can be tricked into doing just this. There are several issues involved:

  • the frequency, FSK swing, and bandwidth can be chosen to match the RFM69
  • likewise, the preamble and sync bytes can be chosen to work with both modules
  • the RFM69’s CRC can be computed in software (it’s not the same as in RF12!)
  • the data whitening used in the RFM69 can also be emulated in software
  • and lastly, differences in packet / header layouts can all be handled in software

Quite a few differences, but by carefully choosing the parameters on both types of modules, we can indeed get packets across in both directions. An updated RF12.cpp driver has been committed to GitHub with all the necessary changes.

To enable this “native + RFM12 + RF12 + Arduino + AVR” mode, change one define in RF12.h to set RF12_COMPAT to 1 (not to be confused with RF69_COMPAT mode!):

#define RF12_COMPAT 1

Note that this change needs to be made in JeeLib itself – you cannot simply add such a define to your own sketch, because the changes it triggers are far too pervasive.

The result is an “RF12 driver” with a virtually unmodified “RF12 API”, i.e. you can poll using rf12_recvDone(), etc – just as you would with a classic setup.

But there are some important differences when using RF12_COMPAT mode:

  • the packet layout is different (rf12_len and rf12_hdr are defined differently)
  • there’s a new rf12_dst field, the lower 6 bits contain the destination node ID (or 0)
  • the lower 6 bits of rf12_hdr always contain the node ID of the sending node
  • RF12_HDR_ACK is defined as bit 6 (this is bit 5 in classic mode packets)
  • RF12_HDR_DST is no longer present, since there is now a rf12_dst field

The RF12_ACK_REPLY test is not working right now, due to the above flag bit changes.

For an example, see the new rf12compat.ino sketch:

#include <JeeLib.h>

MilliTimer timer;

void setup() {
  Serial.begin(57600);
  rf12_initialize(63, RF12_868MHZ, 42, 1720); // 868.6 MHz for testing
}

void loop() {
  if (rf12_recvDone()) {
    Serial.print(rf12_crc == 0 ? "OK" : " ?");
    Serial.print(" dst: ");
    Serial.print(rf12_dst, HEX);
    Serial.print(" hdr: ");
    Serial.print(rf12_hdr, HEX);
    Serial.print(' ');
    for (int i = 0; i < rf12_len && i < 66; ++i) {
      Serial.print(rf12_data[i] >> 4, HEX);
      Serial.print(rf12_data[i] & 0xF, HEX);
    }
    Serial.println();
  }
  if (timer.poll(1000))
    rf12_sendNow(0, "abc", 3);
}

Here is some sample output, receiving packets from a Micro Power Snitch:

OK dst: 80 hdr: 3D 1EBFCFEF01
OK dst: 80 hdr: 3D 1EBFCFEF41
OK dst: 80 hdr: 3D 1EBFCFEF81
OK dst: 80 hdr: 3D 1EBFCFEFC1
OK dst: 80 hdr: 3D 1EBFCFEF01

The destination is 0x80 & 0x3F => 0, i.e. this is a broadcast.
The sending node ID is 0x3D & 0x3F => 61, i.e. the sender’s node ID is 61.
The rest is the 5-byte payload (4 h/w ID bytes, and type 1 = MPS with 2-bit sequence).

Some code refinements are still needed – the exact settings have not yet been optimised for both modules to inter-operate as well as possible. This results in some packets still being missed (IOW, the CRC not matching up). Also, as usual with the RF12 driver, there are occasional noise packets coming in (again with non-matching CRC, and easily flagged).

But as you can see, the basic mechanism is working: RFM12B-based nodes can successfully participate in a network designed for RFM69’s, all running in native packet mode!

In summary, the RF12 driver in JeeLib can now be used in three different ways:

  • as is, the “traditional” mode: classic + RFM12 + RF12 + Arduino + AVR
  • in RF69_COMPAT mode: classic + RFM69 + RF12 + Arduino + AVR
  • in RF12_COMPAT mode: native + RFM12 + RF12 + Arduino + AVR

The two compatibility modes are a compromise compared with an all-RFM12 or all-RFM69 network, simply because these modules are being pushed in some very unusual ways through various software tricks. But these modes should nevertheless come in handy for existing networks, where you don’t have the option of choosing 100% uniform hardware.

[Back to article index]

RF69 native on ATmega’s

In on May 29, 2015 at 00:00

The RF69 driver on GitHub is highly portable, due to the use of C++ templates to keep all platform-specific details nicely separate. All we need, to use this driver in the Arduino IDE for use with JeeNodes, is to implement an SPI class with the proper API (see spi.h):

template< int N >
class SpiDev {
public:
  static uint8_t spiTransferByte (uint8_t out) {
    SPDR = out;
    while ((SPSR & (1<<SPIF)) == 0)
      ;                       // wait for the transfer to complete
    return SPDR;
  }

  static void master (int div) {
    digitalWrite(N, 1);       // de-select the slave before enabling the pin
    pinMode(N, OUTPUT);

    pinMode(10, OUTPUT);
    pinMode(11, OUTPUT);
    pinMode(12, INPUT);
    pinMode(13, OUTPUT);

    SPCR = _BV(SPE) | _BV(MSTR);
    SPSR |= _BV(SPI2X);       // (the div argument is not used in this simple version)
  }

  static uint8_t rwReg (uint8_t cmd, uint8_t val) {
    digitalWrite(N, 0);       // select the slave
    spiTransferByte(cmd);     // send the register address / command byte
    uint8_t in = spiTransferByte(val);
    digitalWrite(N, 1);       // de-select again
    return in;
  }
};

typedef SpiDev<10> SpiDev10;

The template argument “N” is the pin we are going to use as master SPI select, i.e. 10.

Note that there is also an “SPI” class defined in the newer Arduino IDE releases, which you could use as basis for this SpiDev definition. It would make things more compatible if you intend to use multiple SPI devices – but the above definition will be fine for simple uses.

Using the new driver takes a little setup, since the Arduino IDE expects files to be in a specific place and the sketch directory to have a specific name, whereas this driver lives in the embello repository, which has a different layout (and is more general than the IDE). See the README on GitHub for how to set things up in your Arduino IDE environment.

The actual demo is very similar to the ones for the LPC8xx and the Raspberry Pi, both using this same RF69 driver. And not surprisingly, so is its output:

OK 80180801020304050607 (130+38:3)
OK 8018090102030405060708 (130+6:4)
 > #1, 1b
OK 80180A010203040506070809 (132+18:3)
OK 80180B0102030405060708090A (130+22:4)
 > #2, 2b
OK 80180C0102030405060708090A0B (128+6:4)
OK 80180D0102030405060708090A0B0C (130+20:4)

The numbers in parentheses are (-2*RSSI ± AFC : LNA).

The API of this new RF69 driver is slightly different from the RF12 driver API:

#include "spi.h"
#include "rf69.h"
RF69<SpiDev10> rf;
rf.init(28, 42, 8686); // node 28, group 42, 868.6 MHz
rf.send(0, txBuf, txLen);
int len = rf.receive(rxBuf, sizeof rxBuf);
if (len > 0) ...

Some notes (see rf69demo.ino for the complete example):

  • the spi.h header has to be included before the rf69.h header
  • you need to declare an RF69 instance to use this driver
  • initialisation takes a node ID (RF69 supports 1..60), group, and frequency
  • the frequency is specified in MHz, written without a decimal point but with as many extra digits as you want (e.g. 8686 for 868.6 MHz)
  • sending is essentially the same as with the RF12 driver
  • the rf.receive() caller must supply the buffer to hold the incoming packet
  • the return value is the actual packet size, which may exceed the data stored in rxBuf if that buffer is too small to hold the entire packet
  • the first two bytes returned in rxBuf are a header byte and an origin byte – for a maximum-size packet, the buffer has to be at least 64 bytes

Several conventions have changed slightly with this new RF69 native driver:

  • more node IDs are available (6 bits instead of 5); in addition, 61 is for send-only nodes, 62 is reserved, and 63 is receive-all
  • the payload length can be 0..62 bytes (compared to max 66 for the RF12)
  • the header byte has the destination ID in bits 0..5 (or 0 if this was a broadcast)
  • the origin byte has some flags in bits 6..7, and the origin node ID in bits 0..5

Last but not least, there’s no need to frequently poll (as with rf12_recvDone) – you can wait to read out an incoming packet when your code is ready for it, the RFM69 will keep the entire packet in its FIFO buffer until that time (or until you re-use it for sending).
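To illustrate the header conventions listed above, here’s a tiny decoding sketch (rxBuf is assumed to hold a freshly received packet):

#include <stdint.h>

// rxBuf[0] = header byte, rxBuf[1] = origin byte, as described above
static void decodeHeader (const uint8_t* rxBuf) {
  uint8_t dest = rxBuf[0] & 0x3F;    // destination node ID, 0 = broadcast
  uint8_t origin = rxBuf[1] & 0x3F;  // node ID of the sender
  uint8_t flags = rxBuf[1] >> 6;     // the two flag bits of the origin byte
  (void) dest; (void) origin; (void) flags;
}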

Encryption is easy with the new RF69 driver – just set a 1..16-character encryption key:

rf.encrypt("mysecret");

Note that encryption will be applied to all nodes in the same group, since the receiver cannot change its encryption mode for packets coming from different node IDs. To disable encryption again, call “rf.encrypt(0)”.

To reduce the transmit power (0 is lowest, 31 is highest and the default), use this:

rf.txPower(15); // -3 dBm

This can be useful to limit power consumption and for close-range communication.

So there you have it: a simple new RF69 driver which can be used on all RFM69-based JeeNodes and JeeLinks to run these wireless radio modules in native packet mode.

[Back to article index]

RF compatibility options

In on May 28, 2015 at 00:00

Let’s define the playing field first, and all the associated terminology:

  • Classic packets are the ones understood by the RF12 driver in JeeLib
  • Native packets are built into RFM69 hardware and the RF69 driver in Embello
  • RFM12 is a HopeRF wireless module, with limited FIFO & logic (only the “B” variant is in use here)
  • RFM69 is the newer HopeRF module (there are W, CW, and HW variants)
  • RF12 is the driver in JeeLib for Arduino IDE use, supporting classic packets
  • RF69 is a slightly different API supporting the RFM69 in native mode
  • Arduino is the name of the IDE used to build/upload JeeLib-based code
  • Make is the Unix-style mechanism to build/upload Embello-based code
  • AVR is the architecture of ATmega and ATtiny chips made by Atmel
  • ARM is the architecture of NXP’s LPC8xx series and many other vendors

And to elaborate a bit more on this:

  • Classic vs Native refers to the packet format traveling through the air
  • RFM12 vs RFM69 refers to different wireless radio hardware modules
  • RF12 vs RF69 refers to different software and their slightly different API’s
  • Arduino vs Make refers to the cross-build + upload environments
  • AVR vs ARM refers to the different µC architectures (and 8- vs 32-bit)

That’s five pairs of variations, for a theoretical 32 different build and usage combinations! Not all of them make sense, and fewer still need to be implemented for inter-operability.

The most widely used setup is currently (because it was the first available on JeeNodes):

  • Classic + RFM12 + RF12 + Arduino + AVR

While the most recent developments at JeeLabs have focused on this configuration:

  • Native + RFM69 + RF69 + Make + ARM

In other words: it couldn’t be more different and these two cannot inter-operate as is.


A third, in-between configuration uses “#define RF69_COMPAT 1” and can be characterised as:

  • Classic + RFM69 + RF12 + Arduino + AVR

It’s what allows an RFM69-based node to play nice in an RFM12/RF12-type network.

On Raspberry Pi

Recent developments have made it possible to introduce a third platform, i.e. Linux running on Raspberry Pi’s (any model) or the mostly-header-compatible Odroid C1.

Though omitted from the above list of terms to avoid confusion, it could be described as:

  • Native + RFM69 + RF69 + Linux-make + RasPi

This implementation is designed to be used with the RasPi RF board.

Long term trend

Given some of the new features offered by the RFM69 in native mode, it’s safe to assume that in the long run, the following configurations will become mainstream at JeeLabs:

  • Native + RFM69 + RF69 + some-build-system + some-architecture

What this means, is that “on the air”, the NATIVE packet format will at some point start to dominate. And while we could easily set up our environment to support native and classic packets alongside each other – using a different frequency or net group or both, with two central nodes running in parallel – this wouldn’t be the most convenient setup, long term.

A much better strategy will be to just bite the bullet and make everything work in native packet format. Then we could mix and match, old and new, RFM12’s and RFM69’s, and still be able to treat the entire wireless network as a single one.

This is precisely what the upcoming two articles intend to describe.

Using RFM69 w/ Arduino

Luckily, the new RF69 driver is extremely portable. It started out on LPC8xx ARM chips, but has been extended to run under Linux as well – using a simple “RasPi RF” board. The next article in this series will show that it can also be used in the Arduino environment:

  • Native + RFM69 + RF69 + Arduino + AVR

A new “RF12_COMPAT” option

But the most interesting option perhaps, is to bring the RFM12 into the future with a modification to the RF12 driver to make it work with native packets (keeping its API):

  • Native + RFM12 + RF12 + Arduino + AVR

This could in fact be called “forward compatibility”: an RFM12 can be made to act as if it were a more recently produced RFM69. Because of this, an installed base of RFM12’s need not prevent progress and the use of the more advanced RFM69’s in the rest of the network.

There are trade-offs, of course. Perhaps the most important one is that the RFM69’s AES encryption engine is not available on the RFM12. But even this is not a hard restriction: in principle, AES could be implemented in software in a future extension of the RF12 driver.

Stay tuned for these two exciting new options…

[Back to article index]

RFM69 on ATmega

In Book on May 27, 2015 at 00:01

Now that we have the RFM69 working on Raspberry Pi and Odroid C1, we’ve got all the pieces to create a Wireless Sensor Network for home monitoring, automation, IoT, etc.

But I absolutely don’t want to leave the current range of JeeNodes behind. Moving to newer hardware should not be about making existing hardware obsolete, in my book!


The JeeNode v6 with its on-board RFM12 wireless radio module, Arduino and IDE compatibility, JeePorts, and ultra-low power consumption has been serving me well for many years, and continues to do so – with some two dozen nodes installed here at JeeLabs, each monitoring power consumption, house temperatures, room occupancy, and more. It has spawned numerous other products and DIY installations, and the open-source JeeLib library code has opened up the world of low-cost wireless signalling for many years. There are many thousands of JeeNodes out there by now.

There’s no point breaking what works. The world wastes enough technology as it is.

Which is why, long ago, a special RF69-based “compatibility mode” driver was added to JeeLib, allowing the newer RFM69 modules to interoperate with the older RFM12B’s. All you have to do is add the following line of code to your Arduino sketches:

#define RF69_COMPAT 1

… and the RFM69 will automagically behave like a (less featureful) RFM12.

This week is about doing the same, but in reverse: adapting JeeLib’s existing RF12 driver, which uses a specific packet format, to make an RFM12 work as if it were an RFM69:

#define RF12_COMPAT 1

As I’ve said, I really don’t like to break what works well. These articles will show you that there is no need. You can continue to use the RFM12 modules, and you can mix them with RFM69 modules. You can continue to use and add Arduino-compatible JeeNodes, etc. in your setup, without limiting your options to explore some of the new ARM-based designs.

Let me be clear: there are incompatibilities, and they do matter at times. Some flashy new features will not be available on older hardware. I don’t plan to implement everything on every combination – in fact, I’ve been focusing more and more on ARM µC’s with RFM69 wireless, and will most likely continue to do so, simply to manage my limited time.

Long live forward compatibility, i.e. letting old hardware inter-operate with the new…

Classic vs native packets

In on May 27, 2015 at 00:00

There are so many different wireless radio modules and different packet protocols, that it’s easy to get lost. In the context of JeeLabs, the radio modules currently in use are the RFM12 and RFM69, both produced by HopeRF.

At some point, a rumour started echoing around the internet that the RFM12 was being abandoned and replaced by the RFM69, but after over a year it is now clear that both will continue to be produced for a long time to come – if only because there is plenty of demand.

The use cases for both types are quite similar, i.e. being able to send and receive short packets on the 433, 868, or 915 MHz bands, but there are also some major differences:

  • the RFM12 module can only store 1..2 bytes in its FIFO, meaning that data must be read and written at a fairly high rate (within 200 µs with the default JeeLib settings)

  • the RFM69 can store an entire packet, so there are no hard real-time timing requirements, other than to clear the FIFO for the next reception or transmission

  • the RFM69 is more sophisticated, in that many settings are more detailed – and it supports more modulation-, synchronisation-, and packet-formatting modes

  • the RFM69 supports hardware encryption, based on 128-bit AES, and data whitening, which helps better recover the data rate clock in some edge cases

  • the RFM69’s receiver is more sensitive, and its transmitter can be more powerful, leading to a potentially much larger range with the same current consumption

All in all, the RFM69 can indeed be considered a next-generation design – although the RFM12 continues to be just fine for lots of home-monitoring and -automation scenarios. But there is one tricky difference: the on-air packet format / protocol of the RF12 driver (implemented by software in JeeLib) is not the same as the format in the RFM69 hardware-based packet engine:

The “CLASSIC” packet layout used by the RF12 driver is:

  • preamble, SYNC bytes, header byte, packet length, payload, CRC bytes

The “NATIVE” packet layout used by the RFM69 hardware is:

  • preamble, SYNC bytes, packet length, payload, CRC bytes

We can easily designate the first byte of the payload to act as header byte, but we cannot change the order of the length and header bytes. Furthermore, the length differs: on the RFM12, it does not include the header byte, whereas on the RFM69 it does.

There are other differences in specific bits and in how the CRC is calculated, but the main difference preventing trivial inter-operability is really that header-/length-byte order.
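To make the difference concrete, here are two illustrative struct sketches – field order only, these are not the actual driver data structures, and the preamble, SYNC bytes, and CRC are handled separately:

#include <stdint.h>

struct ClassicPacket {   // RF12 driver convention, implemented in software
  uint8_t header;        // node ID plus flag bits
  uint8_t length;        // payload length, *excluding* the header byte
  uint8_t payload[66];   // array sizes here are illustrative
};

struct NativePacket {    // RFM69 hardware packet engine
  uint8_t length;        // counts everything that follows, *including* the next byte
  uint8_t payload[65];   // the first payload byte can act as a header byte
};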

Obviously, senders and receivers need to agree on the protocol used.

So far, the code in JeeLib for both RFM12 and RFM69 has been aimed at implementing the classic packet format everywhere. By adding the following define into our code:

#define RF69_COMPAT 1

… we have the option to use an RFM69 module, while keeping the classic packet layout. This effectively makes the RFM69 backwards compatible: it acts like an older RFM12.

But there is a price to pay, because we can’t use some of the newer features of the RFM69 this way, in particular full-packet buffering and hardware-based AES encryption.

On the ARM platform, we’ve been using a new native “RF69” driver (in rf69.h) which is considerably simpler (less code), doesn’t need fast interrupt support, and offers optional encryption. But while doing so, we’ve also introduced a serious incompatibility.

Now seems like a good time to re-investigate all the different options and combinations…

[Back to article index]

Hooking RasPi RF into MQTT

In on May 23, 2015 at 00:00

Being able to send and receive wireless packets is only part of the story, of course. We’re going to want to collect the packet data, aggregate it to generate averages and other statistics, produce nice graphs, and allow buttons and other controls as well as automated rules to send out commands if we also want to control stuff.

A modular design becomes a necessity, so that we can add and extend the setup over time, without having to rebuild and tinker with what is already in place.

This is where MQTT comes in, the “Message Queue Telemetry Transport”. From the Wikipedia page, we learn that:

MQTT […] is a publish-subscribe based “light weight” messaging protocol for use on top of the TCP/IP protocol. It is designed for connections with remote locations where a “small code footprint” is required and/or network bandwidth is limited. The Publish-Subscribe messaging pattern requires a message broker. The broker is responsible for distributing messages to interested clients based on the topic of a message.

Quite a mouthful. You can think of MQTT as a Reuters news agency (broker) for Physical Computing: news stories (messages) are submitted (published) to it from all over the world, and newspapers sign up (subscribe) to it to get the stories as soon as they come in.

MQTT messages consist of a topic string and a payload (string, number, bytes, whatever). Topics are structured as “a/b/c” levels, and subscribers can specify to receive only certain matches, e.g. “a/+/c” or “a/#”, where “+” matches one segment and “#” matches anything.

What this means is that our RasPi RF can pick some meaningful topic(s) and publish every packet it receives there, without ever caring who is interested in its messages. There could be any number of subscribers, or none. Similarly, it can subscribe to a specific channel and transmit whatever comes in as outgoing wireless packet. Total modularity and freedom!

MQTT clients (this includes our RasPi RF, but also the rest of the processing we set up) connect to the server / broker via TCP/IP, which means that the broker does not have to run on the same computer as where the RasPi RF is connected. But for simplicity and as first test, let’s set everything up on a single Raspberry Pi.

We’ll need an MQTT broker. Mosquitto is a popular one and trivial to install:

sudo apt-get install mosquitto

That’s it. From now on, the broker will always be running in the background. By default, it listens on port 1883 and accepts all connections in plaintext and without a password.

Let’s install a few more packages for convenience, though:

sudo apt-get install mosquitto-clients
sudo apt-get install libmosquitto0-dev libmosquittopp0-dev

The clients are simple command-line tools to simplify testing, the dev packages have everything we need to build C and C++ programs to talk to the broker.

We can now take our little test-raspi-linux example, and turn it into an rf69mqtt demo. This example will use the class-based C++ API, so first we need to define a subclass:

class MyMqtt : public mosquittopp::mosquittopp {
public:
  // NAME is the MQTT client id, defined elsewhere in this example
  MyMqtt () : mosquittopp::mosquittopp (NAME) {}

  virtual void on_connect (int err) {
    printf("connected %d\n", err);
  }

  virtual void on_disconnect () {
    printf("disconnected\n");
  }
};
Then we define an instance and insert the proper initialisation and publishing calls:

MyMqtt mqtt;

struct {
  int16_t afc;
  uint8_t rssi;
  uint8_t lna;
  uint8_t buf [64];
} rx;

// (the MQTT connection itself is set up earlier, as part of the initialisation)
while (true) {
  int len = rf.receive(rx.buf, sizeof rx.buf);
  if (len >= 0) {
    rx.afc = rf.afc;
    rx.rssi = rf.rssi;
    rx.lna = rf.lna;

    char topic [30];
    sprintf(topic, "test/RasPiRF/%d", rx.buf[1] & 0x3F); // topic ends in the sender's node ID
    mqtt.publish(0, topic, 4 + len, (const uint8_t*) &rx);
  }
  mqtt.loop(0); // keep the MQTT connection serviced
}


The “rx” struct has all the details about the received packet, i.e. not just the payload but also some other information provided by the RF69 driver, such as signal strength.

Note also that the topic ends with a number which is the node ID of the sending node. That way, clients can subscribe to be notified only of packets from a specific node if they want.

Since this still uses the WiringPi library and needs access to the physical SPI bus and GPIO pins, we need to run this program as superuser:

$ make
g++ -I../../../lib/arch-raspi -I../../../lib/driver rf69mqtt.cpp \
  -lwiringPi -lwiringPiDev -lpthread -lmosquittopp -o rf69mqtt
$ sudo ./rf69mqtt

connected 0

That’s it, we’re connected without errors. But there will no longer be any visible output from this program, since all messages are now being sent to Mosquitto instead.

In fact, we should keep this program running forever in the background:

    nohup sudo ./rf69mqtt &

Then we can use a command-line utility client to see if anything is happening:

    mosquitto_sub -v -t '#'

This will subscribe to “#”, i.e. all topics, and print the topics and payloads as they come in. Unfortunately, the payloads are binary data, so this will print gibberish. A slightly better way to see all published data is to convert all the output to text using the “xxd” hex dumper:

    mosquitto_sub -v -t '#' | xxd

This way, the output will at least be plain ASCII text. But it’s not quite as easy to read.

Yet another way is to look at Mosquitto’s built-in statistics, which it publishes as special topics (these are not included when using just “#” as subscription wildcard!):

$ mosquitto_sub -v -t '$SYS/#'
$SYS/broker/bytes/received 2239
$SYS/broker/bytes/sent 281
$SYS/broker/bytes/per second/received 1
$SYS/broker/bytes/per second/sent 0
$SYS/broker/version mosquitto version 0.15
$SYS/broker/timestamp 2013-08-23 19:24:40+0000
$SYS/broker/changeset $Revision: e745e1ab5007 $
$SYS/broker/uptime 1639 seconds
$SYS/broker/messages/stored 20
$SYS/broker/messages/received 67
$SYS/broker/messages/sent 15
$SYS/broker/messages/per second/received 0
$SYS/broker/messages/per second/sent 0
$SYS/broker/clients/total 1
$SYS/broker/clients/inactive 0
$SYS/broker/clients/active 1
$SYS/broker/clients/maximum 2
$SYS/broker/heap/current size 3756 bytes
$SYS/broker/heap/maximum size 6368 bytes

As you can see, data is coming in: Mosquitto has received 67 messages so far.

The actual rf69mqtt code on GitHub is a slightly extended version of this demo, which publishes to a topic that includes the frequency band and network group, so that multiple RFM69 modules can be set up and report to this same central broker. It also listens to a fixed topic and sends out every message published to it as RF packet, turning rf69mqtt into a bi-directional bridge between MQTT and our Wireless Sensor Network.
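The listening side of that bridge boils down to an on_message handler plus a subscription – roughly like this, with the topic name and the rf driver object as assumptions for illustration:

// inside the MyMqtt class: forward each incoming MQTT message as an RF packet
virtual void on_message (const struct mosquitto_message* msg) {
  if (msg->payloadlen > 0 && msg->payloadlen <= 62)
    rf.send(0, msg->payload, msg->payloadlen);  // broadcast the raw payload bytes
}

// ... and after connecting, subscribe to the chosen "outgoing packets" topic:
mqtt.subscribe(0, "test/RasPiRF/tx");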

Note that this first crude design still uses idle polling and consumes 10..30% of CPU.

Anyway. Welcome to the Internet of Things! Go wild! Connect anything to anything!

All we need is a bunch of remote nodes and software to do something meaningful with all this data (might also be a good time to think about security and start using passwords!).

[Back to article index]

RFM69 on Raspberry Pi

In Book on May 20, 2015 at 00:01

With the Micro Power Snitch sending out packets, it’d be nice to be able to receive them!

This week is about turning a Raspberry Pi, or a similar board such as an Odroid-C1, into a Linux-based central node for what could become a home-based Wireless Sensor Network.

All it takes is an RFM69 radio module and a little soldering:


On the menu for this week’s episode:

And before you know it, you’ll be smack in the middle of this century’s IoT craze…

(For comments, visit the forum area)

A super simple “RasPi RF” setup

In on May 20, 2015 at 00:00

The Raspberry Pi hardly needs an introduction these days. Its availability, low cost, and enthusiastic community have put it on the map of every Physical Computing hobbyist.

One way to connect embedded hardware to it is through the USB port, and another is to use pins 6, 8, and 10 on the expansion header for a serial “UART” connection.

But for a wireless radio module such as the RFM69, there is a much simpler way: through SPI, which is available on the expansion header as well. As it turns out, we only need pins 14 through 25 for this. Here is an overview of what’s involved:

(figure: the SPI-related pins in the middle of the RasPi expansion header)

In this case, we’re using the RFM69CW, though with a bit of care the non-C “RFM69W” module could also be used (it has a slightly different pinout). Don’t use the “H” version (high-power), as this will far exceed the max 50 mA current allowance of the 3.3V supply.

All we need is 3.3V, a 4-wire SPI connection, and while we’re at it, let’s also hook up the DIO0 and DIO2 pins so we can make the Raspberry Pi respond to interrupt requests.

Note how the required +3.3V and GND pins are all there, on this part of the header. By using a small piece of prototype PCB, we can construct a tiny (pico?) add-on board:


The pads on the RFM69 are not very convenient, since they don’t align with the 0.1″ grid spacing of prototyping boards, but we can just use the corners to hold the RF module, and use wires for the rest:


The radio was mounted upside down so we can see the labels, and the indication of which frequency band this unit is for. The circuit won’t care, it has no idea of “up” and “down”.

Connections are best made with insulated wire, due to the close proximity of all the different signals. The thinner the better (to easily fit everything in there), so let’s use Kynar “wire wrap” wire – this wiring doesn’t need to provide any mechanical support:


(the careful viewer will note that one wire from pin 22 to NSS still needs to be added)

Here is the completed build, installed on one of the original “Model B” Raspberry Pi’s:


Hard to see, but at the far end an 86-mm (white) antenna wire has also been soldered on.

We’re ready to go on the air – all we need now, is a little bit of software on Linux!

[Back to article index]

Final MPS schematic and PCB

In on May 16, 2015 at 00:00

In case you’ve reached this page via Google or a direct URL, first a brief overview:

The “Micro Power Snitch” is a battery-less circuit which sends out wireless data packets using energy harvested from a Current Transformer, for example. There is an 8-week series of articles about how this all started, what problems arose, and how they were solved.

With that out of the way, here is the final design for the Micro Power Snitch – version 1:


There is a 2×3 header which also doubles as jumper block:

  • for normal use with a C.T. and 50..60 Hz AC, use a jumper between pins 3 and 4
  • with a high-impedance DC input (max ≈ 3.6V), jumper 3-to-5, and 2-to-4 instead

The DC configuration quadruples the reservoir capacitance by placing C1 and C2 in parallel instead of in series. This could be used with a small solar cell, for example. Depending on the cell, add a 3.6V @ 1W zener to limit the cell’s output voltage and absorb excess energy.

Here is a corresponding PCB design – it measures 1.2″ x 1.9″ (≈ 30×48 mm):


Some additional component notes:

  • C1 and C2 are 470 µF here, but some larger reservoir capacitors will also fit
  • note the orientation of Q1 (BC549C) and Q2/Q3 (TP2104 MOSFETs)

The demo software has been extended a bit further and now has the following behaviour:

  • a packet is sent out once every second as long as there is enough power
  • the RF settings are: txPower 18 (0 dBm), 868 MHz, group 42, node ID 61
  • each packet is 5 bytes (more can be appended later)

The packet contents are: a unique 4-byte identification (derived from the unique 16-byte hardware ID in each LPC810 µC), followed by a sequence/type byte. The lower six bits are “000001”, to tag this node as being an MPS v1. The two upper bits are incremented on each transmission and can be used on the receiving end to detect occasional missed packets.

This introduces a new convention that node ID “61” is reserved for nodes which send but never need to receive any data. The first five bytes then identify each node and its type. You can have as many “61” nodes in the WSN as you like without running out of node IDs.
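Composing that 5-byte payload is straightforward – an illustrative sketch, with hwId (the 4 bytes derived from the hardware ID) and the seq counter assumed to exist:

uint8_t payload [5];
memcpy(payload, hwId, 4);                // 4 bytes derived from the LPC810 hardware ID
payload[4] = ((seq++ & 3) << 6) | 0x01;  // lower 6 bits: type 1 = MPS v1, top 2 bits: counter
rf.send(0, payload, sizeof payload);     // broadcast, sent from node ID 61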

Here is some sample output (received using this code, but with encryption disabled):

OK 803d5ca3cfef01 (144+34:3)
OK 803d5ca3cfef41 (144+38:3)
OK 803d5ca3cfef81 (144+46:3)
OK 803d5ca3cfefc1 (144+26:3)
OK 803d5ca3cfef01 (144+42:3)

Where: 80=broadcast, 3d=from-61, then 4 unique bytes, then the counter/type byte.

That’s it…

So there you have it. Once the MPS is connected to a C.T. such as this one, and clipped over one (not two!) wires of an AC mains cable, it will report whenever there is more than about 500W @ 230VAC (or 250W @ 115VAC) power going through the wire. Forever!

The MPS was initially created as proof of concept, but (as often with such ideas) has moved well beyond that stage. There is a lot of accumulated knowledge in this design by now, on how to make a µC + RF combination work off an energy source which is very low power and only available intermittently, so startup and shutdown become critical design aspects.

In a way, this project really has only just started. It could also be used outdoors, powered by a very small solar cell, for example. A supercap or battery could be added to last much longer, at which point the spare “AUX” pin could be used to read out some sensor.

The code so far is only about 1 KB, leaving plenty of room in that LPC810 µC to add a lot more smarts. Who knows – it might be possible to estimate actual power levels from either the Vdd voltage or how fast it recovers after a packet transmission. The transmit power could be adjusted to reflect how much energy there is, which would allow the receiver to deduce a very rough power level via the strength of the received signal. Weaker transmits could be mixed with stronger ones, to sail even closer to the edge.

Simple as it may be, this circuit can be used in all sorts of creative new ways. The hardware is just a start – your imagination and software implementation ideas can take it anywhere!

The code and all design files are available as open source – create, share, and enjoy!

[Back to article index]

Should we send, or not?

In on May 14, 2015 at 00:00

We have three ways to try to avoid getting into trouble, i.e. ending up in the “limbo state” where the MPS is powered up, but without enough energy to send out packets.

Vcc estimation

Unfortunately, the LPC81x series does not have a real ADC, only an analog comparator, a 0.9V band-gap reference, and a 31-tap voltage “ladder”. But as it turns out, with the help of some trickery and math this is just enough to estimate the supply voltage with!

The idea is a bit convoluted, but fairly simple: we compare the known 0.9V reference with each of the taps, while the ladder is connected to Vcc. That means each step of the ladder corresponds to 1/31st of the Vcc voltage, and at some tap the level will exceed that 0.9V reference.

When Vcc is low, each tap/step will be a small voltage, so the match will occur at a higher tap. Conversely, when Vcc is at the maximum allowed 3.6V, each tap will be 3.6V/31 ≈ 0.12V, and the match will happen at a lower tap. This is available as an example on GitHub:

1.6V => estimate: 1641 mV
1.7V => estimate: 1743 mV
1.8V => estimate: 1860 mV
1.9V => estimate: 1992 mV
2.0V => estimate: 2146 mV
2.1V => estimate: 2146 mV
2.2V => estimate: 2325 mV
2.3V => estimate: 2325 mV
2.4V => estimate: 2536 mV
2.5V => estimate: 2536 mV
2.6V => estimate: 2790 mV
2.7V => estimate: 2790 mV
2.8V => estimate: 3100 mV
2.9V => estimate: 3100 mV
3.0V => estimate: 3100 mV
3.1V => estimate: 3487 mV
3.2V => estimate: 3487 mV
3.3V => estimate: 3487 mV
3.4V => estimate: 3487 mV
3.5V => estimate: 3985 mV
3.6V => estimate: 3985 mV

At the low end, the estimates are fairly accurate, but at the high end they are very coarse.
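For reference, the arithmetic behind these estimates is simple: if tap i of the 31-step ladder is the first one to exceed the 0.9V band gap, then Vcc ≈ 0.9V × 31 / i. A sketch of that calculation, with the comparator scan itself left as an assumed helper:

// findFirstTapAboveBandgap() is assumed to step through the 31-tap ladder via
// the analog comparator and return the first tap (1..31) exceeding 0.9V
int tap = findFirstTapAboveBandgap();
int vccEstimate = (900 * 31) / tap;  // in millivolts, e.g. tap 9 -> 3100 mV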

With this code, we could enhance the MPS logic to only start sending out wireless packets when the supply voltage is known to be, say, 3.1V or more. But we may not have to…

Avoiding voltage drop

In fact, we can do better than that by making the supply voltage drop less. There are two simple ways to accomplish this:

  1. send smaller packets, i.e. shorter “ON” times for the power-hungry transmitter
  2. increase the two reservoir capacitors, to collect more energy for us

Step 1) is easy to do by not encrypting packets. This may seem odd, but the 128-bit AES engine in the RFM69 will round up all payloads to a multiple of 16 bytes (128 bits) after encryption. The receiver then decrypts and returns the payload length as sent.

For our purposes, we’ll use a 6-byte payload: a 4-byte unique code, derived from the LPC8xx’s built-in hardware ID, and 2 bytes as placeholder for additional information.

Step 2) is even easier: by simply using larger capacitors, it will take a little more time for them to charge up, but they will hold their voltage level much better when the transmitter starts eating up the energy (with its 15..25 mA power consumption, however brief).

Staying out of trouble

Both of the above improvements add value, in that we’ll be able to send only when we think it’s feasible (high enough Vcc) and even with lower Vcc’s since the supply won’t collapse as quickly (larger capacitors) – but we can still get into trouble, depending on AC variations!

So the third trick is really the most important one: re-using the Q1/Q2 circuit to properly shut off the entire MPS when Vres drops below an acceptable level of 1.6..1.8V or so.

As it now stands, the circuit described so far doesn’t shut off nicely due to diode D3, which prevents turn-off. With slightly re-dimensioned resistors and leaving the diode out, we can however get a well-defined turn-off behaviour into the MPS:

(figure: the shut-off fix – re-dimensioned resistors, with diode D3 left out)

The base of the transistor will go higher as Vres rises, up to the point where it starts conducting. Then, the 220 kΩ positive feedback resistor will cause it to “snap on”. But when Vres drops, it will at some point stop, and now the feedback will cause it to snap off:


Where: CH1/yellow = Vres, CH2/blue = Vradio, CH3/purple = Vdd (i.e. the µC supply).

At about 2.5V, the circuit snaps on, but when power is lost (the 500W AC load was turned off), the circuit snaps completely off once Vres drops to ≈ 1.6V. Note that Vres no longer drops, since both the µC and the radio have been disconnected. The remaining energy will come in handy on the next cycle. How about that, eh? We can store some energy for later!

Note how Vradio drops according to its own rules, due to the RFM69’s internal circuitry.

We now have predictable behaviour, and best of all: we have a guaranteed reservoir on startup, because a full drop from 2.5V to 1.6V is now always supported. As long as we can stay above that lower bound, the µC + radio can go to sleep and loop, else Q1/Q2 will shut them off and force a complete power-up cycle as soon as more energy is available.

This modification – plus shorter packets and larger reservoir capacitors – makes the circuit foolproof: whenever it starts up, we’ll be able to send out at least one wireless packet!

[Back to article index]

Micro Power Snitch, success!

In Book on May 13, 2015 at 00:01

We’ve come to the eighth and final episode of the Micro Power Snitch story: it’s working! The circuit is transmitting wireless packets through the RFM69 radio module running on nothing but harvested electromagnetic energy. Install once, run forever!

But there are still several little details, optimisations, and edge cases we need to take care of – which is what this week’s articles are all about. As always, one article per day:

The MPS triggers on any appliance drawing ≥ 500W (on 230 VAC, or 250W for 115 VAC):


As always on the JeeLabs weblog: everything is open source – you are welcome to build and adapt this circuit for your own purposes. If you do, please consider sharing your suggestions/findings/improvements on the forum, for others to learn and benefit as well.

(For comments, visit the forum area)

Taming the radio startup current

In on May 9, 2015 at 00:00

Let’s briefly recall the problem we’re having with the RFM69’s startup current:


(CH1/yellow = Vres, CH2/blue = Vradio, QMA/red = Vres-vRadio, CH3/purple = unused)

  • when Vres is about 2.0V, Q1/Q2 switch on and power up the rest of the circuit
  • the Vradio supply starts rising, due to back-feeding from the SPI I/O pins
  • when it reaches about 1.2V, the RFM69 starts drawing more current
  • this prevents Vres, the main supply voltage, from rising much further
  • once the RFM69 is initialised and put to sleep, Vres rises to its maximum level

Although this approach works, it prevents the MPS from operating with less incoming energy.

The solution is in fact quite simple: as soon as the µC comes out of reset, we turn all the SPI pins connected to the RFM69 to outputs and set their level to “0”. This will prevent any further back-feeding into Vradio.

Then, just before we want to initialise the RFM69, we restore the original settings, send the initial SPI commands, and then put the RFM69 into its 0.1 µA sleep mode.

The difference is quite dramatic:


(with apologies for the scale of the red difference line, which was set 4x lower here)

  • the blue Vradio line only rises to about 0.45V
  • then the µC comes out of reset and prevents it from rising further
  • as a result, the RFM69 never goes into its more-power-consuming mode
  • now Vres continues to rise all the way to the 3.55 V maximum level

Turning on the radio still causes a dip of about 0.7V, but now the dip occurs at a much higher Vres level, where we can easily handle such a brief drop.

The code for this new approach can be found on GitHub:

// disable all special pin functions

// make all SPI pins "0" outputs to prevent radio back-feeding
LPC_GPIO_PORT->DIR[0] |= 0b111110;  // pio1..5 all outputs
LPC_GPIO_PORT->CLR[0] = 0b111100;   // pio2..5 set to 0
LPC_GPIO_PORT->SET[0] = 0b000010;   // pio1 set to 1

sleep(10000); // sleep 1 sec to let power supply rise further

LPC_GPIO_PORT->DIR[0] &= ~0b111100; // pio2..5 all inputs again
LPC_GPIO_PORT->B[0][1] = 0;         // low, turns radio power on

sleep(100); // sleep 10 ms to let the radio start up

// SPI0 pin configuration
// lpc810: sck=3p3, ssel=4p2, miso=2p4, mosi=5p1
LPC_SWM->PINASSIGN[3] = 0x03FFFFFF; // sck  -    -    -
LPC_SWM->PINASSIGN[4] = 0xFF040205; // -    nss  miso mosi

// initialise the radio and put it into idle mode asap
rf.init(61, 42, 8683);              // node 61, group 42, 868.3 MHz

Here is an example where the incoming energy is much lower than before (about half):


The red line is now the derivative of Vres, i.e. the rate of change of that power supply voltage – it’s a rough indication of the current going in or out of the reservoir capacitors. Note that its zero baseline is two divisions down from the top. As you can see, the current starts coming in fast, then drops as the capacitors reach their full level, with a brief consumption spike when the µC starts up and the radio starts drawing some current.

Vradio does not drop back to zero, as you can see. This is because the back-feeding is caused by diodes on the chip, which start conducting early but prevent that charge from flowing back out.

In this case, the incoming energy is insufficient (max about 2.4 V) to send out wireless packets. The voltage would drop to a level where neither the µC nor the RFM69 can work.

In summary: we’ve timed the µC startup through Q1/Q2 and we’ve tamed the RFM69 radio startup current sufficiently to bring it up slightly later under µC control.

This final result is also probably the maximum achievable without further circuitry – given that this dip is taking place when the µC is not yet able to do anything more about it.

But we’re fine: the MPS can send out packets using only energy picked up by its C.T. !

[Back to article index]

Let’s switch to a better MOSFET

In on May 6, 2015 at 00:00

At this stage, we have a working circuit, which correctly powers up the LPC810 µC. Now, under software control, we can turn on power to the RFM69 wireless radio.

But there’s a slight problem. The P-MOSFET chosen was picked at random from components which happened to be around at design time. From the datasheet:

(datasheet excerpt)

When pulling the gate low by 10V, we can turn the MOSFET fully on. According to these specs it then has an equivalent resistance between drain and source of up to 150 Ω. That’s really a lot – it could lead to a voltage drop of perhaps 0.3V for the µC, but with an even higher transmit current of say 20 mA, this MOSFET is bound to be completely useless.

We have to find a P-MOSFET which turns on well with no more than 2..3V, and which then has a considerably lower resistance. Let’s fix this before trying to send radio packets.

Keep in mind that the Vgs(TH) “knee” at which a MOSFET turns on is not sharply defined.

There are no doubt better alternatives, but again rummaging through the P-MOSFETs in the large “lab component stash” at JeeLabs brought out this unit, which is still through-hole (the same TO-92 package and pinout, in fact), and can be expected to work better:

(datasheet excerpt)

That’s a TP2104 from Supertex. And it’ll accept a 3V gate voltage:

(datasheet excerpt)

It’s not perfect: 10 Ω at 20 mA is still a 0.2V drop, but at least this one should be able to feed an acceptable 2..3V supply voltage to the RFM69 while it’s in transmit mode.

Since this MOSFET is package- and pin-compatible, we can switch both of ’em to this type:


… with lots of probes attached to monitor various supply voltages as well as the SPI bus.

Now – at last – we’re ready to try sending out wireless packets!

[Back to article index]

Back on track – now the hard part

In on Apr 30, 2015 at 00:00

Now that all mistakes in the PCB have been fixed and the circuit is working, we get this:


Note the four oddly-placed components!

And indeed, the 1-blips code is working again, as can be verified through the very faint dimming “blips” of the red LED every 3 seconds.

What we have is an energy harvesting circuit, slowly “pumping up” voltage on the two 100 µF reservoir caps, and the µC kicking into action when there is roughly 2V to power it up.

But that’s really child’s play compared to the second challenge up ahead of us: getting the RFM69 to send out a wireless packet. The reason for this is that the LPC810 µC only draws about 2..3 mA when running, whereas the RFM69 will need some 15..25 mA to enable the transmitter section, depending on the desired RF output power level.

We’re going to have to go through a number of successive stages:

  1. build up the minimal reservoir charge at which point the µC is powered up
  2. go to sleep – to let the charge build up further!
  3. power up the radio module via the I/O pin on the LPC810
  4. go to sleep – to recover some charge!
  5. send SPI commands to initialise the RFM69 and put it into sleep mode
  6. go to sleep – to recover some charge, again!
  7. enable the RFM69 transmitter and make it send out one packet
  8. rinse and repeat, from step 6

The reason for this sequence is that everything uses up charge when running. We have to constantly go back to a low power mode (as low as possible) to let the incoming charge build up our supply voltage again.

Stage 2 keeps the µC off a bit longer, until we’re pretty sure the supply is “plentiful”.

Stage 4 allows the RFM69 to get through its initial start-up sequence, before which it won’t respond to SPI commands. This is mostly about starting up the RFM69’s on-board crystal. According to the data sheet, this phase needs to be given at least 10 milliseconds.

Stage 6 is the main way for the circuit to recover energy. It should be long enough to allow the supply voltage to rise to its maximum value (about 3.5V with currently-chosen values). We’re going to need all the energy we can muster to turn on the radio, get it into transmit mode, send out the packet, and get it back into sleep mode. Luckily, this process can be completely automated on the RFM69 – the µC can set it up, and go to sleep, since the RFM69 can do everything by itself once a packet is placed in its FIFO – all the way to going back to sleep, in fact.
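In code, the whole sequence might end up looking something like this – all helper names are hypothetical, the sketch is only meant to show the structure:

int main () {
  // stages 1+2: Q1/Q2 have just powered us up - wait for more charge to build up
  sleepWhileChargeBuildsUp();

  turnOnRadioSupply();    // stage 3: drive the radio's supply via an LPC810 I/O pin
  sleepMillis(10);        // stage 4: give the RFM69's crystal time to start up

  rf.init(61, 42, 8683);  // stage 5: initialise, the driver puts the radio to sleep

  while (true) {
    sleepUntilSupplyRecovers();           // stage 6: let Vres climb back up
    rf.send(0, payload, sizeof payload);  // stage 7: one packet, radio sleeps afterwards
  }                                       // stage 8: repeat - or brown out and restart via Q1/Q2
}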

There are numerous failure modes, no doubt – but the main one is probably just running out of juice, causing the supply voltage to collapse and the NPN transistor to shut off again. In this case, the circuit will eventually come back alive in exactly the same way as when powered up for the first time. Else, it’ll cycle through stages 6, 7, and 8.

It’ll be an interesting exploration to see how all the above can be implemented – robustly!

[Back to article index]

Pin changes, levels, and edges

In on Apr 26, 2015 at 00:00

One category of interrupts which is very useful in embedded contexts is the “pin change” interrupt, tying external digital pin changes into the µC’s interrupt system.

Pin change interrupt hardware is vendor and chip-family specific, unlike the NVIC (Nested Vectored Interrupt Controller) which is basically the same for all ARM Cortex processors. For the LPC8xx, we have a fairly sophisticated pin change mechanism at our disposal.

The basic LPC8xx functionality supports up to 8 separate interrupts, each tied to their own selectable I/O pin. These 8 interrupts and their handlers can be used independently.

Level interrupts

One way to generate an interrupt is to define the level (“0” or “1”) which will generate the interrupt. A button tied between ground and an I/O pin could be used for this: we simply enable the internal pull-up on the pin, causing it to default to a “1” and when the button is pressed, it gets pulled down to “0”, generating the interrupt.

As a result, a specific ISR we have associated with that pin change interrupt will be started. The ISR’s are called PIN_INT0_IRQHandler() to PIN_INT7_IRQHandler().

So far so good, but then what? The moment the ISR returns, it would be called again if the button is still pressed, because this type of interrupt fires as long as the level matches.

Edge interrupts

For cases like this, it’s simpler to use another type of pin change interrupt – which triggers on a low-to-high or a high-to-low “edge” (or both). That way, each change triggers a single interrupt. Note that in the case of our simple switch, we’ll still get quite a few interrupts, due to switch bounce – but this is a mechanical issue, not an interrupt-specific one.

An example

Let’s try this out. This example on GitHub uses pin change interrupts to report changes on PIO0_2, which is pin 4 on the LPC810. It’s been wired up with a button, as described before:


The code for this is as follows:

#include "sys.h"

volatile bool triggered;

extern "C" void PIN_INT0_IRQHandler () {
  LPC_PININT->IST = (1<<0);       // clear interrupt
  triggered = true;

int main () {


  LPC_SWM->PINENABLE0 |= (3<<2);  // disable SWCLK/SWDIO
  // PIO0_2 is already an input with pull-up enabled
  LPC_SYSCTL->PINTSEL[0] = 2;     // pin 2 triggers pinint 0
  LPC_PININT->SIENF = (1<<0);     // enable falling edge

  while (true) {
    if (triggered) {
      triggered = false;
      printf("%u\n", (unsigned) tick.millis);

It’s not a good idea to call printf() inside the ISR, hence this logic (more on that below).

Here is some sample output, pressing and releasing the button a few times:


Note that, despite responding only to falling edges (i.e. closing the button), some of these lines were triggered by contact bounce when releasing the button.

Another note is that for this simplest of all cases, interrupts would not even have been needed. All pin changes are latched and remembered by the hardware – with a slightly different approach, we could have checked (and then cleared) the hardware flag without risk of missing any edge events. Still, in most situations interrupts will be more useful.
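For completeness, that polled alternative would look roughly like this, checking and clearing the latched falling-edge flag instead of using an ISR:

if (LPC_PININT->FALL & (1<<0)) {  // did pin interrupt 0 latch a falling edge?
  LPC_PININT->FALL = (1<<0);      // writing a 1 clears the latch
  printf("%u\n", (unsigned) tick.millis);
}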

Who needs levels?

You would think that edge-type pin change interrupts are all we need. Why bother with levels at all? Level interrupts come in handy when there is a bit of logic at the other end.

Let’s take the RFM69 wireless module, which has several pins to indicate some specific event has happened. The DIO0 pin, for example, can go high when the radio has received a complete packet. This pin then stays high until the packet has been read by the µC.

So what we can do, is trigger on a high level, and let the ISR read the packet. When the ISR returns, the cause of the interrupt is gone. It won’t fire again until a new packet comes in.

This will add some subtle complexity to the RF69 driver, though. We can no longer just fire off SPI bus requests at will, because now there is an ISR which (at any time!) could do the same, and mess up the SPI bus if there is currently another request going on.

Ignoring this issue for a moment, the logic for our new RF69 driver then becomes:

  • define an ISR, and make it call the RF69::receive() code when DIO0 is high
  • finish the ISR by setting a global flag to indicate that a packet is ready
  • replace calls to receive() in the app by a check on that global flag
  • once the packet has been picked up, clear the global flag
  • and to make things easier: wrap the above two steps in a new function

This approach also requires defining a global packet buffer to hold the received data, because we can obviously no longer control when a packet is going to be read out.

The extra complexity we’ll need in the RF69 driver to make it “interrupt-safe” consists of adding some sort of locking to prevent interrupts at certain critical times. Maybe we can simply disable the new DIO0 pin-change interrupt whenever we access the bus (a code sketch follows the list), i.e.:

  • disable pin-change interrupt N
  • enable the SPI select
  • read/write bytes over SPI as needed
  • disable the SPI select
  • re-enable pin-change interrupt N
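
In code, a guarded SPI transaction could look something like this – just a minimal sketch, assuming CMSIS-style NVIC calls (the exact PIN_INT0_IRQn name depends on the header files used) and hypothetical spiSelect(), spiTransfer(), and spiDeselect() helpers – this is not the actual RF69 driver code:

    // sketch only: guard one SPI register read against the DIO0 pin-change ISR
    // PIN_INT0_IRQn is assumed to be the CMSIS name for pin interrupt 0
    static uint8_t rf69ReadReg (uint8_t reg) {
        NVIC_DisableIRQ(PIN_INT0_IRQn);     // disable pin-change interrupt 0
        spiSelect();                        // enable the SPI select
        spiTransfer(reg & 0x7F);            // send the register address (read mode)
        uint8_t val = spiTransfer(0);       // clock the register contents back in
        spiDeselect();                      // disable the SPI select
        NVIC_EnableIRQ(PIN_INT0_IRQn);      // re-enable pin-change interrupt 0
        return val;
    }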

As you can imagine, this will affect many parts of the current RF69 driver. Which is why actually making these changes and testing them will be left for another time.

Now let’s get back to why this needs level interrupts and can’t be done 100% reliably with edge interrupts. Consider this scenario:

  • a packet arrives, and sets DIO0 to “1”
  • the µC registers it and needs to handle the interrupt
  • it’s a bit busy though, so it’s not calling our ISR right away
  • a second packet comes in
  • finally, our ISR gets called, does its thing, and returns
  • since there is still a packet pending, DIO0 remains “1”
  • we never get a second interrupt

The benefit of a level interrupt is that it “interlocks” with the actual cause in a more robust manner: the level stays in the triggering state as long as there is a reason to interrupt. Only after every reason to interrupt is gone will the level drop back to its idle state.

(update: the RFM69 is probably not a good example after all, as it probably can’t handle multiple packets as just described – but other hardware, such as I2C or UART chips, can)

It might seem that this way of using interrupts adds very little, since we’re still checking to find out when a packet has been received, but now through a variable instead of a hardware register. There are two important reasons why this is nevertheless very useful:

  • interrupts can wake up the µC, so it can switch to a low-power state until then
  • we can extend the ISR to buffer multiple packets, even if not used right away

For high-speed I/O (including the serial port example given earlier), additional buffering can achieve much higher data rates, since we’re no longer forced to constantly poll and extract (or send out) individual bytes – only the ISR needs to be fast enough.
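
As an illustration of such buffering (not code from this series), here’s a minimal byte queue which an ISR could fill, while the main loop empties it at its leisure:

    // single-producer (ISR) / single-consumer (main loop) byte queue - sketch only
    volatile uint8_t queue [64];
    volatile uint8_t qHead, qTail;              // head written by the ISR, tail by main

    static inline void queuePut (uint8_t b) {   // called from the ISR
        uint8_t next = (qHead + 1) % sizeof queue;
        if (next != qTail) {                    // drop the byte if the queue is full
            queue[qHead] = b;
            qHead = next;
        }
    }

    static inline int queueGet () {             // called from the main loop
        if (qTail == qHead)
            return -1;                          // nothing waiting
        uint8_t b = queue[qTail];
        qTail = (qTail + 1) % sizeof queue;
        return b;
    }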

Practical use

It can’t be said often enough: interrupts are tricky to get right. They can happen at the worst possible time, and Murphy’s law says they will, in due time, when least expected.

If possible, consider using the following mechanism to deal with pin-change interrupts:

  • set them up as needed, edges, levels, whatever
  • have the ISR(s) only clear the interrupt and set a global flag
  • then, in your application somewhere, periodically check for that flag

In code (this is the same as the above button example):

volatile bool triggered;

extern "C" void PIN_INT0_IRQHandler () {
  LPC_PININT->IST = (1<<0);     // clear the interrupt
  triggered = true;             // leave a note for the main loop
}

int main () {
  // ... setup code, as in the button example ...
  while (true) {
    if (triggered) {
      triggered = false;
      // do the real work here
    }
  }
}

This has the following properties:

  • the ISR code remains as simple and quick as can be
  • it’ll catch every pin change, no matter how busy the µC is
  • all the real work is done in a normal state, not in interrupt mode
  • the low-level ISR and high-level app logic are cleanly separated
  • notice the (essential!) use of the volatile attribute here

There is a drawback with the above, which can be overcome with some extra coding: multiple triggers in very rapid succession may end up getting “coalesced” into one.
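
One way to do that – again just a sketch, not the actual code – is to count the triggers instead of setting a single flag:

    volatile uint8_t pending;               // number of unhandled triggers

    extern "C" void PIN_INT0_IRQHandler () {
        LPC_PININT->IST = (1<<0);           // clear the interrupt
        ++pending;                          // one more event to deal with
    }

    // in the main loop:
    //   uint8_t n = pending;               // snapshot of the current count
    //   if (n != 0) {
    //     __disable_irq();                 // make the subtraction atomic
    //     pending -= n;
    //     __enable_irq();
    //     ... handle n events ...
    //   }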

Pattern match engine

The LPC8xx pin-change hardware also has another mode, which does pattern matching. Without going into too much detail right now, this mode allows you to define combinations of pin-change events as interrupt causes, as opposed to each pin being an individual source of pin-change interrupts. For example: only trigger an interrupt when pin 1 goes from high to low and pin 2 is “0” and pin 3 is “1”.

The LPC8xx hardware supports either simple pin changes or this pattern-match mode. Pattern-matching is in fact a superset, but setting it up is considerably more involved.

[Back to article index]

Accessing SPI memory

In on Apr 16, 2015 at 00:00

SPI is easy to interface to. That’s because the signalling is quite simple:

Spi signals

There’s a master side (in this case, the µC), which is in control of the ENABLE, CLOCK, and MOSI pins, and there’s a slave side (the dataflash chip), which controls the MISO pin (but only when ENABLE is low).

When ENABLE goes low, the master puts bits on the MOSI (“master out, slave in”) pin, and toggles the clock to shift each bit out to the slave. At the same time, it listens on the MISO pin (you guessed it: “master in, slave out”) and shifts the same number of bits in.

Usually the master first needs to send one or more bytes, so it ignores what comes back during those initial clock cycles. Then, it sends out 0’s and toggles the clock pin further to read back what the slave puts on the MISO pin. When done, the ENABLE pin is set to “1” again. All very simple stuff, even in software.
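
For illustration, a bit-banged version of a single byte transfer could look like this (SPI mode 0, MSB first – setPin() and readPin() are assumed helpers, not actual library calls):

    static uint8_t softSpiTransfer (uint8_t out) {
        uint8_t in = 0;
        for (int i = 7; i >= 0; --i) {
            setPin(MOSI, (out >> i) & 1);   // put the next bit on MOSI
            setPin(SCLK, 1);                // rising clock edge: slave samples it
            in = (in << 1) | readPin(MISO); // ... and we sample the slave's bit
            setPin(SCLK, 0);
        }
        return in;
    }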

Unlike I2C, this is not a pure “bus”, in the sense that you can’t simply add more chips in parallel. Well, three of the pins can and should be connected in parallel to all the slaves, but each slave will need a separate ENABLE pin. In this example, there’s just one slave.

Most µCs have hardware support for SPI. This allows them to perform very fast signalling, much faster than software would be able to toggle and read out pins.

In the LPC8xx, the maximum speed for the master is the same as the µC’s clock speed, i.e. 12 MHz on power up, and 30 MHz when the PLL has been set up appropriately. Once set up, sending out and reading back two bytes using the LPC’s SPI hardware is trivial:

uint16_t xfer16 (uint16_t out) {
  addr()->TXDAT = out;                          // shift the data out (frame size set up elsewhere)
  while ((addr()->STAT & SPI_STAT_RXRDY) == 0)  // wait until the reply has come in
    ;
  return addr()->RXDAT;                         // ... and read it back
}

The LPC8xx hardware will set ENABLE to “0”, shift the data out and in using the CLOCK pin, and set ENABLE to “1” when done. At 12 MHz, this will all happen in less than 1.5 µs.

In the above example, three transfers can be seen: send 0x05 (result 0x00), send 0x35 (result 0x00), and send 0x9F (result 0xEF). These all access status and info registers.

SPI is a very common interconnect mechanism, and is also used by the RFM12/RFM69 wireless modules. In fact, we already have a generic SPI bus driver in the embello repository on GitHub. It was designed to be easily re-used for different tasks.

And since SPI flash is a common mechanism, it’s a good idea to allow re-use of that code as well. Such an “SPI flash” driver can also be found on GitHub. Note that the SPI bus driver is LPC8xx specific, but the SPI flash driver built on top is not – it merely embodies the logic specific to the dataflash chip we’re using, not how to talk to it for a particular µC.

With this code, we can now easily access our dataflash memory, using code such as:

#include "spi_flash.h"

SpiFlash<SpiDev0> spif;

int main () {
  printf("0x%x\n", spif.identify());
}

The SpiFlash<SpiDev0> is a C++ template notation, saying: define an object of the SpiFlash class, specialised to use the SpiDev0 class as its bus access mechanism. Where SpiDev0 is in turn shorthand for SpiDev<0>, i.e. use the SPI0 hardware interface, of the two present in the LPC812 we’re using.

A simple test application can be found here. It erases a few sectors, programs a few pages, and then reads back the data a few times.

So, in theory, this stuff is trivial. We should expect this to work perfectly, right?

Yes and no… the “Channel 0” pin in the image above is a dangling wire, which happened to be attached to the 8-bit logic analyser (a €10 unit from eBay). Surprisingly, it returns some pulses. This is worrying – we seem to be picking up stray signals from nearby wires.

As you’ll see, there is some trouble ahead due to this and other unexpected side-effects.

[Back to article index]

A bag of C++ tricks

In on Apr 3, 2015 at 00:00

The RomVars implementation uses a few advanced C++ features, including:

  • using a C++ template to isolate the hardware-dependent code
  • using a C++ nested class to provide virtual array-like access to entries

The template mechanism makes it possible to define both a 64-byte “Flash64” and a 1024-byte “Flash1k” implementation. Unlike the normal OO subclassing mechanism, all of the differences are handled by gcc at compile time, leading to more efficient and more compact code – often substantially so. Templates are a powerful tool for resource-constrained embedded environments, but some care is needed to avoid making things overly complex.

The basic idea here is that the RomVars is not a subclass of some Flash class (or more likely: vice-versa), but that the Flash class is “integrated” (or perhaps “injected”) into RomVars at compile time through a template:

    template < typename FLASH >
    class RomVars {
      FLASH flash;

So flash is a member of RomVars, of type “FLASH” (a place-holder, not the real type).

One immediate advantage is that the constants in the Flash class become available as constants in the RomVars class. This is why the following lines inside RomVars work:

    enum { NumVars = FLASH::PageSize / sizeof (Tuple) };
    uint8_t map [NumVars];

In other words, a RomVars instance (object) includes a map member which is an array of fixed size, but that size is determined by this definition inside the Flash class:

    enum { PageSize = 64 };

Note the use of the all-capitals “FLASH” as template parameter, which gets filled in when a RomVars instance is defined in the application:

    RomVars<Flash64,0x0F80> rom;

The reason for that second numeric “0x0F80” argument is that the real RomVars class was in fact defined slightly differently – with two template arguments:

    template < typename FLASH, int BASE >
    class RomVars {

Not only does the RomVars class have access to a class telling it how to perform reads and writes to flash memory, it also has a constant value telling it the location in flash memory.

The power from all this comes from being able to use a different flash implementation, at a different location, with all the differences compiled-in (and optimised!) at compile time.

But there is much more to it – this decoupling of the permanent variable implementation code from the actual load/save code could also be used to store the variables in a separate chip, connected via an I2C or SPI bus, or even to store the actual “permanent” data in a remote node, accessed via some wired or wireless protocol. In other words: the RomVars class implements a change mechanism, using two pages of something, somewhere!

All we need is a suitably defined “flash-like” class, with the proper members and constants. The ones currently defined can be found on GitHub. More can be defined later, by anyone.

Note that all this code resides in header files, as is common with templates. That’s because templates are very much a compile-time mechanism, so most of the app code needs to have access to the entire implementation – for the compiler to apply the proper substitutions.

So in a way, C++ templates behave like an advanced (and type-safe!) macro pre-processor.

On to the second advanced C++ trick in this RomVars class.

As mentioned earlier, a RomVars object acts very much as an array, allowing uses such as:

    rom[7] = 0x1234;
    if (rom[8] == 0x5678) ...
    rom[9] = rom[9] + 1;

This relies on several fancy C++ features: overloading of the “[]” array operator, the “cast to uint16_t” operator, and the assignment operator. It also depends on C++ “references” (which is a way to work with pointers without having to use “*” to de-reference them).

There’s a lot to go through to cover all the intricacies of this trick, but it boils down to this (a simplified sketch follows the list):

  • writing rom[7] triggers the C++ operator[] definition in RomVars
  • this returns a special kind of object, of type RomVars::Ref
  • that nested class is defined inside RomVars (and not available elsewhere)
  • the purpose of this object is to remember the RomVars object and the array index
  • when used on the right-hand side of an equals sign, as part of an integer expression, the operator uint16_t implementation is triggered, which calls back into the RomVars object to fetch the appropriate index (using the RomVars::at() method)
  • when used on the left-hand side of an equals sign, we call the RomVars::set() method to perform a storage operation, again at the remembered array index
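
In a much-simplified sketch (not the actual romvars.h code, and with the flash access stubbed out), the structure is roughly this:

    template< typename FLASH, int BASE >
    class RomVars {
    public:
        class Ref {
        public:
            Ref (RomVars& o, int i) : owner (o), index (i) {}
            operator uint16_t () const { return owner.at(index); } // right-hand use
            void operator= (uint16_t v) { owner.set(index, v); }   // left-hand use
        private:
            RomVars& owner;     // which RomVars object this entry belongs to
            int index;          // which entry it refers to
        };

        Ref operator[] (int i) { return Ref (*this, i); }

    private:
        uint16_t at (int i) { (void) i; return 0; }             // stub: real code reads flash
        void set (int i, uint16_t v) { (void) i; (void) v; }    // stub: real code writes flash
    };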

Note that every use of rom[...] returns a temporary instance of type RomVars::Ref (on the stack), but this gets used and cleaned up automatically after use. The gcc compiler is very good at optimising the resulting code, to the point that the actual generated code doesn’t even have this temporary object anymore. It’s all conceptual smoke and mirrors…

The details and notation in C++ to accomplish all this are fairly nasty – perhaps mostly because C++ was defined long after C was invented, and had to do so without breaking the existing C syntax conventions. But the power does lead to a RomVars class which walks, swims, and quacks like a simple array of permanent variable values. No “get” and “set” methods (or fetch/store, at/put, insert/delete, etc) – it all ends up looking like an array.

Hidden inside library definitions, C++ templates and operators can lead to simple code.

At the end of the day, to use this “permanent variable” mechanism, all you need to do is:

  • define a RomVars object, for example: RomVars<Flash1k,0x3800> rom;
  • initialise it before use, by inserting this call: rom.init();
  • use the entries as variables, i.e.: ... = rom[3]; or rom[4] = ...;

And if you want to find out more and investigate / learn all the details: they are all there, out in the open as wide open source, in the flash.h and romvars.h source files on GitHub.

[Back to article index]

Adding an RFM69 radio

In on Mar 28, 2015 at 00:00

With so many additional I/O pins on the LPC812, it’s easy to dedicate a few to a wireless radio module, such as the RFM69. We’ll use the following mapping, just 4 pins for now:

  • NSS = chip select (active low) = PIO0_8 = pin 11
  • SCLK = master SPI bus clock = PIO0_6 = pin 15
  • MISO = master in / slave out = PIO0_11 = pin 7
  • MOSI = master out / slave in = PIO0_9 = pin 10

No need to play tricks with diodes or re-use pins, as we had to do with the 8-pin LPC810.

These radio modules are nice, but unfortunately they use a 2.0 mm pin distance, not the usual 0.1″ (2.54 mm) of breadboards. So again, let’s create a little adapter, on both sides:

DSC 5000

Now we can implement this circuit (adding a 10 µF electrolytic capacitor across the power supply to better handle any power consumption spikes from the radio):

DSC 5001

(note: that orange 0.1″ cap should really be placed next to the LPC812 chip to be useful)

As firmware, we’ll use a little RF69 “ping” test application and upload it:

$ make
uploader -s /dev/tty.usbserial-AH01A0EG build/firmware.bin #build/firmware.bin
found: 8120 - LPC812: 16 KB flash, 4 KB RAM, TSSOP16
hwuid: 3C200411C9981AAEF39EDF51031900F5
flash: 0940 done, 2364 bytes
entering terminal mode, press <ESC> to quit:

[rf_ping] dev 8120 node 12
OK 800d4f0102030405060708090a0b0c0d0e (139-56:3)
 > #46, 46b
OK 800d500102030405060708090a0b0c0d0e0f (139-56:3)
 > #47, 47b
OK 800d510102030405060708090a0b0c0d0e0f10 (140-38:3)
 > #48, 48b

The code sends a message about once a second, and reports all properly received packets.

Evidently, you will need to run this on at least two different setups, to be able to test proper transmission and reception. The “OK” lines are the incoming messages, the “>” lines are about messages being sent out. The payload length is increased each time until the limit is reached, then it restarts with a 0-length payload.
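
For reference, the sending side boils down to a loop along these lines – an illustrative sketch, not the actual rf_ping code, with delayMs() as an assumed helper:

    uint8_t txBuf [66];
    int n = 0;

    while (true) {
        for (int i = 0; i < n; ++i)
            txBuf[i] = i + 1;               // fill with the 0x01, 0x02, ... pattern
        rf.send(0, txBuf, n);               // payload grows by one byte each time
        if (++n > (int) sizeof txBuf)
            n = 0;                          // wrap back to a 0-length payload
        delayMs(1000);                      // roughly once a second
    }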

That’s it, we now have a brand new network (of 2 nodes), sending packets to each other!

[Back to article index]

Micro Power Snitch, last part

In Book on Mar 18, 2015 at 00:01

In this concluding part (for now) about the Micro Power Snitch, which feeds off the magnetic field around an AC mains power cable when in use, I’ll look into how the whole circuit behaves when it comes to actually sending out wireless radio packets.

The daily articles in this week’s final MPS episode are:

Update – With apologies (again!), I’m going to postpone the following posts:

It’s a huge challenge to manage the incoming energy so that everything keeps going!

(For comments, visit the forum area)

A revised, complete MPS

In on Mar 11, 2015 at 00:00

With startup now properly working, it’s time to look into the complete circuit. After all, it would be useless to just keep an LPC810 micro-controller running – we also need to send out some packets through a wireless radio.

The radio module used for this is the RFM69CW by HopeRF – a drop-in replacement for the original RFM12. The RFM69’s sleep mode power consumption is under 1 µA, but there is the same “startup hump” issue as with a µC: when the module starts up, it needs a bit of current until switched into sleep mode (about 1.2 mA).

So we’ll need to use the same trick as with the µC, with a P-MOSFET to act as power-on switch. This MOSFET however, can be tied to an I/O pin on the LPC810, and be placed under software control. It’s a lot easier to add a few lines of code, than to rewire a circuit!

Here is the complete design of the Micro Power Snitch (version 0, i.e. as first prototype):

Screen Shot 2015 02 24 at 11 31 10

Most of this is the same as before, with just the addition of the radio and its power control. There is one spare I/O pin left on the LPC810, available as header with ground and power.

The order of events we’ll aim for is as follows (a rough code sketch follows this list):

  • power comes in from the CT and gradually builds up Vres
  • at some point, the Q1 + Q2 pair switches on and powers up the µC
  • the µC immediately goes into deep power down mode with a wakeup scheduled
  • more energy comes in, allowing Vres to rise further
  • a pre-determined amount of time later, the µC wakes up
  • it turns on power to the radio and initialises it
  • lastly, the µC sends out a packet and powers down the radio and itself again
  • now rinse and repeat that last step every so often
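
In rough pseudo-code, using the RF69 driver introduced elsewhere in this series, the firmware might end up looking like this – every other name is made up for illustration, this is not the actual MPS code:

    int main () {
        deepPowerDown(5000);                    // sleep first, let Vres recover
        while (true) {
            static uint8_t payload [2];         // whatever we want to report
            radioPower(true);                   // drive the P-MOSFET: radio on
            rf.init(14, 42, 8683);              // node 14, group 42, 868.3 MHz
            rf.send(0, payload, sizeof payload);// send out one small packet
            rf.sleep();                         // radio back into sleep mode
            radioPower(false);                  // ... and cut its power entirely
            deepPowerDown(60000);               // wait for the next round
        }
    }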

The actual delays involved will need to be empirically determined, and depend greatly on how much of a voltage drop is created by the periodic µC + radio wake-ups. Keep in mind that the MPS will be consuming considerably more energy now: the radio needs 15..45 mA in transmit mode (depending on the output power selected), and a packet transmission takes at least a few milliseconds.

Still, the issue is really just a trade-off between how often to wake up and how to dimension the reservoir capacitors to limit the voltage drop. These are interrelated, because larger capacitors will take more time to charge up again. And of course it also all depends on the power level we’re monitoring: a 100W load will give us less energy to harvest than 1000W!

A lot more creative coding may be possible later – we could monitor the voltage drop in the µC itself, for example (using its built-in bandgap reference and ladder network), as well as the time it takes to charge up again, then make a rough estimate of how much energy is coming in. And then adjust the packet sending rate accordingly.

But for now, the first goal should be to just get a single wireless packet out the door.

[Back to article index]

An improved MPS circuit

In on Mar 5, 2015 at 00:00

Now that we’ve seen what the voltage monitor does, and how startup can still fail under certain conditions, it’s time to try and resolve everything. So let’s make a few changes:

  1. instead of two LEDs, we’ll use a special TLV431 “shunt regulator” IC
  2. let’s avoid the RC delay by omitting the capacitor on the NPN base
  3. and finally, we add the 1 MΩ positive feedback to get a better “snap action”

Here is the new circuit:

Screen Shot 2015 02 24 at 11 23 22

The TLV431 is an interesting device: it’s essentially an “adjustable zener”, where the chip will do whatever it can (by conducting more or less) to accomplish one goal: maintaining a fixed voltage of 1.24V between its “reference” pin and its “anode”. For your convenience:

Screen Shot 2015 02 24 at 17 00 50

So as long as it can, the TLV431 will try to keep 1.24V across R2. This corresponds to a certain current, and since R1 carries the same current, it will have ≈ 0.6V across it, for a total zener-like effect of 1.82V across the TLV431. At all times, if there is enough voltage.

What this also means is that R3 + R4 will have whatever additional voltage there is from the power supply. Another way to put it is that R3 + R4 will have Vres – 1.82V across them.

The red LED now plays a completely different role: it just limits the voltage across R3 + R4 to about 1.8V. As a result, Vres cannot rise above about 3.6V, which is also the limit of what we should put on the LPC810 and the wireless radio (yet to be attached). Very convenient.

Diode D3 drops the voltage by about 0.6V, and NPN transistor Q1 starts to switch on when its base rises to about 0.6V. This happens a bit sooner than expected perhaps, due to the idle current of the TLV431 itself.

Lastly, we have the 1 MΩ feedback resistor, which will start pulling the base up the moment Vdd rises: once the transistor starts to switch on, it enters a runaway mode where its base rises further, causing it to switch on even more, and so on – until it is fully on, i.e. “in saturation”.

You can see all of this happening nicely in the following oscilloscope capture:


The blue line (CH2) is the transistor’s output voltage on its collector. Around 1.8V it starts dropping sharply with respect to the supply’s Vres. By the time Vres reaches 2.2V, the whole circuit tips over and switches on very rapidly, due to the feedback resistor.

Meanwhile, the collector voltage drops below ground level! This is due to sudden discharge of the P-MOSFET’s gate charge (in combination with the Miller effect). It’s actually quite a welcome side effect, as it helps turn the P-MOSFET on even faster.

So the TLV431 helps create a really well-defined switching point, and the feedback resistor helps the circuit to behave in a very definitive and extremely quick manner. Note also that there are no RC-based delays anymore – this circuit is now purely DC-level based.

Now let’s find out how it behaves with slower supply changes, both up and down…

[Back to article index]

Micro Power Snitch

In Book on Feb 18, 2015 at 00:01

It’s time to tackle a fairly ambitious challenge: let’s try to make an LPC810 run off “harvested” energy and use it to periodically send out a wireless packet.

This week will be a short intro into the matter, with more to follow later:

To lift the veil a bit, here’s the energy I’m going to try to harvest:

Ct shape

This is the voltage from a Current Transformer (CT), when unloaded. Such a CT needs to be “clamped” around one of the wires in an AC mains cable, and will then generate a voltage which is proportional to the amount of current flowing in that wire.

Well, maybe, sort of…

(For comments, visit the forum area)

Picking up magnetic energy

In on Feb 18, 2015 at 00:00

This is the start of a new project. The goal is to detect when an AC mains appliance is turned on and send out wireless packets when this is the case. An important requirement in this project is to do so without direct (galvanic) connection to AC mains. In fact, we don’t want any external power source – no batteries, no wires, nothing. Just a little gadget that sits next to the appliance in some way, and reports its activity over wireless – forever.

We’ll call it the “Micro Power Snitch”.

Powering up a micro-controller and sending out wireless packets requires energy. We’re obviously going to need to “harvest” this energy from somewhere. In this project, the goal will be to convert magnetic energy into a tiny little power source, just enough to drive an LPC810 µC with attached RFM69 wireless radio.

The device to make this happen is called a current transformer (CT), such as this one:

DSC 4947

Think of it as a special kind of transformer, such as this diagram, courtesy of Wikipedia:

763px Transformer3d col3 svg

… except that one of the windings is a single “loop”, and the other is about 1000..2000 turns, inside that blue hump at the top. The ring itself is made of ferrite material, which captures the magnetic field. The ring is actually two halves, so the whole device can be opened and clipped around an existing wire without having to cut or break that wire.

When we put the CT around a (single!) phase of a power cord, and there is current going through it, then the two wires coming out of the CT will show a signal which looks like this:


The voltage is cycling at the same rate as mains power, i.e. 50 Hz, and as you can see, this one alternates between about +5V and -5V, in a somewhat odd shape. It’s clearly not a sine wave. The shape of this signal will not change when more power is being consumed by the AC appliance, only its amplitude.

Note that we have yet to determine whether there’s enough energy in here to drive a µC and a radio module, but this is essentially the power source we’re going to have to work off.

As you will see, this is an extremely weak power source. We’ll need to use lots of tricks to even stand a chance of reliably running off this energy supplier.

[Back to article index]

Keeping track of the milli’s

In on Jan 31, 2015 at 00:00

Let’s recap: we have a simple “blip” node sending out test packets, and we’re picking them up with another node attached to the Raspberry Pi’s I2C bus as a slave device.

This is clearly only the beginning of a general-purpose wireless sensor network setup. There is no return “send” path yet, not even for ACKs, and there is nothing yet to manage the reception, i.e. polling the I2C slave periodically in the background. There is also no easy way to upgrade the firmware in these nodes, not even the central one – we currently need to unplug, re-flash, and re-insert the LPC810 chip if we want to make changes to it.

Which means that this I2C setup is really not much more than a proof of concept.

But this design does have some interesting properties:

  • it’s as low-end as it gets in terms of the micro-controller and the wireless radio
  • there is no hidden complexity, everything is inspectable – and improvable!
  • more central nodes can be added, for different net groups or frequency bands
  • there is no USB involved, no drivers, no interface chips – not even a 3.3V regulator
  • the I2C bus is fast enough to handle all RFM69 traffic, in both directions

One of the less convenient aspects of this design is that the I2C bus is a master-slave setup, with the Raspberry Pi acting as the master. It needs to continuously poll the slaves to pick up newly received packets.

The good news is that polling the I2C bus 10 times per second consumes less than 2% CPU (this was measured with a small Go application used for testing). It’s not a show stopper.


But one drawback is that we’ll get the packets into the Raspberry Pi a little later than when they actually are received by the central node, due to the periodic polling. For replies which need to be sent out in response, this is a bad thing: the longer a remote node has to wait for a reply, the longer it has to stay in receive mode – which means it’ll drain its battery faster.

A related drawback is that we’ll have less accurate information as to the exact arrival time of each packet. The inaccuracy is determined by how often the master polls the slave on the I2C bus. For some use cases, this might be important.

To start with this second issue, there is a fairly simple way to fix this: we could include an “age” field in the reply sent over I2C, i.e. “45” meaning: the reply you’re now looking at is 45 milliseconds old. The receiving end on the Raspberry Pi can then turn this into an accurate timestamp, by subtracting that value from its (absolute) internal clock.
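
The bookkeeping for this is minimal. As an illustrative sketch – the “age” field and these names are not part of the actual code:

    extern volatile uint32_t millis;    // free-running millisecond counter

    static uint32_t arrivalTime;        // tick value when the packet came in
    static uint16_t age;                // staleness to report in the I2C reply

    void notePacketArrival () {
        arrivalTime = millis;           // remember when the packet was received
    }

    void prepareReply () {
        age = (uint16_t) (millis - arrivalTime);    // age in milliseconds
    }

    // on the Raspberry Pi: actual arrival time = current time - age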

The other problem is more difficult to deal with – here are some ideas:

  • raise the master-slave poll frequency, at the cost of a higher overhead and bus load
  • put more smarts into the slave, so that it can handle responses on its own
  • add an I/O pin to let the slave signal when there is a new packet
  • abandon I2C and switch to a conventional bi-directional serial protocol

At this point, it’s not yet clear which approach will be best, so for now let’s just keep it in mind along with all the other functions we still need to implement to arrive at a general-purpose central node setup.

Part of the story also depends on whether we want to use a Raspberry Pi (or similar) as our central server. For a laptop or desktop machine, I2C would not be a convenient option.

[Back to article index]

Debugging the I2C bridge

In on Jan 30, 2015 at 00:00

Our final goal: receiving packets from the RFM69, and passing them on to the Raspberry Pi as I2C slave. The code will be a little more complex than before, but the scary bit is that there is no room for easy debugging – we have no I/O pins left to send printf output to!

For reference, here are the pin assignments used for the LPC810:

  • pin 1 = PIO0_5 = SDA (I2C) – to RasPi
  • pin 2 = PIO0_4 = SCL (I2C) – to RasPi
  • pin 3 = PIO0_3 = SSEL (SPI) – to RFM69
  • pin 4 = PIO0_2 = MISO (SPI) – to RFM69
  • pin 5 = PIO0_1 = MOSI (SPI) – to RFM69
  • pin 6 = +3.3 V – to RasPi
  • pin 7 = GROUND – to RasPi
  • pin 8 = PIO0_0 = SCLK (SPI) – to RFM69

And this is what the setup looks like – plugged into the Raspberry Pi’s 3.3V and I2C:

DSC 4933

The good news is that we have all the hardware and software working, separately that is. Having already tested all the individual pieces is extremely useful, as you will see shortly.

Here are the main parts of the “i2listen” code – the full version is on GitHub, as usual:

RF69<SpiDevice> rf;
uint8_t rxBuf[66];

struct Payload {
    uint8_t seq, len, rssi, lna;
    uint16_t afc;
    uint8_t buf[66];
} out;                      // this is the data as returned over I2C

uint32_t i2cBuffer [24];    // data area used by ROM-based I2C driver
I2C_HANDLE_T* ih;           // opaque handle used by ROM-based I2C driver
I2C_PARAM_T i2cParam;       // input parameters for pending I2C request
I2C_RESULT_T i2cResult;     // return values for pending I2C request

uint8_t i2cRecvBuf [2];     // receive buffer: address + register number

void i2cSetup () { ... }

// called when I2C reception has been completed
void i2cRecvDone (uint32_t err, uint32_t) {
    if (err == 0)
        ...                             // set up the proper reply (see the full code)
}

// called when I2C transmission has been completed
void i2cSendDone (uint32_t err, uint32_t) {
    if (err == 0)
        out.len = 0;                    // mark the payload as consumed
}

// prepare to receive the register number
void i2cSetupRecv () {
    i2cParam.func_pt = i2cRecvDone;
    i2cParam.num_bytes_send = 0;
    i2cParam.num_bytes_rec = 2;             // I2C address byte + register number
    i2cParam.buffer_ptr_rec = i2cRecvBuf;
    LPC_I2CD_API->i2c_slave_receive_intr(ih, &i2cParam, &i2cResult);
}

// prepare to transmit either the byte count or the actual data
void i2cSetupSend (int regNum) {
    i2cParam.func_pt = i2cSendDone;
    i2cParam.num_bytes_rec = 0;
    if (regNum == 0) {
        i2cParam.num_bytes_send = 1;
        i2cParam.buffer_ptr_send = &out.len;        // reply with just the count
    } else {
        i2cParam.num_bytes_send = out.len;
        i2cParam.buffer_ptr_send = (uint8_t*) &out; // reply with the full payload
    }
    LPC_I2CD_API->i2c_slave_transmit_intr(ih, &i2cParam, &i2cResult);
}

int main () {

    LPC_SWM->PINENABLE0 |= 3<<2;        // disable SWCLK/SWDIO
    // lpc810 coin: sck=0p8, ssel=3p3, miso=2p4, mosi=1p5
    LPC_SWM->PINASSIGN3 = 0x00FFFFFF;   // sck  -    -    -
    LPC_SWM->PINASSIGN4 = 0xFF030201;   // -    nss  miso mosi

    rf.init(1, 42, 8683);
    while (true) {
        int len = rf.receive(rxBuf, sizeof rxBuf);
        if (len >= 0) {
            out.len = len + 6;          // 6 header bytes + the payload itself
            out.rssi = rf.rssi;
            out.lna = rf.lna;
            out.afc = rf.afc;
            memcpy(out.buf, rxBuf, sizeof out.buf);
            ...                         // remaining details: see the full version
        }
    }
}

In case you’re wondering: this code uses 1268 bytes of flash and 288 bytes of RAM.

The basic idea is to set up a complete “payload” once a packet is received, which is then sent out to the I2C bus when the Raspberry Pi asks for it. All I2C slave communication is handled by the ROM-based I2C driver in the LPC810, using interrupt mode:

  • when I2C data comes in, the first byte is assumed to contain either “0” or “1”
  • if “0”, we send back the number of bytes waiting in the payload (zero if none)
  • if “1”, we send back the actual payload, including the extra rssi, afc, etc. details

As it turns out, this code worked on the very first try – there is nothing to debug!

Here is what we get back when we read “register 0”:

$ i2cdump -y -r 0-0 1 0x70 c
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f    0123456789abcdef
00: 0a 

What it tells us, is that there is a 10-byte (0x0a hex) payload waiting.

And here are a couple of payload reads, as requested from the command line:

$ i2cdump -y -r 1-10 1 0x70 c
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f    0123456789abcdef
00:    7d 0a 93 01 fa ff 80 01 3c 00                    }????.??<.     
$ i2cdump -y -r 1-10 1 0x70 c
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f    0123456789abcdef
00:    7e 0a a5 01 e2 fc 80 01 3d 00                    ~???????=.     
$ i2cdump -y -r 1-10 1 0x70 c
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f    0123456789abcdef
00:    8c 0a 96 01 fa fe 80 01 4b 00

Since the last two bytes are a sequence number sent from the “blip” node, we can see that these are indeed different packets. Keep in mind however, that packets will be lost if we don’t fetch them quickly enough – as shown in that last example.
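
The same two-step read can of course also be done from a program instead of i2cdump. Here’s a minimal sketch using Linux’s standard i2c-dev interface (not part of the original setup, with error handling omitted):

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/i2c-dev.h>
    #include <stdio.h>
    #include <stdint.h>

    int main () {
        int fd = open("/dev/i2c-1", O_RDWR);
        ioctl(fd, I2C_SLAVE, 0x70);         // talk to the LPC810 slave

        uint8_t reg = 0, count = 0;
        write(fd, &reg, 1);                 // "register 0": ask for the byte count
        read(fd, &count, 1);

        if (count > 0) {
            uint8_t buf [72];
            reg = 1;                        // "register 1": fetch the payload
            write(fd, &reg, 1);
            read(fd, buf, count);
            for (int i = 0; i < count; ++i)
                printf("%02x", buf[i]);
            printf("\n");
        }
        close(fd);
        return 0;
    }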

There are still some weak spots in the i2listen code. The most important one is that we’re mixing interrupts and looping code without taking any precautions. If an I2C request comes in while a packet is being copied into the payload buffer, we’ll end up with garbled data. This needs to be addressed for production use, but as a proof of concept it will do.
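
One way to address that race – again just a sketch, not the actual fix – is to briefly disable interrupts while the payload is being filled in, so the ROM-based I2C driver can never see it half-updated:

    if (len >= 0) {
        __disable_irq();                // keep the I2C ISR out for a moment
        out.len = len + 6;
        out.rssi = rf.rssi;
        out.lna = rf.lna;
        out.afc = rf.afc;
        memcpy(out.buf, rxBuf, sizeof out.buf);
        __enable_irq();                 // interrupts allowed again
    }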

So there it is: µC “blip” => RFM69 => air => RFM69 => µC “i2listen” => I2C => RasPi

We’ve created a brand new wireless network with nothing but a few little LPC810’s!

[Back to article index]

Debugging the RF link

In on Jan 28, 2015 at 00:00

Debugging wireless communications can be tricky, especially if the hardware and the software are both untested. We’re going to have to figure out all the details on the sending as well as the receiving side, and until it all works there will be precious little feedback.

So the first step is really to simplify. We shouldn’t try to debug the RF sender, RF receiver, both sides of the I2C communication and the Raspberry Pi hookup all at once.

We have a working upload setup for the LPC810, based on FTDI, and we have a working serial connection to print debugging information. So let’s start with this direct hookup:

DSC 4930

That’s a modified USB BUB, an LPC810 upload board (including a 3.3V regulator), with six wires to an RFM69 radio module: two for power, 4 for SPI. And an antenna wire.

The wiring is all white, unfortunately, but this is only a temporary setup for debugging – check out the gpio/pin assignments in the code below if you want to see what goes where.

Here is the listen test code we’ll use:

#include <stdio.h>
#include "serial.h"

#include "spi.h"
#include "rf69.h"

RF69<SpiDevice> rf;
uint8_t rxBuf[66];

int main () {
    LPC_SWM->PINASSIGN0 = 0xFFFFFF04;   // only connect 4p2 (TXD)
    serial.init(LPC_USART0, 115200);

    LPC_SWM->PINENABLE0 |= 3<<2;        // disable SWCLK/SWDIO
    // lpc810 coin: sck=0p8, ssel=3p3, miso=2p4, mosi=1p5
    LPC_SWM->PINASSIGN3 = 0x00FFFFFF;   // sck  -    -    -
    LPC_SWM->PINASSIGN4 = 0xFF030201;   // -    nss  miso mosi

    rf.init(1, 42, 8683);
    while (true) {
        int len = rf.receive(rxBuf, sizeof rxBuf);
        if (len >= 0) {
            printf("OK ");
            for (int i = 0; i < len; ++i)
                printf("%02x", rxBuf[i]);
            printf(" (%d%s%d:%d)\n",
                    rf.rssi, rf.afc < 0 ? "" : "+", rf.afc, rf.lna);

This uses the new RF69 driver to report all incoming packets on the serial port. The node is set up as id 1, group 42, and frequency 868.3 MHz, but feel free to adjust these as needed.

The output from this code will not be terribly exciting at this stage:

$ lpc21isp [...]
Terminal started (press Escape to abort)


And then: silence – because there’s nothing sending out packets yet! Let’s fix that next…

[Back to article index]

Voltage sources

In on Jan 23, 2015 at 00:00

A lot of the work at JeeLabs is about wireless communication. When the signalling doesn’t use wires, the most practical option for power is to also provide it without wires.

There are a number of options:

  1. batteries, both single-use and rechargeable
  2. super-capacitors, i.e. very large capacitors
  3. solar energy, i.e. photovoltaic cells (yep, same Volta as before)
  4. energy harvesting, i.e. tapping some ambient source of energy

DSC 1949


There are many types of batteries. Let’s focus on the small ones first, since the aim here is to power small wireless nodes, spread out in and around the house:

  • AA and AAA: single-use alkaline cells provide 1.5V, and rechargeable NiMH around 1.2V, so in both cases we’ll need two or three of them in series to produce 2.4..3.6V, roughly. These can be used either through a regulator with fairly little voltage drop / loss, or as is if the voltage range is known to be acceptable across the battery’s entire lifetime.
  • Coin cell: the most common one is the CR2032 (20 mm round, 3.2 mm thick), and delivers a nice 3.0V – the disadvantage is that most coin cells have a limited capacity (around 200 mAh) and are single-use, leading to recurring replacement and waste disposal.
  • LiPo (Lithium Polymer): these rechargeable cells supply 3.7..4.2V and have a very high energy density – advantage: available in a wide variety of sizes/capacities (also as cell phone packs) – disadvantage: needs some care, since the internal resistance is very low (a short can produce high currents on “unprotected” cells).

When making battery choices, it’s also important to consider what voltage range all parts of the circuit will support. The LPC8xx series works from 1.8..3.6V and so will the RFM12/69 radio modules, for example. Many sensors work well in the 3.0..3.3V range, some support a larger range – as long as all components are used in their operating range, we’ll be fine.

More power

Sometimes, you may need much higher voltages, i.e. 12V to drive a motor or relay, and much higher currents of 1A or more. The hobby R/C model airplane/boat/car world has embraced “LiPo packs”, which look like this:

63379 m

This particular one is described as “4S1P”, which means 4 LiPo cells in series, and just one of each (none in parallel). Its nominal voltage is 14.8V, which is really the minimum value once the battery is nearly discharged – when fully charged up, it’ll be 4x 4.2V = 16.8V. The capacity is 1300 mAh, i.e. it can supply 1.3A for 1 hour, 0.65A for two hours, etc. before needing a full recharge. This is actually a relatively small unit in the hobby R/C world.

But here’s the thing to watch: the discharge rate of this particular unit is specified as “40C”, which means that this battery can deliver a current of 40 times its capacity – i.e. 52A! Although it would only be able to do so for 90 seconds, since even at that rate the capacity is still the same. But what this means is that this harmless-looking battery of just 77x34x31 mm can pump a whopping 16.8 x 52 = 870 Watt (≈ 1 horsepower!) through a circuit with low enough resistance. Enough to melt just about any piece of metal connected to it.

LiPo batteries are fantastic power sources, and can keep their charge for many years if only a tiny current is drawn from them. But unconstrained ones can also be very dangerous.

Note: the extra connectors shown above are typical for series-connected batteries, they are used to make sure the voltage is evenly spread across each one of them while charging up.

Super capacitors

These are – as the name indicates – capacitors with a very large capacity. They are not necessarily physically large: most “supercaps” only tolerate a maximum voltage of 2.7V or 5.5V. Here is a 0.47 Farad @ 5V unit, it’s 17×14 mm, i.e. comparable to a small battery:

PB SERIES 13 0H 16 8L

These units are great for storing a little energy, but they are still capacitors: when discharging, the voltage will drop exponentially, as with any capacitor. And their self-discharge due to internal leakage is usually still in the order of 100 µA – at that rate, a 0.47 F supercap drops roughly 0.75 V per hour – so they may not even last through the night, regardless of how low-power the rest of the circuit is.

New developments have even led to ultra-capacitors, with capacities of up to 5000 Farad (compare that to the 0.1 microfarad capacitors used for digital circuit decoupling!):

ESHSP 5000C0 002R7

But that’s a 16x6x6 cm unit, costing around €200. Not really what we’re after, usually…

Solar energy

Solar energy is “in”. Lots of people have solar panels on the roof – free energy, as long as the sun is shining, right? Not so fast, there are some hurdles when used inside the house…

Screen shot 2010 03 05 at 23 1 03 57

The big disappointment you will run into when trying out solar cells indoors is that the sun doesn’t really shine into the house all that strongly. Yes, we could collect some power right next to a south-facing window, but it’ll be a lot less than outside, and who wants a big panel on each sensor node anyway…

Expect to see 1000x less solar energy in many parts of the house inside, versus outside.

Also, you’ll need to carefully investigate what type of material is used in your solar setup. The mono-/polycrystalline types are less suited for indoor use than amorphous ones.

Getting by on solar energy during a long dark week in winter, somewhere inside the house, is still going to be a tough proposition. Hopefully better technologies will arrive one day.

Energy harvesting

This is an area where a lot of development is taking place. Wouldn’t it be great if you could pull some energy literally out of thin air? There are usually lots of electromagnetic fields around the house, so why not pick them up and convert them to a power supply trickle?

The chance of collecting enough energy and finding a source which is available often enough is still slim for now. Here are some sources you could look into:

  • radio waves from a nearby transmitter or a WiFi access point (think nanowatts)
  • magnetic pickup from AC mains (i.e. transformer), only works when power is drawn
  • a piezo crystal being pushed (as in lighters): delivers a very brief high-voltage peak
  • heat differences can be converted by a Peltier element (think: inverted thermocouple)
  • sound can be picked up by a microphone (but where do you get sound all the time?)
  • as a variation of this, you could consider vibration, such as from a fridge motor
  • esoteric, but not unthinkable: the human body (it generates up to 100 W, as heat)

And of course all sorts of ways to convert mechanical energy / motion into electricity:

  • spinning wheels, dropping weights, chains & pulleys, doors & windows
  • water flowing through a pipe (rainwater from the roof?)
  • wind energy, air motion from convection


For most intermittent sources, our best bet is probably to combine it with a rechargeable battery of some kind. That way, we can collect/harvest energy when it’s available, and save it up for hard times. The most likely candidates for such intermittent storage are probably the LiPo battery and the supercap.

The issue here is to plan for worst-case energy scenarios: a week of cold & dark weather during the winter will strain even the best solar collection setup.

The alternative is to surrender to external variations and take a very different approach: power up when the energy is available, and start measuring, sending, etc. And then just die and wait for the next energy-rich period to power up again.

Although this strategy looks simple on paper, it can be surprisingly tricky to implement. Many electrical circuits have a very inconvenient startup current “ridge”: until they are fully powered up and able to start doing clever things to save energy, many chips tend to draw quite a bit of current (relatively speaking). That current draw can easily prevent the power source from ever reaching the minimum voltage needed for proper operation. Catch 22!

Now what?

The conclusion of all this is a resounding “it depends”. The simplest power source is an external one, i.e. a battery with known voltage and capacity, which then gets replaced (or recharged) from time to time. None of the alternatives has the same level of predictability.

Battery life can be prolonged either by making the circuit more energy efficient, or by using some alternative source(s) to “top up” the available power.

For outdoor use, say a plant-watering monitor in the garden, solar w/ a small LiPo battery is probably quite a good option. Or if restarts are ok: just a solar cell + reservoir capacitor.

In each of these cases, you’ll need to determine what voltage levels are involved, and how to get things to match up with the circuit demands. A few NiMH cells or a LiPo, in combination with a linear regulator, will often be fine. Switchers can manage energy more efficiently, but they also consume a small-but-permanent quiescent current which cannot be ignored.

In the world of electricity and power, there is no such thing as a free lunch.

[Back to article index]

The water analogy

In on Jan 21, 2015 at 00:00

Once upon a time, a physicist named Alessandro Volta discovered that two different metals with a salty liquid in between generate an electric potential (i.e. a voltage). He combined these “electric cells” into a stack, called a “voltaic pile” to generate much higher voltages. It’s hard to imagine nowadays how the research into all this went – it must have appeared pretty magical at the time.

In a way, some of that magic still persists today, since you can’t see voltage or current directly – only its effects can be made visible or sensed in some other way.

Fortunately, we don’t have to understand what voltage is – as long as we have a sufficiently practical model in our mind of what it does so that we’re able to predict, or at least make an educated guess, of what will happen when applying a voltage to a circuit.

With an apology if you already know all this: dealing with electricity requires a basic but essential intuition about how voltage “works”. Without it, creating circuits is just a waste of time (and expenses, since you’re quite likely to cause permanent damage). Feel free to fast forward through this article if you’ve heard it all before. The following notes are to help everyone else get a feel for the analogy between voltage and pressure.

Ok, so here’s the point: electricity can be compared to water. Voltage is water pressure. There is a lot of written material about this – here’s a nice diagram from the All You Need Is Solar site, found via a Google search:


There’s a small confusing point here, in that the water tank is drawn at a higher level. Just keep in mind that voltage is not analogous to height, but to pressure. Due to the effect of gravity on water, it is somewhat related in this analogy. When pumping water up, the water pressure will increase due to gravity’s pull – there’s no relation with electricity, however.

To take the analogy further…

The pipe has resistance – a thin pipe has more resistance, so with the same pressure, water will flow less quickly, i.e. less current. This is the water equivalent of Ohm’s Law: current = voltage / resistance, so for a given voltage, doubling the resistance halves the current.

In this context, the opposite effect is more relevant: when the connecting pipe is infinitely thick, it will have no resistance, an infinite amount of water will flow, and there will be no way to build up pressure. In electrical terms: a big fat short across a voltage source leads to a huge current.

In the real world, there is no such thing as infinite thickness or amounts. Likewise, in electrical circuits there is always some resistance. In the case of batteries, it is called “internal resistance”, which is like a resistor inside the battery, acting as if it were in series with the circuit. Similarly, in a capacitor (which can also act as voltage source) this is called Equivalent Series Resistance (ESR).

A low resistance – regardless of whether it’s internal or an actual resistor – leads to large currents when the rest of the circuit is shorted out. Which is why shorting out a lead-acid battery, or a LiPo battery, or a large super-capacitor, is a very bad idea. Large currents can generate huge amounts of heat (think: welding equipment). Even at “only” a few volts.

Just like in the water analogy: if you want to avoid big water flows, and keep the pressure intact, then you have to avoid big pipes (and leaks, which are in essence unbounded pipes).

The same holds at much smaller scales as well. Efficient use of electricity, as we’d like it with ultra low-power wireless sensor nodes for example, means we need to control voltage levels very carefully – any difference in voltage levels will lead to current flowing.

[Back to article index]

Embedded Linux

In Book on Jan 14, 2015 at 00:01

The “LPC810 meets RFM69” series is being postponed a bit longer than anticipated – the relevant pieces are simply not ready and stable enough yet to present as working code. With my apologies for this change of plans. I’ll definitely get back to this – count on it!

For now, let’s start exploring another piece of the puzzle, when it comes to setting up a wireless network, and in particular a wireless sensor network (WSN).

As usual, each of the above articles will be ready on subsequent days.

DSC 4918  Version 2

If you’ve always wanted to try out Linux without messing with your computer – here’s a gentle introduction to the world of physical computing, from a Linux board’s perspective!

(For comments, visit the forum area)

Pies, Bones, and Droids

In on Jan 14, 2015 at 00:00

In the world of physical computing, there are essentially three ways to hook up stuff:

  1. to your laptop or desktop computer, running Windows, Mac OSX, or Linux
  2. to a little USB-connected embedded µC, such as the ever-popular Arduino
  3. to a self-contained board such as the Raspberry Pi, running embedded Linux

The first option is perhaps the most obvious: find some sort of peripherals, such as the USB-connected Phidgets or the Gadgeteer from Microsoft – also connected via USB.

What you get is some hardware with a software driver plus some supporting application(s) running on your laptop to control every aspect of the attached devices. All the intelligent behaviour you want to implement runs on your laptop or desktop computer. Turn it off, and the whole setup stops working. Not so great for projects which need to run unattended.

Then came the second option, and it took the hobbyist world by storm: it’s still a device hooked up to your computer via USB, but this time it has a µC inside to which you upload software (a “sketch” in artist-conscious Arduino-speak). The difference? It’ll do stuff even when disconnected, as long as it’s powered up, of course – via batteries or a USB adapter.

Great. That’s all it takes to build smart / clever / entertaining projects with, and best of all, this second approach ended up being a lot cheaper than the first one. And all open source.

But those little microcontrollers are pretty limited in what they can do. You still need a “real” computer to edit the code on, to compile things to machine code, and to send the result to it. It’s hard to get a good LAN or WLAN setup working, let alone make it do the things we’re so used to by now from all those web services and web sites on internet.

Small Linux Boards

Fast forward to this decade, and we’re flooded with little computer boards costing no more than €30..60, and able to run the Linux operating system. The difference with non-O/S boards with “bare” µC’s is that now we get access to dozens of web server options, dozens (hundreds?) of programming languages, graphical user interfaces, databases, text editors, and more. Tens of thousands of free packages, in fact, installable with a quick download.

Just about anything a laptop or desktop can do, can now be done on these boards. It won’t be quite as fast, but make no mistake – some of the newer board offerings are getting close in performance to a laptop, with their quad 32-bit ARM “cores” and 1..2 GB of RAM.

The Raspberry Pi started this low-cost high-volume revolution:


A 700 MHz 32-bit ARM processor, 256..512 MB RAM, and 26 pins to hook things up.

Roughly around that time came the BeagleBone Black:

DSC 4611

The BBB has a slightly higher-end ARM processor and 2..4 GB of on-board eMMC flash memory, enough to run a full installation of Linux. With a whopping 96 pins and some advanced hardware features.

Another popular board is the Odroid U3, with a quad-core ARM chip, 2 GB of RAM, and the option to install 8..64 GB of eMMC flash. Only a small number of I/O pins, though:

Screen Shot 2015 01 13 at 23 05 40

All these boards have HDMI video out, Ethernet, some USB ports, and a slot to insert a (µ)SD card. All of them can be turned into fully self-contained standalone little computers, drawing only a few watts of power, even when running flat out.

Each of these would make a nice always-on NAS-type home server (just add some USB drives), or a perfect home-monitoring and -automation system (just add one or more wireless dongles, and perhaps also some wired interfaces).

In terms of raw power, these boards will have roughly 1% .. 10% of the performance of a modern laptop or desktop machine. Can’t expect top performance from only a few watts!

But above all: each of these has led to a standard setup, which you can easily write software for and which others can easily reproduce. Or conversely: surf the web, and mix and match what you find to set up your own system without having to figure it out yourself.

With this hardware out of the way, the question becomes: what software do we need?

[Back to article index]

One step at a time

In on Jan 7, 2015 at 00:00

Let’s step back for a moment and review the big picture of what we’re trying to accomplish:

Rpi i2c rf69

The goal is to receive periodic “pings” from a remote wireless node and see the results as plain text on the Raspberry Pi command line. What we have already is:

  • a working Raspberry Pi setup, using either keyboard + monitor or a network login
  • a working FTDI programming adapter, as used in the Getting Started series
  • a way to connect to the I2C bus from the command line, using “i2cdump”, etc.
  • a working example of the LPC810 acting as I2C slave
  • all the hardware setups, both on the Raspberry Pi side and on the remote node side
  • a tentative implementation of the remote node and the I2C bridge

Now is probably a good time to point out that the above setup is actually quite amazing: building a multi-bus / multi-platform / ultra low-power / embedded RF configuration like this can nowadays be done with a hardware investment of well under €100! The wide availability of free and open source tools, very low-cost hardware, and the wealth of information we can find on internet is unprecedented in history.

It’s unlikely (read: impossible) that the code will work on the first try, and there are lots of places where things can go wrong. Our best bet is to dramatically simplify things, and try and get each of the steps working one by one. Since we can’t see whether the RF side is working before I2C works, we’ll need to address that very early on.

We can test the I2C slave logic by simply returning some fixed data, but this time – unlike the previous slave code – we’ll include the new RequestHandler class.

Once the slave access works, we’ll insert some dummy data periodically into the new PacketBuffer class, and verify that it can all be retrieved from the Raspberry Pi host.
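
Just to make that idea a bit more tangible, here’s a rough sketch of what such a buffer could look like – a small fixed-size ring which the RF side fills and the I2C side drains. The names and sizes below are illustrative guesses, not the actual PacketBuffer code:

#include <stdint.h>

// Illustrative sketch only - not the actual PacketBuffer implementation.
// A tiny ring of fixed-size packets: the RF side "puts", the I2C side "gets".
class PacketRing {
    enum { SLOTS = 4, SIZE = 4 };       // room for 4 packets of 4 bytes each
    uint8_t buf [SLOTS][SIZE];
    uint8_t head, tail;
public:
    PacketRing () : head (0), tail (0) {}

    bool put (const uint8_t* data) {    // called when a packet comes in over RF
        uint8_t next = (head + 1) % SLOTS;
        if (next == tail)
            return false;               // buffer full, drop the packet
        for (uint8_t i = 0; i < SIZE; ++i)
            buf[head][i] = data[i];
        head = next;
        return true;
    }

    bool get (uint8_t* out) {           // called when the I2C master asks for data
        if (tail == head)
            return false;               // nothing waiting
        for (uint8_t i = 0; i < SIZE; ++i)
            out[i] = buf[tail][i];
        tail = (tail + 1) % SLOTS;
        return true;
    }
};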

But first… let’s write some host-side code to access this imaginary new slave device.
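
To give an impression of what that host-side code can look like, here’s a minimal C program using Linux’s “i2c-dev” interface on the Raspberry Pi. The bus (/dev/i2c-1) and the 0x42 slave address are placeholders – they depend on how the bridge ends up being wired and configured:

// Minimal host-side read over Linux's i2c-dev interface (Raspberry Pi).
// Bus "/dev/i2c-1" and slave address 0x42 are assumptions for this sketch.
#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main (void) {
    int fd = open("/dev/i2c-1", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    if (ioctl(fd, I2C_SLAVE, 0x42) < 0) {       // select which slave to talk to
        perror("ioctl"); return 1;
    }

    unsigned char buf [4];
    if (read(fd, buf, sizeof buf) == (int) sizeof buf)  // fetch one 4-byte packet
        printf("%02x %02x %02x %02x\n", buf[0], buf[1], buf[2], buf[3]);
    else
        perror("read");

    close(fd);
    return 0;
}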

[Back to article index]

LPC810 meets RFM69

In Book on Dec 31, 2014 at 00:01

This week, as we jump from 2014 into 2015, I’d like to start on an exploration which is near and dear to me: ultra low-power wireless sensor nodes for use in and around the house.

The LPC810 µC has 4 free I/O pins, when connected via a serial port or I2C. And as it so happens, it’s also quite feasible to drive an RFM69 wireless module with just 4 pins, i.e. using just an SPI bus connection, without any interrupt pins hooked up.

So why not try to combine the two, eh?

The following articles introduce a brand new “RF69” driver, using native packet mode:

This concludes this year’s refreshed weblog series, but I’m really looking forward to the year ahead. The new weekly format is working out nicely for me – I hope you also like it.

Jeebook cover

To close off the year and fulfil another goal I had set for myself, the recent material on this weblog has now been added to The Jee Book. It’s just a start to let you download the entire set of articles published so far – in a range of e-book formats, including PDF.

I hope you had a great 2014 and wish you a very Guten Rutsch into 2015. May it bring you and yours much happiness, creativity, and inspiration – with respect and tolerance for all.

Happy hacking,
Jean-Claude Wippler

(For comments, visit the forum area)

Radio blips from an LPC810

In on Dec 31, 2014 at 00:00

It may sound like a bit of a stretch: using an 8-DIP µC to drive an RFM69 wireless radio?

After all, we need two pins for power and at least 4 pins to drive the radio module. That leaves only two free I/O pins. But keep in mind that two pins is all you need to hook up to all sorts of I2C devices, such as I/O expanders, ADC converters, temperature / humidity, even entire 9-DOF sensors which can report acceleration, rotation, and magnetic heading.

Here’s a tiny hookup, just to see if it can be done – all held together with soldered wires:

DSC 4892  Version 2

That “big” round black plastic part is a battery holder for a common CR2032 coin cell.

Sooo, let’s see if we can make this thing send out a test packet once a second, eh?

We’re going to need a “driver” for the RFM69, which knows all about how to set it up, how to put it into transmit mode, and how to get data packets into the radio’s buffer. There is a fresh RF69 driver in the “embello” repository on GitHub which does just that.

The RF69 class is defined as follows:

template< typename SPI >
class RF69 {
public:
    void init (uint8_t id, uint8_t group, int freq);
    void encrypt (const char* key);
    void txPower (uint8_t level);

    int receive (void* ptr, int len);
    void send (uint8_t header, const void* ptr, int len);
    void sleep ();

    int16_t afc;
    uint8_t rssi;
};

This uses “templates”, an advanced mechanism in C++ to “specialise” the RF69 class for a specific way of talking to the SPI hardware in this case. Most of the gory details are hidden, but this means we have to define an instance of the RF69 class as follows:

    RF69<SpiDevice> rf;

With “SpiDevice” being another (low-level) class which implements the interfacing to the actual SPI hardware present in the LPC810. The benefit of using templates over other layering mechanisms such as passing in details at run time or defining virtual methods, is that the resulting code is considerably more efficient because the compiler can optimise the code a lot more. As a result, the RF69 driver is super compact – great with 4 KB flash!
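
Here’s a stripped-down illustration of that pattern, with made-up names (this is not the real RF69 / SpiDevice code): because the SPI type is known at compile time, calls such as SPI::transfer() can be inlined, instead of going through a function pointer on every register access:

// Pattern sketch with made-up names - not the real RF69 / SpiDevice code.
#include <stdint.h>

struct FakeSpi {
    static uint8_t transfer (uint8_t b) {
        // talk to the actual SPI peripheral here
        return b;
    }
};

template< typename SPI >
class Driver {
public:
    uint8_t readReg (uint8_t reg) {
        SPI::transfer(reg);         // resolved at compile time, can be inlined
        return SPI::transfer(0);
    }
};

Driver<FakeSpi> driver;             // no vtable, no run-time indirection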

Here is the main code, also available in full from GitHub:

int main () {
    LPC_SWM->PINENABLE0 |= (3<<2) | (1<<6); // disable SWCLK/SWDIO and RESET

    // NSS=1, SCK=0, MISO=2, MOSI=5
    LPC_SWM->PINASSIGN3 = 0x00FFFFFF;   // sck  -    -    -
    LPC_SWM->PINASSIGN4 = 0xFF010205;   // -    nss  miso mosi


    rf.init(1, 42, 8683);
    rf.txPower(0); // minimal

    int cnt = 0;
    while (true) {
        rf.send(0, &++cnt, sizeof cnt); // send out one packet
        sleep(1000);                    // power down for 1 second
    }
}

Most of this should be fairly self-explanatory, but here are some extra notes:

  • rf.init(1, 42, 8683) – sets node ID 1, net group 42, frequency 868.3 MHz
  • rf.encrypt("mysecret") – turn on the RFM69’s 128-bit AES encryption
  • rf.txPower(0) – lowest transmit power level (and lowest current consumption)

And that’s all there is to it, really. The packet we’re sending out contains 4 bytes with an incrementing counter value – an int is 32 bits, we’re in the ARM world now, remember?

All we need to do is upload this firmware, which is less than 1 KB, plug the LPC810 into the above circuit, and insert a coin cell. It will keep going for many months.

But evidently this is all useless until we also have a way to receive that data. Stay tuned…

[Back to article index]

Eye Squared See

In Book on Dec 24, 2014 at 00:01

Physical computing is about hooking things up. Sure, there’s also low-power and wireless in the mix, but in the end you need to tie into the real world, and that means connecting sensors, indicators, actuators, and what-have-you-not. It’s a big varied world out there!

The computing side is all about information. From a µC’s perspective, we need to direct information from sensors to us, and from us back out to indicators and actuators.

The more data you need to shuttle across (or the more frequently), the harder it becomes in terms of engineering. But sometimes all you need is to send or receive a few bytes of data, perhaps just a few times per second. That’s where I2C comes in, created over 30 years ago.

Or, more accurately: the I²C bus, which stands for the “Inter-Integrated Circuit” bus.

So the upcoming article series is about this wickedly clever “eye squared see” invention:

As before, one article per day. And while we’re at it, I’ll also use the Raspberry Pi single-board computer as an example of how to use I2C under Linux. As you’ll see, it’s really easy!

DSC 4904

(For comments, visit the forum area)

Relax and live longer

In on Nov 29, 2014 at 00:00

Now let’s see how little power the LPC810 can consume when in “deep power-down” mode.

All it takes to fully power down is a few lines of C code:

#include "LPC8xx.h"

int main () {
    SCB->SCR |= 1<<2;       // enable SLEEPDEEP mode
    LPC_PMU->PCON = 3;      // enter deep power-down mode
    __WFI();                // wait for interrupt, powers down
}

The problem here is how to measure the ridiculously tiny current involved. But it’s actually quite simple, thanks to Ohm’s law: if we place a 10 kΩ resistor in series with the µC, then the voltage drop will be proportional to the current, with 1 µA of current leading to a 10 mV voltage drop: enough for a modern multimeter to measure accurately in its lowest range.

The only remaining problem is the initial startup current surge, which is a few milliamps. What we need to do is put the resistor in series, with a voltmeter across it, but short out the resistor while powering up. Once it is in deep sleep, we remove the short and measure.

Result: the LPC810 consumes a mere 0.266 µA (266 nA!) in deep sleep mode.

That’s less than a millionth of a Watt of power (Watt = Volts x Amps).

Unfortunately, this setup is not very useful. The only way to wake up the µC from this state is to pull its WAKEUP pin low (pin 2). It has no way to wake up on its own anymore.

A considerably more useful configuration is to activate the low-power watchdog timer, and use it to periodically wake up from deep power-down. This was used in the minimal and fader demos in jeelabs/embello to make the LED blink.

There’s another demo on GitHub called relax, which goes to sleep for an entire minute between blinks. It’s really just a small variation of the original minimal demo. With a timeout of a minute, there’s enough time to measure the sleep mode current consumption, with an occasional blink to let us know we haven’t lost the ability to wake up again.

New result: 1.125 µA current consumption with the watchdog running.

Let’s estimate how long the “relax” demo would run on a standard CR2032 coin cell:

  • we’ll keep the LED on for 0.5s once every 60s, i.e. less than 1 % duty cycle
  • assuming a 4 mA current draw, this translates to 40 µA on average
  • the 1.125 µA current consumption can more or less be ignored in this case
  • a fresh CR2032 coin cell normally has a capacity of at least 200 mAh
  • with 40 µA, that’s 5000 hours of runtime, i.e. over half a year
  • it’d be trivial to increase this to a year: simply reduce the ON time to 0.25s

Note that on a coin cell, the LED can’t be placed in series with the µC – we need to modify the code slightly to toggle an I/O pin and connect the LED with a series resistor to it.

Now let’s leave the LED out, and assume this setup has something else useful to do which takes 10 ms once a minute. While running, let’s also assume that the µC + attached circuit draws a whopping 25 mA. Then we can adjust the above estimate as follows:

  • 10 ms every 60 s is a 1:6000 duty cycle, so 25 mA averages out to only about 4 µA
  • add to that the 1.125 µA needed while the circuit is asleep, i.e. most of the time
  • if we round off a bit, this circuit will draw 5 µA on average
  • again using the same coin cell, we get 200 mAh / 5 µA = 40,000 hours of run time
  • that’s four years on a coin cell, for a circuit which wakes up once per minute

Now we’re getting somewhere: think of a wireless sensor node. Using a coin cell!
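
For those who like to double-check the arithmetic, here is the same estimate written out as a few lines of C (same assumptions as above: a 200 mAh cell, 25 mA for 10 ms once a minute, and 1.125 µA while asleep):

#include <stdio.h>

int main (void) {
    double active_mA = 25.0, active_s = 0.010;  // 25 mA for 10 ms of work
    double period_s  = 60.0;                    // once every minute
    double sleep_uA  = 1.125;                   // watchdog-only sleep current

    double avg_uA = active_mA * 1000.0 * active_s / period_s + sleep_uA;
    double hours  = 200000.0 / avg_uA;          // 200 mAh = 200,000 µAh

    printf("average %.2f uA -> %.0f hours (%.1f years)\n",
           avg_uA, hours, hours / (24 * 365));
    return 0;
}

It reports an average of about 5.3 µA and roughly 4.3 years – in line with the rounded figures above.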

In conclusion: with everything covered in this Getting Started series, you should now be able to program LPC810 chips and make them do all sorts of things. From generating a 15 MHz square wave (or anything else that needs this sort of processing “power”) to running off a coin cell without having to replace it for several years. How’s that for flexibility, eh?

It’s all just a matter of software plus electronics – and anyone can get into the game.

[Back to article index]

Uploading from Windows

In on Nov 21, 2014 at 00:00

Which leaves us with Windows, the operating system most readers are probably using:

Win logos

While setting up a development environment for Windows is definitely feasible, this story is not about developing under Windows, but developing under Linux under Windows, i.e. running Linux in a virtual machine and handling compilation & uploading from there.

The main reason is that Linux is highly geared towards automation – of the development process itself that is. Which is not so surprising, when you consider the fact that it was built by and for software developers. With Linux, you can augment your own developer skills.

Soapbox time

With a little setup, hitting a few keys in your editing environment is all you need to save your changes, perform a cross-compilation, build the firmware image, report some statistics, and upload the result to the LPC810, or any other µC for that matter (including ATmega’s). This is not a gimmick – all those little conventions, time savings, and habits you collect add up to a major productivity boost over time.

Note that this is not about “everyone must learn editor X”. You can in fact continue to write and edit your source code in whatever tool you like: it’s quite simple to have a VM share an area on the disk and pick up changes so the same files are also visible within the Linux VM.

This is, however, a gentle way to nudge you into using the command line, “make” (a terrific workhorse for automation), and command-line based compilers, linkers, debuggers, uploaders, and terminal emulators. The Arduino IDE is very nice, but it’s single-purpose. No matter how good it might be at editing and compiling and uploading, it won’t let you streamline your own workflow in ways which were not envisioned by the Arduino team.

There is much more to embedded software development than editing and compiling and uploading firmware: for example, you might wish to implement or integrate some “host-side” software, such as a central home automation server driving a network of remote Wireless Sensor Nodes. With a Unix / Linux mindset, the whole setup can become a single (but evolving) environment. How about securely downloading new code, compiling it, and uploading it to remote nodes via that same central server? Can do – it’s just a matter of designing the right workflow and automating it. Linux is good at long-term automation.

This does mean you’ll need to become familiar with – and even proficient at – working “on” the command line, typing in commands at the “shell”, and setting up little routines for testing, comparing results, repeating some process, or launching debugging sessions. Even brief tasks, such as diving in deep to chase a bug, can benefit from some quick-and-dirty helpers, to save you from pressing the same buttons over and over again while going through an “edit-run-debug” cycle.

Mice are great. IDE’s are great. But in repetitive cycles with some variability, such as long-term development, or intense testing or debugging sessions, every step saved helps. Your muscles will thank you. So will your co-developers. Even your brain will thank you, as all the routines and repetitions end up in shell scripts, makefiles, and muscle memory.

Getting sources from GitHub, creating snapshots, comparing changes, installing software… once every action can be performed through the command line, it can all be automated. Anything done more than a few times, and anything you might have to do again sometime in the future is worth casting into a few lines of a “shell script”, or as part of a “makefile”.

Imagine being able to type “make help” in any project you’ve ever worked on, to get a list of actions you can perform in that project. Regardless of tools used, platforms used, or even the problem domain. All you need is one simple text file in each project directory.

Even an elaborate programmer’s environment such as the Eclipse IDE can’t quite cover the breadth of such an approach, and it’s considerably more complex to manage Eclipse than to verify that you have the “make” command installed on your (virtual) machine.

The fact is that “large” environments such as IDE’s are almost always constructed on top of command-line tools anyway. But we can look under the hood, and ignore the shiny veneer layer added on top. Sure, veneer is great and beautiful, but real power lies underneath.

Linux in a VM

So the task ahead is to create a virtual machine running Linux under Windows, and then to integrate things in such a way that they can both be used conveniently together.

We’ll use VirtualBox for this, the same product which is also available for Mac OS X (and Solaris, and Linux). Getting started is easy:

  • download and install the latest version from their download page
  • you also need to install the VirtualBox “Oracle VM VirtualBox Extension Pack”, to add support for USB ports within the VM (this step requires registration)

VirtualBox is open source software (originally from Sun, now Oracle), but the extension pack isn’t, even though both are free for personal use.

You now have the ability to create virtual machines. The next step is to obtain a server version of Linux. Ubuntu is solid, popular, and well-supported, so we’ll use that, as before:

  • download a copy of the server edition image, preferably a 64-bit version – listed here

Using the server edition means that the system installs without GUI or desktop (which can be added later on, if needed). We’re going after the command-line, remember?

A server install is a fraction of the size of a full install (under 1 GB versus ≈ 3 GB disk use).

VirtualBox setup

Click on the “New” button in VirtualBox and enter a name (“minnie” in this example) and adjust the other settings to match what is shown below:

New minnie

Click on “Create”, which leads you to the main window again:

Minnie vms

(in this example, a second VM is also listed, please ignore it)

You need to “attach” the downloaded Ubuntu disk image to this VM. Click on “Settings”:

Minnie cd

Select “Storage”, then click on the “empty” IDE controller, and then click on the CD icon next to the text marked “IDE Secondary Ma(ster)”. Select the *.iso file you downloaded.

That’s it. Your new VM is ready to launch. At this point it is still empty, but it will boot from the virtual CDROM drive, and then go through the standard procedure of setting up Linux. Be sure to select “Install to hard disk” (which is virtual, but Ubuntu doesn’t know that).

The whole installation process of Linux itself will consist of about a dozen questions and should take less than half an hour. It’s a once-only process.

At this point, you have a working Linux environment running inside Windows, although neither of them knows much (if anything) about the other.

Bridging two worlds

To complete the process, you need one more tool: an SSH terminal emulator called Putty. Please download and install it according to the instructions on its homepage.

Just to regain our perspective for a moment: the goal of this whole exercise is to run Linux under Windows, and then hook it up to the network and USB. Once configured (in VBox), the Linux part is essentially ready. We can then start / suspend / resume Linux at will.

Hang in there… there is still quite a bit of nasty (one-time) configuration left!

The second part of the equation is Putty: it can be used as a (secure) terminal window into any SSH server on the network, including the Linux VM on this machine itself. It needs to be configured to connect to the VM (which must be running).

The details of this VBox + Putty configuration are a bit too complex to include here – they are described in a separate article, see Setting up the Virtual Machine.

For now, here’s what the result of all this fiddling will look like, i.e. connecting Putty to the VM, entering a Linux login password, and typing a few standard Linux commands:

Putty login

This is an application on Windows (Putty) logging into another application (VBox) running on the same Windows machine. Inside Putty, you’re in Linux – outside, you’re in Windows!

But there is one more trick which really helps keep these apart (as if the difference in appearance weren’t enough) – press ALT+ENTER and the Putty window goes fullscreen. You now have two operating systems running side by side on the same hardware.


Even though this setup goes much further, and took quite some effort, the original goal was to upload a firmware image to the LPC810. The good news is that once the USB ports have been set up properly in Vbox, this is now the same as on a Linux or Mac OSX machine:

  • plug in the (modified) FTDI board, with the breadboard setup described earlier
  • type: lpc21isp -control -bin firmware.bin /dev/ttyUSB* 115200 0

Everything is back in sync, whether you’re a Windows, a Mac, or a Linux person.

Some will see a crude text-only window and regard it as the lowest common denominator, but that’s not really so: you now have the foundation on which to build any embedded software, with a world of software and automation tools within reach of a few keys.

Yes, “keys”: mice and trackpads can still be used (really!), but they play a considerably smaller role in this command-line environment.

[Back to article index]

Getting started, episode 3

In Book on Nov 19, 2014 at 00:01

The idea of starting out with the 8-DIP LPC810 ARM µC occurred to me not very long ago, when I discovered a simple upload mechanism based on the modified FTDI interface. It’s quite an intriguing idea that you can put some pretty advanced decision and timing logic into such a small chip and do so entirely through free open source tools, with all the details fully exposed.

Making an LPC810 do stuff feels like creating our own custom chips: we can upload any software of our own design into the chip, then place it in a project as “control centre” to do fun stuff. Protocol decoders / encoders / converters, LED drivers (e.g. the WS2812 “neopixel”), even a small interpreter or a wireless radio driver, these are all feasible – despite having just 4 KB of flash memory.

Small programmable chips such as the LPC810 demand a relentless drive for simplicity, which is an excellent skill to develop IMO – for whatever physical computing projects you may have in mind.

Anyway. The hardware side is now completely done, with something like this ready to go:

DSC 4810

Unfortunately, that’s only half of the story. We still need to address uploads + compilation.

Check out the next set of articles, to be published from Wednesday through Saturday:

With this out of the way, we can make an LED blink or fade. Trivial stuff, but note that we’re setting up the infrastructure for embedded software development on ARM chips.

Oh, and if this is too basic for you, see if you can figure out this JeePuzzle … any takers?

(For comments, check the forum area)

It’s been a year

In News on Oct 6, 2014 at 13:10

It’s been a year since the last post on this weblog. A year is a long time. The world has changed in many ways, and technology has advanced in just as many, but completely different ways. I have also progressed, in the sense that I’ve been exploring and learning about lots of new things in the world of electronics, software, and physical computing.

Some things have solidified, such as my main laptop, which is still the same as three years ago (an 11″ MBA), because the shiny big fast one went to my daughter Myra, who has a far bigger need for that sort of hardware – for her photography and video work. Another solidifying trend has been my touch typing, which is now at the point where I do so 95% of the time (editing code still makes me go for the hunt-and-peck mode, occasionally).

Other things have stagnated, such as most notably the work on writing The Jee Book. There are pages and pages with words and images on my laptop, but I don’t like them one bit, and will not publish them as they are today. There is not enough direction, passion, focus, and fun in those draft pages. It would diminish the excitement and joy this deserves.

I’ve given a few presentations and workshops in the past year, but nothing of substance has come out of it all with respect to the JeeLabs site(s). My goal for this month is to get back into higher gear in public. Writing has always been very fulfilling and its own reward for me – I’m looking forward to finding my voice on the web again, in some form or other.

Now the hard part… I could use your help.

A year in solitary confinement (just kidding!) has made it harder for me to understand what you’d like most from JeeLabs. Just to get this clear: I don’t think I can restart the daily schedule of the weblog, as it was up to a year ago. This isn’t only about the effort and energy involved, or the lack of material, but the fact that the stream-of-consciousness website it leads to is a bit hard to navigate through. Also, the resulting collection of articles is really not very practical as a resource – there are too many bits and pieces of information in there which are outdated and at times even misleading by now.

What sort of topics would you wish to see covered? My own interests still tend to gravitate towards long-lasting autonomous wireless sensor nodes. What frequency and size of posts / articles do you like? Should the topics be spread out broadly, or rather focus on some very specific problems? How simple or deep-diving should the information be? Do you want more science and maths, or rather some detailed construction plans? Do you prefer a personal and conversational style (such as this), or more a factual information source?

As always, I will make my own independent choices, but I promise to listen carefully and respectfully to each and every comment you send my way (email to

Wrapping up

In AVR, Hardware, Musings, News, Software on Oct 6, 2013 at 00:01

I’m writing this post while one of the test JeeNode Micro’s here at JeeLabs is nearing its eighth month of operation on a single coin cell:


It’s running the radioBlip2 sketch, sending out packets with an incrementing long integer packet count, roughly once every minute:

Screen Shot 2013-10-04 at 15.44.58

The battery voltage is also tracked, using a nice little trick which lets the ATtiny measure its own supply voltage. As you can see, the battery is getting weaker, dropping in voltage after each 25 mA transmission pulse, but still recovering very nicely before the next transmission:

Screen Shot 2013-10-04 at 15.45.45

Fascinating stuff. A bit like my energy levels, I think :)
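For reference, the “trick” is to measure the internal 1.1 V bandgap using Vcc itself as the ADC reference: the lower the supply voltage, the higher the reading. On an ATmega it looks roughly like this – the ATtiny in the JeeNode Micro needs slightly different ADMUX settings, so treat this as a sketch:

// Read the supply voltage in millivolts, without any external components.
// ATmega328 register values shown - ATtiny parts need different ADMUX bits.
static int vccRead () {
  ADMUX = bit(REFS0) | 0x0E;    // ADC reference = Vcc, input = 1.1 V bandgap
  delay(2);                     // give the reference a moment to settle
  ADCSRA |= bit(ADSC);          // start one conversion
  while (ADCSRA & bit(ADSC))
    ;                           // wait for it to complete
  return (1100L * 1023) / ADC;  // back-calculate Vcc from the result
}

void setup () { Serial.begin(57600); }

void loop () {
  Serial.println(vccRead());    // prints the supply voltage in millivolts
  delay(1000);
}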

But this post is not just about reporting ultra low-power consumption. It’s also my way of announcing that I’ve decided to wrap up this daily weblog and call it quits. There will be no new posts after this one. But this weblog will remain online, and so will the forum & shop.

I know from the many emails I’ve received over the years that many of you have been enjoying this weblog – some of you even from the very beginning, almost 5 years ago. Thank you. Unfortunately, I really need to find a new way to push myself forward.

This is post # 1400, with over 6000 comments to date. Your encouragement, thank-you’s, insightful comments, corrections and additions – I’m deeply grateful for each one of them. I hope that the passion which has always driven me to explore this “computing stuff tied to the physical world” and to write about these adventures has helped you appreciate the creativity that comes with engineering and invention, and has maybe even tempted you to take steps to explore and learn beyond the things you already knew.

In fact, I sincerely hope that these pages will continue to encourage and inspire new visitors who stumble upon this weblog in the future. For those visitors, here’s a quick summary of the recent flashback posts, to help you find your way around on this weblog:

Please don’t ever stop exploring and pushing the boundaries of imagination and creativity – be it your own or that of others. There is infinite potential in each of us, and I’m certain that if we can tap even just a tiny fraction of it, the world will be a better place.

I’d like to think that I’ve played my part in this and wish you a lot of happy tinkering.

Take care,
Jean-Claude Wippler

PS. For a glimpse of what I’m considering doing next, see this page. I can assure you that my interests and passions have not changed, and that I’ll remain as active as ever w.r.t. research and product development. The whole point of this change is to allow me to invest more focus and time, and to take the JeeLabs projects and products further, in fact.

PPS. Following the advice of some friends I highly respect, I’m making this last weblog post open-ended: it’ll be the last post for now. Maybe the new plans don’t work out as expected after all, or maybe I’ll want to reconsider after a while, knowing how much joy and energy this weblog has given me over the years. So let’s just call this a break, until further notice :)

Update Dec 2013 – Check out the forum at for the latest news about JeeLabs.

Flashback – Dive Into JeeNodes

In AVR, Hardware, Linux, Software on Oct 4, 2013 at 00:01

Dive Into JeeNodes (DIJN) is a twelve-part series, describing how to turn one or more remote JeeNodes, a central JeeLink, and a Raspberry Pi into a complete home monitoring setup. Well, ok, not quite: only a first remote setup is described with an LDR as light sensor, but all the steps to make the pieces work together are described.

More visually, DIJN describes how to get from here:

dijn01-essence.png   dijn01-diagram

.. to here:


This covers a huge range of technologies, from embedded Arduino stuff on an ATmega-based JeeNode, to setting up Node.js and the HouseMon software on a Raspberry Pi embedded Linux board. The total cost of a complete but minimal setup should be around €100. Less than an Xbox and far, far more educational and entertaining, if you ask me!

It’s all about two things really: 1) describing the whole range of technologies and getting things working, and 2) setting up a context which you can explore, learn from, hack on, and tinker with in numerous ways.

If you’re an experienced Linux developer but want to learn about embedded hardware, wireless sensors, physical computing and such, then this offers a way to hook up all sorts of things on the JeeNode / Arduino side of things.

If you’re familiar with hardware development or have some experience with the Arduino world, then this same setup lets you get familiar with setting up a self-contained low-power Linux server and try out the command line, and many shell commands and programming languages available on Linux.

If you’ve set up a home automation system for yourself in the past, with PHP as web server and MySQL as back end, then this same setup will give you an opportunity to try out rich client-side internet application development based on AngularJS and Node.js – or perhaps simply hook things together so you can take advantage of both approaches.

With the Dive Into JeeNode series, I wanted to single out a specific range of technologies as an example of what can be accomplished today with open source hardware and software, while still covering a huge range of the technology spectrum – from C/C++ running on a chip to fairly advanced client / server programming using JavaScript, HTML, and CSS (or actually: dialects of these, called CoffeeScript, Jade, and Stylus, respectively).

Note that this is all meant to be altered and ripped apart – it’s only a starting point!

Flashback – Batteries came later

In AVR, Hardware, Software on Sep 30, 2013 at 00:01

During all this early experimentation in 2008 and 2009, I quickly zoomed in on the little ATmega + RFM12B combo as a way to collect data around the house. But I completely ignored the power issue…

The necessity to run on battery power was something I had completely missed in the beginning. Everyone was running Arduinos off either a 5V USB adapter or – occasionally – off a battery pack, and never for much more than a few days. Being “untethered” in many projects at that time meant being able to do something for a few hours or a day, and swapping or recharging batteries at night was easy, right?

It took me a full year to realise that tying a wireless “node” to a power wire, just so it could run for an extended period of time, made no sense. Untethered operation also implies being self-powered:


Evidently, having lots of nodes around the house would not work if batteries had to be swapped every few weeks. So far, I just worked off the premise that these nodes needed to be plugged into a power adapter – but there are plenty of cases where that is extremely cumbersome. Not only do you need a power outlet nearby, you need fat power adapters, and you have to claim all those power outlets for permanent use. It really didn’t add up, in terms of cost, and especially since the data was already being exchanged wirelessly!

Thus started the long and fascinating journey of trying to run a JeeNode on as little power as possible – something most people probably know this weblog best for. Over the years, it led to some new (for me) insights, such as: transmission draws a “huge” 25 mA, but it’s still negligible because the duration is only a few milliseconds – a few ms at 25 mA once a minute averages out to little more than a microamp. By far the most important parameter to optimise for is sleep-mode power consumption of the entire circuit.

In September 2010, i.e. one year after starting on this low-power journey, the Sleepy class was added to JeeLib, as a way to make it easy to enter low-power mode:

class Sleepy {
    /// start the watchdog timer (or disable it if mode < 0)
    /// @param mode Enable watchdog trigger after "16 << mode" milliseconds 
    ///             (mode 0..9), or disable it (mode < 0).
    static void watchdogInterrupts (char mode);
    /// enter low-power mode, wake up with watchdog, INT0/1, or pin-change
    static void powerDown ();
    /// Spend some time in low-power mode, the timing is only approximate.
    /// @param msecs Number of milliseconds to sleep, in range 0..65535.
    /// @returns 1 if all went normally, or 0 if some other interrupt occurred
    static byte loseSomeTime (word msecs);

    /// This must be called from your watchdog interrupt code.
    static void watchdogEvent();
};

The main call was named loseSomeTime() to reflect the fact that the watchdog timer is not very accurate. Calling Sleepy::loseSomeTime(60000) gives you approximately one minute of ultra low-power sleep time, but it could be a few seconds more or less. To wait longer, you can call this code a few times, since 65,535 ms is the maximum value supported by the Sleepy class.

As a result of this little class, you can do things like put the RFM12B into sleep mode (and any other power-hungry peripherals you might have connected), go to sleep for a bit, and restore all the peripherals to their normal state. The effects can be quite dramatic, with a few orders of magnitude less power consumption. This extends a node’s battery lifetime from a few days to a few years – although you have to get all the details right to get there.
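
In a sketch, the typical pattern looks more or less like this (the watchdog interrupt hook is required for Sleepy to do its job – the node ID and group below are just examples):

#include <JeeLib.h>

ISR(WDT_vect) { Sleepy::watchdogEvent(); }  // required hook for Sleepy

void setup () {
    rf12_initialize(17, RF12_868MHZ, 42);   // node 17 in group 42, just examples
}

void loop () {
    // ... read some sensors, fill a payload, and send it off here ...

    rf12_sleep(RF12_SLEEP);         // put the RFM12B radio to sleep
    Sleepy::loseSomeTime(60000);    // ± one minute of ultra low-power sleep
    rf12_sleep(RF12_WAKEUP);        // wake the radio up before using it again
}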

One important design decision in the JeeNode was to use a voltage regulator with a very low idle current (the MCP1700 draws 2 µA idle). As a result, when a JeeNode goes to sleep, it can be made to draw well under 10 µA.

Most nodes here at JeeLabs now keep on running for well over a year on a single battery charge. Everything has become more-or-less install and forget – very convenient!

3 years on one set of batteries

In AVR, Hardware on Sep 8, 2013 at 00:01

Ok, so maybe it’s getting a bit boring to report these results, but one of the JeeNodes I installed long ago has just reached a milestone:

Screen Shot 2013-09-06 at 23.30.55

That “buro JC” node has been running on a single battery charge for 3 years now:

And it’s not even close to empty: this is a JeeNode USB with a 1300 mAh LiPo battery tied to its back, and (as I just measured) it’s still running at 3.74 V, go figure.

Let’s do the math on what’s going on here:

  • the battery is specified as 1300 mAh, i.e. 1300 mA for one hour and then it’s empty
  • in this case, it has been running for some 1096 x 24 = 26,304 hours total
  • so the average current consumption must have been under 1300 / 26304 = 50 µA
  • well… in that case, the battery should be empty by now, but it isn’t
  • in fact, I suspect that the average power consumption is more like 10..25 µA

Two things to note: 1) LiPo batteries pack a lot of energy, and 2) they have a really low self-discharge rate, so they are able to store that energy for a long time.

The other statistic worth working out is the amount of energy consumed by a single packet transmission. Again, first assuming that the battery would be dead by now, and that the microcontroller and the rest of the circuit are not drawing any current:

  • 1,479,643 packets have been sent out, i.e. ≈ 1300 / 1500000 = under 1 µAh per packet
  • since 60 packets are sent out per hour: about 60 µAh per hour, i.e. 60 µA continuous
  • that charge can also be expressed in coulombs: just under 1 µAh per packet is roughly 3.6 mC, since one ampere-hour equals 3,600 coulombs – see the short calculation right after this list
  • so despite the fact that the RFM12B draws a substantial 25 mA of current during transmission, it does it so briefly that overall it’s still extremely low-power (a few ms every minute, so that’s a truly minute duty cycle)
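
As a quick cross-check of that per-packet figure, here’s the same calculation in a few lines of C (using the battery rating as an upper bound, since the cell is not actually empty yet):

#include <stdio.h>

int main (void) {
    double capacity_mAh = 1300;         // battery rating
    double packets      = 1479643;      // packets sent over those 3 years
    double uAh_per_pkt  = capacity_mAh * 1000 / packets;   // about 0.88 µAh
    double mC_per_pkt   = uAh_per_pkt * 3.6;                // 1 µAh = 3.6 mC
    printf("%.2f uAh = %.2f mC per packet, as an upper bound\n",
           uAh_per_pkt, mC_per_pkt);
    return 0;
}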

The conclusion here is: for these types of uses, with occasional brief wireless sensor data transmissions, the power consumption of the wireless module is not the main issue. It’s far more important to keep the idle (i.e. sleep mode) of the entire circuit under control.

The 2nd result is also a record, a JeeNode Micro, running over 6 months on a coin cell:

This one is running the newer radioBlip2 sketch, which also measures and reports the battery voltage before and after packet transmission. As you can see, the coin cell is struggling a bit, but its voltage level is still fine: it drops to 2.74 V right after sending out a packet (drawing 25 mA), and then recovers the rest of the time to a fairly high 2.94 V. This battery sure isn’t empty yet. Let’s see how many more months it can keep this up.

The 3rd result (penlight test) is this setup, based on the latest JNµ v3:

The timing values are way off though: it has also been running for over 6 months, but I accidentally caused it to reset when moving things around earlier this summer. This one is running with a switching boost regulator. The Eneloop battery started out at 1.3 V and has now dropped slightly to 1.28 V – it should be fine for quite some time, as these batteries tend to run down gradually to 1.2V before they start getting depleted. This is a rechargeable battery, but Eneloop is known to hold on to its charge for a surprisingly long time (losing 20% over 2 years due to self-discharge, if I remember correctly).

You can see the boost regulator doing its thing, as the output voltage sent to the ATtiny is the same 3.04 V as it was on startup. That’s the whole idea: it regulates to a fixed level, while sucking the battery dry along the way…

Note that all these nodes are not sensing anything. They’re just bleeping once a minute.

Anyway… so much for the progress report on a pretty long-running experiment :)

Back pressure

In Software on Sep 4, 2013 at 00:01

No, not the medical term, and not this kind of multi-car pile-up either (though it’s related):


What I’m talking about is the software kind: back pressure is what you need in some cases to avoid running into resource limits. There are many scenarios where this can be an issue:

  • In the case of highway pile-ups, the solution is to add road signalling, so drivers can be warned to slow down ahead of time – before that decision is taken from them…
  • Sending out more data than a receiver can handle – this is why traditional serial links had “handshake” mechanisms, either software (XON/XOFF) or hardware (CTS/RTS).
  • On the I2C bus, hardware-based clock stretching is often supported as mechanism for a slave to slow down the master, i.e. an explicit form of back pressure.
  • Sending out more packets than (some node in) a network can handle – which is why backpressure routing was invented.
  • Writing out more data to the disk than the driver/controller can handle – in this case, the OS kernel will step in and suspend your process until things calm down again.
  • Bringing a web server to its knees when it gets more requests than it can handle – crummy sites often suffer from this, unless the front end is clever enough to reject requests at some point, instead of trying to queue more and more work which it can’t possibly ever deal with.
  • Filling up memory in some dynamic programming languages, where the garbage collector can’t keep up and fails to release unused memory fast enough (assuming there is any memory to release, that is).

That last one is the one that bit me recently, as I was trying to reprocess my 5 years of data from JeeMon and HouseMon, to feed it into the new LevelDB storage system. The problem arises, because so much in Node.js is asynchronous, i.e. you can send off a value to another part of the app, and the call will return immediately. In a heavy loop, it’s easy to send off so much data that the callee never gets a chance to process it all.

I knew that this sort of processing would be hard in HouseMon, even for a modern laptop with oodles of CPU power and gigabytes of RAM. And even though it should all run on a Raspberry Pi eventually, I didn’t mind if reprocessing one year of log files would take, say, an entire day. The idea being that you only need to do this once, and perhaps repeat it when there is a major change in the main code.

But it went much worse than I expected: after force-feeding about 4 months of logs (a few hundred thousand converted data readings), the Node.js process RAM consumption was about 1.5 GB, and Node.js was frantically running its garbage collector to try and deal with the situation. At that point, all processing stopped with a single CPU thread stuck at 100%, and things locked up so hard that Node.js didn’t even respond to a CTRL-C interrupt.

Now 1.5 GB is a known limit in the V8 engine used in Node.js, and to be honest it really is more than enough for the purposes and contexts for which I’m using it in HouseMon. The problem is not a lack of memory – the problem is that it keeps filling up. I haven’t solved this problem yet, but it’s clear that some sort of back pressure mechanism is needed here – well… either that, or there’s some nasty memory leak in my code (not unlikely, actually).

Note that there are elegant solutions to this problem. One of them is to stop having a producer push data and calls down a processing pipeline, and switch to a design where the consumer pulls data when it is ready for it. This was in fact one of the recent big changes in Node.js 0.10, with its streams2 redesign.

Even on an embedded system, back pressure may cause trouble in software. This is why there is an rf12_canSend() call in the RF12 driver: because of that, you cannot ever feed it more packets than the (relatively slow) wireless RFM12B module can handle.
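
In practice, that back pressure shows up as the familiar transmit loop used in JeeLib sketches – the code simply keeps polling the driver until it is willing to accept a new packet (payload here stands for whatever data the sketch wants to send):

// The usual JeeLib transmit loop: rf12_canSend() provides the back pressure.
while (!rf12_canSend())
    rf12_recvDone();                    // keep the driver's state machine going
rf12_sendStart(0, &payload, sizeof payload);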

Soooo… in theory, back pressure is always needed when you have some constraint further down the processing pipeline. In practice, this issue can be ignored most of the time due to the slack present in most systems: if we send out at most a few messages per minute, as is common with home monitoring and automation, then it is extremely unlikely that any part of the system will ever get into any sort of overload. Here, back pressure can be ignored.

Electricity usage patterns

In Hardware, Software on Sep 3, 2013 at 00:01

Given that electricity usage here is monitored with a smart meter which periodically phones home to the electricity company over GPRS, this is the sort of information they get to see:

Screen Shot 2013-09-02 at 11.20.38

Consumption in blue, production in green. Since these are the final meter readings, those two data series will never overlap – ya can’t consume and produce at the same time!

I’m reading out the P1 data and transmitting it wirelessly to my HouseMon monitoring setup (be sure to check the develop branch, which is where all new code is getting added).

There’s a lot of information to be gleaned from that. The recurring 2000+ W peaks are from a 7-liter kitchen boiler (3 min every 2..3 hours). Went out for dinner on Aug 31st, so no (inductive) home cooking, and yours truly burning lots of midnight oil into Sep 1st. Also, some heavy-duty cooking on the evening of the 1st (oven dish + stove).

During the day, it’s hard to tell if anyone is at home, but evenings and nights are fairly obvious (if only by looking at the lights lit in the house!). Here’s Sep 2nd in more detail:

Screen Shot 2013-09-02 at 11.23.50

This one may look a bit odd, but that double high-power blip is the dish washer with its characteristic two heating cycles (whoops, colours reversed: consumption is green now).

Note that whenever there is more sun, there would be fewer consumption cycles, and hence less information to glean from this single graph. But by matching this up with other households nearby, you’d still get the same sort of information out, i.e. from known solar power vs. returned power from this household. Cloudy patterns will still match up across a small area (you can even determine the direction of the clouds!).

I don’t think there’s much of a concern for (what little) privacy we have left, but it’s quite intriguing how much can be deduced from this.

Here’s yet more detail, now including the true house usage and solar production values, as obtained from some pulse counters, this hardware and the homePower driver:

Screen Shot 2013-09-02 at 11.52.21

There is a slight lag in smart meter reporting (a value on the P1 port every 10s). This is not an issue of the smart meter though: the DyGraphs package is only able to plot step lines with values at the start of the step, even though these values pertain to the past 10 seconds.

Speaking of which – there was a problem with the way data got stored in Redis. This is no longer an issue in this latest version of HouseMon, because I’m switching over to LevelDB, a fascinating time- and space-efficient database engine.

Summer Break

In News on Jul 1, 2013 at 00:01

It’s that time of year again: 1st of July, and time for me to take a break.

Just as last year, this is going to be my last post on this JeeLabs weblog for a while – two months to be exact. This isn’t so much about going on holiday (nothing planned yet, other than a few short trips) as it is about performing an internal reset.

The past year has been mostly about taking a new direction in software (by which I mean Node.js and the Dive Into JeeNodes series – not the embedded Arduino stuff), and about exploring numerous electronics and software topics – a.k.a. Physical Computing.

Speaking of which, yesterday’s post was a good example of what I hope to keep up for a long time to come: introducing electronics and mixing it with embedded microcontrollers to show just how easy it is to tie the two fields together, and to keep on enticing everyone who comes across this weblog to explore, learn, tinker, and play with this stuff. It’s all very low-cost, it’s wide open to make tons of new ideas happen, and there is an immense body of knowledge, experience, and open source software + hardware available to anyone with some spare time, a healthy dose of curiosity, and an internet connection.


There’s a lot to be done beyond what I’ve been dabbling in here at JeeLabs. The field of wireless communication has only just started for the home and the hobbyist. There’s an explosion going on w.r.t. small affordable Linux boards, which take Physical Computing to totally new levels of capability. And there is a huge need to find easy and enticing entry paths into all this, if you ask me. The more there is to learn, the more we need to come up with ways to help people “find their way in”. The RoboCup 2013 event (which will be over by the time you read this) has shown that there is a great opportunity to expose kids of all ages to technology. From hundreds of cheering cardboard FanBots all the way to amazing self-organizing teams of autonomous “Middle Size League” football-playing robots.

The future has only just begun. What amazing times we live in!

Second bit of news is that it’s time to kick off a summer sale in the web shop once again. And as in previous years, the following discount is for existing JeeLabs supporters, i.e. it can only be requested if you have ordered products from JeeLabs in the past:


And while I’m at it, let me list all the key dates for this period, here at JeeLabs:

  • July 1st – weblog paused and summer sale kicks off
  • July 31st – sale ends at midnight, 0:00 CEST time
  • September 1st – daily weblog resumes

Due to the fine efforts by Martyn Judd & Co., the shop will remain open throughout the summer break, with fulfilment continuing as before from the UK Center in Cambridge. The staff level will be somewhat reduced during August, but we will nevertheless attempt to keep up the regular flow of sending your packages promptly.

There is also a summer sale at Modern Device, but note that since the shops are separate and independent entities, we cannot extend the same offer across the shops. Please check the shop you have purchased from in the past to see what’s on sale.

If you’re looking for geek stuff to do this summer: check out the chronological index of this weblog, maybe there is something that interests you, somewhere in those past 1,364 posts?

Anyway: thank you for all the interest, comments, discussions, tips, and appreciative emails – I’m honoured by everything that’s happening around JeeLabs and looking forward to lots of new ideas, developments, sharing, and products – starting again on September 1st.

Have a nice summer (or winter, on the southern hemisphere)!

Status of the RFM12B

In Hardware on Jun 28, 2013 at 00:01

The RFM12B wireless radio module has been around for quite some time. When I found out about it at the time, I really liked the mix of features it provided – far more capable than the morse-code like OOK modules in cheap sensors, very low power, and available at a considerably lower cost than the XBee and other ZigBee solutions out there:


There was little software for it at the time, and writing an interrupt-driven driver for it was quite a challenge when this all started, but nowadays that’s no longer an issue – the RF12 driver which is now part of JeeLib has turned out to work quite well.

There have been some rumours recently, and understandably also some worries, that the RFM12B might be phased out, but if I may paraphrase Mark Twain on this: the reports of the RFM12B’s death have been greatly exaggerated…

After some email exchanges between HopeRF and Martyn Judd, I’m now happy to report that we have received a formal statement from HopeRF to clarify the issue:

“The popular RFM12B S1/S2 modules remain in production with a large volume of long standing and long term orders. There is NO plan to discontinue the series and NO plan to end production in the foreseeable future. It is correct that we no longer recommend this series to customers developing new projects, since we have developed more recent designs, for example the RFM6X series. These incorporate functionality improvements that can provide a better price/performance for specific applications.”

— Derek Zhu, Marketing Manager, HopeRF

As I see it, this means that the RFM12B will be around for quite some time and that we don’t have to worry about supplies for keeping all current networks based on these modules working. I definitely intend to keep using RFM12B’s at JeeLabs and in products designed and produced by JeeLabs.

Having said that, I’m also evaluating alternatives and looking for a convenient option to act as next step forward. My focus is on low-power consumption and good range for use within the house, both of which I hope to take even further. I’d also like to make sure that over-the-air flashing with JeeBoot will work well with the current as well as future choices.

In my opinion the RFM12B continues to be a simple & excellent low-cost and low-power foundation, so I’m taking my time to very carefully review any new options out there.

Yes, we CAN bus

In Hardware on May 31, 2013 at 00:01

The CAN Bus is a very interesting wired bus design, coming from the automobile industry (and probably built into every European car made today). It’s a bus with an ingenious design, avoiding bus collisions and supporting a good level of real-time responsiveness.

I’ve been intrigued by this for quite some time, and decided to dive in a bit.

There are several interesting design choices in CAN bus:

  • it’s all low-voltage, just 0..5V (even 0..3.3V) is all it takes on each connected node
  • the bus is linear, reaching from 40 m @ 1 Mbit/s to 500 m @ 125 kbit/s, or even longer
  • signalling is based on voltage between two wires, and terminated by 120 Ω on each end
  • signals are self-clocked, with bit-stuffing to insert bit-transitions when needed

But the three most surprising aspects of the CAN bus design are probably the following:

  • the design is such that collisions cannot happen: one of the two senders always wins
  • each CAN bus packet can have at most 8 bytes of data (and is CRC-checked)
  • as described recently, messages have no destination, but only a message ID (type)

What’s also interesting is that – like I2C – this protocol tends to be fully implemented in hardware, and is included in all sorts of (usually ARM-based) microcontrollers. So unlike UARTs, RS485, I2C, and SPI, you simply get complete and valid packets in and out of the peripheral. No need to deal with framing, CRC checking, or timing decisions.

You can almost feel the car-like real-time nature of these design trade-offs:

  • short packets – always! – so the bus is released very quickly, and very often
  • no collisions, i.e. no degradation in bus use and wasted retransmits as it gets busier
  • built-in prioritisation, so specific streams can be sent across with controlled latencies
  • with a 16-bit CRC on each 0..8 byte packet, chances of an undetected error are slim

Since my scope includes hardware CAN bus decoding, I decided to try it out:


The message has an ID of 0x101 (message ID’s are either 11 or 29 bits), eight bytes of data (0xAA55AA55FF00FF00), and a CRC checksum of 0x1E32. I’m using a 500 kHz bit clock.

If you look closely, you can see that there are never more than 5 identical bits in a row. That’s what bit-stuffing does: insert an opposite bit to avoid longer stretches of identical bits, as this greatly helps deduce exact timings from an incoming bit-stream.
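
The stuffing rule itself is easy to express in code: after five identical bits in a row, insert one bit of the opposite value (the receiver removes it again). Here’s a toy illustration in C – not actual CAN driver code:

#include <stdio.h>

// Append one bit, inserting a stuff bit after 5 identical bits in a row.
static int stuffBit (char* out, int len, char bit, int* run, char* last) {
    out[len++] = bit;
    *run = (bit == *last) ? *run + 1 : 1;
    *last = bit;
    if (*run == 5) {                    // five in a row: insert the opposite bit
        char opp = (bit == '0') ? '1' : '0';
        out[len++] = opp;
        *run = 1;
        *last = opp;
    }
    return len;
}

int main (void) {
    const char* in = "0000000111111";   // raw bit stream: 7 zeros, 6 ones
    char out [64];
    int len = 0, run = 0;
    char last = 'x';
    for (int i = 0; in[i] != 0; ++i)
        len = stuffBit(out, len, in[i], &run, &last);
    out[len] = 0;
    printf("%s -> %s\n", in, out);      // prints ... -> 000001001111101
    return 0;
}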

It seems crazy to limit packets to just 8 bytes – what could possibly be done with that, without wasting it all on counters and offsets to send perhaps 4 bytes of real data in each packet? As it turns out, it really isn’t so limiting – it just takes a somewhat different mindset. And the big gain is that multiple information streams end up getting interleaved very naturally. As long as each of them is reasonable, that is: don’t expect to get more than 2 or 3 data streams across a 1 Mb/s bus, each perhaps no more than 100 kb/s. Then again, you can expect these to arrive within a very consistent and predictable time, regardless of what other lower-priority burst traffic is going on.

Neat stuff…

ChibiOS for the Arduino IDE

In AVR, Software on May 25, 2013 at 00:01

A real-time operating system is a fairly tricky piece of software, even a small one – because of the way it messes with several low-level details of the running code, such as stacks and interrupts. It’s therefore no small feat that everything can be done as a standard add-on library for the Arduino IDE.

But that’s exactly what has been done by Bill Greiman with ChibiOS, in the form of a library called “ChibiOS_AVR” (there’s also an ARM version for the Due & Teensy).

So let’s continue where I left off yesterday and install this thing for use with JeeNodes, eh?

  • download a copy of the ZIP file from this page on Google Code
  • unpack it and inside you’ll find a folder called ChibiOS_AVR
  • move it inside the libraries folder in your IDE sketches folder (next to JeeLib, etc)
  • you might also want to move ChibiOS_ARM and SdFat next to it, for use later
  • other things in that ZIP file are a README file and the HTML documentation
  • that’s it, now re-launch the Arduino IDE to make it recognise the new libraries

That’s really all there is to it. The ChibiOS_AVR folder also contains a dozen examples, each of which is worth looking into and trying out. Keep in mind that there is no LED on a standard JeeNode, and that the blue LED on the JeeNode SMD and JeeNode USB is on pin 9 and has a reverse polarity (“0” will turn it on, “1” will turn it off).

Note: I’m using this with Arduino IDE 1.5.2, but it should also work with IDE 1.0.x

Simple things are still relatively simple with a RTOS, but be prepared to face a whole slew of new concepts and techniques when you really start to dive in. Lots of ways to make tasks and interrupts work together – mutexes, semaphores, events, queues, mailboxes…

Luckily, ChibiOS comes with a lot of documentation, including some general guides and how-to’s. The AVR-specific documentation can be found here (as well as in that ZIP file you just downloaded).

Not sure this is the best place for it, but I’ve put yesterday’s example in JeeLib for now.

I’d like to go into RTOS’s and ChibiOS some more in the weeks ahead, if only to see how wireless communication and low-power sleep modes can be fitted in there.

Just one statistic for now: the context switch latency of ChibiOS on an ATmega328 @ 16 MHz appears to be around 15 µs. Or to put it differently: you can switch between multiple tasks over sixty thousand times a second. Gulp.

Measuring the battery without draining it

In Hardware on May 16, 2013 at 00:01

In yesterday’s post, a resistive voltage divider was used to measure the battery voltage – any voltage for that matter, as long as the divider resistor values are chosen properly.

With a 6V battery, a 10 + 10 kΩ divider draws 0.3 mA, i.e. 300 µA. Can we do better?

Sure: 100+100 kΩ draws 30 µA, 1+1 MΩ draws 3 µA, and 10+10 MΩ draws just 0.3 µA.

Unfortunately there are limits, preventing the use of really high resistor divider values.

The ATmega328 datasheet recommends that the output impedance of the circuit connected to the ADC input pin be 10 kΩ or less for good results. With higher values, there is less current available to charge the ADC’s sample-and-hold capacitor, meaning that it will take longer for the ADC to report a stable value (reading it out more than once may be needed). And then there’s the leakage current which every pin has – it’s specified in the datasheet as ± 1 µA max in or out of any I/O pin. This means that a 1+1 MΩ divider may not only take longer to read out, but also that the actual value read may not be accurate – no matter how long we wait or how often we repeat the measurement. To put a number on that worst case: a 1+1 MΩ divider presents the ADC with a source impedance of 500 kΩ (the two resistors in parallel), so ± 1 µA of leakage can shift the measured voltage by up to ± 0.5V.

So let’s find out!

The divider I’m going to use is the same as yesterday, but with higher resistor values.

Let’s go all out and try 10 + 10 MΩ. I’ll use the following sketch, which reads out AIO1..4, and sends out a 4-byte packet with the top 8 bits of each ADC value every 8 seconds:

#include <JeeLib.h>

byte payload[4];

void setup () {
  rf12_initialize(22, RF12_868MHZ, 5);
  DIDR0 = 0x0F; // disable the digital inputs on analog 0..3
}

void loop () {
  for (byte i = 0; i < 4; ++i) {
    analogRead(i);                    // ignore first reading
    payload[i] = analogRead(i) >> 2;  // report upper 8 bits
  }
  rf12_sendNow(0, payload, sizeof payload);
  delay(8000); // roughly 8 seconds between packets (a low-power sleep would also do)
}

This means that a reported value N corresponds to N / 255 * 3.3V.

With 5V as supply, this is what comes out:

L 10:18:14.311 usb-A40117UK OK 22 193 220 206 196
L 10:18:22.675 usb-A40117UK OK 22 193 189 186 187
L 10:18:31.026 usb-A40117UK OK 22 193 141 149 162
L 10:18:39.382 usb-A40117UK OK 22 193 174 167 164
L 10:18:47.741 usb-A40117UK OK 22 193 209 185 175

The 193 comes from AIO1, which has the 10 + 10 kΩ divider, and reports 2.50V – spot on.

But as you can see, the second value is all over the map (ignore the 3rd and 4th, they are floating). The reason for this is that the 10 MΩ resistors are so high that all sorts of noise gets picked up and “measured”.

With a 1 + 1 MΩ divider, things do improve, but the current draw increases to 2.5 µA:

L 09:21:25.557 usb-A40117UK OK 22 198 200 192 186
L 09:21:33.907 usb-A40117UK OK 22 198 192 182 177
L 09:21:42.256 usb-A40117UK OK 22 197 199 188 183
L 09:21:50.606 usb-A40117UK OK 22 197 195 187 183
L 09:21:58.965 usb-A40117UK OK 22 197 197 186 181
L 09:22:07.315 usb-A40117UK OK 22 198 198 190 184

Can we do better? Sure. The trick is to add a small capacitor in parallel with the lower resistor. It acts as a local charge reservoir for the ADC’s sample-and-hold circuit and filters out the noise picked up by those high-impedance connections – with 10 + 10 MΩ the divider’s output impedance is 5 MΩ, so a 0.1 µF cap gives a time constant of about half a second. Here’s a test using 10 + 10 MΩ again, with a 0.1 µF cap between AIO2 and GND:


Results – at 5V we get 196, i.e. 2.54V:

L 10:30:27.768 usb-A40117UK OK 22 198 196 189 186
L 10:30:36.118 usb-A40117UK OK 22 198 196 188 183
L 10:30:44.478 usb-A40117UK OK 22 198 196 186 182
L 10:30:52.842 usb-A40117UK OK 22 198 196 189 185
L 10:31:01.186 usb-A40117UK OK 22 197 196 186 181

At 4V we get 157, i.e. 2.03V:

L 10:33:31.552 usb-A40117UK OK 22 158 157 158 161
L 10:33:39.902 usb-A40117UK OK 22 158 157 156 157
L 10:33:48.246 usb-A40117UK OK 22 158 157 159 161
L 10:33:56.611 usb-A40117UK OK 22 158 157 157 159
L 10:34:04.959 usb-A40117UK OK 22 159 157 158 161

At 6V we get 235, i.e. 3.04V:

L 10:47:26.658 usb-A40117UK OK 22 237 235 222 210
L 10:47:35.023 usb-A40117UK OK 22 237 235 210 199
L 10:47:43.373 usb-A40117UK OK 22 236 235 222 210
L 10:47:51.755 usb-A40117UK OK 22 237 235 208 194
L 10:48:00.080 usb-A40117UK OK 22 236 235 220 209


Note how the floating AIO3 and AIO4 pins tend to follow the levels on AIO1 and AIO2. My hunch is that the ADC’s sample-and-hold circuit is now working in reverse: when AIO3 is read, the S&H switch closes and equalises the charge between the unconnected pin (which still has a tiny amount of parasitic capacitance) and the internal capacitance.

The current draw through this permanently-connected resistor divider with charge cap will be very low indeed: 0.3 µA at 6V (Ohm’s law: 6V / 20 MΩ). This sort of leakage current is probably fine in most cases, and gives us the ability to check the battery level in a wireless node, even with battery voltages above VCC.

Tomorrow I’ll explore a setup which draws no current in sleep mode. Just for kicks…

What if we want to know the battery state?

In Hardware on May 15, 2013 at 00:01

Welcome to the weekly What-If series, also available via the Café wiki.

One useful task for wireless sensor nodes is to be able to determine the state of the battery: is it full? Is it nearly depleted? How much life is left in it?

With a boost converter such as the AA Power Board, things are fairly easy because the battery voltage is below the supply voltage – just hook it up to an analog input pin, and use the built-in ADC with a call such as:

word millivolts = map(analogRead(0), 0, 1023, 0, 3300);

This assumes that the ATmega is running on a stable 3.3V supply, which acts as reference for the ADC.

If that isn’t the case, i.e. if the ATmega is running directly off 2 AA batteries or a coin cell, then the ADC cannot use the supply voltage as reference. Reading out VCC through the ADC will always return 1023, i.e. the maximum value, since its reference is also VCC – so this can not tell us anything about the absolute voltage level.

There’s a trick around this, as described in a previous post: measure a known voltage with the ADC and then deduce the reference voltage from it. As it so happens, the ATmega has a 1.1V “bandgap” voltage which is accurate enough for this purpose.
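
For reference, here’s roughly what that trick looks like on an ATmega328 – a minimal sketch using the plain datasheet registers (not code taken from JeeLib), so treat it as an illustration rather than a drop-in solution:

long readVcc () {
  // select the internal 1.1V bandgap as ADC input, with VCC as the reference
  ADMUX = _BV(REFS0) | _BV(MUX3) | _BV(MUX2) | _BV(MUX1);
  delay(2);                       // give the bandgap reference time to settle
  ADCSRA |= _BV(ADSC);            // start a single conversion
  while (ADCSRA & _BV(ADSC))
    ;                             // wait for it to finish
  return 1100L * 1023 / ADC;      // if 1.1V reads as N, then VCC ≈ 1.1 * 1023 / N
}

void setup () {
  Serial.begin(57600);
}

void loop () {
  Serial.println(readVcc());      // reports VCC in millivolts
  delay(1000);
}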

The third scenario is that we’re running off a voltage higher than 3.3V, and that the ATmega is powered by it through a voltage regulator, providing a stable 3.3V. So now, the ADC has a stable reference voltage, but we end up with a new problem: the voltage we want to measure is higher than 3.3V!

Let’s say we have a rechargeable 6V lead-acid battery and we want to get a warning before it runs down completely (which is very bad for battery life). So let’s assume we want to measure the voltage and trigger on that voltage dropping to 5.4V.

We can’t just hook up the battery voltage to an analog input pin, but we could use a voltage divider made up of two equal resistors. I used two 10 kΩ resistors and mounted them on a 6-pin header – very convenient for use with a JeeNode:


Now, only half the battery voltage will be present on the analog input pin (because both resistor values are the same in this example). So the battery voltage calculation now becomes a variant of the previous formula:

word millivolts = map(analogRead(0), 0, 1023, 0, 3300) * 2;
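
(More generally, with a resistor R1 on top and R2 to ground, the multiplication factor becomes (R1 + R2) / R2 instead of 2.)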

But there is a drawback with this approach: it draws some current, and it draws it all the time. In the case of 2x 10 kΩ resistors on a 6V battery, the current draw is (Ohm’s law kicking in!): 6 V / 20,000 Ω = 0.0003 A = 0.3 mA. On a lead-acid battery, that’s probably no problem at all, but on smaller batteries and when you’re trying to conserve as much energy as possible, 0.3 mA is huge!

Can we raise the resistor values and lower the current consumption of this voltage divider that way? Yes, but not indefinitely – more on that tomorrow…

Winding down

In Musings, News on Apr 22, 2013 at 00:01

The JeeDay 2013-04 event is over.

I would like to warmly thank the 40 or so people who attended on Friday and Saturday. It is clear to me from the kind follow-up emails that the event was appreciated by many of you and I really hope that everyone got something useful and stimulating out of this.

Allow me to also thank the “anonymous sponsor” at this point for funding the venue, the coffee and drinks, and Saturday’s lunch. I’ve passed on your and my appreciation, and it has gratefully been accepted. As several people have pointed out, this whole concept of an anonymous sponsor is really a contradiction in terms, so let’s all just cherish the fact that philanthropy (and mystery) still exists, even in today’s western societies.

This is probably the point where I’m expected to write sentences full of superlatives, self-congratulatory remarks, let’s-conquer-the-world type of pep-talk, congratulations for the speakers and their choice of interesting topics, all sorts of grandiose plans, and where I’d also describe how stimulating all the discussions on the side turned out to be.

I could, and it’d be true. But I won’t…

Instead, I’d like to give this a somewhat different (personal / philosophical) twist.

We’re focused on success. We crave rewards. We seek recognition. So when something good (for some definition of “good”) happens, we want to take it further.

Again. Better. More.

Yet to me, that’s not what JeeDay was about. Sure, we could do it again. In fact, I’d love to and I’ve even sort-of committed to organising another JeeDay a year from now. We’ll see.

But to me, JeeDay is not about the next step or some future trend. It’s about this event we just had. Some 10 talks from people describing what they like to do in their free time. That’s quite a special situation, when you stop and think about it: here we all are, a few dozen geeks with a common techie interest, and this is what we choose to spend our time, our creative energies, and our money on. We could do anything, yet this is what we want to do. In. Our. Free. Time.

Now of course, everyone’s reasons will differ. But to me, it’s pretty amazing: there’s rarely a financial reward (heck, it usually costs money!). There’s often not much recognition. These are not TED talks, we’re not working on some high-visibility successful project and showing the world. We just tinker in private, we come up with stuff, we learn, and we like doing it.

In my view, this is about the top two tiers of Maslow’s hierarchy of needs:


The basic idea being that you can’t really get to focus on the levels above before the levels underneath have more or less been covered.

This is – again, in my perception – not about success, and probably not even about peer recognition, but about the intrinsic fun of discovery, invention, creation, and problem-solving. And about finding out how others deal with this. It’s no accident that most of it happens as open source, either: open source (hardware + software) and sharing is what floats to the top when the intrinsic puzzles and their solutions dominate.

In a world where so much is about ownership, money, and time, I think that’s precious.

I hope JeeDay has helped you find and follow your passion. Everything else is secondary.

PS. The mystery topic in my presentation was JeeBoot – more to follow soon.

Cheap power analysis

In Hardware on Apr 21, 2013 at 00:01

Remember this screen shot?


It was a carefully captured analysis of the power consumption of a JeeNode, running the RoomNode sketch, and sending / receiving wireless RFM12B packets. There’s a fantastic amount of info in there, to help understand which part of the code and which activity is drawing the most power. It was a great help at the time to reduce power consumption, allowing these nodes to run well over a year on a couple of AA batteries.

Trouble with this, is that you need an expensive piece of equipment, called an oscilloscope. Long-time readers might remember that I’ve written extensively about this. These things cost anywhere from a few hundred Euro, to thousands, or even tens of thousands for high-end units. I ended up settling for a Hameg HMO2024, which is a great instrument, but with a pretty hefty price of well over €1000.

So how would you go about analysing the power consumption of your sketch without plunking down this sort of cash? Well, there really are not that many alternatives: you have to see the current-vs-time graph to be able to understand what’s going on.

Luckily, there is a fairly capable little unit from Gabotronics, called the Xminilab. It pushes an ATXmega (note the “X”) to its limits, allowing it to capture quite a bit of information, just like its bigger brothers. It even includes things like FFT analysis, an 8-channel Logic Analyser, and an AWG Signal Generator! Last but not least, the software is open source.

Interested in how capable this $69 device is? Well, check this out:


Do you recognise the waveform? The Xminilab has captured a packet transmission, a bit like the one shown at the start of this post (it’s a different sketch, i.e. radioBlip2, hence a different pattern). It may not look like much, but it should be sufficient to see the effect of changes in the code and to optimise power consumption with it.

So, do you need a scope? IMO, anyone wishing to explore electronics should have one. Whether second-hand or the above-mentioned Xminilab, it really helps to be able to see things in a way where our human senses fall short – such as these brief events. It’s the most versatile instrument in the lab, if you ask me – even with a 128×64 pixel LCD screen.

PS. I don’t recommend the even lower-cost $49 Xprotolab (which I also have). It has the same functionality, but with its tiny OLED display it really is too hard to read, IMO.

Electro:camp 13.04

In News on Apr 17, 2013 at 00:01

As if one event is not enough – here’s a reminder for another one coming up soon:

This is the next one in a series of bi-annual meetings, from people in Germany, Belgium, the Netherlands, the UK, and beyond – this time in Kaiserslautern @ the Fraunhofer institute.

See the wiki for further details. Don’t forget to register if you wish to attend.

As I see it, Electro:camp is more focused on electricity metering and monitoring, whereas JeeDay – which has yet to define itself, clearly – will be more of a “maker/hacker” style event with focus on DIY home projects, low-power wireless, and electronics as a hobby.

JeeNode Micro start-up power

In AVR, Hardware on Mar 29, 2013 at 00:01

The JeeNode Micro v3 includes a P-channel MOSFET to control power to the RFM12B radio. This isn’t just a new gimmick – the goal was to “fix” the RFM12B wireless radio’s startup power consumption, which can prevent an ultra-low power source from ever building up a high enough supply voltage for a JeeNode to start up.

Now that the JNµ is in production, it’s time to measure how well such an approach works. Get ready for a bunch of scope screenshots, all based on the same circuit as before:

JC's Grid, page 51

… except that now the entire JeeNode Micro is in there, and I’m using a 10 Ω resistor.

I’ll be applying a 1 Hz ramp signal going from 0.0 to 3.0V using the power booster behind a signal generator, to see exactly what amount of current is being drawn. In all the images below, the yellow trace is the input voltage (i.e. a simulated power supply), and the blue trace is the voltage over the 10 Ω resistor – that means 1 vertical division on the blue trace corresponds to 0.5 mA when the display shows 5 mV/div:


The above image is just a baseline: a simple blink sketch which never enables the radio, and which then toggles some I/O pins every 500 ms. As you can see, the ATtiny84 comes out of power-on reset at about 1.4V and ends up drawing about 3.5 mA at 3.0V.

Fuses are set to low=C2 high=D7 ext=FF, i.e. BOD disabled, fastest start-up, running on the internal RC oscillator @ 8 MHz.

Now let’s look at the same setup with the JNµ running radioBlip2.ino:


This time, the sketch enables the MOSFET to power up the radio, measures the battery voltage, tries to send out a packet (this will fail at 1.4V), and goes into deep sleep. A very short (and high!) blip before power consumption drops to almost zero.

The third measurement is with a sketch doing nothing but powering down right away:

#include <JeeLib.h>

void setup () {
    cli();               // disable all interrupts
    Sleepy::powerDown(); // JeeLib's call to switch into deep power-down right away
}

void loop () {}

Which produces this result:


I’ve bumped the scope sensitivity up to its maximum of 1 mV/div (i.e. 100 µA/div) and am now adding a lot of averaging to try and keep the displayed noise levels low. The “blip” is the ATtiny getting out of reset and powering down completely.

As a last test, I repeated the above, but now using a sweep of 10 s (0.1 Hz), and filtering the signal through the lowest low-pass setting available, i.e. 5 Hz. This loses the important spike at 1.4V, which is of course still there, but improves the readout of the baseline:


As you can see, the power consumption now never rises above ≈ 60 µA – that’s a ten-fold improvement over what we get with the RFM12B connected to power in the standard way.

The shape of this curve is quite interesting: it’s essentially resistive (since it’s more or less linear), but the current only starts to flow above about 1.2V, i.e. after overcoming two extra diode drops.

This is the power-up “hump” which any ultra-low power supply based on solar cells or other energy harvesting techniques will need to overcome, so that the ATtiny can switch itself into power down mode and let the supply voltage rise further.

I think that’s an excellent result, and am looking forward to trying a few things out!

JeeDay => April 20

In Musings on Feb 24, 2013 at 00:01

It’s been four and a half years of fun since I had this crazy idea to start JeeLabs, and it’s been four years also since the JeeNode was born. An excellent reason to celebrate, eh?

Coming April 19th and 20th (Friday evening and Saturday), I’m going to kick off JeeDay:

Meet face-to-face with fellow PhysComp / WSN / JeeStuff enthusiasts and JC + Martyn. Get the latest news, share your ideas and show off your project (or pictures of it). Discussions, presentations, hands-on sessions – it’s all possible, if we organise ourselves and our time appropriately!

The topics we could cover include things like:

  • Wireless Sensor Networks
  • Ultra-low power nodes in the Arduino world
  • Home monitoring and home automation
  • JeeLabs products Q & A
  • Solutions for dealing with AC mains
  • Funky sensors and clever displays
  • How to lower your energy bill
  • Soldering and measurement techniques
  • Hands-on with an oscilloscope
  • Designing and manufacturing PCBs
  • Enclosures, laser-cutting, 3D printing
  • Hack sessions? Debug sessions?
  • Bring and show your projects, especially if in-progress
  • Ideas for future projects and products
  • Presentations, presentations, presentations

Whoa, that list could go on forever… a huge set of topics!

The location will be in Utrecht or in Houten (5 min by train from Utrecht), which is located in the middle of the Netherlands. There is plenty of accommodation nearby for those who want to stay overnight. Come and visit the Netherlands, you’ll enjoy it!

We can extend this to Sunday, if I can find a suitable venue and if there is enough interest, although perhaps that’s a bit too ambitious for such a first event.

Fees would be just to cover costs, drinks, etc. Also some sandwiches or pizza to get us through the day. Should all be doable for €15 .. €25.

I have no idea yet how many people would be interested and might be able to come, so I’ve set up a meeting scheduler – if you’re considering participating, please, please, please do add your name and indicate the time range that suits you. Will it turn out to be 10? 20? 50? 100? people – let’s find out!

Further details will be added to the JeeDay 13.04 wiki page, as preparations progress. The sooner you respond, the more chances that I can figure out a proper venue and how to make it all happen. And… if you have any tips or suggestions, please get in touch now!

It’ll be great to meet face-to-face, it can be informative for all, and it’ll definitely be fun! :)

DIJN.12 – Final checks and unattended use

In Uncategorized on Feb 22, 2013 at 00:01

Welcome to the last instalment of Dive Into JeeNodes. Let’s make an unattended setup!

Everything is working now. The only step that remains is to automate things a bit further, so that the RPi will automatically start up HouseMon when powered up. This turns it into a fire-and-forget system, so that it becomes a permanent service on your LAN.

Auto-startup is convenient, but it means we also have to think a bit about how to upgrade HouseMon. This is where the nodemon utility comes in: it can be used to start up a Node.js application, and restart it whenever certain source files change. This is mostly intended as development tool, but at this stage where HouseMon is still so young and evolving rapidly, it’s actually going to be quite practical – even in an unattended mode.

Install the nodemon package by entering the command: npm install nodemon -g

The “-g” flag causes nodemon to be installed in a central location instead of as part of HouseMon, so that we can type in “nodemon” from the command line.

Now we’re ready to configure an unattended setup. Copy and paste these commands:

    cd ~/housemon
    echo 'cd ~/housemon' >
    echo 'PATH=/usr/local/bin:$PATH' >>
    echo 'nodemon >nohup.out 2>nohup.err &' >>
    chmod +x
    echo '@reboot ~/housemon/' | crontab

This creates a “” script which will be used to start up HouseMon while you’re not around and sets up an entry in the “cron” table which will run that script right after reboot, even when you are not logged in.

Warning: this loses any previous crontab entries, if this is not a fresh Raspbian install.

Reboot now, using sudo reboot to start the ball rolling. After a few minutes, the RPi will be up and running again, and you will be able to visit the HouseMon server via your web browser – no need to log in for that!

Screen Shot 2013-02-10 at 13.57.11

Congratulations: you have created your own personal Wireless Sensor Network, with a JeeNode sending out light readings once a second over wireless, to a JeeLink connected to a stand-alone Raspberry Pi, and via HouseMon running on Node.js, you’re able to watch the current light level in real time, from any web browser with access to your LAN.

Is this a home monitoring / home automation system? Heh.. not quite, but it is definitely an important first step towards such a system. All the foundations are in place, yearning to be filled-in and extended in numerous directions. The load on a Raspberry Pi looks fine:

Screen Shot 2013-02-21 at 15.36.54

This also concludes this initial series of “Dive Into JeeNodes”. The goal was to set up a basic – but fully functional – system, as a baseline for lots and lots of further explorations. As far as I’m concerned, there will be many more posts building upon everything that has been accomplished so far. For now, I’d like to leave this to settle down a bit, and to tie up loose ends – such as going through these 12 instalments using Windows. Since I don’t use Windows myself, I’m hoping that someone else will chime in with details, so that the exact steps to get going can be documented in a follow-up post or how-to page in the project.

Cheers for now, I hope you’ve enjoyed this “PhysComp+WSN fun pack” DIJN series!

(This series of posts is also available from the Dive Into JeeNodes page on the Café wiki.)

PS – I’ve set up an SD card image pre-configured with Raspbian and HouseMon 0.5.1, so if you want to bypass all the setup work, download the hm051.img.gz file (550 MB!), and follow the instructions in DIJN.05 to set up that SD card. Then insert the JeeLink and SD card, and power up the RPi. It’ll start with HouseMon running – including the demo page.

PPS – Another milestone, this is weblog post #1250. Onwards! :)

DIJN.11 – Connect the light sensor

In Uncategorized on Feb 21, 2013 at 00:01

Welcome to the eleventh instalment of Dive Into JeeNodes. Let there be light!

Everything until now was nice, but wasn’t really about sensing the environment. For that, we need to hook up some sensors, obviously. One of the simplest sensors around is the Light Dependent Resistor, or LDR, as it’s usually called. It does exactly what its name says: vary its own internal resistance, depending on the amount of light falling on it.

To read it out with an ATmega, all we need to do is connect the LDR between ground and an analog I/O pin, and enable the ATmega’s internal pull-up resistor for that I/O pin to get a voltage drop over the LDR. The pull-up (somewhere in the 20..50 kΩ range) and the LDR then form a voltage divider: the more light, the lower the LDR’s resistance, and the lower the voltage on the pin. Let’s first hook up the LDR:


As you can see, I’ve connected the LDR to the analog pin of Port 1 on the JeeNode (a JeeNode USB in this case – but any type of JeeNode will do).

The next step is to upload the proper code to the JeeNode, so that it actually does the measurement and transmits the result over wireless. This is a task for the Arduino IDE. We’ve already used it to load the initial test1 sketch, and now we can replace it with test2:

#include <JeeLib.h>

#define LDR 0 // the LDR will be connected to AIO1, i.e. analog 0

void setup () {
  // this is node 1 in net group 100 on the 868 MHz band
  rf12_initialize(1, RF12_868MHZ, 100);

  // need to enable the pull-up to get a voltage drop over the LDR
  pinMode(14+LDR, INPUT_PULLUP);
}

void loop () {
  // measure analog value and convert the 0..1023 result to 255..0
  byte value = 255 - analogRead(LDR) / 4;

  // actual packet send: broadcast to all, current value, 1 byte long
  rf12_sendNow(0, &value, 1);

  // let one second pass before sending out another packet
  delay(1000);
}

(rf12_sendNow was recently added to JeeLib, make sure you’re using a recent version)

If you have followed along to the letter of the series, then the JeeNode will still be hooked up, and the IDE will still have the proper serial port setting for it. If not, please go back and make sure you can upload a new sketch. It’s a matter of choosing the proper serial port, selecting the examples/DIJN/test2 sketch in the IDE, and clicking on the Upload button. If all went well, you’ll see this:

Screen Shot 2013-02-16 at 11.40.21

That’s it. If you now look at the Demo page in HouseMon again via your browser, you’ll see actual light readings. Put your hand over the LDR and see the value decrease in real-time (within a second, in this case, since that’s how often the JeeNode sends out a new packet).

As you’ll see, the LDR is very sensitive. It has to be pretty dark for the value to drop under 100 (it ranges from 0..255). The range is also not linear or calibrated in any way.

Congratulations… now you have a real WSN! Tomorrow we’ll take care of some loose ends.


(This series of posts is also available from the Dive Into JeeNodes page on the Café wiki.)

DIJN.10 – Set up a demo web page

In Uncategorized on Feb 20, 2013 at 00:01

Welcome to the tenth instalment of Dive Into JeeNodes. Let’s show real-time data!

With HouseMon running on the Raspberry Pi, we can start to think how to show incoming data on a web page. The following approach is the most basic one possible: get the received packets into Node.js, i.e. the RPi server, “publish” each through a WebSocket connection to any attached client, and figure out how to get the received value onto the web page.

Let’s start from the back. This is the web page we’ll be setting up:

Screen Shot 2013-02-09 at 12.22.13

Note that I’m running the server on my Mac in these screen shots, hence the “localhost” in the URL, but this will work in exactly the same way with HouseMon running on the RPi. Note also that the following three files are already included in HouseMon – they are just shown to explain what is going on.

That page is produced by a file called client/templates/demo.jade, with the following contents:

        h1 Demo
        h3 {{value}}

Lots of stuff going on behind the scenes, clearly, but this is what Jade looks like with Foundation CSS, and how AngularJS ties a dynamic “value” variable into a web page. It’s sort of like a template (but not quite, due to the two-way binding nature of all this).

The second piece of the puzzle is a file called client/code/modules/

        module.exports = (ng) ->

          ng.controller 'DemoCtrl', [
            ($scope) ->
              $scope.$on 'ss-demo', (event, value) ->
                $scope.value = value

This one is a bit dense: it defines an AngularJS “controller” which will listen to incoming “ss-demo” events, and set the “value” variable we referred to in that Jade file above.

The last code we need, is in a file called briqs/ It defines this “Briq”: =
  name: 'demo'
  description: 'This demo briq is used by the "Dive Into JeeNodes" series'
  menus: [
    title: 'Demo'
    controller: 'DemoCtrl'

state = require '../server/state'
ss = require 'socketstream'

exports.factory = class
  constructor: ->
    state.on 'rf12.packet', packetListener
  destroy: -> 'rf12.packet', packetListener

packetListener = (packet, ainfo) ->
  if is 1 and is 100
    value = packet.buffer[1]
    ss.api.publish.all 'ss-demo', value

A “Briq” is something which can be installed at run time in HouseMon. Once installed, that file above will run on the server side. Briqs are a new mechanism I came up with (it’s no rocket science, but at least the name is new), and it’s still very early days. Once I figure out how, briqs will be turned into self-contained directories with the above three source files all in one place. For now, each of those files needs to be placed in a very specific location, for SocketStream to be able to find and use these files.

So how do we “install” the above demo in HouseMon, you may ask…

Easy. Go to the “Admin” page in HouseMon, via the Admin tab in the upper right corner:

Screen Shot 2013-02-09 at 13.18.11

No installed briqs yet, but you can see some briqs which are currently included in HouseMon in the second list. We need to install two briqs: one to set up that demo page, and one to connect to the JeeLink.

Click on the “demo” entry in the list to get this (whoops, I already installed everything):

Screen Shot 2013-02-09 at 12.24.52

Once you’ve clicked on install, you’ll see a new “Demo” tab appear in the upper right. It won’t do much, though, because we’re not receiving any data yet!

Click on the “rf12demo” entry, and fill in the serial port of your JeeLink. On the RPi, it’ll most probably be “/dev/ttyUSB0”, so just enter “ttyUSB0”. Then click install.

Go to the “Demo” page, and if your JeeNode is powered up, you’ll see an updating value.

Bingo – you’re looking at data coming from your Wireless Sensor Network in real-time.


(This series of posts is also available from the Dive Into JeeNodes page on the Café wiki.)

Data, data, data

In Software on Feb 17, 2013 at 00:01

If all good things come in threes, then maybe everything that is a triple is good?

This post is about some choices I’ve just made in HouseMon for the way data is managed, stored, and archived (see? threes!). The data in this case is essentially everything that gets monitored in and around the house. This can then be used for status information, historical charts, statistics, and ultimately also some control based on automated rules.

The measurement “readings” exist in the following three (3!) forms in HouseMon:

  • raw – serial streams and byte packets, as received via interfaces and wireless networks
  • decoded – actual values (e.g “temperature”), as integers with a fixed decimal point
  • formatted – final values as shown on-screen, formatted and with units, i.e. “17.8 °C”

For storage, I’ll be using three (3!) distinct mechanisms: log files, Redis, and archive files.

The first decision was to store everything coming in as raw text in daily log files. With rollover at midnight (UTC), and tagged with the interface and a time stamp in milliseconds. This format has been in use here at JeeLabs since 2008, and has served me really well. Here is an example, taken from the file called “20130211.txt”:

    L 01:29:55.605 usb-AH01A0GD OK 19 115 113 98 25 39 173 123 1
    L 01:29:58.435 usb-AH01A0GD OK 9 4 50 68 235 251 166 232 234 195 72 251 24
    L 01:29:58.435 usb-AH01A0GD DF S 6373 497 68
    L 01:29:59.714 usb-AH01A0GD OK 19 96 13 2 11 2 30 0

Easy to read and search through, but clearly useless for seeing the actual values, since these are the RF12 packets before being decoded. The benefit of this format is precisely that it is as raw as it gets: by storing this on file, I can improve the decoders and fix the inevitable bugs which will crop up from time to time, then simply re-parse the files and run them through the decoders again. Given that the data comes from sketches which change over time, and which can also contain bugs, the mapping of which decoders to apply to which packets is an important one, and is in fact going to depend on the timeline: the same node may have been re-used for a different sketch, with a different packet format over time (has rarely happened here, once a node has been put to permanent use).

In HouseMon, the “logger” briq now ties into serial I/O and creates such daily log files.

The second format is more meaningful. This holds “readings” such as:

    { id: 1, group:5, band: 868, type: 'roomNode', value: 123, time: ... }

… which might represent a 12.3°C reading in the living room, for example.

This is now stored in Redis, using the new “history” briq (a “briq” is simply an installable module in HouseMon). There is one sorted set per parameter, to which new readings (i.e. integers) are added as they come in. To support staged archive storage, the sorted sets are segmented per 32 hours, i.e. there is one sorted set per parameter per 32-hour period. At most two periods are needed to store full details of every reading from at least the past 24 hours. With two periods saved in Redis for each parameter, even a setup with a few hundred parameters will require no more than a few dozen megabytes of RAM. This is essential, given that Redis keeps all its data in memory.

And lastly, there is a process which runs periodically, to move data older than two periods ago into “archival storage”. These are not round-robin databases, in the sense of a circular buffer which gets overwritten as new data comes in and wraps around, but they do use a somewhat similar format on disk. Archival storage can grow infinitely, here I expect to end up with about 50..100 MB per year once all the log files have been re-processed. Less, if compression is used (to be decided once speed trade-offs of the RPi have been measured).

The files produced by the “archive” briq have the following three (3!) properties:

  • archives are redundant – they can be reconstructed from scratch, using the log files
  • data in archives is aggregated, with one data point per hour, and optimised for access
  • each hourly aggregation contains: a count, a sum, a minimum, and a maximum value

I’ll describe the archive design choices and tentative file format in the next post.

DIJN.08 – Set up Node.js and Redis

In Uncategorized on Feb 15, 2013 at 00:01

Welcome to the eighth instalment of Dive Into JeeNodes. Let’s install Node.js!

(Note: these instructions have been updated to use the Feb 9th Raspbian image)

Now that the basic Wireless Sensor Network is up and running, it’s time to start thinking about the server-side software, which in this series will be HouseMon. For that, we need to get three pieces of software up and running on the RPi: Node.js, Redis, and git.

Those last two are in fact very easy to install on Linux, because they are available via the apt/aptitude package manager. And while we’re at it, let’s update all packages to the latest version. Getting the latest updates is always a good idea on Linux – just type:

    sudo apt-get update && sudo apt-get upgrade

Hit return when asked whether you want to install the updates. The first time around, this is bound to take some 15..30 minutes, as the system may not have been updated for quite some time, but later on it should normally take no more than a few minutes.

Ok, back to the task at hand, i.e. installing redis and git:

    sudo apt-get install redis-server git

Easy, eh? Welcome to the world of open source Linux software. There are over 35,000 packages available. One way to explore what there is, is to use the menu-based “aptitude” command (type “sudo aptitude” if you’re curious, but be careful what you wish for!).

With Node.js, things are a bit more complicated, because the current version of “Node” available via apt-get is a bit too old for us. Fortunately, we don’t have to go through an actual build from the source code, as I described last month (things move fast in this world!). There is now a pre-compiled version of Node which we can install fairly easily:

  • First, a preparatory step, enter:

    sudo usermod -aG staff pi && sudo reboot

    Don’t be alarmed by the reboot, it’s merely a lazy way to log out and back in, which is essential for this one-time configuration change of the RPi. Wait a minute or two, and then you can log back in via ssh, as usual.

  • Now, enter these commands exactly as shown below:

    curl$v/node-$v-linux-arm-pi.tar.gz | tar xz
    cp -a node-$v-linux-arm-pi/{bin,lib} /usr/local/
    rm -r node-$v-linux-arm-pi
    npm -v

(Note: ignore the “preserving times for …: Operation not permitted” error messages)

That’s it. You can copy and paste these commands line by line, or all in one go – the latter is usually more convenient. The final “npm -v” command verifies that “node” and “npm” work properly by reporting npm’s version number when you hit return, i.e. “1.3.x”.

Lots of powerful tools… all in preparation of being able to install and use HouseMon.


(This series of posts is also available from the Dive Into JeeNodes page on the Café wiki.)

DIJN.07 – Attach a JeeLink to the RPi

In Uncategorized on Feb 14, 2013 at 00:01

Welcome to the seventh instalment of Dive Into JeeNodes. Hello RPi, meet the JeeLink!

With all the RPi setup out of the way, it’s time to hook it up to the hardware. Fortunately, most of the essential ingredients are already included in the RPi. The JeeLink is based on an FTDI chip, for which the driver will be auto-loaded by Linux when it’s plugged in.

Proceed as follows:

  1. Plug the JeeLink in one of the two USB ports on the RPi.
  2. Type the following command in Linux: dmesg | tail -20

You’ll get something like this:

Screen Shot 2013-02-10 at 10.41.26

This is the kernel log, reporting that it has recognised the inserted USB device, has loaded a kernel driver for it, and has created a new serial “device” in Linux, called “/dev/ttyUSB0”.

We’re connected!

There are many ways to communicate with the JeeLink. Here’s a quick check that it works:

  • Enter the command: stty 57600 raw -echo </dev/ttyUSB0
  • Then enter the command: cat </dev/ttyUSB0

The first command configures the serial port (57600 baud, raw mode, no local echo), the second shows everything coming in. This is what I get:

Screen Shot 2013-02-10 at 10.50.33

Yippie! The JeeLink has restarted and shown its usual greeting, and is reporting incoming packets. You can hit CTRL+c to stop the process.

The above commands are sufficient to prove that things work, but not convenient. For that, we need to install a serial terminal emulator. There are two we can use, minicom or screen:

  • For minicom, install it with this command: sudo apt-get install minicom
  • To launch minicom, type: minicom -b 57600 -o -D /dev/ttyUSB0
  • For a help page in minicom, type: CTRL+a, then z
  • To exit and close the connection in minicom, type: CTRL+a, then q

I tend to prefer screen, mostly because I’m used to it:

  • For screen, install it with this command: sudo apt-get install screen
  • To launch screen, type: screen /dev/ttyUSB0 57600
  • For a help page in screen, type: CTRL+a, then ?
  • To exit and close the connection in screen, type: CTRL+a, then \

At this point, all the hardware, all the connections, and all the wireless communication are now known to work. All that remains, is to set up the HouseMon server on the RPi and to connect the light sensor.


(This series of posts is also available from the Dive Into JeeNodes page on the Café wiki.)

PS. There may be an issue with the FTDI driver on the RPi. Will need to test this later.

DIJN.06 – Boot the Raspberry Pi

In Uncategorized on Feb 10, 2013 at 00:01

Welcome to the sixth instalment of Dive Into JeeNodes. We’re halfway there!

(Note: these instructions have been updated to use the Feb 9th Raspbian image)

This part is probably the most complex piece of the journey called DIJN. Then again, you’re about to set up a self-contained server which can be used for a truly astonishing number of tasks. Linux is everywhere nowadays, and this is probably one of the easiest, lowest-cost, and most risk-free ways to tap into that world which has driven – and evolved alongside – the Open Source Software trend for so long already.

One could probably spend a decade exploring all the software that runs on Linux, and a lot of it may well be affecting your daily life (ever heard of this thing called “internet”?). The good news of course is that you don’t have to spend years, but that whatever interests you is going to be totally in reach, if and when you choose to, eh, dive in…

Let’s get rollin’, shall we?

  1. Insert the SD card you prepared according to yesterday’s post into the RPi. Note that it has to be inserted with the label facing down.
  2. Connect the Ethernet cable on both sides to hook up the RPi to your LAN.
  3. Connect the 5V power adapter to the RPi’s micro-USB port. This will power it up.

LEDs will start flashing, the world will start spinning, and a loud roar will emerge as the positron drive starts up. Oh, wait… wrong movie. Here’s what will really happen:

  • The red LED marked “PWR” will turn on.
  • The other LEDs will blink in various patterns – after some 30s they’ll look like this:


This indicates that everything has started up properly. Now it’s time to try and connect to it over Ethernet. How, depends on your platform. On Windows, you can use the putty program. On Mac and Linux, the built-in “ssh” command will be perfect.

Now is the time to shed your Fear Of The Command Line, if it’s all new to you. It’s very simple, really: the GUI we tend to use is like the body of a car, and the command-line its chassis. This DIJN series is about what goes on under the hood. That’s where the “real” stuff happens. There’s no other way – we need to look inside. We need to operate inside, in fact. Soooo… welcome to the machine, I’m sure you’ll like it. It’s fascinating in there :)

But first, we need to solve a small network puzzle: what IP address does the RPi have? There are two ways to find out: 1) have a monitor plugged into the RPi when it boots up, and the value should show up on the screen, or 2) consult the “DHCP client table” in your network router, which is usually the DHCP server for your local network. In my case, the IP address is, so that’s what I’ll use in the rest of this post.

The next step is to log into your RPi over the network, using ssh or Putty. Putty has its own GUI way of logging in, but in the Mac’s “Terminal” app or from the Linux command line, this is the way to do so (i.e. user name “pi”, IP address

ssh pi@

Here’s a transcript of what it will look like the first time around:

Screen Shot 2013-02-20 at 18.37.25

Bingo, you have logged into your RPi – welcome to the Linux command line!

You now need to go through some specific first-time RPi configuration adjustments, as hinted on that last line above. See also an earlier post for some more info (but don’t install Node.js just yet: that will be covered later on):

Screen Shot 2013-02-20 at 18.37.54

It’s a menu-based step: just type sudo raspi-config, and once finished, reboot the system (by typing the command sudo reboot). This last step is essential in this case.

Then wait a minute or two and log back in as before. You’ll again be connected to the Linux “shell”, patiently awaiting your command(s). If it’s all totally new to you, then trying out a few commands to familiarise yourself might be a good idea at this point. See this site for some tips and ideas. As long as you use the “sudo” command with caution, which gives you full “superuser” (admin) rights, there’s not much that can really go wrong. And if things ever get totally out of hand, you can always get back to this starting situation by re-formatting the SD card and starting over.

In the next step, we’ll make the RPi the central node of our Wireless Sensor Network.


(This series of posts is also available from the Dive Into JeeNodes page on the Café wiki.)

DIJN.04 – Receive data with a JeeLink

In Uncategorized on Feb 8, 2013 at 00:01

Welcome to the fourth instalment of Dive Into JeeNodes. Let’s go wireless!

The JeeLink is a version of the JeeNode with some extra features, but the main reason I like to use it as central node is that it comes as a little pluggable unit – quite convenient, here it is in action on my MacBook, for example:


As delivered, JeeLinks and JeeNodes come with the RF12demo sketch pre-installed. This can be used for a variety of tasks, including basic configuration and basic packet reception.

But first, we need to be able to connect to the JeeLink over serial USB. This is similar to the uploading performed in the previous DIJN post:

  1. Plug in the JeeLink – it will briefly flash on its red and green LEDs.
  2. Start the Arduino IDE, and go to the Tools -> Serial Port menu entry. Then, make sure to select the JeeLink, which will show up as a second serial port (assuming you still have the other JeeNode plugged in).
  3. In the IDE, click on the right-most button, the one with the magnifying glass, to bring up the “Serial Monitor”.

That last step will open up a new window, with something like this:

Screen Shot 2013-02-07 at 14.05.31

If you only get gibberish, you need to adjust the baud rate popup in the lower right to 57600. If you don’t get anything: double-check that you selected the correct serial port.

What we need to do next, is to match up the wireless settings to be able to pick up the packets sent from that other JeeNode. There are a few settings:

  • The frequency band must be set to 868 MHz. Note that this may differ from the unit you have, and the frequency used in your part of the world, but all JeeLinks and JeeNodes can operate in any of the 433 / 868 / 915 MHz bands. The only difference is that the power and range will be dramatically reduced in bands for which that wireless module is not optimised. In this case, we don’t care (yet), since the first tests will be across less than 1 meter, which can easily be bridged in any frequency band.
  • The “net group” must match between all transmitters and receivers, to allow them to pick up each other’s packets. In this case, we need to use group “100”.
  • The “node ID” is normally different for each node, so they can tell each other apart. For a central node, it’s usually best to pick the special ID “31”.

The commands to set these parameters can be combined into a single line:

Screen Shot 2013-02-07 at 14.15.51
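
(In case the screenshot is hard to read: these settings correspond to RF12demo’s “b”, “g”, and “i” commands, so the combined line comes down to something like 8b 100g 31i – but do check the help text your JeeLink prints at startup, since the exact command set can vary a bit between RF12demo versions.)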

Once the commands have been entered, you will see a confirmation shown as follows:

 _ i31 g100 @ 868 MHz 

And then, if the sending JeeNode is still powered up, these – once a second:

OK 1 0
OK 1 1
OK 1 2
OK 1 3
OK 1 4
OK 1 5

The “OK” text indicates reception of a valid packet (you may also get some “?” packets, which are usually caused by picking up noise or other 868 MHz transmitters), the first byte indicates that this is a regular packet broadcast coming from node #1, and the second byte is the counter, incremented for each new packet, as you can see in the test1 source code.

Congratulations. You have created your own Wireless Sensor Network. Well, no actual sensors yet, but we’ll get to that soon enough. First, we need to look more deeply into this thing called a “Ra(h)spberry P(h)i” (here’s a little exercise to get you started).


(This series of posts is also available from the Dive Into JeeNodes page on the Café wiki.)

HouseMon resources

In AVR, Hardware, Linux, Musings, Software on Feb 6, 2013 at 00:01

As promised, a long list of resources I’ve found useful while starting off with HouseMon:

JavaScript – The core of what I’m building now is centered entirely around “JS”, the language behind many sites on the web nowadays. There’s no way around it: you have to get to grips with JS first. I spent several hours watching most of the videos on Douglas Crockford’s site. The big drawback is the time it takes…

Best book on the subject, IMO, if you know the basics of JavaScript, is “JavaScript: The Good Parts” by the same author, ISBN 0596517742. Understanding what the essence of a language is, is the fastest way to mastery, and his book does exactly that.

CoffeeScript – It’s just a dialect of JS, really – and the way HouseMon uses it, “CS” automatically gets compiled (more like “expanded”, if you ask me) to JS on the server, by SocketStream.

The most obvious resource,, is also one of the best ways to understand it. Make sure you are comfortable with JS, even if not in practice, before reading that home page top to bottom. For an intriguing glimpse of how CS code can be documented, see this example from the CS compiler itself (pretty advanced stuff!).

But the impact of CS goes considerably deeper. To understand how Scheme-like functional programming plays a role in CS, there is an entertaining (but fairly advanced) book called CoffeeScript Ristretto by Reginald Braithwaite. I’ve read it front-to-back, and intend to re-read it in its entirety in the coming days. IMO, this is the book that cuts to the core of how functions and objects work together, and how CS lets you write on a high conceptual level. It’s a delightful read, but be prepared to scratch your head at times…

For a much simpler introduction, see The Little Book on CoffeeScript by Alex MacCaw, ISBN 1449321046. Also available on GitHub.

Node.js – I found the Node.js in Action book by Mike Cantelon, TJ Holowaychuk and Nathan Rajlich to be immensely useful, because of how it puts everything in context and introduces all the main concepts and libraries one tends to use in combination with “Node”. It doesn’t hurt that one of the most prolific Node programmers also happens to be one of the authors…

Another useful resource is the API documentation of Node itself.

SocketStream – This is what takes care of client-server communication, deployment, and it comes with many development conveniences and conventions. It’s also the least mature of the bunch, although I’ve not really encountered any problems with it. I expect “SS” to evolve a bit more than the rest, over time.

There’s a “what it is and what it does” type of demo tour, and there is a collection on what I’d call tech notes, describing a wide range of design docs. As with the code, these pages are bound to change and get extended further over time.

Redis – This is a little database package which handles a few tasks for HouseMon. I haven’t had to do much to get it going, so the README plus Command Summary were all I’ve needed, for now.

AngularJS – This is the most framework-like component used in HouseMon, by far. It does a lot, but the challenge is to understand how it wants you to do things, and although “NG” is not really an opinionated piece of software, there is simply no other way to get to grips with it, than to take the dive and learn, learn, learn… Let me just add that I really think it’s worth it – NG can be magic on the client side, and once you get the hang of it, it’s in fact an extremely expressive way to create a responsive app in the browser, IMO.

There’s an elaborate tutorial on the NG site. It covers a lot of ground, and left me a bit overwhelmed – probably because I was trying to learn too much as quickly as possible…

There’s also a video, which gives a very clear idea of NG, what it is, how it is used, etc. Only downside is that it’s over an hour long. Oh, and BTW, the NG FAQ is excellent.

For a broader background on this sort of JS frameworks, see Rich JavaScript Applications by Steven Sanderson. An eye opener, if you’ve not looked into RIA’s before.

Arduino – Does this need any introduction on this weblog? Let me just link to the Reference and the Tutorial here.

JeeNode – Again, not really much point in listing much here, given that this entire weblog is more or less dedicated to that topic. Here’s a big picture and the link to the hardware page, just for completeness.

RF12 – This is the driver used for HopeRF’s wireless radio modules, I’ll just mention the internals weblog posts, and the reference documentation page.

Vim – My editor of choice, lately. After many years of using TextMate (which I still use as code browser), I’ve decided to go back to MacVim, because of the way it can be off-loaded to your spine, so to speak.

There’s a lot of personal preference involved in this type of choice, and there are dozens of blog posts and debates on the web about the pro’s and con’s. This one by Steve Losh sort of matches the process I am going through, in case you’re interested.

Best way to get into vim? Install it, and type “vimtutor“. Best way to learn more? Type “:h<CR>” in vim. Seriously. And don’t try to learn it all at once – the goal is to gradually migrate vim knowledge into your muscle memory. Just learn the base concepts, and if you’re serious about it: learn a few new commands each week. See what sticks.

To get an idea of what’s possible, watch some videos – such as the vim entries on the DAS site by Gary Bernhardt (paid subscription). And while you’re at it: take the opportunity to see what Behaviour Driven Design is like, he has many fascinating videos on the subject.

For a book, I very much recommend Practical Vim by Drew Neil. He covers a wide range of topics, and suggests reading up on them in whatever order is most useful to you.

While learning, this cheatsheet and wallpaper may come in handy.

Raspberry Pi – The little “RPi” Linux board is getting a lot of attention lately. It makes a nice setup for HouseMon. Here are some links for the hardware and the software.

Linux – Getting around on the command line in Linux is also something I get asked about from time to time. This is important when running Linux as a server – the RPi, for example.

I found the resource which appears to do a good job of explaining all the basic and intermediate concepts. It’s also available as a book, called “The Linux Command Line” by William E. Shotts, Jr. (PDF).

There… the above list ought to get you a long way with all the technologies I’m currently messing around with. Please feel free to add pointers and tips in the comments, if you think of other resources which can be of use to fellow readers in this context.

DIJN.03 – Store code in a JeeNode

In Uncategorized on Feb 4, 2013 at 00:01

Welcome to the third instalment of Dive Into JeeNodes. Some real hardware now!

Both the remote sending side and the nearby receiving side of the wireless sensor link are handled by a JeeNode – which is essentially an Arduino-like microcontroller with some support circuitry, plus a low-power wireless radio module.

The JeeLink is a modified version of a JeeNode in a USB-stick form factor and enclosure.

Remote nodes

But let’s start with that remote JeeNode first. What we’re going to set up is a little self-contained unit with a light sensor, a battery pack to power the whole thing, and the proper software pre-loaded onto the ATmega328 microcontroller inside that JeeNode.

The whole point of this “node” is that it’s completely autonomous. You’ll set it up once, take it to the place where you want to perform the light level measurements, hook up the battery pack, and that’s it. Its only task is to periodically measure the current light intensity, and report it over wireless – so that we can pick up the signal and pass it to the Raspberry Pi in a central location in the house, collect the data, tie it into the web server, and make it available on the local Ethernet network for visualisation in the browser.

Let’s set up that node now – except that it won’t be the final measurement code. Let’s start with a simpler goal first: sending out a counter value once a second, to see if we can pick up that value. With a little blink “blip” to let us know that the node is doing its work. But no light sensor for now…

As it so happens, JeeLib now includes an example sketch which does exactly that – see test1.ino on GitHub for the code (make sure you have the latest version of JeeLib!).
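
In essence, that sketch boils down to something like this – a simplified version without the LED blip, shown here just to explain what’s going on (do upload the real test1.ino, not this snippet):

#include <JeeLib.h>

byte counter;

void setup () {
  // node 1 in net group 100 on the 868 MHz band - the settings used in this series
  rf12_initialize(1, RF12_868MHZ, 100);
}

void loop () {
  // broadcast the current counter value as a 1-byte packet, then wait a second
  rf12_sendNow(0, &counter, 1);
  ++counter;
  delay(1000);
}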


To get this code into the JeeNode, we need to communicate with it. This is done by connecting the JeeNode to our workstation / laptop – the one where we installed the Arduino IDE. That connection is created via a USB cable and “BUB”, as follows:


The next step is to tell the IDE which serial port to use to reach our JeeNode:

Go to the “Tools -> Serial Port” menu in the IDE, and select the serial port under which the USB BUB has registered itself. The actual name will depend on the platform (Windows, Mac OSX, or Linux) and in the case of Mac OSX, also on the unique name of that particular USB BUB.

Now, we need to load the proper sketch into the Arduino IDE:

Go to the “File -> Sketchbook -> libraries -> JeeLib -> DIJN -> test1” menu. This will load the test sketch.

To check whether the code is in good shape, click on the leftmost round button with the checkmark in it (or go to the “Sketch => Verify / Compile” menu – same thing).

(rf12_sendNow was recently added to JeeLib, make sure you’re using a recent version)

If all is well, you’ll get one line from the compiler, reporting something like:

Binary sketch size: 3,958 bytes (of a 32,256 byte maximum)

The value may differ a bit, but this indicates that the sketch is ok and can be uploaded.

The last step is where the real upload takes place, and since it includes the compile step, you could in fact have skipped that last instruction above.

Click on the second button from the left (the one with the arrow pointing right).

If all is well, you’ll see some LEDs blink on the USB BUB, and when all is done a promising message “Done uploading.” will appear in the IDE (in the middle green bar).

If you get an error, make sure you have selected the proper board type (menu “Tools -> Board -> Arduino Uno”). Still no luck? Check that the USB serial port connection, the cable, the BUB, and the JeeNode are all hooked up as shown above. When in doubt, disconnect and reconnect gently.

Getting through this first upload is an important milestone. There are a few trouble spots, depending on the platform you use. You may have to install the latest FTDI USB driver. You may have to check that the serial USB drivers are present. There is not much generic advice to give here – other than “google for the error message you get”, in the hope that others have run into the same issue and that you’ll find a page with a solution or tip.

At this point, you should see a brief LED flash once a second. This indicates that the JeeNode is running the test sketch and sending out its counter packet once a second.

Congratulations, you have set up a live test node for your wireless sensor network!


(This series of posts is also available from the Dive Into JeeNodes page on the Café wiki.)

DIJN.02 – The Arduino IDE

In Uncategorized on Feb 3, 2013 at 00:01

Welcome to the second instalment of Dive Into JeeNodes. Let’s get going!

DIJN differs from a conventional laptop / workstation setup, in that it will be interfacing to the real world via microcontrollers running our code. To get that code onto these devices, we need to cross compile the software on our (big) computer running Windows, Mac OSX, or Linux, and then upload it into the permanent memory of a microcontroller chip.

The Arduino has brought this approach to the masses, by providing an open-source IDE to develop, compile, upload, and run embedded software (called “sketches” in Arduino-speak). The success of the Arduino is partly due to the fact that these tools have become easy to install and use.

Before the Arduino era, cross compilers were complex, platform-specific, and too expensive for personal use in small projects. Now they are free, open source, extensively documented, and surrounded by a vibrant community.

Installing the IDE

As a first step, we need to perform that installation. This process is fully described on the Getting Started with Arduino page.

The recommended version is Arduino IDE 1.0.3, at the moment. Note that you don’t have to follow all the steps (1..7 on Windows and Mac OSX are enough), because we’re not going to connect any hardware or upload any software just yet.

Go ahead, perform the installation now. Should take no more than 15 minutes, normally.

If all went well, you should be able to launch the IDE and see something like this after you open the LED blink example sketch via File > Examples > 1.Basics > Blink:


If you click on the leftmost round button, the one with the checkmark, then the bottom part of the window should show a report along these lines:

Binary sketch size: ...

Congratulations, you’ve installed a fairly large package and it’s all running as expected.

What is an IDE?

But, wait… what is this thing? And why do we need it?

The “Integrated Development Environment” is where we create the software that ends up running on our microcontroller(s) – be it an Arduino, a JeeNode (which can be used just like an Arduino Uno because it has the same ATmega328 microcontroller), an RBBB, or a range of other similar products.

The Arduino IDE is an editor, a compiler (translating the program source text into some obscure binary code which one particular type of microcontroller understands), an uploader, and a serial terminal interface – all rolled into one app. If your background is PHP or Python, you may want to read a bit more about compiled languages.

The IDE also defines various conventions for how and where to store different pieces of software if we have more than one microcontroller, how to deal with standard libraries obtained from other software developers, and which type of microcontrollers we can handle through this environment.

All of this is essential during development, but once the compiled code has been stored in the microcontroller, it’s not really needed anymore. In day-to-day use, the IDE plays no role – until we decide to make changes, add new features, extend the code… or fix bugs. Then the whole process kicks in again – we load the sketch we were working on back into the IDE, make the changes, compile, upload, try things out, and again: once it works, the IDE can be closed and left alone.

Note that there is no way back: you can’t get code out of a microcontroller and turn it into the source text you wrote. So an essential part of the IDE is to help you manage all those different tasks and versions used for different microcontrollers in your (usually growing) collection of remote sensor nodes. It’s a good idea to think about how you name stuff and where you put it, because you’ll want to find it again whenever you fire up the IDE.

So we write (or use someone else’s) code, as a “sketch” for a particular task, and we upload it into one or more microcontrollers in the different remote nodes. We give these sketches names, and we use the IDE to keep a copy of each of them, to be able to improve / refine / fix / alter the logic of certain nodes at any time in the future.

Some people spend all their time in the IDE (because that’s where lots of code is created). We usually call ’em software developers, although I think IDEalists sounds a lot nicer!


Software can grow to become quite complex. Yet some parts are going to be used over and over again in different sketches. In the case of wireless JeeNodes, the best example is the “RF12” wireless driver, i.e. a piece of software which takes care of sending and receiving wireless data packets.

This sort of “standard” extra functionality is usually made available as software libraries. In the Arduino IDE, such libraries can contain the re-usable code as well as example sketches to try everything out.

The library we want is called “JeeLib”, and is documented here.

We need to add this essential ingredient to our IDE. It has been developed and extended by JeeLabs and other people over the years, and the latest version of this code is available from GitHub – a popular place for a huge range of open source software projects you’re likely to want to use. The GitHub website is enormous, but easy to search and navigate.

Installation requires some care, because we must download the entire set of files and put it in a specific location for the Arduino IDE to be able to find it. There is a good introduction on how to install libraries.

To download the latest version of JeeLib, click on this link, which you can also find as a “ZIP” button in the top left of this page on GitHub. Unpack and rename the resulting folder to just “JeeLib”, then follow the instructions on the how-to-install page mentioned above.

To check that things went right, you have to first work around a quirk of the IDE:

Quit the Arduino IDE and then restart it, so that it will pick up all library changes.

Lastly, see if you can locate the following entry in the Arduino IDE’s nested menus:

File -> Sketchbook -> libraries -> JeeLib -> DIJN -> test1

Found it? Terrific. You’ve completed the software installation for embedded Arduino and JeeNode use. As far as the software required for DIJN is concerned, your laptop / workstation is ready to rock.


(This series of posts is also available from the Dive Into JeeNodes page on the Café wiki.)

DIJN.01 – Introduction

In Uncategorized on Feb 2, 2013 at 00:01

Welcome to the first instalment of Dive Into JeeNodes. This one is a bit lengthy, alas…

The purpose of these articles is to introduce you to the world of Physical Computing and Wireless Sensor Networks in an easy to follow way. We will create a low-cost setup to let you track the light level of some spot anywhere in your house and present this information on any computer, tablet, or mobile phone with access to your home network.


In more visual terms, this is what we’re aiming for, and what we need to set up for it:

dijn01-essence.png   dijn01-diagram

Don’t worry too much about the detailed diagram on the right – it’s just to give you an idea of the pieces involved. Here’s a quick rundown of the hardware which will be used:

  • a remote wireless sensor node – which will be a JeeNode SMD
  • the sensor – an LDR, which changes resistance depending on current light levels
  • a USB BUB to load new code into the JeeNode
  • a central wireless node to collect the measurement data – this will be a JeeLink
  • a small Linux “bare board” computer – we’ll use a Raspberry Pi (with Raspbian)
  • your existing local wired (and optionally wireless) network
  • some cables, a USB power supply, batteries, and…
  • your time, your attention, and your enthusiasm, of course!

In case you’re wondering: the Wireless Starter Pack includes much of the above.

None of this is set in stone. It’s possible to replace the Raspberry Pi with another board, or even run that part on your existing workstation, laptop, or server. You could use two JeeNodes, or replace the JN SMD + USB BUB by a JeeNode USB, or even create your own variations – but to limit the scope of this DIJN series, the above will be used here.

Note very specifically that you will not need to solder anything for this setup, although it’s very likely that you’ll be itching to do so once the basic system is working – because that’s how you can get more sensors and remote control features in there.

The final result will be something you can leave on, consuming a fraction of a normal PC’s or even laptop’s power – fully unattended, running a dedicated built-in webserver where everything gets configured and where the up-to-date light level “readings” will be available.

It might not sound like much… but don’t jump to conclusions just yet!


This is a truly minimal setup. It ought to be possible to assemble this in say a weekend, even if you’ve never done any hardware or software development before. The total cost should remain well under €150, more or less evenly split between the two JeeNodes + sensor and the Raspberry Pi + power and cabling. It’s still a significant amount of money… for which you could also buy a game console, or go watch all the latest blockbuster movies in 3D.

So what’s the point?

Well, it might not look like much, but this little setup opens up a whole new world and offers access to a surprisingly broad range of state-of-the-art technologies:

  • our world is being filled with sensors at an astonishing pace – just think of all the new “senses” mobile phones have acquired in the last couple of years
  • wireless information exchange is becoming so ubiquitous, it’s not even funny anymore: we live in an always-connected age, and that trend is here to stay
  • software, in the form of built-in intelligence, is everywhere – from the smallest ultra-low power microcontrollers, to tiny functionally-complete computers running Linux
  • hardware is shrinking and spreading everywhere, and more and more based on self-contained extremely sophisticated low-cost electronic chips
  • web technology is advancing faster than ever, covering everything from “big” desktops, to laptops, tablets, and mobile phones

With the DIJN setup presented here, everything just mentioned becomes something you can investigate, explore, tinker with, alter, extend, improve upon, or simply… learn from!

Whether you want to do this out of geek curiosity, for general self-education, to refresh knowledge from the past, to increase your job opportunities, to enhance specific skill sets in… take a deep breath: embedded or web software, microcontroller or Linux hardware, basic electronics, advanced chip capabilities, miniaturisation, ultra-low power design, system integration, wireless networking, communication protocols, C / C++ programming, JavaScript, shell scripting, Linux command-line tools, … everything is in there, in that “little” setup, ready to go where your interests (and your patience) take you.

Not only is everything open source, and hence ready to be explored, it’s also virtually risk free: even if you were to damage something (which is surprisingly hard, with a few simple precautions), it would be in the context of a very limited and low-cost setup.

Just add an extra wireless node, and go solder things together for the first time in your life. Or look into that Linux stuff which drives the Raspberry Pi. Perhaps you’re curious about WebSockets and real-time software. It’s all there. It doesn’t bite. It’s probably easier to understand, because small systems have to be simple by design to maintain their low cost.

Yet at the same time, it’s all really state-of-the-art in many ways. The battery life achievable with JeeNodes at their ultra-low power levels is measured in years. The performance of the Raspberry Pi is such that it can actually drive a display with full-screen HD movies. And the Node.js-based web server technology we’re going to use is at the forefront of what the web has to offer today. This isn’t some mix of technologies cobbled together “just because it works”. Under the hood is what drives our technological world today, and a glimpse of what will be evolving into the technology of tomorrow.


The DIJN series of posts is aimed at being totally, completely, fully, truly practical. Every post (ehm, except this one) is about making things work. Concrete steps, describing everything needed to create that final setup. This is going to be as “hands on” as it gets…

Then again, not everything is going to be spelled out in baby steps; where possible, pointers will be supplied to instructions elsewhere, such as how to set up the Arduino IDE, or how to prepare an SD card for use as a bootable system in the Raspberry Pi.

The goal is really to reach that finish line, for everyone who’s interested, regardless of specific knowledge. I.e. a working system, spanning a surprisingly wide range of topics and technologies, but by necessity a very simple system. The idea is that once you have a working setup, you also have the foundation for diving in deep, to explore whatever aspect interests you most, and to alter and extend as much as you like.

This series will not explain how everything works. Nor go into more advanced topics such as implementing ultra-low power modes in the sensor node. Or extending the web server with huge amounts of logic and web page presentation. That’s step 2 (and 3, and 4).

Winding down

This post ended up being much longer than planned. Let’s hope the next ones fare better. There will definitely not be a DIJN post every day – it’ll be spread out over this month, to allow adding some lighter material and other topics from/about JeeLabs. There are only so many hours in a day – and that applies to both reading and writing all this stuff :)

I’m quietly hoping that a few people will try and follow along right away though, and hopefully also comment on where information is incomplete or incorrect. But even if you don’t have the time or opportunity to tag along as this unfolds, please note that this series of posts will be available from the Dive Into JeeNodes page on the Café wiki.

There’s a lot to cover. And I hope there is something in here for everyone. Last but not least: please do comment and make suggestions. That’s how weblogs like this work best.


Dive Into JeeNodes

In AVR, Hardware, Linux, Software on Feb 1, 2013 at 00:01

Welcome to a new series of limited-edition posts from JeeLabs! Read ’em while they last!

Heh… just kidding. They’ll last forever of course, as does everything on this thing called internet. But what I’m going to describe in probably a dozen posts or so is the following:


Hm, that doesn’t quite explain it, I guess. Let me try again:

JC's Grid, page 63

So this is to announce a new “DIJN” series of weblog posts, describing how to set up your own Wireless Sensor Network with JeeNodes, as well as the infrastructure to report a measured light-level somewhere in your house, in real time. The end result will be fully automated and autonomous – you could take your mobile phone, point it to your web server via WiFi, and see the light level as it is that very moment, adjusting as it changes.

This is a far cry from a full-fledged home monitoring or home automation system, clearly – but on the other hand, it’ll have all the key pieces in place to explore whatever direction interests you: ready-made sensors, DIY sensors, your own firmware on the remote nodes, your own web pages and automation logic on the central server… it’s up to you!

Everything is open source, which in this context matters a lot, because that also means that you can really dive into any aspect of this to learn and explore the truly magical world of Physical Computing, Wireless Sensor Networks, environmental sensing and control, as well as state-of-the art web technologies.

The focus will be on describing every step needed to implement this from scratch. I’ll cover setting up all the necessary software and hardware, in such a way that if you know next to nothing about any of the domains involved, you can still follow along and try it out – whether your background is in software, electronics, wireless, or none of these.

If technology interests you, and if I can bring across even a small fraction of the fun there is in tinkering with this stuff and making new things up as you g(r)o(w) along, then that would be a very nice reward for everyone involved, as far as I’m concerned.

PS. “Dijn” is also old-Dutch for “your” (thy, to be precise). Quite a fitting name in my opinion, as this sort of knowledge is really yours for the taking – if you want it…

PPS. For reference: here is the first post in the series, and here is the overview.

Solar… again – the code

In Software on Jan 22, 2013 at 00:01

With the hardware ready to go, it’s time to take that last step – the software!

Here is a little sketch called slowLogger.ino, now in JeeLib on GitHub:

Screen Shot 2013-01-19 at 16.20.50

It’s pretty general-purpose, really – measure 4 analog signals once a minute, and report them as wireless packet. There are a couple of points to make about this sketch:

  • The DIO1 pin will toggle approximately once every 64 minutes, i.e. one hour low / one hour high. This is used in the solar test setup to charge the supercaps to about 2.7V half of the time. The charge and discharge curves can provide useful info.

  • The analog channel is selected by doing a fake reading, and then waiting 100 ms. This sets the ADC channel to the proper input pin, and then lets the charge on the ADC sample-and-hold capacitor settle a bit. It is needed when the output impedance of the signal is high, and helps set up a more accurate voltage in the ADC.

  • The analog readings are done by summing up each value 32 times. When dividing this value, you get an average, which will be more accurate than a single reading if there is noise in the signal. By not dividing the value, we get the maximum possible amount of information about the exact voltage level. Here, it helps get more stable readings.
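To make the above a bit more concrete, here is a hedged re-creation along those lines – pin choices, node settings, and timing are illustrative, and the real slowLogger.ino in JeeLib remains the reference:

    #include <JeeLib.h>

    #define DIO1 4                     // DIO pin of port 1 on a JeeNode

    static word payload[4];            // four summed ADC readings, sent as one packet
    static word minutes;               // minute counter, bit 6 drives the DIO1 toggle

    static word readChannel (byte pin) {
        analogRead(pin);               // fake reading, just to switch the ADC multiplexer
        delay(100);                    // let the sample-and-hold capacitor settle
        word sum = 0;
        for (byte i = 0; i < 32; ++i)
            sum += analogRead(pin);    // sum of 32 readings, deliberately not divided
        return sum;
    }

    void setup () {
        pinMode(DIO1, OUTPUT);
        rf12_initialize(17, RF12_868MHZ, 100);   // illustrative node ID and net group
    }

    void loop () {
        digitalWrite(DIO1, bitRead(minutes, 6)); // toggles every 64 minutes
        for (byte i = 0; i < 4; ++i)
            payload[i] = readChannel(A0 + i);    // the four AIO pins, A0..A3
        rf12_sendNow(0, payload, sizeof payload);
        rf12_sendWait(0);
        ++minutes;
        delay(60000L);                           // roughly once a minute
    }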

As it turns out, the current values reported are almost exactly 10x the voltage in millivolts (the scale ends up as 32767 / 3.3), which is quite convenient for debugging.

Having said all this, you shouldn’t expect miracles. It’s just a way to get minute-by-minute readings, with hopefully not too many fluctuations due to noise or signal spikes.

Tomorrow, some early results!

Remote node discovery – part 2

In Software on Jan 15, 2013 at 00:01

Yesterday’s post was about the desire to automate node discovery a bit more, i.e. having nodes announce and describe themselves on the wireless network, so that central receivers know something about them without having to depend on manual setup.

The key is to make a distinction between in-band (IB) and out-of-band (OOB) data: we need a way to send information about ourselves, and we want to send it by the same means, i.e. wirelessly, but recognisable in some way as not being normal packet data.

One way would be to use a flag bit or byte in each packet, announcing whether the rest of the packet is normal or special descriptive data. But that invalidates all the nodes currently out there, and worse: you lose the 100% control over packet content that we have today.

Another approach would be to send the data on a different net group, but that requires a second receiver permanently listening to that alternate net group. Not very convenient.

Luckily, there are two more alternatives (there always are!): one extends the use of the header bits already present in each packet, the other plays tricks with the node ID header:

  1. There are three header combinations in use, as described yesterday: normal data, requesting an ACK, and sending an ACK. That leaves one bit pattern unused in the header. We could define this as a special “management packet”.

  2. The out-of-band data we want to send to describe the node and the packet format is probably always sent out as broadcast. After all, most nodes just want to report their sensor readings (and a few may listen for incoming commands in the returned ACK). Since node ID zero is special, there is one combination which is never used: sending out a broadcast and filling in a zero node ID as origin field. Again, this combination could be used to mark a “management packet”.
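In code, recognising such a packet on the receiving side comes down to a simple test on fields the RF12 driver already exposes – a hedged sketch, with made-up node settings:

    #include <JeeLib.h>

    void setup () {
        rf12_initialize(1, RF12_868MHZ, 100);    // central node, settings made up
    }

    void loop () {
        if (rf12_recvDone() && rf12_crc == 0) {
            byte origin = rf12_hdr & RF12_HDR_MASK;            // node ID in the header
            bool isBroadcast = (rf12_hdr & RF12_HDR_DST) == 0;
            if (isBroadcast && origin == 0) {
                // a combination no existing sketch produces: treat it as a
                // management packet, with its contents in rf12_data[0..rf12_len-1]
            } else {
                // normal data packet, handled exactly as before
            }
        }
    }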

I’m leaning toward the second because it’s probably compatible with all existing code.

So what’s a “management packet”, eh?

Well, this needs to be defined, but the main point here is that the format of management packets can be fully specified and standardised across all nodes. We’re not telling them to send/receive data that way, we’re only offering them a new capability to broadcast their node/packet properties through these new special packets.

So here’s the idea, which future sketches can then choose to implement:

  • every node runs a specific sketch, and we set up a simple registry of sketch type ID’s
  • each sketch type ID would define at least the author and name (e.g. “roomNode”)
  • the purpose of the registry is to issue unique ID’s to anyone who wants a bunch of ’em
  • once a minute, for the first five minutes after power-up, the sketch sends out a management packet, containing a version number, a sequence number, the sketch type ID, and optionally, a description of its packet payload format (if it is simple and consistent enough)
  • after that, this management packet gets sent out again about once an hour
  • management packets are sent out in addition to regular packets, not instead of

This means that any receiver listening to the proper net group will be able to pick up these management packets and know a bit more about the sketches running on the different node ID’s. And within an hour, the central node(s) will have learned what each node is.

Nodes which never send out anything (or only send out ACKs to remote nodes, such as a central JeeLink), probably don’t need this mechanism, although there too it could be used.

So the only remaining issue is how to describe packets. Note that this is entirely optional. We could just as easily put that description in the sketch type ID registry, and even add actual decoders there to automate not only node type discovery, but even packet decoding.

Defining a concise notation to describe packet formats can be very simple or very complex, depending on the amount of complexity and variation used in these data packets. Here’s a very simple notation, which I used in JeeMon – in this case for roomNode packets:

  light 8 motion 1 rhum 7 temp -10 lobat 1

These are pairs of name + number-of-bits, with a negative value indicating that sign extension is to be applied (so the temp field ranges from -512..+511, i.e. -51.2..+51.1°C).
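To show what such a description means in practice, here is a hedged sketch of a generic decoder for these (bits, sign) pairs, assuming the fields are packed LSB-first the way avr-gcc bit-fields are – the widths used below are the room-node ones from the example above:

    #include <stdint.h>

    // pull the next "width" bits out of a packed payload, LSB-first; a negative
    // width means the field is signed and must be sign-extended
    static long extractField (const uint8_t* buf, int& bitPos, int width) {
        bool isSigned = width < 0;
        int bits = isSigned ? -width : width;
        unsigned long value = 0;
        for (int i = 0; i < bits; ++i, ++bitPos)
            if (buf[bitPos >> 3] & (1 << (bitPos & 7)))
                value |= 1UL << i;
        if (isSigned && (value & (1UL << (bits - 1))))
            value |= ~0UL << bits;               // sign-extend the top bits
        return (long) value;
    }

    // usage for a 4-byte room node payload: light 8 motion 1 rhum 7 temp -10 lobat 1
    static void decodeRoomPacket (const uint8_t* payload) {
        int pos = 0;
        long light  = extractField(payload, pos, 8);
        long motion = extractField(payload, pos, 1);
        long rhum   = extractField(payload, pos, 7);
        long temp   = extractField(payload, pos, -10);   // -512..511, tenths of a degree
        long lobat  = extractField(payload, pos, 1);
        (void) light; (void) motion; (void) rhum; (void) temp; (void) lobat;
    }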

Note that we need to fit a complete packet description in a few dozen bytes, to be able to send it out as management data, so probably the field names will have to be omitted.

This leads to the following first draft for a little-endian management packet format:

  • 3 bits – management packet version, currently “1”
  • 5 bits – sender node ID (since it isn’t in the header)
  • 8 bits – management packet number, incremented with each transmission
  • 16 bits – unique sketch type ID, assigned by JeeLabs (see below)
  • optional data items, in repeating groups of the form:
    • 4 bits – type of item
    • 4 bits – N-1 = length of following item data (1..16 bytes)
    • N bytes – item data

Item types and formats will need to be defined. Some ideas for item types to support:

  • type 0 – sketch version number (a single byte should normally be enough)
  • type 1 – globally unique node ID (4..16 bytes, fetched from EEPROM, could be UUID)
  • type 2 – the above basic bit-field description, i.e. for a room node: 8, 1, 7, -10, 1
  • type 3 – tiny URL of an on-line sketch / packet definition (or GitHub user/project?)

Space is limited – the total size of all item descriptions can only be up to 62 bytes.

Sketch type ID’s could be assigned as follows:

  • 0..99 – free for local use, unregistered
  • 100..65535 – assigned through a central registry site provided by JeeLabs

That way, anyone can start implementing and using this stuff without waiting for such a central registry to be put in place and become operational.

Tomorrow, a first draft implementation…

Remote node discovery

In Software on Jan 14, 2013 at 00:01

The current use of the wireless RF12 driver is very basic. That’s by design: simple works.

All you get is the ability to send out 0..66 bytes of data, either as broadcast or directed to a specific node. The latter is just a little trick, since the nature of wireless communication is such that everything goes everywhere anyway – so point to point is simply a matter of receivers filtering out and ignoring packets not intended for them.

The other thing you get with the RF12 driver, is the ability to request an acknowledgement of good reception, which the receiver deals with by sending an “ACK” packet back. Note that ACKs can contain data – so this same mechanism can also be used to request data from another node, if they both agree to use the protocol in this way, that is.

So there are three types of packets: data sent out as is, data sent out with the request to send an ACK back, and packets with this ACK. Each of them with a 0..66 byte payload.
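In code, the three packet types look roughly as follows – a hedged sketch using the usual JeeLib calls, with made-up node settings and payload names:

    #include <JeeLib.h>

    byte payload[4];     // whatever a sending node wants to report (made-up name)
    byte reply[2];       // optional data to return inside an ACK (made-up name)

    void setup () {
        rf12_initialize(22, RF12_868MHZ, 100);   // illustrative node ID and net group
    }

    // on a sending node: the first two packet types
    static void sendReadings () {
        rf12_sendNow(0, payload, sizeof payload);             // 1) plain data, no ACK
        rf12_sendNow(RF12_HDR_ACK, payload, sizeof payload);  // 2) data plus ACK request
    }

    // on a receiving node: the third packet type, i.e. the ACK itself
    static void pollRadio () {
        if (rf12_recvDone() && rf12_crc == 0 && RF12_WANTS_ACK)
            rf12_sendNow(RF12_ACK_REPLY, reply, sizeof reply);  // 3) ACK, optionally with data
    }

    void loop () {
        sendReadings();   // a real node would of course only play one of the two roles
        pollRadio();
        delay(3000);
    }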

But even though it all works well, there’s an important aspect of wireless sensor networks which has never been addressed: the ability for nodes to tell other nodes what/who they are. As a consequence, I always have to manually update a little table on my receiving server with a mapping from node ID to what sketch that node is currently running.

Trivial stuff, but still a bit inconvenient in the longer run. Why can’t each node let me know what it is, so that I don’t have to worry about mixing things up, or about keeping that table in sync with reality every time something changes?

Well… there’s more to it than a mapping of node ID’s – I also want to track the location of each node, and assign it a meaningful name such as “room node in the living room”. This is not something each node can know, so the need to maintain a table on the server is not going to disappear just by having nodes send out their sketch + version info.

Another important piece of information is the packet format. Each sketch uses its own format, and some of them can be non-trivial, for example with the OOK relay sending out all sorts of packets, received from a range of different brands of OOK sensors.

It would be sheer madness to define an XML DTD or schema and send validated XML with namespace info and the whole shebang in each packet, even though that is more or less the level of detail we’re after.

Shall we use JSON then? Protocol buffers? Bencode? ASN.1? – None of all that, please!

When power is so scarce that microcoulombs matter, you want to keep packet sizes as absolutely minimal as possible. A switch from a 1-byte to a 2-byte payload increases the average transmit power consumption of the node by about 10%. That’s because each packet has a fixed header + trailer byte overhead, and because the energy consumption of the transmitter is proportional to the time it is kept on.

The best way to keep packet sizes minimal, is to let each node pick what works best!

That’s why I play lots and lots of tricks each time when coming up with a remote sketch. The room node gets all its info across in just 4 bytes of data. Short packets keep power consumption low and also reduce the chance of collisions. Short is good.

Tomorrow, I’ll describe an idea for node discovery and packet format description.

Flukso with RFM12B

In Hardware, Linux on Jan 12, 2013 at 00:01

Some exciting new developments going on…


You’re looking at the final prototype of the latest Flukso meter, which can be connected to AC current sensors, pulse counters, and the Dutch smart metering “P1” port. Here’s the brief description from that website:

Flukso is a web-based community metering application. Install a Fluksometer near your fuse box and you will be able to monitor, share and reduce your electricity consumption through this website.

The interesting bit is that it’s all based on a Linux board with wired and wireless Ethernet, plus a small ATmega-based add-on board which does all the real-time processing.

But the most exciting news is that the new version, now entering production, will include an RFM12B module with the JeeNode-compatible protocol. A perfect home automation workstation. Yet another interesting aspect of this is that Bart Van Der Meersche, the mastermind behind Flukso, is working on getting the Mosquitto MQTT broker running permanently on that same Flukso meter.

Here’s the basic layout (probably slightly different from the actual production units):

Screen Shot 2013-01-11 at 21.10.20

Flukso runs OpenWRT, and everything in it is based on the Lua programming language, which is really an excellent fit for such environments. But even if Lua is not something you want to dive into, the open-endedness of PubSub means this little box drawing just a few Watt can interface to a huge range of devices – from RF12 to WiFi to LAN, and everything flowing in and out of that little box becomes easily accessible via MQTT.

PS. I have no affiliation with Flukso whatsoever – I just like it, and Bart is a nice fellow :)

Encoding P1 data

In Software on Jan 2, 2013 at 00:01

After yesterday’s hardware hookup, the next step is to set up the proper software for this.

There are a number of design and implementation decisions involved:

  • how to format the data as a wireless packet
  • how to generate and send out that packet on a JeeNode
  • how to pick up and decode the data in Node.js

Let’s start with the packet: there will be one every 10 seconds, and it’s best to keep the packets as small as possible. I do not want to send out differences, like I did with the otRelay setup, since there’s really not that much to send. But I also don’t want to put too many decisions into the JeeNode sketch, in case things change at some point in the future.

The packet format I came up with is one which I’d like to re-use for some of the future nodes here at JeeLabs, since it’s a fairly convenient and general-purpose format:

       (format) (longvalue-1) (longvalue-2) ...

Yes, that’s right: longs!

The reason for this is that the electricity meter counters are in Watt-hour, and will immediately exceed what can be stored as 16-bit ints. And I really do want to send out the full values, also for gas consumption, which is in 1000th of a m3, i.e. in liters.

But for these P1 data packets that would be a format code + 8 longs, i.e. at least 33 bytes of data, which seems a bit wasteful. Especially since not all values need longs.

The solution is to encode each value as a variable-length integer, using only as many bytes as necessary to represent each value. The way this is done is to store 7 bits of the value in each byte, reserving the top-most bit as a flag which is only set on the last byte.

With this encoding, 0 is sent as 0x80, 127 is sent as 0xFF, whereas 128 is sent as two bytes 0x01 + 0x80, 129 is sent as 0x01 + 0x81, 1024 as 0x08 + 0x80, etc.

Ints of up to 7 bits take 1 byte, up to 14 bits take 2, … up to 28 bits take 4, and a full 32-bit value takes 5 bytes.

It’s relatively simple to implement this on an ATmega, using a bit of recursion:

Screen Shot 2012-12-31 at 13.47.01
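For illustration, here is a hedged re-creation of such a recursive encoder – not necessarily the exact code in the sketch, but it produces the byte sequences described above (the packet-building helper and its parameter names are made up):

    #include <stdint.h>

    static uint8_t packetBuf[66];     // max RF12 payload size
    static uint8_t packetFill;

    // append one value as a variable-length integer: 7 bits per byte, most
    // significant group first, and the top bit set only on the final byte
    static void addVarInt (uint32_t value, uint8_t mark = 0x80) {
        if (value >> 7)
            addVarInt(value >> 7, 0);           // higher-order bits first, unmarked
        packetBuf[packetFill++] = (value & 0x7F) | mark;
    }

    // usage: a format byte followed by a few (hypothetical) meter readings
    static void buildPacket (uint8_t format, uint32_t use1, uint32_t use2, uint32_t gas) {
        packetFill = 0;
        packetBuf[packetFill++] = format;
        addVarInt(use1);
        addVarInt(use2);
        addVarInt(gas);
        // packetBuf[0..packetFill-1] can now be handed to rf12_sendNow()
    }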

The full code of this p1scanner.ino sketch is now in JeeLib on GitHub.

Tomorrow, I’ll finish off with the code used on the receiving end and some results.

Assembling the LED Node v2

In AVR, Hardware on Dec 28, 2012 at 00:01

After yesterday’s little mistake, here’s a walk-through of assembling the LED Node v2:


Note that the LED Node comes with pre-soldered SMD MOSFETs so you don’t have to fiddle with ’em.

The LED Node is really just a JeeNode with a different layout and 3 high-power MOSFET drivers, to control up to 72W of RGB LED strips through the ATmega’s hardware PWM. Since there’s an RFM12B wireless module on board, as well as two free JeePorts, you can do all sorts of funky things with it.

As usual, the build progresses from the flattest to the highest components, so that you can easily flip the PCB over and press it down while soldering each wire and pin.

Let’s get started! So we begin with 7 resistors and 1 diode (careful, the diode is polarised):


Be sure to get the values right: 3x 1 kΩ, 3x 1 MΩ, and 1x 10 kΩ (next to the ATmega).

(note: I used three 100 kΩ resistors instead of 1 MΩ, as that’s what I had lying around)

Next, add the 4x 0.1 µF capacitors and the IC socket – lots of soldering to do on that one:


Then the MCP1702 regulator and the electrolytic capacitor (both are polarised, so here too, make sure you put them in the right way around), as well as the male 6-pin FTDI header:


Soldering the RFM12B wireless radio module takes a bit of care. It’s easiest if you start off by adding a small solder dot and hold the radio while making the solder melt again:


Then solder the remaining pins (I tend to get lazy and skip those which aren’t used, hence not all of them have solder). I also added the 3-pin orange 16 MHz ceramic resonator, the antenna wire, the two port headers, and the big screw terminal for connecting power:


Celebration time – we’ve completed the assembly of the LED Node v2!

Here’s a side view, with the ATmega328 added – as you can see it’s much flatter than v1:


And here’s a top view of the completed LED Node v2, in all its glory:


You can now connect the FTDI header via a USB BUB, and you should see the greeting of the RF12demo sketch, which has been pre-loaded onto the ATmega328.

To get some really fancy effects, check out the Color-shifting LED Node post from a while back on this weblog. You can adjust it as needed and then upload it through FTDI.

Next step is to attach your RGB strip (it should match the 4-pin connector on the far left). Be sure to use fairly sturdy wires as there are up to 2 amps going through each color pin and a maximum of 6 amps total through the “+” connector pin!

Lastly, connect a 12V DC power supply (making absolutely sure to get the polarity right!) and you will have a remote-controllable LED strip. Enjoy!

Meet the LED Node v2

In Hardware on Dec 14, 2012 at 00:01

The LED Node has been around for a while, but I wasn’t 100% happy with it. In principle, the LED Node v1 is a JeeNode plus 1.5 MOSFET Plugs plus an optional Room Board.

There is a small but significant difference with regular JeeNodes (apart from their very different shape), in that all three MOSFETs are tied to pins with hardware PWM support. This is important to get flicker-free dimming, i.e. if you want to have clean and calm color effects. Software PWM doesn’t give you that (unless you turn all other interrupt sources off), and even with hardware PWM it requires a small tweak of the standard Arduino library code to work well.

The neat thing about the LED Node is the wireless capability, so you can control the unit in all sorts of funky ways.

But I didn’t like the very sharp pulses this board generates, which can cause problems with color shifts over long strips and also can produce a lot of RF interference, due to the LED driving current ringing. The other thing which didn’t turn out to be as useful as I thought was the room board part.

So here’s the new LED Node v2:


The big copper areas on the left are extra-wide traces and cooling pads, dimensioned to support at least 2 Amps for each of the RGB colors, for a total of 6 A, i.e. 72 W LED strips @ 12 V. But despite the higher specs, this board will actually be lower profile, because it uses a different type of MOSFETs. They are surface mounted and come pre-soldered so you don’t have to fiddle with them (soldering such small components on relatively large copper surfaces requires a good soldering iron and some expertise).

This new revision has the extra resistors to reduce ringing, and replaces the room board interface with two standard 6-pin port headers: one at the very end, and one on the side. These are ports 1 and 4, respectively, matching a standard JeeNode and any plugs you like. If you want, you could still hook up a Room Board, but this is now no longer the only way to use the LED Node.

Wanna add an accelerometer or compass to make your LED strips orientation aware? Well… now you can! And then place them inside your bike wheels? Could be fun :)

Details to be posted on the Café wiki soon, as well as in the Shop.

Extracting data from P1 packets

In Software on Dec 1, 2012 at 00:01

Ok, now that I have serial data from the P1 port with electricity and gas consumption readings, I would like to do something with it – like sending it out over wireless. The plan is to extend the homePower code in the node which is already collecting pulse data. But let’s not move too fast here – I don’t want to disrupt a running setup before it’s necessary.

So the first task ahead is to scan / parse those incoming packets shown yesterday.

There are several sketches and examples floating around the web on how to do this, but I thought it might be interesting to add a “minimalistic sauce” to the mix. The point is that an ATmega (let alone an ATtiny) is very ill-suited to string parsing, due to its severely limited memory. These packets consist of several hundreds of bytes of text, and if you want to do anything else alongside this parsing, then it’s frighteningly easy to run out of RAM.

So let’s tackle this from a somewhat different angle: what is the minimal processing we could apply to the incoming characters to extract the interesting values from them? Do we really have to collect each line and then apply string processing to it, followed by some text-to-number conversion?

This is the sketch I came up with (“Look, ma! No string processing!”):

Screen Shot 2012 11 29 at 20 44 16

This is a complete sketch, with yesterday’s test data built right into it. You’re looking at a scanner implemented as a hand-made Finite State Machine. The first quirk is that the “state” is spread out over three global variables. The second twist is that the above logic ignores everything it doesn’t care about.

Here’s what comes out, see if you can unravel the logic (see yesterday’s post for the data):

Screen Shot 2012 11 29 at 20 44 49

Yep – that’s just about what I need. This scanner requires no intermediate buffer (just 7 bytes of variable storage) and also very little code. The numeric type codes correspond to different parameters, each with a certain numeric value (I don’t care at this point what they mean). Some values have 8 digits precision, so I’m using a 32-bit int for conversion.

This will easily fit, even in an ATtiny. The moral of this story is: when processing data – even textual data – you don’t always have to think in terms of strings and parsing. Although regular expressions are probably the easiest way to parse such data, most 8-bit microcontrollers simply don’t have the memory for such “elaborate” tools. So there’s room for getting a bit more creative. There’s always a time to ask: can it be done simpler?
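For what it’s worth, here is a hedged reconstruction of that idea – not the code in the screenshot, but the same approach, with the state spread over three small globals (1 + 2 + 4 = 7 bytes) and no line buffer anywhere:

    #include <Arduino.h>

    static uint8_t where;     // 0 = skipping, 1 = collecting type, 2 = collecting value
    static uint16_t type;     // e.g. the "1.8.1" in "1-0:1.8.1(...)" becomes 181
    static uint32_t value;    // digits between the parentheses, decimal point ignored

    // feed this one character at a time, e.g. straight from the P1 serial port
    static void p1char (char c) {
        switch (where) {
            case 0:                                 // wait for the ':' before a type code
                if (c == ':') { type = 0; where = 1; }
                break;
            case 1:                                 // build the numeric type code
                if (c >= '0' && c <= '9')
                    type = 10 * type + (c - '0');
                else if (c == '(') { value = 0; where = 2; }
                else if (c != '.') where = 0;       // unexpected character: start over
                break;
            case 2:                                 // build the value, skipping the '.'
                if (c >= '0' && c <= '9')
                    value = 10 * value + (c - '0');
                else if (c != '.') {                // ')' or '*' ends the number
                    Serial.print(type); Serial.print('\t'); Serial.println(value);
                    where = 0;
                }
                break;
        }
    }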

PS. I had a lot of fun coming up with this approach. Minimalism is an addictive game.

Smart metering

In Hardware on Nov 29, 2012 at 00:01

JeeLabs has entered the 21st century…

The electricity company just installed a new “smart meter” – because they want to track consumed and produced electricity separately, something the total count on the old Ferraris-wheel meter cannot provide:

DSC 4279

See that antenna symbol on there? Its green LED is blinking all the time.

At the bottom on the right-hand side is an RJ11 jack with a “P1” connection. This is a user-accessible port which allows you to get readings out once every 10 seconds. It’s opto-coupled with inverted TTL logic, generating a 9600 baud serial stream from what I’ve read. Clearly something to hook up one of these days.

The gas meter hanging just beneath it was also replaced:

DSC 4280

Why? Because it sends its values out periodically over wireless to the smart meter, which then in turn sends it out via GPRS to the utilities company.

Apparently these gas counter values are only reported once an hour. Makes sense, in a way: gas consumption is more or less driven by heating demands, and aggregated over many households these probably vary fairly slowly – depending on outside temperature, wind, humidity, and how much the sun is shining. Not nearly as hard to manage as the electricity net, you just have to keep the gas pressure within a reasonable range.

Electricity is another matter. And now it’s all being monitored and reported. Not sure how often, though – every 2 months, 15 minutes, 10 seconds? How closely will big brother be watching me? First internet & phone tracking, and now this – I don’t like it one bit…

Welcome to the 21st century. Everything you do is being recorded. For all future generations to come.

Sensing with an Optocoupler

In Hardware on Nov 27, 2012 at 00:01

The OpenTherm setup keeps me thinking…

I haven’t given up on the OpenTherm Gateway yet, but I’ve also been toying with related ideas for some time to try and just listen in on that current/voltage conversation using a self-powered JeeNode, which then reports what it sees as wireless packets.

It’s all based on Optocouplers, so here’s a first circuit to try things out:

JC s Grid page 47

A very simple test setup, which I’m going to feed a ±10V sine wave @ 50 Hz, just because the component tester on my oscilloscope happens to generate exactly such a signal. The 1 kΩ resistor is internal to the component tester, in fact. Here’s what comes out:


The yellow trace is the voltage over the IR LED inside the optocoupler, the blue trace is the voltage on the OUT pin. VCC is a 3x AA Eneloop battery pack @ 3.75V – what you can see is that the LED starts to conduct at ≈ 0.8V, and generates just enough light at 0.975V for the photo transistor to start conducting as well, pulling down the output voltage. With 1.01V over the LED, it already generates enough light for the output to drop to almost 0V.

In other words: within a range of just 41 mV at about 1V, the optocoupler “switches on”.

So much for the first part of this experiment. My hope is that this behavior will be just right to turn this MCT62 optocoupler into a little OpenTherm current “snooper” – stay tuned…

OpenTherm relay

In Software on Nov 20, 2012 at 00:01

Now that the OpenTherm Gateway has been verified to work, it’s time to think about a more permanent setup. My plan is to send things over wireless via an RFM12B on 868 MHz. And like the SMA solar inverter relay, the main task is to capture the incoming serial data and then send this out as wireless packets.

First, a little adapter – with 10 kΩ resistors in series as 5V -> 3.3V “level converters”:

DSC 4253

(that’s an old JeeNode v2 – might as well re-use this stuff, eh?)

And here’s the first version of the otRelay.ino sketch I came up with:

Screen Shot 2012 11 11 at 00 26 19

The only tricky bit in here is how to identify each message coming in over the serial port. That’s fairly easy in this case, because all valid messages are known to consist of exactly one letter, then 8 hex digits, then a carriage return. We can simply ignore anything else:

  • if there is a valid numeric or uppercase character, and there is room: store it
  • if a carriage return arrives at the end of the buffer: bingo, a complete packet!
  • everything else causes the buffer to be cleared

This isn’t the packet format I intend to use in the final setup, but it’s a simple way to figure out what’s coming in in the first place.
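In outline, that serial-scanning logic might look like this – a hedged sketch, not the verbatim otRelay.ino code, with made-up node settings and baud rate:

    #include <JeeLib.h>

    static char buf[9];          // one letter plus 8 hex digits
    static byte fill;

    void setup () {
        Serial.begin(9600);                      // gateway serial link, adjust if needed
        rf12_initialize(19, RF12_868MHZ, 5);     // node ID and group picked for illustration
    }

    void loop () {
        if (Serial.available()) {
            char c = Serial.read();
            if ((c >= '0' && c <= '9') || (c >= 'A' && c <= 'Z')) {
                if (fill < sizeof buf)
                    buf[fill++] = c;             // valid character and room left: store it
                else
                    fill = 0;                    // too long to be a valid message
            } else if (c == '\r' && fill == sizeof buf) {
                rf12_sendNow(0, buf, sizeof buf);   // complete message: relay it as is
                fill = 0;
            } else
                fill = 0;                        // anything else: clear the buffer
        }
    }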

It worked on first try. Some results from this node, as logged by the central JeeLink:

  L 10:38:07.582 usb-A40117UK OK 14 84 56 48 49 65 48 48 48 48
  L 10:38:07.678 usb-A40117UK OK 14 66 52 48 49 65 50 66 48 48
  L 10:38:08.558 usb-A40117UK OK 14 84 56 48 49 57 48 48 48 48
  L 10:38:08.654 usb-A40117UK OK 14 66 52 48 49 57 51 53 48 48
  L 10:38:09.566 usb-A40117UK OK 14 84 49 48 48 49 48 65 48 48
  L 10:38:09.678 usb-A40117UK OK 14 66 68 48 48 49 48 65 48 48
  L 10:38:10.574 usb-A40117UK OK 14 84 48 48 49 66 48 48 48 48
  L 10:38:10.686 usb-A40117UK OK 14 66 54 48 49 66 48 48 48 48
  L 10:38:11.550 usb-A40117UK OK 14 84 48 48 48 70 48 48 48 48
  L 10:38:11.646 usb-A40117UK OK 14 66 70 48 48 70 48 48 48 48
  L 10:38:12.557 usb-A40117UK OK 14 84 48 48 49 50 48 48 48 48

One of the problems with just relaying everything, apart from the fact that it’s wasteful to send it all as hex characters, is that there’s quite a bit of info coming out of the gateway:

Screen Shot 2012 11 10 at 22 44 51

Not only that – a lot of it is in fact redundant. There’s really no need to send the request as well as the reply in each exchange. All I care about are the “Read-Ack” and “Write-Data” packets, which contain actual meaningful results.

Some smarts in this relay may reduce RF traffic without losing any vital information.
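One way to add such smarts, assuming the standard OpenTherm frame layout (1 parity bit, 3 msg-type bits, 4 spare bits, an 8-bit data ID and a 16-bit value): convert the 8 hex digits into a 32-bit frame and only relay Read-Ack (type 4) and Write-Data (type 1) messages. A hedged sketch of that filter:

    #include <stdint.h>

    // convert the 8 uppercase hex digits of a gateway message into a 32-bit frame
    static uint32_t hex2frame (const char* hex) {
        uint32_t frame = 0;
        for (uint8_t i = 0; i < 8; ++i) {
            char c = hex[i];
            frame = (frame << 4) | (c <= '9' ? c - '0' : c - 'A' + 10);
        }
        return frame;
    }

    // keep only Read-Ack (4) and Write-Data (1) frames, dropping the redundant rest
    static bool wantFrame (uint32_t frame) {
        uint8_t msgType = (frame >> 28) & 0x7;   // 3 msg-type bits, just below the parity bit
        return msgType == 4 || msgType == 1;
    }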

OpenTherm data processing

In Software on Nov 19, 2012 at 00:01

Before going into processing the data from Schelte Bron’s OpenTherm Gateway, I’d like to point to OpenTherm Monitor, a multi-platform application he built and also makes freely available from his website.

It’s not provided for Mac OSX, but as it so happens, this software is written in Tcl and based on Tclkit, by yours truly. Since JeeMon is nothing but an extended version of Tclkit, I was able to extract the software and run it with my Mac version of JeeMon:

  sdx unwrap otmonitor.exe
  jeemon otmonitor.vfs/main.tcl

Heh – nothing beats “re-using” one’s own code in new and mysterious ways, eh?

Here’s the user interface which pops up, after setting up the serial port (it needed some hacking in the otmonitor.tcl script):

Screen Shot 2012 11 10 at 22 35 47

I left this app running for an hour (vertical lines are drawn every 5 minutes), while raising the room temperature in the beginning, and running the hot water tap a bit later on.

Note the high error count: looks like the loose wires are highly susceptible to noise and electrostatic fields. Even just moving my hand near the laptop (connected to the gateway via the USB cable) could cause the Gateway to reset (through its watchdog, no doubt).

Still, it looks like the whole setup works very nicely! There’s a lot of OpenTherm knowledge built into the otmonitor code, allowing it to extract and even control various parameters in both heater and thermostat. As the above window shows, all essential values are properly picked up, even though this heater is from a different vendor. That’s probably the point of OpenTherm: to allow a couple of vendors to make their products inter-operable.

But here’s the thing: neither the heater nor the thermostat are near any serial or USB ports over here, so for me it would be much more convenient to transmit this info wirelessly.

Using a JeeNode of course! (is there any other way?) – stay tuned…

PS. Control would be another matter, since then the issue of authentication will need to be addressed, but as I said: that’s not on the table here at the moment.

Watts up?

In Hardware on Nov 14, 2012 at 00:01

Ok, so all the solar panels are in place and doing their thing (as much as this season allows, anyway). But seeing that live power usage on my desk all day long kept tempting me to try and optimise the baseline consumption just a tad more…

Previous readings have always hovered around 115 Watts, lately. Since the JeeLabs server + router + internet modem use about 30 W together, that leaves roughly 85 W unaccounted for. Note that this is without fridges, boilers, heat circulation pumps, gas heaters, or other intermittent consumers running. This baseline is what we end up consuming here no matter what – vampire power from devices in “standby” and other basic devices you want to keep running at all times, such as the phone and internet connection.

It’s not excessive, but hey: 100 W day-in-day-out is still over 850 kWh on a yearly basis.

Well, today I managed to get the baseline down waaay further:

DSC 4242

That’s including the JeeLabs server + router + modem. So the rest of the house at JeeLabs is consuming under 40 W. Perfect: I’ve reached my secret goal of a baseline under 50 W!

Here’s how that “idle” power consumption was reduced this far:

  • I turned off an old & forgotten laptop and Ethernet switch, upstairs – whoops!
  • I removed another gigabit Ethernet switch under my desk (more on that later)
  • the 10-year old Mac Mini + EyeTV + satellite dish setup has been dismantled and replaced by a small all-in-one TV drawing 0.5W in standby (the monitor is re-used)
  • I’m switching to DVB-C (i.e. coax-based) reception, available from the internet modem by upgrading to the cheapest triple-play subscription with “analog + digital” channels
  • that means: no settop box, just the internet modem (already on anyway) and a new low-end but modern Sharp 22″ TV / DVB-C / DVD-player / USB-recorder

As it turns out, the Mac Mini (about 10 years old) plus the master-slave AC mains switch controlling everything else were drawing some 20 W – day in day out. Bit silly, and far too much unnecessary technology strung together (though working, most of the time).

The other biggie: no more always-on Ethernet switches, just the WRT320N wireless router in front of the server, with a second wired gigabit connection to my desk. That’s two really fast connections where it matters, everything else uses perfectly-fine WiFi.

The main reason for having an Ethernet switch near my desk was to allow experimenting with JeeNode-based EtherCards, Raspberry Pi’s, and so on. But… 1) that switch was really in the wrong place, it would be far more convenient to have Ethernet in the electronics corner at JeeLabs, and 2) why keep that stuff on all the time, anyway?

So instead, I’m now re-using a spare Airport Express as a wireless-to-wired Ethernet extension router. Plug it in, wait a minute for it to settle down, and voilà – instant wired Ethernet anywhere there is an AC mains socket:

DSC 4243

And if I need more connections, I can route everything through that spare Ethernet switch.

It’s not the smallest solution out there, but who cares. Why didn’t I think of all this before?

Accessing the SMA inverter via Bluetooth

In Software on Nov 6, 2012 at 00:01

As pointed out in recent comments, the SMA solar PV inverter can be accessed over Bluetooth. This offers various goodies, such as reading out the daily yield and the voltage / power generation per MPP tracker. Since the SB5000TL has two of them, and my panels are split between 12 east and 10 west, I am definitely interested in seeing how they perform.

Besides, it’s fun and fairly easy to do. How hard could reading out a Bluetooth stream be?

Well, before embarking on the JeeNode/Arduino readout, I decided to first try the built-in Bluetooth of my Mac laptop, which is used by the keyboard and mouse anyway.

I looked at a number of examples out there, but didn’t really like any of ’em – they looked far too complex and elaborate for the task at hand. This looked like a wheel yearning to be re-invented… heh ;)

The trouble is that the protocol is fully packetized, checksummed, etc. The way it was set up, this seems to also allow managing multiple inverters in a solar farm. Nothing I care about, but I can see the value and applicability of such an approach.

So what it comes down to is to send a bunch of hex bytes in just the right order and with just the right checksums, and then pulling out a few values from what comes back by only decoding what is relevant. Fortunately, the Nanode SMA PV Monitor project on GitHub by Stuart Pittaway already did much of this (and a lot more).

I used some templating techniques (in good old C) which are probably worth a separate post, to generate the proper data packets to connect, initialise, login, and ask for specific results. And here’s what I got – after a lot of head-scratching and peering at hex dumps:

    $ make
    cc -o bluesman main.cpp
    logged in
    daily yield: 2886 Wh @ Sun Jun 29 15:38:03     394803
    total generated power: 75516 W
    AC power: 432 W
    451f: DC voltage = 181.00 V
    451f: DC voltage = 142.62 V
    DC voltage
    251e: DC power = 252 W
    251e: DC power = 177 W
    DC power

The clock was junk at the time, but as you can see there are some nice bits of info in there.

One major inconvenience was that my 11″ MacBook Air tended to crash every once in a while. And in the worst possible way: hard kernel panic -> total reboot needed -> all unsaved data lost. Yikes! Hey Apple, get your stuff solid, will ya – this is awful!

The workaround appears to be to disable wireless and not exit the app while data is coming in. Sounds awfully similar to the kernel panics I can generate by disconnecting an FTDI USB cable or BUB, BTW. Needless to say, these disruptions are extremely irritating while trying to debug new code.

Next step… getting this stuff to work on an ATmega – stay tuned!

Verifying synchronisation over time

In AVR, Software on Nov 5, 2012 at 00:01

(Perhaps this post should be called “Debugging with a scope, revisited” …)

The syncRecv.ino sketch developed over the last few days is shaping up nicely. I’ve been testing it with the homePower transmitter, which periodically sends out electricity measurements over wireless.

Packets are sent out every 3 seconds, except when there have been no new pulses from any of the three 2000 pulse/kWh counters I’m using. So normally, a packet is expected once every three seconds, but at night, when power consumption drops to around 100 Watt, only every third or fourth measurement will actually lead to a transmission.

The logic I’m using was specifically chosen to deal with this particular case, and the result is a pretty simple sketch (under 200 LOC) which seems to work out surprisingly well.

How well? Time to fire up that oscilloscope again:


This is a current measurement, collected over about half an hour, i.e. over 500 reception attempts. The screen was set to 10s trace persistence mode (with “false colors” and “background” enabled to highlight the most recent traces and keep showing each one), so all the triggers are superimposed on one another.

These samples were taken with about 300 W consumption (i.e. 600 pulses per hour, one per 6s on average), so the transmitter was indeed skipping packets fairly regularly.

Here’s a typical single trigger, giving a bit more detail for one reception:


Lots of things one can deduce from these images:

  • the mid-level current consumption is ≈ 8 mA, that’s the ATmega running
  • the high-level current increases by another 11 mA for the RFM12B radio
  • almost all receptions are within 8..12 ms
  • most missing packets cause the receiver to stay on for up to some 18 ms
  • on a few occasions, the reception window is doubled
  • when that happens, the receiver can be on, but still no more than 40 ms
  • the 5 ms after proper reception are used to send out info over serial
  • the ATmega is on for less than 20 ms most of the time (and never over 50 ms)
  • it looks like the longer receptions happened no more than 5 times

If you ignore the outliers, you can see that the receiver stays on well under 15 ms on average, and the ATmega well under 20 ms.

This translates to a 0.5% duty cycle with 3s transmissions, or a 200-fold reduction in power over leaving the ATmega and RFM12B on all the time. To put that in perspective: on average, this setup will draw about 0.1 mA (instead of 20 mA), while still receiving those packets coming in every 3 seconds or so. Not bad, eh?

There’s always room for improvement: the ATmega could be put to sleep while the radio is receiving (it’s going to be idling most of that time anyway). And of course the serial port debugging output should be turned off for real use. Such optimisations might halve the remaining power consumption – diminishing returns, clearly!

But hey, enough is enough. I’m going to integrate this mechanism into the homeGraph.ino sketch – and expect to achieve at least 3 months of run time on 3x AA (i.e. an average current consumption of under 1 mA total, including the GLCD).

Plenty for me – better than both my wireless keyboard and mouse, in fact.

It’s all about timing

In Software on Oct 27, 2012 at 00:01

The previous post showed that most of the power consumption of the homeGraph.ino sketch was due to the RFM12B receiver being on all the time. This is a nasty issue which comes back all the time with Wireless Sensor Networks: for ultra-low power scenarios, it’s simply impossible to keep the radio on at all times.

So how can we pick up readings from the new homePower.ino sketch, used to report energy consumption and production of the new solar panels?

The trick is timing: the homePower.ino sketch was written in such a way that it only sends out a packet every 3 seconds. Not always, only when there is something to report, but always exactly on the 3 second mark.

That makes it possible to predict when the next packet can be expected, because once we do receive a packet, we know that we don’t have to expect one for another 3 seconds. Aha, so we could turn the receiver off for a while!

It’s very easy to try this out. First, the code which goes to sleep:

Screen Shot 2012 10 21 at 21 26 40

The usual stuff really – all the hard work is done by the JeeLib library code.
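
For readers who can't make out the screenshot: the sleep side is little more than the standard JeeLib incantation, roughly like this:

    #include <JeeLib.h>

    // hook the watchdog interrupt into JeeLib's Sleepy class, so that
    // Sleepy::loseSomeTime() can wake the ATmega up again
    ISR(WDT_vect) { Sleepy::watchdogEvent(); }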

And here’s the new loop() code:

Screen Shot 2012 10 21 at 20 24 23

In other words: whenever a packet has been received, process it, then go to sleep for 2.8 seconds, then wake up and start the receiver again.
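
Roughly like this – the payload layout and the display call are placeholders, not the actual homeGraph code:

    // goes together with the watchdog hook shown above
    struct { word count1, time1, count2, time2, count3, time3; } payload;  // 12 bytes

    static void updateDisplay () { /* redraw the Graphics Board - not shown here */ }

    void loop () {
      if (rf12_recvDone() && rf12_crc == 0 && rf12_len == sizeof payload) {
        memcpy(&payload, (void*) rf12_data, sizeof payload);
        updateDisplay();
        rf12_sleep(RF12_SLEEP);            // radio off
        Sleepy::loseSomeTime(2800);        // ATmega in power-down for 2.8 seconds
        rf12_sleep(RF12_WAKEUP);           // radio back on, resume listening
      }
    }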

Here’s the resulting power consumption timing, as voltage drop over a series resistor:


I can’t say I fully understand what’s going on, but I think it’s waiting for the next packet for about 35 ms with the receiver enabled (drawing ≈ 18 mA), and then another 35 ms is spent generating the graph and sending the new image to the Graphics Board over software SPI (drawing 7 mA, i.e. just the ATmega).

Then the µC goes to sleep, leaving just the display showing the data.

So we’re drawing about 18 mA for say 50 ms every 3000 ms – this translates to a 60-fold reduction in average current consumption, or about 0.3 mA (plus the baseline current consumption). Not bad!

Unfortunately, real-world use isn’t working out quite as planned… to be continued.

Decoding the pulses

In Software on Oct 23, 2012 at 00:01

Receiving the packets sent out yesterday is easy – in fact, since they are being sent out on the same netgroup as everything else here at JeeLabs, I don’t have to do anything. Part of this simplicity comes from the fact that the node is broadcasting its data to whoever wants to hear it. There is no need to define a destination in the homePower.ino sketch. Very similar to UDP on Ethernet, or the CAN bus, for that matter.

But incoming data like this is not very meaningful, really:

  L 22:09:25.352 usb-A40117UK OK 9 2 0 69 235 0 0 0 0 103 0 97 18

What I have in mind is to completely redo the current system running here (currently still based on JeeMon) and switch to a design using ZeroMQ. But that’s still in the planning stages, so for now JeeMon is all I have.

To decode the above data, I wrote this little “homePower.tcl” driver:

Screen Shot 2012 10 21 at 01 52 07

It takes those incoming 12-byte packets, and converts them to three sets of results – each with the total pulse count so far (2000 pulses/kWh), and the last calculated instantaneous power consumption. Note also the “decompression” of the millisecond time differences, as applied on the sending side.

Calculation of the actual Watts being consumed (or produced) is trivial: there are 2000 pulses per kWh, so one pulse per half hour represents an average consumption (or production) of exactly one Watt.
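
In code form that boils down to a one-liner (shown here in C for clarity, even though the driver itself is Tcl):

    // 2000 pulses per kWh means each pulse represents 0.5 Wh, so:
    //   watts = 0.5 Wh / (interval in hours) = 1,800,000 / (interval in ms)
    static unsigned long wattsFromInterval (unsigned long msBetweenPulses) {
      if (msBetweenPulses == 0)
        return 0;                            // no interval known (yet)
      return 1800000UL / msBetweenPulses;    // 1,800,000 ms (half an hour) -> 1 W
    }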

To activate this driver I also had to add this line to “main.tcl”:

  Driver register RF12-868.5.9 homePower

And sure enough, out come results like this:

Screen Shot 2012 10 21 at 01 52 32

This is just after a reset, at night with no solar power being generated. That’s 7 Watt consumed by the cooker (which is off, but still drawing some residual power for its display and control circuits), and 105 Watt consumed by the rest of the house.

Actually, you’re looking at the baseline power consumption here at JeeLabs. I did these measurements late at night with all the lights and everything else turned off (this was done by staring at these figures from a laptop on wireless, running off batteries). A total of 112 Watt, including about 24 Watt for the Wireless router plus the Mac Mini running the various JeeLabs web servers, both always on. Some additional power (10W perhaps?) is also drawn by the internet modem downstairs, so that leaves only some 80 Watt of undetermined “vampire power” drawn around the house. Not bad!

One of my goals for the next few months will be to better understand where that remaining power is going, and then try to reduce it even further – if possible. That 80 W baseline is 700 kWh per year after all, i.e. over 20% of the total annual consumption here.

Here are some more readings, taken the following morning with heavy overcast clouds:

Screen Shot 2012 10 21 at 10 24 37

This also illustrates why the wiring error is causing problems: the “pow3” value is now a surplus (counting down), but there’s no way to see that in the measurement data.

I’ve dropped the packet sending rate to at most once every 3 seconds, and am very happy with these results which give me a lot more detail and far more frequent insight into our power usage around here. Just need to wait for the electrician to come back and reroute counter 3 so it doesn’t include solar power production.

Sending out pulses

In Software on Oct 22, 2012 at 00:01

With pulses being detected, the last step in this power consumption sketch is to properly count the pulses, measure the time between ’em, and send off the results over wireless.

There are a few details to take care of, such as not sending off too many packets, and sending out the information in such a way that occasional packet loss is harmless.

The way to do this is to track a few extra values in the sketch. Here are the variables used:

Screen Shot 2012 10 21 at 00 14 43

Some of these were already used yesterday. The new parts are the pulse counters, last-pulse millisecond values, and the payload buffer. For each of the three pulse counter sources, I’m going to send the current count and the time in milliseconds since the last pulse. This latter value is an excellent indication of instantaneous power consumption.

But I also want to keep the packet size down, so these values are sent as 16-bit unsigned integers. For count, this is not so important as it will be virtually impossible to miss over 65000 packets, so we can always resync – even with occasional packet loss.

For the pulse time differences, having millisecond resolution is great, but that limits the total to no more than about a minute between pulses. Not good enough in the case of solar power, for example, which might stop on very dark days.

The solution is to “compress” the data a bit: values up to one minute are sent as is, values up to about 80 minutes are sent in 1-second resolution, and everything above is sent as being “out of range”.
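
One way to implement such an encoding – the exact thresholds in the real homePower.ino may well differ, this is just to illustrate the idea:

    // Pack a pulse interval into 16 bits: millisecond resolution below one
    // minute, one-second resolution above that, and a marker for "out of range".
    static word packInterval (unsigned long ms) {
      if (ms < 60000)
        return (word) ms;                   // below 1 minute: milliseconds, as is
      unsigned long secs = ms / 1000;       // above that: whole seconds
      if (secs <= 65535UL - 60000)
        return (word) (60000 + secs);       // 60000..65534 covers roughly 1..92 minutes
      return 0xFFFF;                        // out of range
    }
    // decoding: values < 60000 are ms, 60000..65534 are (value - 60000) seconds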

Here are the main parts of the sketch – the full “homePower.ino” sketch is now on GitHub:

Screen Shot 2012 10 21 at 00 36 00

Sample output, as logged by a process which always runs here at JeeLabs:

  L 22:09:25.352 usb-A40117UK OK 9 2 0 69 235 0 0 0 0 103 0 97 18

Where “2 0 69 235” is the cooker, “0 0 0 0” is solar, and “103 0 97 18” is the rest.

Note that results are sent off no more than once a second, and note the careful distinction between having data pending to send and actually sending it out only once that 1000 ms send timer expires.

The scanning and blinking code hasn’t changed. The off-by-one bug was in calling setblinks() with a value of 0 to 2, instead of 1 to 3, respectively.

That’s it. The recently installed three new pulse counters are now part of the JeeLabs home monitoring system. Well… as far as remote sensing and reporting goes. Processing this data will require some more work on the receiving end.

Reporting serial packets

In Software on Oct 16, 2012 at 00:01

The RF12demo sketch was originally intended to be just that: a demo, pre-flashed on all JeeNodes to provide an easy way to try out wireless communication. That’s how it all started out over 3 years ago.

But that’s not where things ended. I’ve been using RF12demo as main sketch for all “central receive nodes” I’ve been working with here. It has a simple command-line parser to configure the RF12 driver, there’s a way to send out packets, and it reports all incoming packets – so basically it does everything needed:

  [RF12demo.8] A i1* g5 @ 868 MHz 
  DF I 577 10

  Available commands:
    <nn> i     - set node ID (standard node ids are 1..26)
                 (or enter an uppercase 'A'..'Z' to set id)
    <n> b      - set MHz band (4 = 433, 8 = 868, 9 = 915)
    <nnn> g    - set network group (RFM12 only allows 212, 0 = any)
    <n> c      - set collect mode (advanced, normally 0)
    t          - broadcast max-size test packet, with ack
    ...,<nn> a - send data packet to node <nn>, with ack
    ...,<nn> s - send data packet to node <nn>, no ack
    <n> l      - turn activity LED on PB1 on or off
    <n> q      - set quiet mode (1 = don't report bad packets)
  Remote control commands:
    <hchi>,<hclo>,<addr>,<cmd> f     - FS20 command (868 MHz)
    <addr>,<dev>,<on> k              - KAKU command (433 MHz)
  Flash storage (JeeLink only):
    d                                - dump all log markers
    <sh>,<sl>,<t3>,<t2>,<t1>,<t0> r  - replay from specified marker
    123,<bhi>,<blo> e                - erase 4K block
    12,34 w                          - wipe entire flash memory
  Current configuration:
   A i1* g5 @ 868 MHz

This works fine, but now I’d like to explore a real “over-the-wire” protocol, using the new EmBencode library. The idea is to send “messages” over the serial line in both directions, with named “commands” and “events” going to and from the attached JeeNode or JeeLink. It won’t be convenient for manual use, but should simplify things when the host side is a computer running some software “driver” for this setup.

Here’s the first version of a new rf12cmd sketch, which reports all incoming packets:

Screen Shot 2012 10 14 at 15 31 51

Couple of observations about this sketch:

  • we can no longer send a plain text “[rf12cmd]” greeting – that too is now sent as a packet
  • the greeting includes the sketch name and version, but also the decoder’s packet buffer size, so that the other side knows the maximum packet size it may use
  • invalid packets are discarded, we’re using a fixed frequency band and group for now
  • command/event names are short – let’s not waste bandwidth or string memory here
  • I’ve bumped the serial line speed to 115200 baud to speed up data transfers a bit
  • there’s no provision (yet) for detecting serial buffer overruns or other serial link errors
  • incoming data is sent as a binary string, no more tedious hex/byte conversions
  • each packet includes the frequency band and netgroup, so the exchange is state-less

The real change is that all communication is now intended for computers instead of us biological life-forms, and as a consequence some of the design choices have changed to better match this context.

Tomorrow, I’ll describe a little Lua script to pick up these “serial packets”.

Optocoupler current transfer

In Hardware on Oct 11, 2012 at 00:01

The past few days were about generating a linear ramp, in the form of a triangular wave, and as you saw, it was quite easy to generate – despite the lack of a function generator.

The result was a voltage alternating between about 0.6V and 3.0V in a linear fashion. Here’s why…

I want to see how the MCT62 optocoupler passes a signal through it. More specifically, how a linearly increasing voltage would come out. Let’s look at that chip schematic again:


So the idea is to apply that linear ramp through a current-limiting resistor into the opto’s LED. Then we put the photo-transistor in a simple 5V circuit, with again a current limiting resistor between collector and 5V – like this:

JC s Grid page 35

From left to right:

  • apply a triangle wave to the LED, which varies from 0.6 to 3.0V
  • there’s a 1 kΩ resistor in series, so the maximum current will stay well under 3 mA
  • the phototransistor is hooked up as a normal DC amplifier
  • there’s another 1 kΩ pullup, so this too cannot draw more than 5 mA current


  • when the LED is off, the output will stay at 5V, i.e. transistor stays off
  • until the input rises above the 1.2V threshold of the (IR) LED, not much happens
  • as the voltage rises linearly, so will the current through the LED
  • depending on the transfer function, the transistor current will rise accordingly
  • and as a consequence, the output voltage will drop

So if that behavior is linear, then the output voltage should drop linearly. Let’s have a look:


  • the YELLOW line is the triangle wave, as generated earlier
  • the PURPLE line is the voltage over the leftmost resistor
  • the BLUE line is the voltage on the transistor’s collector output
  • the RED line is the derivative of the BLUE line
  • the zero origin for all these lines in the image is at two divisions from the bottom

First of all, the purple line indeed rises slowly once we start rising above 1.0V, and it stays roughly 1.2V under the input signal (yellow line).

The blue line is the interesting one: it takes a bit of input current (i.e. LED light) for the transistor to start conducting, but once it does, the output voltage drops indeed. Once we’re above 2.0V, the blue line becomes quite linear, as indicated by the fact that the red line is fairly flat between horizontal divisions 5 and 7.

So in this range (and probably quite a bit above), we have a linear transfer from input current to output current. Or voltage … it’s all the same with resistors.

In terms of current, we can use the purple line: it’s flat with a diode current between 0.7 and 1.7 mA (and probably beyond).

The output voltage only drops to just over 2V, so the phototransistor is still far from reaching saturation (“conducting all out”).

So what’s the point of all this, eh?

Well, one thing this illustrates is that you can get a pretty clean signal across such an optocoupler, as long as you stay in the linear range of it all. There is no real speed limitation, so even audio signals could be sent across reasonably well – without making any electrical connection, just a little light beam!

It’s not hard to imagine how this could be done with discrete components even, sending the light to a glass fiber over a longer distance.

You can call it wireless signal transmission, albeit of a different type: optical!

One million packets

In AVR, Hardware, Software on Sep 15, 2012 at 00:01

Minutes after this weblog post goes live, one of the JeeNodes here will reach a new milestone.

This thing has been running for over two years, sending out nearly one million test packets:

Here are the entries from the data logged to file on my central server, minutes ago:

1:40   RF12-868.5.3 radioBlip age       740
1:40   RF12-868.5.3 radioBlip ping      999970

The counter is about to reach 1,000,000 – just after midnight today, in fact.

These log entries come from a JeeNode with a radioBlip sketch which just sends out a counter, roughly every 64 seconds, and goes into maximum low-power mode in between each transmission. That’s the whole trick to achieving very long battery lifetimes: do what ya’ gotta do as quickly as possible, then go back to ultra-low power as deeply as possible.

The battery is a 1300 mAh LiPo battery, made for some Fujitsu camera. I picked it because of the nice match with the JeeNode footprint.

But the big news is that this battery has not been recharged since August 21st, 2010!

Which goes to show that:

  • lithium batteries can hold a lot of charge, for a long time
  • JeeNodes can survive a long time, when programmed in the right way
  • sending out a quick packet does not really take much energy – on average!
  • all of this can easily be replicated, the design and the code are fully open source

And it also clearly shows that this sort of lifetime testing is really not very practical – you have to wait over two years before being sure that the design is as ultra-low power as it was intended to be!

If we (somewhat arbitrarily) assume that the battery is nearly empty now, then running for 740 days (i.e. 17,760 hours) means that the average current draw is about 73 µA, including the battery’s self-discharge. Which – come to think of it – isn’t even that low. I suspect that with today’s knowledge, I could probably set up a node which runs at least 5 years on this kind of battery. Oh well, there’s not much point trying to actually prove it in real time…

One of the omissions in the original radioBlip code was that only a counter is being sent out, but no indication at all of the remaining battery charge. So right now I have no idea how much longer this setup will last.

As you may recall, I implemented a more elaborate radioBlip2 sketch a while ago. It reports two additional values: the voltage just before and after sending out a packet over wireless. This gives an indication of the remaining charge and also gives some idea how much effect all those brief transmission power blips have on the battery voltage. This matters, because in the end a node is very likely to die during a packet transmission, while the radio module drains the battery to such a point that the circuit ceases to work properly.

Next time, as a more practical way of testing, I’ll probably increase the packet rate to once every second – reducing the test period by a factor of 60 over real-world use.

Waiting a couple of years for the outcome is a bit silly, after all…

Another embedded ecosystem

In Hardware on Sep 14, 2012 at 00:01

(This post follows up on Reinhard’s clairvoyant comment yesterday..)

The ARM microcontrollers described in the past few days are a big step up from a “simple” ATmega328, but that’s only if you consider hundreds of kilobytes of flash storage and tens of kilobytes of RAM as being “a lot”.

Compared to notebooks and workstations, it’s still virtually nothing, of course.

But there’s another trend going on, with bang-for-the-buck going off the charts nowadays: small embedded Linux systems with integrated wired and/or wireless Ethernet. These are often based on Broadcom and Atheros chipsets – the same as found in just about every network router and gateway nowadays.

One particularly nice and low-cost example of this is the Carambola by 8devices:

Main image 3 40

It has 8 MB flash + 32 MB RAM (megabytes!), on-board WiFi, and runs OpenWrt Linux.

There are I2C and SPI interfaces, which can also be used as general-purpose I/O pins, so this thing will interface to a range of things right out of the box.

One gotcha is that the 2×20 pins are on a 2mm grid, not 0.1″. Small size has its trade-offs!

The board will draw up to 1.5 W @ 3.3V (i.e. roughly 500 mA), but that can easily be reduced to about 0.4 W for a blank board with no wired Ethernet attached.

Here are some more specs, obtained from within Linux on the Carambola itself:

Screen Shot 2012 09 13 at 21 02 59

As you can see, this unit (like many routers) is based on a MIPS architecture. And it’s actually quite a bit faster than the Bifferboard I described a while back.

Like most low-end ARM chips, these systems often lack hardware floating point (it’s all implemented in software, just as it is on an Arduino’s ATmega). Don’t expect any number-crunching performance from these little boards, but again it’s good to point out that boards like these are priced about the same as an Arduino Uno.

One of the benefits of Linux is that it’s a full-fledged operating system, with numerous tools and utilities (though you often still need to cross-compile) and with solid full-featured networking built in. The amount of open source software available for Linux (on a wide range of hardware) is absolutely staggering.

Among the drawbacks of Linux in the context of Physical Computing is that it’s not strictly real time, so programming for it follows a different approach (busy loops for timing are not done, for example). Don’t expect to accurately pulse an I/O pin at a few hundred Hz or more, for example. Linux was also definitely not made for ultra-low power use, such as in remote wireless nodes which you’d like to keep up and running for months or years on a single battery – there’s simply too much going on in a complete operating system.

The other thing about Linux is that it can be somewhat intimidating if you’ve never used it before. Part of this comes from its strong heritage from the “Unix world”. But given the current trends, I strongly recommend trying it out and getting familiar with it – Linux is very mature: it has been around for a while and will remain so for a long time to come. With boards such as the Carambola illustrating just how cheap it can be to have a go at it.

Interesting times. Now if only new software developments would keep up with all this!

The ARM ecosystem

In Hardware, Software on Sep 13, 2012 at 00:01

Yesterday’s post presented an example of a simple yet quite powerful platform for “The Internet Of Things” (let’s just call it simple and practical interfacing, ok?). Lots of uses for that in and around the house, especially in the low-cost end of ATmega’s, basic Ethernet, and basic wireless communication.

What I wanted to point out with yesterday’s example, is that there is quite a bit of missed potential when we stay in the 8-bit AVR / Arduino world. There are ARM chips which are at least as powerful, at least as energy-efficient, and at least as low-cost as the ATmega328. Which is not surprising when you consider that ARM is a design, licensed to numerous vendors, who all differentiate their products in numerous interesting ways.

In theory, the beauty of this is that they all speak the same machine language, and that code is therefore extremely portable between different chips and vendors (apart from the inevitable hardware/driver differences). You only need one compiler to generate code for any of these ARM processor families:

arm2 arm250 arm3 arm6 arm60 arm600 arm610 arm620 arm7 arm7m arm7d arm7dm arm7di arm7dmi arm70 arm700 arm700i arm710 arm710c arm7100 arm720 arm7500 arm7500fe arm7tdmi arm7tdmi-s arm710t arm720t arm740t strongarm strongarm110 strongarm1100 strongarm1110 arm8 arm810 arm9 arm9e arm920 arm920t arm922t arm946e-s arm966e-s arm968e-s arm926ej-s arm940t arm9tdmi arm10tdmi arm1020t arm1026ej-s arm10e arm1020e arm1022e arm1136j-s arm1136jf-s mpcore mpcorenovfp arm1156t2-s arm1156t2f-s arm1176jz-s arm1176jzf-s cortex-a5 cortex-a7 cortex-a8 cortex-a9 cortex-a15 cortex-r4 cortex-r4f cortex-r5 cortex-m4 cortex-m3 cortex-m1 cortex-m0 cortex-m0plus xscale iwmmxt iwmmxt2 ep9312 fa526 fa626 fa606te fa626te fmp626 fa726te

In practice, things are a bit trickier, if we insist on a compiler “toolchain” which is open source, with stable releases for Windows, Mac, and Linux. Note that a toolchain is a lot more than a C/C++ compiler + linker. It’s also a calling convention, a run-time library choice, a mechanism to upload code, and a mechanism to debug that code (even if that means merely seeing printf output).

In the MBED world, the toolchain is in the cloud. It’s not open source, and neither is the run-time library. Practical, yes – introspectable, not all the way. Got a problem with the compiler (or more likely the runtime)? You’re hosed. But even if it works perfectly – ya can’t peek under the hood and learn everything, which in my view is at least as important in a tinkering / hacking / repurposing world.

Outside the MBED world, I have found my brief exploration a grim one: commercial compiler toolchains with “limited free” options, and proprietary run-time libraries everywhere. Not my cup of tea – and besides, in my view gcc/g++ is really the only game in town nowadays. It’s mature, it’s well supported, it’s progressing, and it runs everywhere. Want a cross compiler which runs on platform A to generate code for platform B? Can do, for just about any A and B – though building such a beast is not necessarily easy!

As an experiment, I wanted to try out a much lower-cost yet pin-compatible alternative for the MBED, called the LPCXpresso (who comes up with names like that?):


Same cost as an Arduino, but… 512 KB flash, 64 KB RAM, USB, Ethernet, and tons of digital + analog I/O features.

Except: half of that board is dedicated to acting as an upload/debug interface, and it’s all proprietary. You have to use their IDE, with “lock-in” written on every page. Amazing, considering that the ARM chip can do serial uploading via built-in ROM! (i.e. it doesn’t even have to be pre-flashed with a boot loader)

As an experiment, I decided to break free from that straitjacket:

DSC 3832

Yes, that’s right: you can basically throw away half the board, and then add a few wires and buttons to create a standard FTDI interface, ready to use with a BUB or other 3.3V serial interface.

(there’s also a small regulator mod, because the on-board 3.3V regulator seems to have died on me)

The result is a board which is pin-compatible with the MBED, and will run more or less the same code (it has only 1 user-controllable LED instead of 4, but that’s about it, I think). Oh, and serial upload, not USB anymore.

Does this make sense? Not really, if that means having to manually patch such boards each time you need one. But again, keep in mind that these boards cost the same as an Arduino Uno, yet offer far more than even the Arduino Mega in features and performance.

The other thing about this is that you’re completely on your own w.r.t. compiling and debugging code. Well, not quite: there’s a gcc4mbed by Adam Green, with pre-built x86 binaries for Windows, Mac, and Linux. But out of the box, I haven’t found anything like the Arduino IDE, with GUI buttons to push, lots of code examples, a reference website, and a community to connect with.

For me, personally, that’s not a show stopper (“real programmers prefer the command line”, as they say). But getting a LED to blink from scratch was quite a steep entry point into this ARM world. Did I miss something?

Two more notes:

  • Yes, I know there’s the Maple IDE by LeafLabs, but I couldn’t get it to upload on my MacBook Air notebook, nor get a response to questions about this on the forum.

  • No, I’m not “abandoning” the Atmel ATmega/ATtiny world. For many projects which need a simple way to get wireless, battery-operated nodes going, I still vastly prefer the JeeNode over any other option out there (in fact, I’m currently re-working the JeeNode Micro, to add a bit more functionality to it).

But it’s good to stray outside the familiar path once in a while, so I’ll continue to sniff around in that big ARM Cortex world out there. Even if the software exploration side is acting surprisingly hostile to me right now.

Interesting gateway

In Hardware on Sep 12, 2012 at 00:01

A while back, I came across this product, called the “mbed Internet of Things Gateway”:


It’s an ARM microcontroller with an Ethernet port, a µSD storage slot, and an RFM12B wireless transceiver. Very nicely packaged in an extruded-aluminium case with laser-cut front and back panels. Here’s what’s inside:

DSC 3833

Not that much circuitry, as you can see – because all the heavy-lifting is done by the MBED board on the left.

That’s a 32-bit microcontroller, with built-in Ethernet and USB, plenty of I/O pins, and lots of features to connect to SPI, I2C, CAN, and other types of devices. Not to mention the 512 KB flash and 32 KB RAM memory – plenty to implement some serious functionality.

MBED comes with an intriguing “cloud-based” compiler and build environment, which is surprisingly effective. Here’s how it works, out of the box:

  1. plug the MBED into USB and it’ll present itself as a memory disk with one HTML file on it
  2. double-click that file to go to the MBED web site
  3. you get a web-based equivalent of a standard Windows IDE, plus a large code sharing community
  4. create your project online, enter your own code, and hit the compile button
  5. if the code compiles successfully, you end up with a file in your download folder
  6. copy that file to the MBED’s USB drive
  7. press on the MBED’s reset button, and that’s it … uploaded and running!

This is a very elegant workflow. No need to install any software to develop for MBED. And you can continue work wherever you are, as long as internet works and you have your MBED with you. You do need to sign up and register a (free) account on that MBED site – in return, they’ll do all the compiles for you.

This board is an exciting development. The cost is higher than with just a JeeNode + EtherCard, but there is also a lot more possible when you don’t have to fight the ATmega328’s strict flash and RAM memory constraints.

I’ll have more to say about this hardware and software tomorrow – stay tuned…

Switching with a P-MOSFET

In AVR, Hardware on Sep 8, 2012 at 00:01

One reason for yesterday’s exploration, is to figure out a way around a flaw of the RFM12B wireless radio module.

Let me explain – the RFM12B module has a clock output, which can be used to drive a microcontroller. The idea being that you can save a crystal that way. Trouble is that this clock signal has to be present on power-up, even though it can be configured over SPI in software, because otherwise the microcontroller would never start running and hence never get a chance to re-configure the radio. A nasty case of Catch 22 (or a design error?).

In short: the radio always powers up with the crystal oscillator enabled. Even when not using that clock signal!

The problem is that an RFM12B draws about 0.6 mA in this mode, even though it can be put to sleep to draw only 0.3 µA (once running and listening to SPI commands). In the case of energy harvesting, where you normally get very tiny amounts of energy to run off, this startup hurdle can be a major stumbling block.

See my low-power supply weblog post about how hard that can be, and may need extra hardware to get fixed.

So I’m trying to find a way to keep that radio powered down until the microcontroller is running, allowing it to be put to sleep right away.

For ultra-low power use, yesterday’s PNP transistor approach is not really good enough.

This is where an interesting aspect of MOSFETs comes in: they make great power switches, because all they need is a gate voltage to turn them on or off. When on, their resistance (and hence voltage drop) is near zero, and the voltage on the gate doesn’t draw any current. Just like a water faucet doesn’t consume energy to keep water running or blocked, only to change the state – so do MOSFETs.

But many MOSFETs typically require several volts to turn them on, which we may or may not have when running at the lower limit of 1.8V of an ATmega or ATtiny. So the choice of MOSFET matters.

Just like yesterday, we’ll need a P-channel MOSFET to let us switch the power supply rail:

JC s Grid page 32

Note the subtly different placement of the resistor. With a PNP transistor, it was needed to limit the current through the base (which then got wasted, but that current is needed to make the transistor switch). With a MOSFET there is no current, but now we need to make sure that the MOSFET stays off until a low voltage is applied.

Except that now R can be very large. It’s basically a pull-up, and can be extremely weak, say 10 MΩ. That means that when pulled low, the leakage current will be only 0.3 µA.

The trick is to find a P-MOSFET type which can switch using a very low gate voltage, so that it can still be fully switched on. I’ve ordered a couple of types to test this out, and will report once they arrive and measurements can be made.

All in all, this is a very nice solution, though – just 2 very simple components. The main drawback is that we still need to reserve an I/O pin for this.
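
On the software side, that I/O pin only has to be driven low once at startup, before the radio is initialised – a minimal sketch, in which the pin number and RF12 settings are placeholders:

    #include <JeeLib.h>

    const byte RADIO_GATE = 8;      // hypothetical I/O pin tied to the P-MOSFET gate

    void setup () {
      pinMode(RADIO_GATE, OUTPUT);
      digitalWrite(RADIO_GATE, LOW);       // gate low: P-MOSFET conducts, radio gets power
      delay(5);                            // give the supply rail a moment to settle
      rf12_initialize(17, RF12_868MHZ, 5); // node id / band / group are placeholders
      rf12_sleep(RF12_SLEEP);              // now the radio can be put to sleep right away
    }

    void loop () {}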

Tomorrow, I’ll explore a refinement which does not even need an extra I/O pin.

Summer break

In News on Jul 1, 2012 at 00:01

Ok, time to sign off for the summer break. This weblog will be off the air until September 1st – same as last year.

But unlike last year, the shop will stay open during the break: Martyn and Rohan Judd will be taking over all JeeLabs shop duties from the UK this summer. We’re making a range of preparations to get everything going smoothly, but please note that there will be some “reduced availability” issues during this time – i.e. a few more items out of stock than usual, and occasional delays while trying to prepare packages and get things out the door.

It’s been yet another truly fascinating year here. Somewhat fewer new products out the door than I would have wanted, but also quite a bit more work behind the scenes to make sure this all remains focused on fooling around with physical computing, wireless networking, and ultra-low power computing. And even though there has been another unplanned break early this year, things are actually starting to work out a lot better these days. As I’ve learned after over 1000 posts, the trick is to stay ahead of the weblog by a comfortably large margin, instead of having the daily publishing schedule dictate how to spend my time and my energy. This summer break will give me an excellent opportunity to relax, re-focus, and then re-launch into the next yearly cycle – IOW: onwards!

Until then, I’ll leave you with a view of one of the more chaotic corners of the JeeLabs work area:

2012 07 02 09 34

If physical computing – or even just technology in general – is your thing, then maybe some of these past 1075 posts will encourage you to follow your passion, nurture your curiosity, cherish your fascination, challenge your boundaries, and … be creative! Because there is infinite fun in creating and in learning from what others create.

To be continued in September. Have a wonderful time!

Note – Please send all questions about the shop, payments, and shipping to email address order_assistance at jeelabs dot org during the summer break – that way it will reach both the people handling the shop and me. Note also that I will be reading email only once a week during this period.

Assembling the EmonTX

In Hardware on Jun 29, 2012 at 00:01

The guys at OpenEnergyMonitor – hi Glyn and Trystan! – have been working on a number of open source energy monitoring kits for some time now. With solar panels coming here soon, I thought it’d be nice to try out their EmonTX unit – which is partly derived from a bunch of stuff here at JeeLabs. Here’s the kit I got recently:

DSC 3351

Following these excellent instructions, assembly was a snap (I added the 868 MHz RFM12B wireless module):

DSC 3352

Whee, assembling kits is fun! :)

I had some 30A current clamps from SeeedStudio lying around anyway, so that’s what I’ll be using.

The transformer is a 9 VAC type, to help the system detect zero crossings, so that real power factors can be calculated. Unfortunately, this transformer doesn’t (yet) power the system (but it now looks like it might in a future version), so this thing also needs either FTDI or USB to power it.

Here are my first updated measurement results, using the voltage_and_current.ino sample sketch in EmonLib:

    0.27 4.17 260.39 0.02 0.06 
    -0.02 0.71 260.77 0.00 -0.03 
    0.03 0.71 260.74 0.00 0.04 
    -0.02 0.71 260.56 0.00 -0.03 
    0.02 0.71 260.63 0.00 0.03 
    -100.93 105.81 260.88 0.41 -0.95 
    -97.68 102.16 260.94 0.39 -0.96 
    -99.07 104.42 260.98 0.40 -0.95 
    -97.15 102.57 260.74 0.39 -0.95 
    -97.69 102.29 260.91 0.39 -0.95 

The values printed out are:

  • realPower
  • apparentPower
  • Vrms
  • Irms
  • powerFactor
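
For reference, the voltage_and_current.ino example boils down to roughly this shape – the pin numbers and calibration constants below are placeholders which have to be tuned for the actual CT clamp and 9 VAC transformer:

    #include <EmonLib.h>

    EnergyMonitor emon1;

    void setup () {
      Serial.begin(9600);
      emon1.voltage(2, 234.26, 1.7);   // AC-AC adapter input pin, Vcal, phase shift
      emon1.current(1, 30);            // CT input pin, Ical
    }

    void loop () {
      emon1.calcVI(20, 2000);          // sample over 20 half-cycles, 2 s timeout
      emon1.serialprint();             // prints the five values listed above
    }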

These readings were made with a clamp on one wire of a 25W lightbulb load – first off, then on. The mains voltage estimated from the 9V transformer is a bit high – it’s usually about 230V around here. My plan is to measure and report two independent power consumers and one producer (the solar panel inverter), so I’ll dive into this in more detail anyway. But that’ll have to wait until after the summer break.

Speaking of which: the June discount ends tomorrow, just so you know…

Update – I have disconnected the burden resistors, since the SCT-013-030 has one built in. See comments.

Level interrupts

In Hardware, Software on Jun 26, 2012 at 00:01

The ATmega’s pin-change interrupt has been nagging at me for some time. It’s a tricky beast, and I’d like to understand it well to try and figure out an issue I’m having with it in the RF12 library.

Interrupts are the interface between real world events and software. The idea is simple: instead of constantly having to poll whether an input signal changes, or some other real-world event occurs (such as a hardware count-down timer reaching zero), we want the processor to “somehow” detect that event and run some code for us.

Such code is called an Interrupt Service Routine (ISR).

The mechanism is very useful, because this is an effective way to reduce power consumption: go to sleep, and let an interrupt wake up the processor again. And because we don’t have to keep checking for the event all the time.

It’s also extremely hard to do these things right, because – again – the ISR can be triggered any time. Sometimes, we really don’t want interrupts to get in our way – think of timing loops, based on the execution of a carefully chosen number of instructions. Or when we’re messing with data which is also used by the ISR – for example: if the ISR adds an element to a software queue, and we want to remove that element later on.

The solution is to “disable” interrupts, briefly. This is what “cli()” and “sei()” do: clear the “interrupt enable” and set it again – note the double negation: cli() prevents interrupts from being serviced, i.e. an ISR from being run.
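
The classic AVR idiom for such a critical section looks like this – the shared counter is just an example:

    #include <avr/interrupt.h>

    volatile uint8_t queuedEvents;    // data shared with an ISR

    static void consumeEvent () {
      uint8_t oldSREG = SREG;         // remember whether interrupts were enabled
      cli();                          // no ISR can run from here on
      if (queuedEvents > 0)           // safely touch the shared data
        --queuedEvents;
      SREG = oldSREG;                 // restore: interrupts come back on only if they were on
    }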

But this is where it starts to get hairy. Usually we just want to prevent an interrupt from being serviced right now – but we still want it to happen, a little later. And this is where level-interrupts and edge-interrupts differ.

A level-interrupt triggers as long as an I/O signal has a certain level (0 or 1) and works as follows:

JC s Grid page 22

Here’s what happens at each of those 4 points in time:

  1. an external event triggers the interrupt by changing a signal (it’s usually pulled low, by convention)
  2. the processor detects this and starts the ISR, as soon as its last instruction finishes
  3. the ISR must clear the source of the interrupt in some way, which causes the signal to go high again
  4. finally, the ISR returns, after which the processor resumes what it had been doing before

The delay from (1) to (3) is called the interrupt latency. This value can be extremely important, because the worst case determines how quickly our system responds to external interrupts. In the case of the RFM12B wireless module, for example, and the way it is normally set up by the RF12 code, we need to make sure that the latency remains under 160 µs. The ISR must be called within 160 µs – always! – else we lose data being sent or received.

The beauty of level interrupts, is that they can deal with occasional cli() .. sei() interrupt disabling intervals. If interrupts are disabled when (1) happens, then (2) will not be started. Instead, (2) will be started the moment we call sei() to enable interrupts again. It’s quite normal to see interrupts being serviced right after they are enabled!

The thing about these external events is that they can happen at the most awkward time. In fact, take it from me that such events will happen at the worst possible time – occasionally. It’s essential to think all the cases through.

For example: what happens if an interrupt were to occur while an ISR is currently running?

There are many tricky details. For one, an ISR tends to require quite a bit of stack space, because that’s where it saves the state of the running system when it starts, and then restores that state from when it returns. If we supported nested interrupts, then stack space would at least double and could easily grow beyond the limited amount available in a microcontroller with limited RAM, such as an ATmega or ATtiny.

This is one reason why the processor logic which starts an ISR also disables further interrupts. And re-enables interrupts after returning. So normally, during an ISR no other ISRs can run: no nested interrupt handling.

Tomorrow I’ll describe how multiple triggers can mess things up for the other type of hardware interrupt, called an edge interrupt – this is the type used by the ATmega’s (and ATtiny’s) “pin-change interrupt” mechanism.

Low power – µA’s in perspective

In AVR, Hardware on Jun 23, 2012 at 00:01

Ultra-low power computing is a recurring topic on this weblog. Hey – it’s useful, it’s non-trivial, and it’s fun!

So far all the experiments, projects, and products have been with the ATmega from Atmel. It all started with the ATmega168, but for some time now it has all been centered around the ATmega328P, where the “P” stands for picoPower.

There’s a good reason to use the ATmega, of course: it’s compatible with the Arduino and with the Arduino IDE.

With an ATmega328 powered by 3.3V, the lowest practical current consumption is about 4 µA – that’s with the watchdog enabled to get us back out of sleep mode. Without the internal watchdog, i.e. if we were to rely on the RFM12B’s wake-up timer, that power-down current consumption would drop considerably – to about 0.1 µA:

Screen Shot 2012 06 22 at 22 03 30

Whoa, that’s a factor 40 less! Looks like a major battery-life improvement could be achieved that way!

Ahem… not so fast, please.

As always, the answer is a resounding “that depends” – because there are other power consumers involved, and you have to look at the whole picture to understand the impact of all these specs and behaviors.

First of all, let’s assume that this is a transmit-only sensor node, and that it needs to transmit once a minute. Let’s also assume that sending a packet takes at most 6 ms. The transmitter draws 25 mA, so we have a 10,000:1 sleep/send ratio, meaning that the average current consumption of the transmitter will be 2.5 µA.

Then there’s the voltage regulator. In some scenarios, it could be omitted – but the MCP1702 and MCP1703 used on JeeNodes were selected specifically for their extremely low quiescent current draw of 2 µA.

The RFM12B wireless radio module itself will draw between 0.3 µA and 2.3 µA when powered down, depending on whether the wake-up timer and/or the low-battery detector are enabled.

That’s about 5 to 7 µA so far. So you can see that a 0.1 µA vs 4 µA difference does matter, but not dramatically.

I’ve been looking at some other chips, such as ATXmega, PIC, MSP430, and Energy Micro’s ARM. It’s undeniable that those ATmega328’s are really not the lowest power option out there. The 8-bit PIC18LF25K22 can keep its watchdog running with only 0.3 µA, and the 16-bit MSP430G2453 can do the same at 0.5 µA. Even the 32-bit ARM EFM32TG110 only needs 1 µA to keep an RTC going. And they add lots of other really neat extra features.

In terms of low power there are at least two more considerations: other peripherals and battery size / self-discharge.

In a Room Node, there’s normally a PIR sensor to detect and report motion. By its very nature, such a sensor cannot be shut off. It cannot even be pulsed, because a PIR needs a substantial amount of time to stabilize (half a minute or more). So there’s really no other option than to keep it powered on at all times. Well, perhaps you could turn it off at night, but only if you really don’t care what happens then :)

Trouble is: most PIR sensors draw a “lot” of current. Some over 1 mA, but the most common ones draw more like 150..200 µA. The PIR sensor I’ve found for JeeLabs is particularly economical, but it still draws 50..60 µA.

This means that the power consumption of the ATmega really becomes almost irrelevant. Even in watchdog mode.

The other variable in the equation is battery self-discharge. A modern rechargeable Eneloop cell is quoted as retaining 85% of its charge over 2 years. Let’s assume its full charge is 2000 mAh, then that’s 300 mAh loss over 2 years, which is equivalent to about 17 µA of continuous self-discharge.

Again, the 0.1 µA vs 4 µA won’t really make such a dramatic difference, given this figure. Definitely not 40-fold!

As you can see, every microamp saved will make a difference, but in the grand scheme of things, it won’t double a battery’s lifetime. There’s no silver bullet, and that Atmel ATmega328 remains a neat Arduino-compatible option.

That doesn’t mean I’m not peeking at other processors – even those that don’t have a multi-platform IDE :)

Boost revisited

In Hardware on Jun 4, 2012 at 00:01

The AS1323 boost converter mentioned a while back claims an extra-ordinarily low 1.65 µA idle current when unloaded. At the time, I wasn’t able to actually verify that, so I’ve decided to dive in again:

Screen Shot 2012 05 17 at 15 28 15

A very simple circuit, but still quite awkward to test, due to its size:

DSC 3224

Bottom right is incoming power, bottom left is boosted 3.3V output voltage. Input voltage is 1.65V for easy math.

The good news is that it works, and it shows me an average current draw of 4.29 µA:


The yellow line is the output voltage, with its characteristic boost-decay cycle. For reference: the top of the graph is at 3.45V, so the output voltage is roughly between 3.30 and 3.36V (it rises a bit with rising supply voltage).

The blue line is the voltage over a resistor inserted between supply ground and booster ground. I’m using 10 Ω, 100Ω, or 1 kΩ, depending on expected current draw (to avoid a large burden voltage). So this is the input current.

The red line is the accumulated current, but it’s not so important, since the scope also calculates the mean value.

Note that there’s some 50 Hz hum in the current measurement, and hence also in its integral (red line).

Aha! – and here’s the dirty little secret: the idle current is specified in terms of the output voltage, not the input voltage! So in case of a 1.65V -> 3.3V idle setup, you need to double the current (since we’re generating it from an input half as large as the 3.3V out), and you need to account for conversion losses!

IOW, for 100% efficiency, you’d expect 1.6 µA * (3.3V / 1.65V) = 3.2 µA idle current. Since the above shows an average current draw of 4.29 µA, this is about 75% efficient.

Not bad. But not that much better than the LTC3525 used on the AA Power Board, which was ≈ 20 µA, IIRC.

More worrying is the current draw when loaded with 10 µA, which is more similar to what a sleeping JeeNode would draw, with its wireless radio and some sensors attached:


Couple of points to note, before we jump to conclusions: the boost regulator is now cycling at a bit higher frequency of 50 Hz. Also, I’ve dropped the incoming voltage to a more realistic 1.1V, i.e. 1/3rd of the output.

With a perfect circuit, this means the input current should be around 30 µA, but it ends up being about 52 µA, i.e. 57% efficiency. I have no idea why the efficiency is so low, would have expected about 70% from the datasheet.

Further tests with 1.65V in show that 1 µA out draws 6.72 µA, 10 µA out draws 29.6 µA, 100 µA out draws 261 µA, 1 mA out draws 2.51 mA, and 10 mA out draws 30.9 mA. Not quite the 80..90% efficiency from the datasheet.

My hunch is that the construction is affecting optimal operation, and that better component choices may need to be made – I just grabbed some SMD caps and a 10 µH SMD inductor I had lying around. More testing needed…

For maximum battery life, the one thing which really matters is the current draw while the JeeNode is asleep, since this is the state it spends most of its time in. So minimal consumption with 5..10 µA out is what I’m after.

To keep things in perspective: 50 µA average current drawn from one 2000 mAh AA cell should last over 4 years. A JeeNode with Room Board & PIR (drawing 50 µA, i.e. 200 µA from the battery) should still last about a year.

Update – when revisiting the AA Power Board, I now see that it uses 25 µA from 1.1V with no load, and 59 µA with 10 µA load (down to 44 µA @ 1.5V in). The above circuit works (but does not start) down to 0.4V, whereas the AA Power Board works down to 0.7V – such low voltages are not really that useful, since they increase the current draw and the battery dies quickly thereafter. Another difference is that the above circuit will work up to 2.3V (officially only 2.0V), and the AA Power Board up to at least 6V (which is out of spec), switching into step-down mode in this case.

Improved VCC measurement

In AVR, Software on May 12, 2012 at 00:01

As shown in this post, it is possible to read out the approximate level of VCC by comparing the internal 1.1 V bandgap with the current VCC level.

But since this is about tracking battery voltage on an ultra-low power node, I wanted to tinker with it a bit further, to use as little energy as possible when making that actual supply voltage measurement. Here’s an improved bandgap sketch which adds a couple of low-power techniques:

Screen Shot 2012 05 09 at 15 42 39

First thing to note is that the ADC is now run in noise-reducing mode, i.e. a special sleep mode which turns off part of the chip to let the ADC work more accurately. With the nice side-effect that it also saves power.

The other change was to drop the 250 µs busy waiting, and use 4 ADC measurements to get a stable result.

The main delay was replaced by a call to loseSomeTime() of course – the JeeLib way of powering down.

Lastly, I changed the sketch to send out the measurement results over wireless, to get rid of the serial port activity which would skew the power consumption measurements.
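
The screenshot above doesn’t reproduce well here, but the general shape of such a vccRead() is roughly as follows – reconstructed from the earlier bandgap post, so the details may differ a bit from the actual code:

    #include <avr/sleep.h>
    #include <avr/interrupt.h>

    EMPTY_INTERRUPT(ADC_vect);        // the ADC-complete interrupt only needs to wake us up

    static byte vccRead (byte count = 4) {
      set_sleep_mode(SLEEP_MODE_ADC); // ADC noise reduction mode
      ADMUX = bit(REFS0) | 14;        // reference = VCC, input channel 14 = 1.1V bandgap
      bitSet(ADCSRA, ADIE);           // let the ADC interrupt wake us up again
      while (count-- > 0)
        sleep_mode();                 // entering ADC sleep starts a conversion
      bitClear(ADCSRA, ADIE);
      // only the last reading is used - the earlier ones let the bandgap input settle;
      // scale to one byte: about 200 for 5V, about 147 for a 4V battery
      return (55U * 1023U) / (ADC + 1) - 50;
    }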

Speaking of which, here is the power consumption during the call to vccRead() – my favorite graph :)


As usual, the red line is the integral of the current, i.e. the cumulative energy consumption (about 2300 nC).

And as you can see, it takes about 550 µs @ 3.5 mA current draw to perform this battery level measurement. The first ADC measurement takes a bit longer (25 cycles i.s.o. 13), just like the ATmega datasheet says.

The total of 2300 nC corresponds to a mere 2.3 µA average current draw when performed once a second, so it looks like calling vccRead() could easily be done once a minute without draining the power source very much.

The final result is pretty accurate: 201 for 5V and 147 for a 4V battery. I’ve tried a few units, and they all are within a few counts of the expected value – the 4-fold ADC readout w/ noise reduction appears to be effective!

Update – The latest version of the bandgap sketch adds support for an adjustable number of ADC readouts.

How low can it go?

In AVR, Hardware on May 7, 2012 at 00:01

While experimenting with various alternate power sources for a JeeNode, I was curious as to just how low it could go in terms of voltage and still function as a simple wireless transmit node.

Made the following mods to push things a bit more than usual:

  • adjusted the fuses to set the brownout level to 1.8V iso 2.7V (efuse: 0x06)
  • changed the RFM12B’s low-battery level to 2.2V iso 3.1V (rf12_control: 0xC040)
  • removed the voltage regulator from a JeeNode, and keep just the electrolytic cap
  • changed the radioBlip sketch to run at 8 MHz, i.e. the 16 MHz clock divided by 2 (see the snippet below)
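
That last item can be done at run time with a couple of lines – one way to do it, which may not be exactly how radioBlip handles it:

    #include <avr/power.h>

    void setup () {
      // halve the 16 MHz system clock to 8 MHz, friendlier to low supply voltages
      clock_prescale_set(clock_div_2);
      // ... rest of the radioBlip setup goes here ...
    }

    void loop () {}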

This is the same setup as with the Tiny Lithium discharge setup described a few days ago, BTW.

Here’s the JeeNode-under-test (JUT?) – the cap I used here is again 100 µF:

DSC 3070

One pair of wires is from the power supply, the other from the multimeter.

And then it’s just a matter of hooking it up to a power supply and gradually lowering the supply voltage.

And the result is … 3.0, 2.9, 2.8, 2.7, 2.6, 2.5, 2.4, 2.3, 2.2, 2.1, 2.0, 1.9, 1.85 Volt still works!

Anything lower than that and the sketch stops sending out packets once a minute – but then again, that’s probably just the brownout detector of the ATmega kicking in!

To get it back up, I re-connected the power supply at 2.1 V and the node started its blips again… lower didn’t work, my hunch is that the RFM12B’s clock circuit needs that slightly higher voltage level to start oscillating.

TD – Cost Control

In Hardware on Mar 20, 2012 at 00:01

Welcome to the Tuesday Teardown series, about looking inside the technology around us.

Over two years ago (gosh, time flies), I reported about a low-cost AC metering device called Cost Control:

It seems to be available from several sources, not just Conrad and ELV, under different brand names. Not sure they are identical on the inside, but the interesting bit is that they transmit on 868 MHz and seem to go down to fairly low power levels as well as all the way up to 16A:

DSC 2976

So let’s have a look inside, eh? Here’s the back side of the PCB:

DSC 2971

Not much to see, other than a thick bare copper wire, which probably acts as the shunt resistor.

The rest appears to be built around 3 main chips, two of which are epoxied in, so I can’t see what they are:

DSC 2970

Flipping this thing over, we can see the different sections. I had expected a special purpose AC power measuring chip, but it looks like this thing is built around a quad LM2902 op-amp:

DSC 2972

Note the discrete diode soldered on the flip side – the topmost solder joint looks pretty bad!

The rest of the analog circuitry, plus an MPU of some kind running at 4-something MHz, is here:

DSC 2974

The 24LC02 is a 2 Kbit I2C EEPROM, for the node ID and some calibration constants, I presume.

And here’s the wireless transmitter, running off a 16 MHz crystal:

DSC 2975

Being 16 MHz, it’s a bit unlikely that this is a HopeRF RFM12B (or its transmit-only variant), alas. The blob at the center bottom goes to an antenna wire on the other side of the board.

Would love to be able to decode the wireless signal (1 packet every 5s, very nice!). Either that, or find out how they are measuring the power from 1..3600W – the remote actually displays in tenths of a Watt.

PS – See also this forum discussion about decoding.

Pick a frequency – any frequency

In Hardware on Mar 19, 2012 at 00:01

A week ago, there was a post about various clock options and their accuracy.

These clocks generate a stable pulse or sinewave, basically. But what if you need a different frequency?

Suppose you get a very accurate 1 pulse-per-second (i.e. 1 Hz) signal from somewhere, but you want to keep track of time in microseconds? IOW, you need a 1 MHz clock, preferably just as accurate. One way to do this is to use a “Voltage Controlled Oscillator” (VCO). It can be any frequency really – the idea is to divide its output down to 1 Hz and then compare it with your reference clock. If it’s either too slow or too fast, adjust the voltage used to set the precise frequency of the VCO, and bingo – within no time (heh, so to speak), your VCO will be “locked” onto the reference and generate its target frequency, at just about the same accuracy as the 1 pps reference.

My Rubidium clock came with a 63.8976 MHz VCO as part of the bargain:

DSC 2920

With no control voltage applied, it generates a sinewave-ish signal at a very high frequency from just a 3.3V power supply:


That frequency is not as awkward as it looks: 638976 = 3 * 13 * 16384, so you can get 100 Hz out of it with a few simple dividers, as well as any integral fraction of that (including 1 Hz). Another way of going about this is to divide the clock by a simple power of two, say 256 or 4096, and then pass the resulting square wave to an ATmega’s timer/counter input. I haven’t hooked up this VCO to the Rb clock yet, since there’s a bit more logic involved – look up “phase locked loop” (PLL) if you’re interested.
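
To make that last idea concrete, the divided-down output can be counted with Timer1's external clock input and latched by the 1 pps reference. Here's a minimal sketch of that approach – the wiring (divided signal on the T1 pin, i.e. Arduino digital 5, and the 1 pps pulse on INT0, i.e. digital 2) is my own assumption, and the registers are specific to an ATmega328:

    volatile unsigned int lastCount;
    volatile bool newCount;

    void onPps () {
        lastCount = TCNT1;    // snapshot the number of pulses seen in the last second
        TCNT1 = 0;
        newCount = true;
    }

    void setup () {
        Serial.begin(57600);
        TCCR1A = 0;                                  // plain counting mode
        TCCR1B = _BV(CS12) | _BV(CS11) | _BV(CS10);  // clock Timer1 from the T1 pin, rising edge
        TCNT1 = 0;
        attachInterrupt(0, onPps, RISING);           // 1 pps reference on INT0
    }

    void loop () {
        if (newCount) {
            newCount = false;
            Serial.println(lastCount);   // 63.8976 MHz / 4096 should read 15600 counts/s
        }
    }

Comparing that count against the expected value tells you which way to nudge the VCO's control voltage – which is the essence of a PLL, just done in software and very slowly.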

Another source of very stable clock signals is the GPS navigation system (see also this note). Their clocks used to be made a little bit jittery for civilian use, but this averages out over time, so you can still lock onto it and get a very accurate long-term reference. Look up Allan variance to find out more about short- vs long-term stability – it's fascinating stuff, but as with most things: once you get into the details it can become quite complex.

To summarize: with a VCO you can produce any frequency you like given some stable reference. So I’m happy with my 10 MHz @ 10 ppt atomic clock, for those rare cases when I’ll need it. And for its geek factor, of course…

Do all these extreme accuracies matter? Well, apart from TDMA, think of it this way: an 868 MHz RFM12B wireless radio with 1 ppm accuracy may be off by 868 Hz. That’s no big deal because the RFM12B’s receiver uses Automatic Frequency Control (AFC) to tune itself into the incoming signal, but with bandwidths in the kilohertz range, you can see that all of a sudden a couple of ppm isn’t so academic any more!

Rubidium Clock – part 2

In Hardware on Mar 11, 2012 at 00:01

After yesterday’s intro of my “get your own atomic clock”, which is really just doodling, here’s the next step:

DSC 2952

The clock, and the PCB panel it came attached to, has been placed in an all-plastic enclosure along with a little 15V @ 1.7A switching power supply. This thing needs quite a bit of power and actually gets quite hot. Nevertheless, I expect that placing it inside this relatively small plastic enclosure will not be a problem because much of the heat seems to be generated simply to keep the Rubidium “physics package” inside at a certain fixed temperature. For that same reason, I suspect that the heat sink on which this clock is mounted is not so much meant to draw heat away, but to maintain a stable temperature and improve stability.

Speaking of stability… here are the specs of this unit from eBay:

Screen Shot 2012 03 10 at 13 26 51

To get an idea: a frequency stability of 10 to the power -11 works out to roughly 0.3 milliseconds of error per year!

This particular unit (they are not all identical, even when called “FE-5680A”) also needs a 5V logic supply.

I haven’t yet decided how to bring out various signals, so I’ll hook up the 50Ω BNC connector on the back first and wait with the rest. Also needed: a LED power-on light, LED indicators for the “output valid” and “1 pulse-per-second” signals (via a one-shot to extend the 1 µs second pulse), and a 7805 regulator. Here’s the front – so far:

DSC 2953

I don’t intend to keep this energy-drain running at all times, but it’ll be there at the flick of a switch to generate a stable 10 MHz signal when needed. One of the things you can do with it is calibrate other clocks, and compare their accuracy + drift over time and temperature.

Geeky stuff. For a lot more info about precise time and frequency tracking, see the Time Nuts web site.

Tomorrow, I’ll describe some of the trade-offs w.r.t. time for JeeNodes and wireless sensors.

Can’t be done

In Hardware on Jan 27, 2012 at 00:01

As you may know from posts a short while ago, I’ve been working on creating an ultra-low power supply, providing just enough energy to a JeeNode or JeeNode Micro to let it do a little work, report some data over wireless, and then go to sleep most of the time.

I even designed a PCB for this thing and had a bunch of them produced:

Screen Shot 2012 01 25 at 01 57 45

The good news is that it works as intended and that I’ll be using this circuit for some projects.

The bad news is that they won’t be available as kits in the shop. Ironically, this was the first time where I actually had a batch of kits all wrapped up and ready to go, ahead of time.

But the reality is that I can’t pull it off. For two different reasons:

  • The circuit is connected to live AC mains @ 230 VAC, which means there is a serious risk that you build this stuff, try it out, and hurt yourself because of some mistake. And even after that, there is the risk that the finished circuit is not properly insulated, exposing these voltages (through nothing more than humidity and condensation).

  • The other risk is that once everything works, it gets built-in for permanent use and becomes part of your house. What if it gets wet or malfunctions for some other reason, and your house burns down?

As supplier, I’d be liable (rightly so, BTW – there is no excuse for selling stuff which might be dangerous).

The hardest part of all is that even if an accident has nothing to do with this Low-power Supply, I would still have to prove that this stuff is safe under all circumstances and that it complies with all regulations!

I’m not willing to go there. Life’s too short and I don’t have the pushing power to go through it all.

Having said this, I do intend to use this supply myself and create all sorts of nodes for use here at JeeLabs. Because I know the risks, I know which failsafe features have been built into the supply, and I’m ok with it:

DSC 2894

The design is available in the Café, to document what I’ve done and for others to do whatever they like with it.

I’m not happy about this decision, in fact I hate it. I’m really proud of finding out that it is possible to create sensor nodes which run off just 12 mW of AC mains power. But the right thing to do is to stop here.

Frequency generator

In Hardware on Jan 11, 2012 at 00:01

A long time ago, I got this DDS-60 kit, which is a small circuit based on an AD9851 DDS chip:

DDS 60  top 400a

It has everything on board to generate a sine wave from 0..30 MHz, and my intention was to hook it up to a JeeNode (as part of a long term plan of mine to set up a more extensive wirelessly-controlled electronics lab):

DSC 2850

Never got it to work at the time, but now with the new scope, there really is no excuse anymore. First check, as indicated in their build instructions, is to verify that the crystal oscillator is feeding a 30 MHz clock into the chip:


Looking good. Very impressive rise and fall times, BTW.

When driven at 30 MHz, the AD9851 output frequency is settable in steps of 0.006984919 Hz. In other words, a multiplier of 1000 will generate a sine wave of ≈ 7 Hz.
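
For reference, here's a rough idea of how such a multiplier (the 32-bit tuning word) can be computed and bit-banged into the chip. The pin choices are mine, and the serial-load details (LSB first, 4 tuning bytes followed by a control byte whose lowest bit enables the internal 6x clock multiplier) are based on the AD9851 datasheet rather than the DDS-60 documentation, so treat this as a sketch:

    #define DDS_DATA  5   // arbitrary pin choices
    #define DDS_WCLK  6
    #define DDS_FQUD  7

    static void sendByte (byte value) {
        for (byte i = 0; i < 8; ++i, value >>= 1) {   // least significant bit first
            digitalWrite(DDS_DATA, value & 1);
            digitalWrite(DDS_WCLK, HIGH);             // one W_CLK pulse per bit
            digitalWrite(DDS_WCLK, LOW);
        }
    }

    static void setFrequency (uint32_t hz, uint32_t refClock, byte use6x) {
        uint32_t tuning = ((uint64_t) hz << 32) / refClock;
        for (byte i = 0; i < 4; ++i, tuning >>= 8)
            sendByte(tuning & 0xFF);        // 32-bit tuning word, LSB byte first
        sendByte(use6x ? 0x01 : 0x00);      // control byte, bit 0 = 6x refclk multiplier
        digitalWrite(DDS_FQUD, HIGH);       // latch the 40-bit word into the DDS
        digitalWrite(DDS_FQUD, LOW);
    }

    void setup () {
        pinMode(DDS_DATA, OUTPUT);
        pinMode(DDS_WCLK, OUTPUT);
        pinMode(DDS_FQUD, OUTPUT);
        setFrequency(10000000, 30000000, 0);   // 10 MHz from the plain 30 MHz clock
    }

    void loop () {}

With the 30 MHz reference, that call produces the 1431655765 multiplier used below – the 6x option will come up in a moment.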

Here’s the output when programmed to generate 10 MHz (multiplier 1431655765, i.e. 9999999.9977 Hz):


Whoa… it’s 10 MHz, but a far cry from a sine wave. Ah, but that’s not really surprising: this thing uses DDS to synthesize a sine wave, as recently described on this weblog. With 30 MHz sample rate, i.e. 3 samples per wave, it’s not really possible to create a decent 10 MHz sine wave (not even a symmetrical shape in fact, as you can see).

But the AD9851 has a trick up its sleeve: it includes a “6x” multiplier option, which causes it to internally generate a reference frequency which is 6x the incoming clock, i.e. 180 MHz in this case.

Using that, and adjusting the frequency setting to work at 180 MSa/sec, we get a much better approximation:


Still not perfect, but by analyzing the FFT of this signal, we can find out what’s going on:


This output signal is made up mostly of a 10 MHz sine, with another peak at 90 MHz.

Unfortunately, the output circuit on this board isn’t working yet (this is what probably threw me off when trying this circuit before), so I can’t test the effect of the 60 MHz low-pass filter yet. It won’t filter out the 30 MHz residue visible in that last picture, but should definitely reduce the frequency components over 60 MHz.

Ok, all the important bits seem to work – I “only” need to troubleshoot that analog back end a bit more.

Update – I found the problem: the SMD trimpot was fractured, i.e. no contact. I’ve replaced it with a fixed 220 Ω resistor for now – this brings the output to ≈ 680 mVpp (or 350 mVpp into 50 Ω) – the sine wave output is now considerably cleaner, but several of the frequency peaks ≥ 90 MHz are still present:


I suspect that the 30 MHz clock is “feeding through” somewhere, perhaps better decoupling would avoid that.

Happy tinkering in 2012!

In News on Jan 1, 2012 at 00:01

Happy 2012. We each have roughly 5,000 waking hours ahead of us in 2012. Let’s use ’em – slowly and wisely.

As my contribution to slowing down, I’d like to encourage everyone interested in Computing Stuff tied to the Physical World to deepen your understanding and broaden your experience. So allow me to introduce a little tinkering kit, for those of you who are into ATmega’s and wireless stuff – the JeeNode Block:

DSC 2827  Version 3

This is a recent experiment to fool around with the JeeNode form factor, as a way to create a little self-contained unit which needs no wires to operate. I’m using these blocks to try things out on my desktop (you know, the real physical one), without turning it into a huge spaghetti bowl of power supply wires, USB cables, and test hookups.

It’s basically just a JeeNode, but with a different layout (and RFM12B’s “INT” pin reallocated from PD2 to PB0):

DSC 2829

It’s exactly the right size to support simple low-cost 5×7 cm prototyping boards (lets not call ’em “shields”, ok?):

DSC 2828

The three headers at the bottom are: 8 digital I/O pins, 4 power pins, and 6 analog I/O pins. The two headers at the top are JeeNode ports 1 and 2. There’s a reset button, an LED, and an FTDI header for uploading new code. The 3x AA battery pack will power the whole thing at 3.6 .. 4.5 V, depending on the type of batteries used. There’s a regulator on board to run at 3.3V, as with all the other JeeNode variants.

Note that this is not a product in the shop. It’s just an exploration by yours truly. And it’s also a one-time offer:

As special encouragement to “start 2012 by tinkering”, I’ll add a JeeNode Block PCB and a prototype board for free to the first three dozen or so people who order a JeeNode from the shop and ask for it. You can then simply re-use all the JeeNode parts for this board (except for the JeeNode’s PCB), since everything is more or less the same. A few missing components will also be included: extra headers, an LED, and the reset button. To take advantage of this offer, select “JeeNode w/ extra Block” from the pop-up list on the order page. Note: this offer is limited to at most one Block per person.

If you come up with a neat project for the JeeNode Block, I encourage you to share your invention on the forum.

Happy 2012. With 5000 hours to discover your passion, extend your knowledge, and unleash your creativity.

The JeeNode, as seen from 15.24 km

In AVR, Hardware on Dec 19, 2011 at 00:01

(that’s 50,000 feet, or 9.47 miles – if those units mean more to you)

This post was prompted by a message on the forum, about what this whole “JeeNode” thing is, really.

Here are a JeeNode v6 and an Arduino Duemilanove, side by side:

DSC 2826

Let me start by saying, tongue-in-cheek: it’s all Arduino’s fault!

Because – let’s face it – the core of each Arduino and each JeeNode is really 95% the same: an Atmel AVR ATmega328P chip, surrounded by a teeny little bit of support circuitry and placed on a printed circuit board. So part of the confusion comes from the fact that the Arduino introduced its own conventions, moving it further away from the underlying common ATmega technology.

The differences between an Arduino Duemilanove and a JeeNode v6 – which resemble each other most – are:

  • the JeeNode has a “skinnier” shape, incompatible with Arduino “shields”
  • the Arduino runs at 5V, whereas the JeeNode runs at 3.3V (this carries through to all I/O pins)
  • the JeeNode includes a wireless radio module, called the RFM12B by HopeRF
  • the Arduino includes an FTDI <-> USB interface, while the JeeNode relies on an external one

There are many other differences, of course – so let’s continue this list a bit:

  • the Arduino’s “eco-system” is far, far bigger than the JeeNode’s (translation: everyone who finds out about JeeNodes probably already knows about the Arduino platform, and usually already has one or more of ’em)
  • this carries through to articles, websites, books, and discussion forums – Arduino is everywhere
  • you can do lots of stuff with an Arduino without ever touching a soldering iron, whereas the JeeNode is really not usable without some soldering (even if just to solder on a few pin headers)
  • different pinouts… it’s one big conspiracy to confuse everyone, of course! (just kidding: see below)

My reasons for coming up with the JeeNode have been documented in the past on this weblog, and can be summarized as: 1) running at 3.3V for lower power consumption and to better match modern sensors and chips, and 2) supporting a simpler form of expandability through the use of “plugs” – little boards which can be mixed and matched in many different combinations.

On the software side, JeeNodes remain fully compatible with the Arduino IDE, a convenient software environment for Windows, Mac, and Linux to develop “sketches” and upload them to the board(s).

The biggest stumbling block seems to be the way pins are identified. There are 4 conventions, all different:

  • Atmel’s hardware documentation talks about pins on its internal hardware ports, in a logical manner: so for example, there is a port “D” with 8 I/O pins numbered 0..7 – the sixth one would be called PD5.
  • Then there is the pin on the chip, this depends on which chip and which package is being referred to. On the 28-pin DIP package used for an ATmega328P, that same PD5 pin would be identified as pin 11. That’s the 11th pin, counting from the left side of the chip with pin 1 at the top.
  • The Arduino run-time library has software to control these pins. For a digital output pin, you can set it to “1” for example, by writing digitalWrite(5,1). This resembles PD5, but it fails for other pins (PB0 is “8” in Arduino-land, and PC1 is “1” if used as an analog input, or “15” if used otherwise – go figure…).
  • The JeeNode organizes several pins as part of 6-pin "Ports" (no relation to Atmel's terminology!), each of them having 1 digital and 1 analog-or-digital pin.

The thing about JeeNode Ports is that there are 4 of them, and they can all be used for plugs in the same way. To support this, there’s a Ports library which lets you define port objects. This is an abstraction layer on top of the Arduino runtime. The reason is that it lets you associate a port object with a header on the JeeNode:

    Port myport (2);

Then you can connect your hardware / sensor / plug / whatever to the header marked “P2” on the JeeNode, and access it as follows:
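
    myport.digiWrite(1);   // i.e. set the DIO pin of port 2 to "1"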


This happens to be the same pin as in the examples above, i.e. PD5 of an ATmega, pin 11 of the 28-pin DIP chip, and digitalWrite(5,1) on an Arduino. This also means that there are numerous ways to perform the same action of setting pin 11 of the chip to a logical “1” (i.e. 3.3V or 5V):

  • the “raw” C-level access, using Atmel’s register conventions and definitions (fastest, by far):

        PORTD |= 1 << 5;  // or ...
        PORTD |= _BV(5);  // same thing
        bitSet(PORTD, 5); // same thing, using an Arduino macro
  • the Arduino way of doing things:

        digitalWrite(5, 1);
  • the JeeNode Ports library way of doing things, as shown above:

        Port myport (2);
        myport.digiWrite(1);
  • … let’s throw in an extra bullet item, since every other list in this post appears to come in fours ;)

The one (minor) benefit you get from using the Ports approach on a JeeNode, is that if you attach your hardware to a different port, say port 3, then you only need to change a single line of code (to “Port myport (3);” in this case). The rest of the code, i.e. everywhere where its pins are being read or written, can then remain the same.

For an overview of all pinout differences, see also this weblog post. For full details, see the JeeNode PDF docs.

Messy signals

In Hardware on Dec 16, 2011 at 00:01

Digital circuits work with 0’s and 1’s, right?

Well, yes, but that doesn’t mean the analog voltages and currents are necessarily very “clean”. To fabricate a somewhat extreme example, I connected a JeeNode without regulator and without 10 µF capacitor to a 3x AA battery pack, and made it run this simple loop:

Screen Shot 2011 12 01 at 17 07 09

Sleep for 3 seconds, then send an SPI command to the RFM12B wireless module. Note that the RFM12B is not set to receive or transmit mode – the ATmega just sends it 2 bytes over SPI.
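
In terms of the usual Ports / RF12 library calls, that amounts to something like this – a reconstruction based on the description, not the actual code in the screenshot:

    #include <Ports.h>
    #include <RF12.h>

    ISR(WDT_vect) { Sleepy::watchdogEvent(); }   // needed by loseSomeTime()

    void setup () {
        rf12_initialize(1, RF12_868MHZ, 5);      // node id / band / group: arbitrary
        rf12_sleep(RF12_SLEEP);
    }

    void loop () {
        Sleepy::loseSomeTime(3000);   // power down for ≈ 3 seconds
        rf12_sleep(RF12_WAKEUP);      // SPI: wake the RFM12B from sleep
        rf12_control(0xFE00);         // SPI: software reset command
        rf12_sleep(RF12_SLEEP);       // SPI: put it right back to sleep
    }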

Let’s look at the variation in voltage and the current consumption (this shows the benefit of an MSO, BTW):


The ATmega wakes up, sends four 16-bit commands over SPI (the compressed timeline is a bit misleading) and powers off again. The whole process takes less than 200 µs. The four SPI transfers are: wake up the RFM12B, send it the 0xFE00 soft reset, and then two more to send the RFM12B back to sleep. You can even see the ≈ 0.6 mA baseline increase while the RFM12B is awake and idling. The SPI bus runs at 1 MHz in this example instead of 2 MHz, because the ATmega is running off its 8 MHz internal RC oscillator, but the sketch was compiled for 16 MHz.

The current spikes are not so important. It's normal for switching signals to consume a relatively large amount of energy (that's why turning off the clock saves so much power). The problem here is that these current fluctuations have such a large impact on the supply voltage – one of the spikes causes the supply to briefly drop more than 0.25V!

This is why “decoupling” capacitors are used around digital chips, even a lowly ATmega consuming just a few milliamps (it’s running at 8 MHz here, BTW). There is a 0.1 µF cap on the JeeNode board, but it’s not enough.

Here’s the same circuit, with both signals in close up:


Nasty stuff. I’m not 100% convinced that the real waveforms look exactly like this (the scope and probe might be distorting it a bit), but there’s no question that each SPI pin change has a substantial impact on the supply rail.

Here’s the same, with a 0.1 µF capacitor added near the battery pack:


And with a 470 µF electrolytic cap (both showing just the scope’s measurement results):


Note that the 0.1 µF cap has much more effect, relatively, than the 470 µF one. It’s better for HF noise reduction.

Does it matter? Yes, probably. Although all these setups work fine, the variation in voltage is fairly large, and could cause problems when operating at lower voltage levels, nearer the specified limits. Also, such currents might generate a fair bit of Electromagnetic Interference (EMI).

By adding more capacitors very near to the power consumers, i.e. the ATmega and RFM12B, this can be reduced. Such decoupling capacitors will act like little charge buffers, helping the supply cope with such sudden changes.

There’s much more to it than that (there always is). At switching frequencies of 1 MHz and above, the impedance of a wire starts to matter a lot. In fact, it’s amazing that digital circuits work at all – even without any HF design!

I’ll investigate further, but for now just remember: when in doubt, add caps … everywhere.

Inside the RF12 driver

In Software on Dec 10, 2011 at 00:01

This is the first of 3 posts about the RF12 library which drives the RFM12B wireless modules on the JeeNode, etc.

The RF12 driver is a small but reasonably complex bit of software. The reason for this is that it has some stringent time constraints, which really require it to be driven through interrupts.

This is due to the fact that the packet data rate is set fairly high to keep the transmitter and receiver occupied as briefly as possible. Data rate, bandwidth, and wireless range are inter-related and based on trade-offs. Based on some experimentation long ago, I decided to use 49.2 kBaud as data rate and 134 KHz bandwidth setting. This means that the receiver will get one data byte per 162 µs, and that the transmitter must be fed a new byte at that same rate.

With an ATmega running at 16 MHz, interrupt processing takes about 35 µs, i.e. roughly 20% of the time. It works down to 4 MHz in fact, with processor utilization nearing 100%.

Even with the ATtiny, which has limited SPI hardware support, it looks like 4 MHz is about the lower limit.

So how does one go about in creating an interrupt-driven driver?

The first thing to note is that interrupts are tricky. They can lead to hard-to-find bugs, which are not easy to reproduce and which happen only when you're not looking – because interrupts won't happen exactly the same way each time. And what's worse: they mess with the way compiled code works, requiring the use of the "volatile" qualifier to prevent compiler optimizations from caching too much. Just as with threads – a similar minefield – you need to prepare against all problems in advance and deal with weird things called "race conditions".

The way the RF12 driver works, is that it creates a barrier between the high-level interface (the user callable API), and the lower-level interrupt code. The public calls can be used without having to think about the RFM12B’s interrupts. This means that as far as the public API is concerned, interrupt handling can be completely ignored:

RF12 driver structure

Calling RF12 driver functions from inside other interrupt code is not a good idea. In fact, performing callbacks from inside interrupt code is not a good idea in general (for several reasons) – not just in the RF12 driver.

So the way the RF12 driver is used, is that you ask it to do things, and check to find out its current state. All the time-critical work will happen inside interrupt code, but once a packet has been fully received, for example, you can take as much time as you want before picking up that result and acting on it.

The central RF12 driver check is the call:

    if (rf12_recvDone()) ...

From the caller’s perspective, its task is to find out whether a new packet has been received since the last call. From the RF12’s perspective, however, its task is to keep going and track which state the driver is currently in.
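
In a typical sketch, that call gets combined with the CRC check and the packet buffer which the driver exposes as globals – roughly like this, assuming the usual rf12_initialize() call in setup():

    void loop () {
        if (rf12_recvDone() && rf12_crc == 0) {
            // a valid packet has arrived: rf12_len bytes, starting at rf12_data[0]
            for (byte i = 0; i < rf12_len; ++i) {
                Serial.print(' ');
                Serial.print((int) rf12_data[i]);
            }
            Serial.println();
            if (RF12_WANTS_ACK)
                rf12_sendStart(RF12_ACK_REPLY, 0, 0);   // send an empty ACK if one was requested
        }
    }

None of this is time-critical: by the time rf12_recvDone() reports a packet, the interrupt code has already done its work and the driver is simply holding on to the result.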

The RF12 driver can be in one of several states at any point in time – these are, very roughly: idling, busy receiving a packet, packet pending in buffer, or busy transmitting. None of these can happen at the same time.

These states are implemented as a Finite State Machine (FSM). What this means, is that there is a (private) variable called “rxstate“, which stores the current state as an integer code. The possible states are defined as a (private) enum in rf12.cpp (but it can also have a negative value, this will be described later).

Note that rxstate is defined as a “volatile” 8-bit int. This is essential for all data which can be changed inside interrupt code. It prevents the compiler from applying certain optimizations. Without it, strange things happen!

So the “big picture” view of the RF12 driver is as follows:

  • the public API does not know about interrupts and is not time-critical
  • the interrupt code is only used inside the driver, for time-critical activities
  • the RF12 driver is always in one of several well-defined “states”, as stored in rxstate
  • the rf12_recvDone() call keeps the driver going w.r.t. non time-critical tasks
  • hardware interrupts keep the driver going for everything that is time-critical
  • “keeping things going” is another way of saying: adjusting the rxstate variable

In a way, the RF12 driver can be considered as a custom single-purpose background task. It’ll use interrupts to steal some time from the running sketch, whenever there is a need to do things quickly. This is similar to the milliseconds timer in the Arduino runtime library, which uses a hardware timer interrupt to keep track of elapsed time, regardless of what the sketch may be doing. Another example is the serial port driver.

Interrupts add some overhead (entering and exiting interrupt code is fairly tedious on a RISC architecture such as the ATmega), but they also make it possible to “hide” all sorts of urgent work in the background.

In the next post, I’ll describe the RF12 driver states in full detail.

PS. This is weblog post number 900 ! (with 3000 comments, wow)

Same RFM12B’s, but flatter

In Hardware on Dec 3, 2011 at 00:01

This is to announce that from now on JeeNodes will be fitted with a different type of RFM12B wireless module:

DSC 2681

Previous module on the left, new module on the right.

The difference? It’s just a bit flatter, that’s all. As you can see, it’s the same board – with a low-profile crystal:

DSC 2682

For most purposes, the change is irrelevant. The module is electrically identical: same pinout & commands, same RF12 library, etc. But in some scenarios, the lower profile might be useful, i.e. when the MPU is also SMD.

This is also why the flat model is going to be used with the new JeeNode Micros.

Running off a 6800 µF cap

In Hardware, Software on Nov 15, 2011 at 00:01

The running on charge post described how to charge a 0.47 Farad supercap with a very small current, which drew only about 0.26 W. A more recent post improved this to 0.13 W by replacing the voltage-dropping resistor by a special “X2” high voltage capacitor.

Nice, but there was one pretty awkward side-effect: it took ages to charge the supercap after being plugged-in, so you had to wait an hour until the sensing node would start to send out wireless packets!

As it turns out, the supercap is really overkill if the node is sleeping 99% of the time in ultra low-power mode.

Here’s a test I did, using a lab power supply feeding the following circuit:

JC s Doodles page 21

The resistor is dimensioned in such a way that it’ll charge the capacitor with 10 mA. This was a mistake – I wanted to use 1 mA, i.e. approximately the same as 220 kΩ would with AC mains, but it turns out that the ATtiny code isn’t low-power enough yet. So for this experiment I’m just sticking to 10 mA.

For the capacitor, I used a 6,800 µF 6.3V type. Here’s how it charges up under no load:

DSC 2745

A very simple RC charger, with zener cut-off. So this thing is supplying 3.64 V to the circuit within mere seconds. That’s with 10 mA coming in.

Then I took the radioBlip sketch, and modified it to send out one packet every second (with low-power sleeping):

DSC 2746

The blue line is the serial output, which are two blips caused by this debug code around the sleep phase:

Screen Shot 2011 11 02 at 17 30 23

This not only makes good markers, it’s also a great way to trigger the scope. Keep in mind that the first blip is the ‘b’ when the node comes out of sleep, and the second one is the ‘a’ when it’s about to go sleeping again.

So that’s roughly 10 ms in the delay, then about 5 ms to send the packet, then another 10 ms delay, and then the node enters sleep mode. The cycle repeats once a second, and hence also the scope display refresh.
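
For reference, the modified loop amounts to something like this – a reconstruction based on the blips and timing just described, using the usual Ports / RF12 calls, not the exact radioBlip code:

    #include <Ports.h>
    #include <RF12.h>

    ISR(WDT_vect) { Sleepy::watchdogEvent(); }

    static byte payload[4];   // the 4-byte radioBlip payload

    void setup () {
        Serial.begin(57600);
        rf12_initialize(22, RF12_868MHZ, 5);   // node id / band / group: arbitrary here
        rf12_sleep(RF12_SLEEP);
    }

    void loop () {
        Serial.print('b');            // first blip, right after waking up
        delay(10);                    // first ≈ 10 ms delay
        rf12_sleep(RF12_WAKEUP);
        while (!rf12_canSend())
            rf12_recvDone();
        rf12_sendStart(0, payload, sizeof payload);
        rf12_sendWait(2);             // ≈ 5 ms: RFM12B transmitting, ATmega in standby
        rf12_sleep(RF12_SLEEP);
        delay(10);                    // second ≈ 10 ms delay
        Serial.print('a');            // second blip, just before going back to sleep
        delay(3);                     // let the last serial byte get out
        Sleepy::loseSomeTime(1000);   // ultra low-power sleep, ≈ 1 s cycle
    }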

The yellow line shows the voltage level of the power supply going into the JeeNode (the scale is 50 mV per division, but the 0V baseline is way down, off the display area). As you can see, the power drops about 40 mV while the node does its thing and sends out a packet.

First conclusion: a 6,800 µF capacitor has plenty of energy to drive the JeeNode as part of a sensor network. It only uses a small amount of its charge as the JeeNode wakes up and starts transmitting.

But now the fun part: seeing how little the voltage drops, I wanted to see how long the capacitor would be able to power the node without being “topped up” with new charge.

Take a look at this scope snapshot:

DSC 2747

I turned on “persistence” so that old traces remain on the screen, and then turned off the lab power supply. What you’re seeing is several rounds of sending a packet, each time with the capacitor discharged a little further.

The rest of the time, the JeeNode is in ultra low-power mode. This is where the supply capacitor gets re-charged in normal use. In that last experiment it doesn’t happen, so the scope trace runs off the right edge and comes back at the same level on the left, after the next trigger, i.e. 1 second later.

Neat huh?

The discharge is slightly higher than before, because I changed the sketch to send out 40-byte packets instead of 4. In fact, if you look closely, you can see three discharge slopes in that last image:

JC s Doodles page 21

A = the first delay(10) i.e. ATmega running
B = packet send, i.e. RFM12B transmitting, ATmega low power
C = the second delay(10), only ATmega running again

Here I’ve turned up the scale and am averaging over successive samples to bring this out more clearly:

DSC 2750

You can even “see” the transmitter startup and the re-charge once all is over, despite the low resolution.

So the conclusion is that even a 6,800 µF capacitor is overkill, assuming the sketch has been properly designed to use ultra low-power mode. And maybe the 0.13 W power supply could be made even smaller?

Amazing things, them ATmega’s. And them scopes!

A “beefy” power supply

In Hardware on Nov 14, 2011 at 00:01

In a comment on the daily weblog, Jörg pointed to a very interesting chip which can directly switch 220 V.

All the parts are available as through-hole, so I decided to give it a go:

DSC 2743

I used the LNK302, with a 2.00 kΩ / 2.32 kΩ 1% resistance divider to select the output voltage. At the left there's a fusible 100 Ω resistor, a diode, and a 3.3 µF (400V!) electrolytic cap for (high-voltage) DC input.

The circuit officially only works with input voltages above 70 V, but that’s a conservative spec. It actually works fine from my 30 VDC lab supply, which means I can safely poke around in it and see how it behaves.

Time to fire up the scope again. Here’s the output with a 1 mA load:

DSC 2736

Channel 1 (yellow) is the output, but AC coupled, i.e. just the fluctuations, while channel 2 (blue) is hooked up to the same pin but in DC-coupled mode.

As you can see, the output is roughly 3.8V with brief but fairly large spikes of almost 0.3V. Basically, the switching chip periodically connects the input voltage to the output (through an inductor, and charging a 100 µF cap).

The fun begins when you start loading the supply a bit. Here’s what it does at 10 mA:

DSC 2737

Similar spikes, at roughly 10 KHz (quite a bit of variation in timing). Now 25 mA:

DSC 2738

More of the same, the repetition rate doubles to around 20 KHz, and the voltage drops a bit. Let’s go for 50 mA:

DSC 2739

It’s getting a bit jittery now, doubling its frequency every once in a while. And here’s 75 mA:

DSC 2741

Nice and steady output, the ripple voltage is under 0.2V now. Still holding at 3.2V.

Can we pull more current out of this circuit? Not really, I’m afraid – see what happens at around 80 mA:

DSC 2742

Going full speed now at around 65 KHz, but there’s simply not enough energy: the output collapses to 1.32 V.

With roughly 70 mA @ 3.2 V, input power consumption is about 20 mA @ 30 V. This isn’t stellar (37% efficiency), but also not really indicative of what it will do at 220 V, since I’m running the chip way out of spec.

I’ll need to do some tests at the full 220 VAC to make sure this behavior is similar under real-world conditions, but from what I can tell, 50..65 mA is probably about the limit of what this circuit can supply at about 3.3V. Which would be plenty for a JeeNode in full transmit mode BTW, including some additional circuitry around it.

One problem is the fairly large ripple voltage. It would be better to dimension the circuit for a 5V output, or even 12V, and then add the usual linear regulator to get it down to 3.3V for the logic circuit. This could actually be quite practical in combination with a small 12V relay (which isn’t affected by such voltage fluctuations).

Note that a circuit like this – even if it were to supply only 5 mA – would be plenty to drive a JeeNode which sits mostly in low-power mode and only occasionally needs to activate its RFM12B wireless module.

So all in all: a very interesting (non-isolated) option!

Update – Also ok on 220 V: 65 mA @ 3.0 V (draws 1.25 W, i.e. 15 %). With 2 mA @ 3.7 V, power consumption is 0.40 W (vs. ≈ 8 mW delivered, i.e. 2 % efficiency). At 80 mA, the voltage drops to 2.5 V – above that it collapses.

A sketch for the LED Node

In AVR, Software on Nov 4, 2011 at 00:01

The LED Node presented yesterday needs some software to make it do things, of course. Before writing that sketch, I first wrote a small test sketch to verify that all RGB colors worked:

Screen Shot 2011 10 26 at 20 59 17

That’s the entire sketch, it’ll cycle through all the 7 main colors (3 RGB, and 4 combinations) as well as black. There’s some bit trickery going on here, so let me describe how it works:

  • a counter gets incremented each time through the loop
  • the loop runs 200 times per second, due to the “delay(5)” at the end
  • the counter is a 16-bit int, but only the lower 11 bits are used
  • bits 0..7 are used as brightness level, by passing that to the analogWrite() calls
  • when bit 8 is set, the blue LED’s brightness is adjusted (pin 5)
  • when bit 9 is set, the red LED’s brightness is adjusted (pin 6)
  • when bit 10 is set, the green LED’s brightness is adjusted (pin 9)

Another way to look at this, is as a 3-bit counter (bits 8..10) cycling through all the RGB combinations, and for each combination, a level gets incremented from 0..255 – so there are 11 bits in use, i.e. 2048 combinations, and with 200 steps per second, the entire color pattern repeats about once every 10 seconds.
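
Put differently, the whole test boils down to something like this – a reconstruction from the description above, with the pin numbers taken from the bullets and the rest being just one way to implement that bit trickery:

    static word counter;

    void setup () {}

    void loop () {
        ++counter;
        byte level = counter & 0xFF;                        // bits 0..7: brightness
        analogWrite(5, bitRead(counter, 8)  ? level : 0);   // bit 8: blue
        analogWrite(6, bitRead(counter, 9)  ? level : 0);   // bit 9: red
        analogWrite(9, bitRead(counter, 10) ? level : 0);   // bit 10: green
        delay(5);                                           // ≈ 200 steps per second
    }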

Anyway, the next step is to write a real LED Node driver. It should support the following tasks:

  • listen to incoming packets to change the lights
  • allow gradually changing the current RGB color setting to a new RGB mix
  • support adjustable durations for color changes, in steps of 1 second

So the idea is: at any point in time, the RGB LEDs are lit with a certain PWM-controlled intensity (0..255 for each, i.e. a 24-bit color setting). The 0,0,0 value is fully off, while 255,255,255 is fully on (which is a fairly ugly blueish tint). From there, the unit must figure out how to gradually change the RGB values towards another target RGB value, and deal with all the work and timing to get there.

I don’t really want to have to think about these RGB values all the time though, so the LED Node sketch must also support “presets”. After pondering a bit about it, I came up with the following model:

  • setting presets are called “ramps”, and there can be up to 100 of them
  • each ramp has a target RGB value it wants to reach, and the time it should take to get there
  • ramps can be chained, i.e. when a ramp has reached its final value, it can automatically start another ramp
  • ramps can be sent to the unit via wireless (of course!), and stored in any of the presets 1..99
  • preset 0 is special, it is always the “immediate all off” setting and can’t be changed
  • to start a ramp, just send a 1-byte packet with the 0..99 ramp number
  • to save a ramp as a preset, send a 6-byte packet (preset#, R, G, B, duration, and chain) – see the struct sketched right after this list
  • preset 0 is special: when “saving” to preset #0 it gets started immediately (instead of being saved)
  • presets 0..9 contain standard fixed ramps on power-up (presets 1..9 can be changed afterwards)
  • the maximum duration of a single ramp is 255 seconds, i.e. over 4 minutes
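
Packed into a C struct, that 6-byte packet might look as follows – the field names are mine, and the actual ledNode sketch may well organize this differently:

    struct Ramp {
        byte preset;     // 1..99 to store it, 0 = start this ramp immediately
        byte r, g, b;    // target RGB value, 0..255 per channel
        byte duration;   // seconds to reach the target, 1..255
        byte chain;      // ramp to start once the target is reached, 0 = stay put
    };

Storing up to 100 of these takes about 600 bytes, which fits in the ATmega328's RAM and even in its 1 KB of EEPROM, if the presets are to survive a power cycle.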

Quite an elaborate design after all, but this way I can figure out a nice set of color transitions and store them in each unit once and for all. After that, sending a “command” in the form of a 1-byte packet is all that’s needed to start a ramp (or a series of ramps) which will vary the lighting according to the stored presets.

Hm, this “ledNode” sketch has become a bit longer than expected – I’ll present that tomorrow.

Meet the LED Node

In AVR, Software on Nov 3, 2011 at 00:01

More than a year has passed, and I still haven’t finished the RGB LED project. The goal was to have lots of RGB LED strips around the house, high up near the ceiling to provide indirect lighting.

The reason is perhaps a bit unusual, but that has never stopped me: I want to simulate sunrise & sunset (both brightness and colors) – not at the actual time of the real sunset and sunrise however, but when waking up and when it's time to go to bed. Also in the bedroom, as a gentle signal before the alarm goes off.

Now that winter time is approaching and mornings are getting darker, this project really needs to be completed. The problem with the previous attempt was that it’s pretty difficult to achieve a really even control of brightness and colors with software interrupts. The main reason is that there are more interrupt sources (the clock and the RFM12B wireless module), which affect the timing in subtle, but visibly irregular, ways.

So I created a dedicated solution, called the LED Node:

Screen Shot 2011 10 26 at 11 48 54

It’s basically the combination of a JeeNode, one-and-a-half MOSFET Plugs, and a Room Board, with the difference that all MOSFETs are tied to I/O pins which support hardware PWM. The Room Board option was added, because if I’m going to put 12V power all over the house anyway for these LEDs, and if I want to monitor temperature, humidity, light, and motion in most of the rooms, then it makes good sense to combine it all in one.

Here is my first build (note that all the components are through-hole), connected to a small test strip:

DSC 2706

The pinouts are pre-arranged to connect to a standard common anode RGB strip, and the SHT11 temp / humidity sensor is positioned as far away from the LEDs as possible, since every source of heat will affect its readings. For the same reason, the LDR is placed at the end so it can be aimed away from the light-producing LED strip. I haven't thought much about the PIR mounting so far, but at least the 3-pin header is there.

The LED Node is 18 by 132 mm, so that it fits inside a U-shaped PVC profile I intend to use for these strips. Color fringing can be an issue, so the strips need to be oriented with a bit of care.

Apart from some I/O pin allocations required to access the hardware PWM, the LED Node is fully compatible with a JeeNode. It’s also fully compatible with Arduino boards of course, since it has the same ATmega328. There’s an FTDI header to attach to a USB BUB for uploading sketches and debugging.

The MOSFETS easily support 5 m of LED strips with 30 RGB LEDs per meter without getting warm. Probably much more – I haven’t tried it with heavier loads yet.

Here’s what I used as basic prototype of the whole thing, as presented last year:

DSC 2710

Tomorrow, I’ll describe a sketch for this unit which supports gradual color changes.

Running LED ticker #2

In Software on Nov 2, 2011 at 00:01

After the hardware to feed yesterday's scrolling LED display comes the software:

Screen Shot 2011 10 25 at 18 31 38

The code has also been added to GitHub as tickerLed.ino sketch.

Fairly basic stuff. I decided to simply pass incoming messages as is, but to do the checksum calculation and start/end marker wrapping locally in the sketch. The main trick is to use a large circular 1500-character FIFO buffer, so that we don’t bump into overrun issues. The other thing this sketch does is throttling, to give the unit time to collect a message and process it.
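
The FIFO itself is nothing fancy – a circular buffer along these lines (a simplified sketch, not the actual tickerLed.ino code):

    static char fifo [1500];
    static word fillPos, drainPos;

    static void fifoPush (char c) {
        word next = (fillPos + 1) % sizeof fifo;
        if (next != drainPos) {     // drop the byte when the buffer is full
            fifo[fillPos] = c;
            fillPos = next;
        }
    }

    static int fifoPull () {
        if (fillPos == drainPos)
            return -1;              // buffer is empty
        char c = fifo[drainPos];
        drainPos = (drainPos + 1) % sizeof fifo;
        return c;
    }

Incoming wireless bytes get pushed in as they arrive, and the main loop pulls them out again no faster than the display can keep up with – that's the throttling part.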

Here’s what the display shows after power-up, now that the JeeNode is feeding it directly:

DSC 2705

(that’s the last part of the “Hello from JeeLabs” startup message built into the sketch)

So now, sending a 50-byte packet to node 17 group 4 over wireless with this text:

    <L1><PA><FE><MA><WC><FE>Yet another beautiful day!

… will update line 1 on page A and make the text appear. The unit rotates all the messages, and can do all sorts of funky stuff with transitions, pages, and time schedules – that’s what all the brackets and codes are about.

Time to put the end caps on and close the unit!

Running LED ticker

In Hardware on Nov 1, 2011 at 00:01

For a while, these “ticker tape” displays were quite the rage. They probably still are in shop displays where LCD screens haven’t taken over:

590996 BB 00 FB EPS

This one is powered from 12 VDC, and has a serial RS232 interface with a funky command structure to put stuff into its line- and page-memory. Lots of options including lots of scroll variations, blinking, and beeping.

Well, a friend lent me one a while back and we thought it’d be fun to add a wireless interface via a JeeNode.

Time to take it apart, eh?

It didn’t take long to figure out the way to hook up to it, and there was some useful info in this discussion from a few years back.

I decided to use this P4B serial adapter from Modern Device to interface the JeeNode to the RS232 signals:

DSC 2702

It’s convenient because it plugs right into the FTDI connector, and it has the onboard trickery needed to generate “RS232’ish” voltages. The whole thing is mounted on foam board using double-sided tape and the JeeNode can easily be unplugged to upload a new sketch.

Here’s the whole “mod” on the back side of the display:

DSC 2701

There’s a wire to the +5V side of a large cap, used to drive the displays no doubt. The JeeNode’s internal regulator will convert that down to 3.3V, as usual.

The positioning of the JeeNode is tricky, because the entire enclosure (apart from the smoked glass at the front) is metal – not so good for getting an RF signal across. I decided to place the antenna at the far edge, since the end caps are made from plastic. Hopefully this will allow good enough reception to operate the unit while closed.

The serial pins are conveniently brought out on some internal pads, right next to the RJ11 jack used for RS232:

DSC 2703

I haven’t hooked up RX and TX yet, because I still need to find out which is which.

I’ve verified that I can communicate with the unit through a USB-BUB wired through to the P4B adapter, i.e. straight from my laptop.

Next step is to write a sketch for the JeeNode…

CC-RT: Pin assignments

In AVR, Hardware on Oct 29, 2011 at 00:01

Part 4 of the Crafted Circuits – Reflow Timer series.

Now that all the pieces of the circuit are known, more or less (I’ll assume that the MAX31855 can be used), it’s time to figure out whether everything will fit together. One issue I’d like to get out of the way early on, is pin assignments on the ATmega. There are 20 I/O pins: 14 digital, of which 6 PWM, and 6 digital-or-analog.

The best thing would be to make this as compatible with existing products as possible, because that simplifies the re-use of libraries. For this reason, I’ll hook up the RFM12B wireless module in the same way as on a JeeNode:

  • D.2 = INT0 = RFM12B INT
  • D.10 = SS = RFM12B CS
  • D.11 = MOSI = RFM12B SI
  • D.12 = MISO = RFM12B SO
  • D.13 = SCK = RFM12B SCK

5 I/O pins used up – let’s see how many the rest needs:

  • 2 LED’s = 2 pins
  • 2 buttons = 2 pins
  • buzzer = 1 pin
  • LCD + backlight = 7 pins
  • thermocouple = 3 pins
  • SSR output = 1 pin

Total 5 + 16 = 21 pins. Whoa, we’re running out of pins!

Unfortunately, we’re not there yet: the thermocouple chip consumes about 1 mA, so we need a way to power it down if we want a serious auto power-off option. That’s one extra pin.

Also, it would be very nice if this thing can be programmed like a regular Arduino or JeeNode, i.e. using D0 and D1 as serial I/O. That also would help a lot during debugging and in case we decide to use the serial port for configuration. Hm, another 2 pins.

And lastly, I’d like to be able to measure the current battery voltage. Drat, yet another (analog) pin.

All in all we seem to need 5 more pins than are available on an ATmega168/328 28-DIP chip!

The good news is that there are usually a few ways to play tricks and share pins for multiple purposes. One easy way out would be to just use an I/O expander (like the LCD-plug) and gain 5 I/O pins right away. But that’s cheating by throwing more hardware at the problem. Let’s look at some other options:

  • the SSR output can be combined with one of the LEDs, since a red LED will probably be used to indicate “heater on” anyway
  • the thermocouple chip is a (read-only) SPI chip, which means that its SCK and SO pins can be shared with those of the RFM12B
  • one way to free the button pins is to put the buttons on data lines used by the LCD – with extra resistors to let the LCD output work even while pressed
  • the buttons and LEDs could be combined, as on the Blink Plug (this is mildly confusing, since pressing a button always lights its LED as well), but this would prevent sharing the SSR output with the red LED
  • multiple buttons could be tied to a single analog input pin by adding some extra resistors, but this rules out the use of pin-change interrupts
  • yet another trick is to combine a high-impedance analog input (for measuring battery voltage) with a pin which is usually used as output, such as one of the LCD data pins

I’m inclined to adopt the first three tricks. That frees five pins – one can be used to power the thermocouple chip and two would be D0 and D1 to support standard serial I/O. We could have up to 5 push buttons this way.

So all in all, the 28-pin ATmega seems to be just right for the Reflow Timer. Depending on the complexity of the sketch, either an ATmega168 or an ATmega328 could be used. My current reflow sketch fits in either one.

With luck, the Reflow Timer can remain compatible with Arduino, RBBB, JeeNode, etc. and it will support sketch uploads in exactly the same way as with JeeNodes and RBBB’s, i.e. through an FTDI 6-pin header with a USB-to-FTDI interface such as the USB-BUB.

Let’s try and come up with a tentative pin allocation:

  • D.0 and D.1 = serial I/O via FTDI pins
  • D.2 and D.10 .. D.13 = RFM12B, as above
  • D.3 = LCD backlight (supports hardware PWM)
  • D.4 = buzzer
  • D.5 and D.6 = LED outputs (both support PWM)
  • D.8 and D.9 = thermocouple power and chip select
  • A.0 = battery voltage readout
  • A.1 .. A.5 and D7 = LCD (4 data + 2 control)
  • A.1 .. A.5 = shared with up to 5 push buttons

Several pins could be changed if this will simplify the board layout later – but hey, ya gotta start somewhere!

Note that I’m using D.X as shorthand for digital pins, and A.Y for analog pins, matching Arduino terminology (where A.Y can also be used as digital pin => D.(Y+14)).
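
Spelled out as plain Arduino pin numbers, that allocation might look something like this – just a sketch to make the sharing visible, the names are mine and several assignments may still move around:

    // D.0 / D.1 = serial via FTDI, D.2 = RFM12B interrupt, D.11 .. D.13 = shared SPI bus
    #define BACKLIGHT_PIN   3    // LCD backlight, hardware PWM
    #define BUZZER_PIN      4
    #define LED1_PIN        5    // PWM
    #define LED2_PIN        6    // PWM, doubles as the SSR / "heater on" output
    #define LCD_CTRL_PIN    7    // one of the two LCD control lines
    #define THERMO_PWR_PIN  8    // power for the thermocouple chip
    #define THERMO_CS_PIN   9    // thermocouple chip select
    #define RFM12B_CS_PIN  10    // RFM12B chip select
    #define BATTERY_PIN    A0    // battery voltage readout
    // A1 .. A5 = remaining LCD control + 4 data lines, shared with up to 5 push buttons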

The next step will be to work out more electrical details, i.e. figure out how to add some new features.

Picking an ATtiny

In AVR, Hardware on Oct 25, 2011 at 00:01

The reason I’m using an ATtiny for the AC current measurement setup, is their differential ADC + gain, and because it is fairly easy to develop for them using the same Arduino IDE as used by, eh, well, Arduino’s and JeeNodes. There are good installers for Windows, Mac OSX, and Linux.

There are several very interesting alternatives other than ATmega and ATtiny, such as the Microchip Technology PIC and the Texas Instruments MSP430 series. But while each of them is attractive for a number of reasons, they either have only a development environment for Windows, or they don't support standard gcc, or they just don't offer enough of an advantage over the Atmel AVR series to justify switching. So for something like the AC current node, which doesn't even need to run off a battery, I'd rather stick to the Arduino IDE and carry over much of what is already available for it. In terms of cost, the differences are minimal.

The trouble with the ATtiny85 I've been using for the AC current sensor is that it has at most 6 I/O pins, while the RFM12B needs 5 to operate (with polling instead of interrupts, this could be reduced to 4).

I’ve tried hard to find tricks to re-use pins, so that the differential ADC pins can be used during measurements while still supporting the RFM12B somehow. I even considered using a 1-pin OOK transmitter instead of the RFM12B. But in the end I gave up – the hassle of finding a solution, figuring out how to support this in software, and still have a decent way of debugging this (read: ISP) … it didn’t add up.

It’s much easier to pick a chip which is slightly less I/O-pin limited, such as the 14-pin ATtiny84:

Screen Shot 2011 10 17 at 14 10 11

One drawback w.r.t. the ATtiny85 is that it has no 2.56V bandgap reference, only 1.1V – but it does have differential ADC inputs with an optional 20x gain stage, which is what made AC current measurements possible.

It turns out that the ATtiny84 has enough I/O pins (a whopping 11!) to support an SPI interface to the RFM12B as well as 2 complete JeeNode-like ports. There is in fact enough I/O capability here to hook up a Room Board.

The 8 KB of flash memory is sufficient for the RF12 driver plus a bit of application-specific functionality. The 512 bytes of RAM are not huge, but should also be sufficient for many purposes (two full RF12 buffers will use up less than a third of what's available). And lastly, there are 512 bytes of EEPROM – more than enough.

To be honest, I’ve been fooling around with this chip for some time, since it could be used to create even smaller PCB’s than the JeeNode. But it has taken me quite a while to get the SPI working (both hardware- and software-based), which was essential to support the RFM12B wireless module. The good news is that it now does, so this is what I’m going to use for the rest of the AC current sensor experiments – wireless is on the way!

Stay tuned …

No, wait. One more factoid: this weblog was started exactly 3 years ago. Celebration time – cheers!

Charging a supercap

In Hardware on Oct 24, 2011 at 00:01

This is a quick experiment to see how this very low-power direct AC mains supply behaves:

JC s Doodles page 20

Note that I've built the 200 kΩ value from two resistors in series. This reduces the voltage over each one, and offers a bit of extra safety if one of them shorts out. The max 1 mA or so of current these resistors will let through is not considered lethal – but keep in mind that the other side is a direct connection, so if that happens to be the live wire then it's still extremely dangerous to touch it!

One idea would be to add a “fusible” 100 Ω @ 0.5 W resistor in series with the 200 kΩ. These are metal-film resistors which will disconnect if they overheat, without releasing gases or causing flames. I can’t insert it in the other wire due to the voltage issue, so I’m not really sure it actually would make things any safer.

Here’s my first test setup of this circuit, built into a full-plastic enclosure:

DSC 2687

It took 20 minutes to reach 1.8V, the absolute minimum for operating an ATtiny. This is not a practical operating voltage, because whenever the circuit draws 1 mA or more, that voltage will drop below the minimum again.

The RFM12B wireless module will need over 2.2V to operate, and draw another 25 mA in transmit mode. The only way to make this work will be to keep the transmit times limited to the absolute minimum.

Still, I’m hoping this crude power supply will be sufficient. The idea is to run on the internal 8 MHz RC oscillator with a startup divider of 8, i.e. @ 1 MHz. The brown-out detector will be set to 1.8V, and the main task right after startup will be to monitor the battery voltage until it is considered high enough to do more meaningful work.

With 3.5V power, an ATtiny draws ≈ 600 µA @ 1 MHz in active mode and 175 µA in idle mode, so in principle it can continue running at this rate indefinitely on this power supply. But for “fast” (heh) startup, it’ll be better to use sleep mode, or at least take the system clock down well below 1 MHz.

This might be a nightmare to debug, I don’t know. Then again, I don’t have to use the AC mains coupled supply to test this. A normal low-voltage DC source plus supercap would be fine with appropriately adjusted resistors.

After 35 minutes, the voltage has risen to 2.7V – sure, take your time, hey, don’t rush because of me!

Another 5 minutes pass – we’re at a whopping 3.0V now!

Time for a cup of coffee…

After 45 minutes the charge on the 0.47F supercap has reached 3.3V – yeay! I suspect that this will be enough to operate the unit as current sensor and send out one short packet. We’ll see – it’ll all depend on the code.

After 1 hour: 3.75V, which is about as high as it will go, given the 5.1V zener and the 2x 0.6V voltage drop over the 1N4148 diodes. Update: my test setup tops out at 3.93V – good, that means it won’t need a voltage regulator.

Apparently, supercaps can have a fairly high leakage current (over 100 µA), but this decreases substantially when the supercap is kept charged. In an earlier test, I was indeed able to measure over 2.7V on a supercap after 24 hours, once it had been left charged for a day or so. In this current design the supply will be on all the time, so hopefully the supercap will work optimally here.

Not that it matters for power consumption: a transformerless supply such as this draws a fixed amount of current, regardless of the load. Here’s the final test, hooked up to live mains without the isolation transformer:

DSC 2688

Of this energy, over 95% is dissipated and wasted by the resistors. The rest goes into either the load or the zener.

Funny Eneloop battery

In Hardware on Oct 23, 2011 at 00:01

I don’t know what happened…

DSC 2685

Weird wrinkled plastic wrapper. No leakage or bulge. The other side:

DSC 2686

Maybe this thing got short-circuited and became very hot? Maybe the wrapping has some heat-shrinking properties on purpose, to report this condition even if you look well after things have cooled down again?

I've been switching to Eneloop AA batteries everywhere for over a year now, due to the three nice chargers I've got. The advantage over NiCd and regular NiMH cells is that these really retain their charge for more than a year – perfect for wireless sensor nodes. When fully charged, each cell supplies 1.3V @ 1900 mAh. I'm also re-using these batteries over and over again in the wireless keyboards and mice we have (my mouse runs out once a month).

But that’s the end of the line for this one!

ELRO energy monitor decoding #2

In Hardware on Oct 22, 2011 at 00:01

Yesterday’s post showed how to try and figure out the data sent out by the ELRO wireless units. There’s a lot of guesswork in there, but the results did look promising.

The last guess was about how the data bytes are organized in the packet – which is usually the hardest part. Ok, so now if I treat these as low-to-high 8-bit bytes, then the two packets give me:

    12 1 231 4 16 0 0 213 81 17    hex: 0C 01 E7 04 10 00 00 D5 51 11
    12 1 232 4 16 0 0 214 81 17    hex: 0C 01 E8 04 10 00 00 D6 51 11

There’s not much more one can do with this, because all the packets contain the same information. Now it’s time to add a load to the monitor, so that it will report some more interesting values. I used a 75 W light bulb, so the instantaneous consumption reported should be around that value, and there will probably be a slowly-increasing cumulative power consumption reported as well.

Here are a bunch of packets, using the most recent decoding choices:

Screen Shot 2011 10 16 at 16 10 13

Great – the first four repetitions have again mostly zero’s, with a minor fluctuation in what is presumably the line voltage again. Looks like each reading gets sent out 15 (!) times or so. Or maybe a few more which aren’t recognized – the pulse width thresholds might still be off slightly.

And then values start coming in. Let’s see what this gives when decoded as bytes:

    200 8 1 231 0 32 0 0 216 81 33
    200 8 1 230 0 32 0 0 215 81 33
    200 8 1 230 0 32 0 0 215 81 33
    200 8 1 229 0 32 0 0 214 81 33
    200 8 1 230 32 32 243 2 236 82 33
    200 8 1 230 32 32 242 2 235 82 33
    200 8 1 229 32 32 240 2 232 82 33
    200 8 1 229 32 32 239 2 231 82 33
    200 8 1 229 32 32 239 2 231 82 33
    200 8 1 229 32 32 240 2 232 82 33
    200 8 1 230 32 32 240 2 233 82 33
    200 8 1 229 32 32 239 2 231 82 33
    200 8 1 229 32 32 238 2 230 82 33

Hm. That fourth byte could indeed be the line voltage, but the rest?

Let’s try the 25 W lamp:

    200 8 1 229 10 32 246 0 214 82 33
    200 8 1 229 10 32 246 0 214 82 33
    200 8 1 228 10 32 246 0 213 82 33

Aha – 10 is ≈ 1/3rd of 32. And 246 is ≈ 1/3rd of 2*256+238. Maybe these are amps and watts, not cumulative values after all. No wonder it’s transmitting so often – a lost transmission will cause an inaccurate reading.

Here’s a 700 W load (my reflow grill):

    200 8 1 230 44 33 255 26 29 83 33
    200 8 1 230 44 33 246 26 20 83 33
    200 8 1 230 43 33 221 26 250 82 33

Checking the load with another meter tells me it’s more like 680 W and 2.89 A @ 231 V.

Well, well: the bytes 221 and 26, when interpreted as a little-endian int, are 6877, i.e. the wattage in 0.1 W steps!

Let’s try amps the same way. With no load, there were values 16 and 32, so probably bits 4..7 of that second byte are used for something else. Let’s try 43 and 33-32, little-endian: could it be 2.99 A?

If all these guesses are correct, then the 75 W lamp readings are: 0.32 A and 75.0 W, and the 25 W lamp readings are: 0.10 A and 24.6 W – hey, these are indeed all pretty much spot on, yippie!

Here’s the other unit with no load plugged in:

    102 12 1 232 0 16 0 0 210 81 17
    102 12 1 232 0 16 0 0 210 81 17

The first two bytes differ. Perhaps a unit ID with its own header checksum?

It looks like the 6th byte is either 16 or 32, perhaps indicating an auto-scale amps range. Also, note how the next-to-last byte changes from 81 to 82 to 83 on these readings. I suspect that the packet checksum is actually 16 bits.

Great, I think I can implement a decoder for them now. It would be nice to get the checksum validation right, but even without this it will be useful.
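
For what it’s worth, here’s a rough sketch of what such a decoder could look like, based purely on the guesses above and on the 11-byte packets shown in this post – the trailing 16-bit checksum is left unverified, and the struct and function names are just made up for illustration:

    // Sketch of the byte interpretation worked out above - not a finished
    // decoder, and the 16-bit checksum at the end is not verified.
    struct ElroReading {
        uint8_t unit;       // first byte, differs per measurement unit
        uint8_t volts;      // fourth byte, presumably the line voltage
        uint16_t centiAmps; // current in 0.01 A steps
        uint16_t deciWatts; // power in 0.1 W steps
    };

    static void elroDecode (const uint8_t* b, ElroReading& r) {
        r.unit = b[0];
        r.volts = b[3];
        // byte 4 plus the low nibble of byte 5, little-endian:
        //   e.g. 43 + 1 * 256 = 299, i.e. 2.99 A
        r.centiAmps = b[4] | (uint16_t) (b[5] & 0x0F) << 8;
        // bytes 6 and 7, little-endian: e.g. 221 + 26 * 256 = 6877, i.e. 687.7 W
        r.deciWatts = b[6] | (uint16_t) b[7] << 8;
    }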

These two units are going to be used for some varying loads here at JeeLabs: probably the dish washer and the washing machine. Since they send out readings once every 10 seconds, that should give me sufficient info to correlate with the house meter downstairs.

The readout unit is of no use to me anymore, so I’ve taken it apart:

DSC 2683

Simple OOK receiver, and single-sided board. Not many surprises, really:

DSC 2684

There are low-cost temperature (an NTC ???) and humidity sensors in there, as well as a buzzer (you can set an alarm when a certain power level is exceeded).

The only interesting bit is that the power for the receiver is switched via a transistor, so presumably it synchronizes its reception timing to when the units transmit (the display supports up to 3 units).

One nasty habit of these ELRO units is that they send out a lot of packets: each one sends about 15 of them every 10 seconds, and it looks like this takes well over one second per transmission. I’m glad they use the 433 MHz band, otherwise they’d cause quite a few collisions with all the wireless RF12 stuff going on at JeeLabs.


ELRO energy monitor decoding

In Hardware on Oct 21, 2011 at 00:01

I recently found this ELRO energy monitor set:


The battery-powered receiver is a bit large and ugly (10×13 cm), but what I was after were the measurement units, which transmit wirelessly on the 433 MHz band, using OOK.

That was a good reason to dust off the ookScope project and bring it up to date with the latest Arduino IDE (the sketch) and JeeMon (the script).

Here is the result after over 1,000,000 pulses:


This is a histogram with counts on the horizontal axis and pulse widths on the vertical axis. Both are scaled in a somewhat peculiar logarithmic’ish way, but the main info is on the bottom status line: the packets contain 360 pulses (i.e. bit transitions) with maximum counts at pulse widths of 184, 360, and 460 µs.

I used very specific settings and thresholds to single out these packets:

Screen Shot 2011 10 16 at 14 28 48

So it only picks up packets with 360..362 bit transitions, and ignores all pulses under 40 µs (10 x 4 µs).

The two longer pulse widths might be the same “long” pulse, depending on whether that pulse comes after a short or a long pulse. Here are the first few pulse widths of a quick burst of packets (ignore the P and first int):

Screen Shot 2011 10 16 at 14 28 14

There’s clearly a pattern. If I apply the following translation:

  • pulse < 260 -> display as “-“
  • pulse 260..411 -> display as “.”
  • pulse > 411 -> display as “|”

… then this comes out (this is one long line, wrapped every 80 characters):

Screen Shot 2011 10 16 at 14 43 55

So it looks like there are short (< 260 µs) and long (> 411 µs) pulses, with always a pulse in the range 260..411 µs in between them. And if those dots contain no extra information anyway, then we might just as well omit them:

Screen Shot 2011 10 16 at 14 48 16

That leaves 181 bits of “data”, presumably. If I drop all packets which don’t end up with exactly 181 dashes and pipe symbols, then it turns out I get just a few patterns – here’s a group which changes halfway down, if you can spot the difference:

Screen Shot 2011 10 16 at 14 58 25

But there’s still too much regularity here, IMO. Note that there’s not a single run of three _’s or |’s in there (other than at the start of the line). In fact, all these are either _|’s or |_’s, back to back. So it looks like there are not 2 transitions per data bit, but 4. Let’s reduce the output further. I’ve replaced _| by “0” and |_ by “1” (assuming there are more 0’s than 1’s). I’ve also removed all duplicate lines, and inserted a count of them at the front:

Screen Shot 2011 10 16 at 15 16 03

Note the alternation of 1110 and 0001 in these lines. My hunch is that it’s a slowly varying measurement value, overflowing from 7 (binary 0111) to 8 (binary 1000). This would indicate that the bit order is low-to-high.

Note also that further down the packet, the bit pattern flips from 10 to 01, which is a difference of 1 in binary terms. That’s probably a checksum, and it’s not using exclusive or (since 4 bits have changed) but simple byte-summing. Furthermore, the checksum is 40 bits to the left of the changed value, so there are either 5 bytes from value to checksum, or 8 nibbles-plus-guard-bit units. Let’s try grouping them both ways:

Screen Shot 2011 10 16 at 15 32 18

There is no load right now. The 8-bit grouping is interesting, because then the value alternates between 231 (0b11100111) and 232 (0b11101000) … could this be the line voltage?
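
As a recap of the translation steps so far, this is roughly what the pulse-to-bit conversion amounts to in code – purely illustrative, using the thresholds found above and assuming the 8-bit low-to-high grouping that looks most plausible at this point:

    // Classify an OOK pulse width (in microseconds) into the symbols used above.
    static char classify (uint16_t width) {
        if (width < 260) return '-';    // short pulse
        if (width <= 411) return '.';   // separator, carries no information
        return '|';                     // long pulse
    }

    // With the dots removed, a "-|" pair is a 0 bit and a "|-" pair is a 1 bit.
    // Bits are packed low-to-high, i.e. the first pair ends up in bit 0.
    static uint8_t packByte (const char* pairs) {  // expects 16 symbols = 8 pairs
        uint8_t value = 0;
        for (uint8_t bit = 0; bit < 8; ++bit)
            if (pairs[2 * bit] == '|')  // a "|-" pair -> 1
                value |= 1 << bit;
        return value;
    }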

Tomorrow, I’ll continue this exploration – let’s see if the data can be extracted!

CC-RT: Choices and trade-offs

In Hardware on Oct 20, 2011 at 00:01

This is part 3 of the Crafted Circuits – Reflow Timer series.

There are many design choices in the Reflow Timer. The goal is to keep it as simple and cheap as possible, while still being usable and practical, and hopefully also convenient in day-to-day use.

Display and controls – there are several low-cost options: separate LEDs, 7-segment displays, a character LCD, or a graphics LCD. The LEDs would not allow displaying the current temperature, which seems like a very useful bit of info. To display a few numbers, a small character-based LCD is cheaper and more flexible than 7-segment displays (which need a lot of I/O lines). The only real choice, IMO, is between a character-based and a graphics LCD. I’ve decided to go for a 2×16 display because A) fancy graphics can be done on a PC using the built-in wireless connection, and B) a character LCD is cheaper and sufficient to display a few values, status items, and menu choices. And if I really want a GLCD option, I could also use wireless in combination with the JeePU sketch.

For the controls, there’s really only one button which matters: START / STOP. The power switch might be avoided if a good auto-power implementation can be created in software. For configuration, at least one more button will be needed – with short and long button presses, it should be possible (although perhaps tedious) to go through a simple setup process. A third button might make it simpler, but could also slightly complicate day-to-day operation. So two or three buttons it is.

Temperature sensor – this is the heart of the system. There are essentially two ways to go: using an NTC resistor or using a thermocouple. The NTC option is considerably cheaper and can be read out directly with an analog input pin, but it has as drawback that it’s less accurate. In the worst case, accuracy might be so low that a calibration step will be needed.

Thermocouples don’t suffer from the accuracy issue. A K-type thermocouple has a known voltage differential per degree Celsius. The drawback is that these sensors produce extremely low voltages, which require either a special-purpose chip or a very sensitive ADC. And since thermocouple voltages represent temperature differences, you also need some form of compensation for the “cold junction” side of the thermocouple. Thermocouple-based sensing is quite tricky.

But the main reason to use them anyway, is mechanical: although there are glass-bead NTC’s which can withstand 300°C and more, these sensors come with short wires of only a few centimeters. So you need to somehow extend those wires to run from the heater to the Reflow Timer. And that’s where it gets tricky: how do you attach wires to that sensor, in an environment which will heat up well beyond the melting point of solder? And what sort of wire insulation do you use? Well… as it turns out, all the solutions I found are either very clumsy or fairly expensive. There’s basically no easy way to get a glass-bead NTC hooked up to the reflow timer in a robust manner (those wires out of the glass bead are also very thin and brittle). So thermocouple it is.

Thermocouple chip – for thermocouples, we’ll need some sort of chip. There seem to be three types:

  • dedicated analog, i.e. the AD597
  • dedicated digital, i.e. the MAX6675 or MAX31855
  • do-it-yourself, i.e. a sensitive ADC plus cold-junction compensator

The AD597 is used on the Thermo Plug and in my current reflow controller setup. It works well, with a voltage of 10 mV/°C coming out as analog signal. So with 250°C, we get 2.50V – this is a perfect match for an ATmega running at 3.3V. The only small downside is that it needs an operating voltage which is at least 2V higher than the highest expected reading. If we need to go up to say 275°C (above what most ovens can do), then we’ll need a 4.75 V supply voltage for the AD597.

The MAX6675 doesn’t have this problem because the readout is digital, and works fine with supply voltages between 3.0 and 5.5V. But it’s a very pricey chip (over €14 incl VAT). Keeping these in stock will be expensive!

The MAX31855 is also a digital chip, and about half the price of the MAX6675. The main difference seems to be that it can only operate with a supply from 3.0 to 3.6V, which in our case is no problem at all (we need to run at 3.3V anyway for the RFM12B). I’ve no experience with it, but this looks like a great option for the Reflow Timer.
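
For what it’s worth, reading out the MAX31855 looks simple enough: pull its chip select low and shift in 32 bits. Here’s an untested sketch of how I read the datasheet – the pin assignments are arbitrary, and the bit layout should be double-checked before relying on it:

    // Untested bit-banged MAX31855 readout sketch - pins chosen arbitrarily.
    // As I read the datasheet: bits 31..18 are the thermocouple temperature as
    // a signed 14-bit value in 0.25 °C steps, and bit 16 flags a fault.
    // (TC_CS and TC_CLK need pinMode OUTPUT, TC_DO needs INPUT in setup.)
    const byte TC_CS = 8, TC_CLK = 7, TC_DO = 6;

    static uint32_t max31855Read () {
        uint32_t raw = 0;
        digitalWrite(TC_CS, LOW);               // first data bit appears on DO
        for (byte i = 0; i < 32; ++i) {
            raw = (raw << 1) | digitalRead(TC_DO);
            digitalWrite(TC_CLK, HIGH);         // clock out the next bit
            digitalWrite(TC_CLK, LOW);
        }
        digitalWrite(TC_CS, HIGH);
        return raw;
    }

    static int16_t quarterDegrees (uint32_t raw) {
        if (raw & 0x10000)                      // fault: open / shorted sensor
            return -32768;                      // report an impossible value
        return (int16_t) (raw >> 16) >> 2;      // signed, 0.25 °C per step
    }

A reading of 1000 would then correspond to 250 °C, for example.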

There is a slight issue with each of these chips, in that they do not exist in through-hole versions but only in a “surface mounted device” (SMD) style. The package is “8-SOIC”, i.e. a smaller-than-DIP 8-pin plastic chip:

8 SOIC sml

For people who don’t feel confident with soldering it might pose a challenge. There are no sockets for SMD, you really have to solder the chip itself. Then again, if you’re going to create a reflow setup for building SMD-based boards anyway, you might as well get used to soldering these size chips. Trust me, SOIC is actually quite easy.

(note: there is an all-DIP solution with the LT1025, but it needs an extra op-amp, so I’ve not checked further)


If we can use the MAX31855, then everything can be powered with 3.3V. This means that either 3x AA or 1x LiPo will work fine, in combination with a 3.3V regulator. I’ll stick with the MCP1702 regulator, even though it’s not the most common type, because of its low standby current – this will help reduce power in auto power-down mode.

But how much current do we need? To put it differently: how long will these batteries last? Let’s find out.

The prototype I have appears to use about 35 mA while in operation. Let’s take a safety margin and make that 50 mA in case we also need to drive an opto-coupler for the SSR option. And let’s say we use 2000 mAh AA cells, then we’ll get 40 hours of operation out of one set of batteries. Let’s assume that one reflow cycle takes 10 minutes, plus another 5 minutes for auto power-off, then we can use one set of batteries for 160 reflow cycles. Plenty!

We could even power the Reflow Timer with an AA Power Board, and still get about 50 cycles – but that would increase the cost and require some very small SMD components.

Let’s just go for the 3x AA setup, with either a DC or USB jack as possible alternative.

AC mains switching

For switching the heater, there are several options. The one I’m using now is a remote-controlled FS20 switch from Conrad (or ELV). It can be controlled directly by the RFM12B wireless module. An alternative would be the KAKU (a.k.a. Klik Aan Klik Uit or Home Easy) remote switch, which operates at 433 MHz and can also be controlled directly from the RFM12B. The advantage of this setup is that you never need to get involved with AC mains – just place the remote switch between mains socket and heater (grill, oven, etc) and you’re done.

Another option is to use a Solid State Relay (SSR), which needs 5..10 mA of current through its built-in opto-coupler. I built this unit a while back to let me experiment with that. The benefit of such a configuration is that all the high-voltage AC mains stuff is tucked away and out of reach, and that the control signal is opto-isolated and can be attached to the Reflow Timer without any risk. Note that with SSR, the RFM12B module becomes optional.

Yet another option would be to use a mechanical relay, but I’d advise against that. Some heaters draw quite a bit of current (up to 10A) and will require a hefty relay, which in turn will require a hefty driver. Also, very few power relays can operate at 5V, let alone 3.3V – which means that a 3x AA powered approach would not work.

So, RF-controlled switch it is, with an extra header or connector to drive the LED in an SSR as option.

That’s about it for the main Reflow Timer circuit design choices, methinks.

Voltage levels

In AVR, Hardware on Oct 17, 2011 at 00:01

In yesterday’s post, I described the idea of powering the AC current detector via a transformer-less power supply, using a very large capacitor or a supercap.

That means the whole circuit ends up being connected to 220V AC mains. You might think that nothing changed, since the circuit was already connected to mains via the 0.1 Ω shunt, but there’s more to it – as always!

If the power supply is tied to AC mains, then that means the circuit’s GND and VCC are also tied to these wires. The problem is that these two things interfere with each other:

JC s Doodles page 19

The signal we get from the voltage drop across the shunt is now referenced to the same voltage level as the GND of the circuit. In other words, the signal we’re trying to measure now swings around zero! And while the ATtiny has a differential input, which in principle only cares about the voltage differential between two pins, it’s not designed to deal with negative voltages.

Uh, oh – we’re in trouble!

I could use a capacitor to “AC-couple” the 50 Hz frequency into a voltage divider, but that effectively creates a high-pass filter which attenuates the 50 Hz and lets more of the noise through. Not a very nice outlook, and it’s also going to require a few additional passive components. I’m still aiming for a truly minimal component count.

But we’re in luck this time. The differential ADC appears to be perfectly happy with tying one side to ground. It might not be able to measure the negative swings, but it does the positive ones just fine. When I tried it on my existing setup, I still got more or less the same readings.

Still, we do have to be careful. A negative voltage on any input pin is going to seek its way through the ESD protection diodes present on each ATtiny I/O pin. Keep in mind that we’re dealing with a very low-impedance shunt, and large currents. So it’s important to limit the effect of negative swings to avoid damage to the chip. The easiest way to do so is to include a 1 kΩ resistor in series, i.e. between signal and ADC input pin. That way, even a 1 V negative voltage excursion will drive less than 1 mA of current through the ESD diode, a value which is still well within specs. Even better, that 1 kΩ resistor can be combined with a 0.1 µF cap to ground, as a low-pass filter for the ADC.
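
Just to double-check those choices, the numbers work out roughly as follows:

    worst-case 1 V negative swing:  1 V / 1 kΩ ≈ 1 mA through the ESD diode
    low-pass corner frequency:      1 / (2 π · 1 kΩ · 0.1 µF) ≈ 1.6 kHz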

Good, so if that weak-supply-feeding-a-big-cap idea works, then the rest of the circuit ought to continue working as intended, even though we’re operating at the limit of the ATtiny’s ADC voltage range.

All that’s left to do then, is get that power supply right. Oh, wait: and figure out a way to get a wireless setup going. Oh, and also figure out a good enclosure to keep this dangerous hookup safely tucked away and isolated.

Oh well. Not there yet, but progress nonetheless!

Finding a power source

In Hardware on Oct 16, 2011 at 00:01

Assuming I can figure out a way to transmit wireless information from the ATtiny, I’d like to make that recent AC current change detector a self-contained and self-powered unit. At minimal cost, i.e. with as few parts as possible.

That’s a bit of a problem. Adding a transformer-based power supply, however feeble, or a ready-made AC/DC converter would probably triple the cost of the setup so far. Not good.

I really only need a teeny bit of power. The techniques to get a JeeNode into low-power sensing have been well-researched and documented by now. It shouldn’t be too hard to make an ATtiny equally low-power.

First of all, this “power sensing node” really doesn’t have to be on all the time. Measuring power once every few seconds would be fine, and reporting over wireless only when there is a significant change in detected current. So for the sake of argument, let’s say we measure once a second, track the average of three to weed out intermittent spikes, and report only when that average changes 20% or more since the last value. For continuity, let’s also report once every 3 minutes, just to let the system know the node is alive. So that’s one packet with a 2-byte payload every 3 minutes most of the time, and one current measurement every second (with the same ADC sampling and filtering as before).

What this comes down to is that we need perhaps 3.3V @ 10 µA all the time, with a 30 mA peak current draw every couple of minutes.

A battery would do fine. Perhaps 2x AA, or a CR123 or 1/2 AA cell. But it feels silly… this thing is tied to a power line!

Why not use a transformer-less power supply, as described in this well-known application note from MicroChip?

Well, there’s a problem. These types of supplies draw a constant amount of current, regardless of the load. Whatever the circuit doesn’t use is consumed by the zener diode. So to be able to drive a 30 mA peak, we’d need a power supply which constantly draws 30 mA, i.e. 6.6 watts of power. Whoa, no thanks!

Here’s a basic resistive transformer-less supply (capacitive would also be an option):

JC s Doodles page 19 copy

There is a way to reduce the current consumption, since we only need that 30 mA surge very briefly, and not very often: use a big fat capacitor on the end, which stores enough energy to provide the surge without the voltage collapsing too far. This might be a good candidate for a trickle-charged small NiMh cell or even a supercap!

Hm, let’s see. If the supply is dimensioned to only supply a very small amount of current, say 1 mA, then it would be more than enough to charge that capacitor and supply the current for the ATtiny while in power-down mode. A 0.47 F supercap (which I happen to have lying around) ought to be plenty. This power supply would draw 0.22 W – continuously. Still not stellar, but not worse than several other power bricks around here.

Alas, such a design comes with a major drawback: with such a small current feeding such a large cap, it will take ages for the initial voltage to build up. I did a quick test, and ended up waiting half an hour for the output to be useful for powering up an ATtiny + RFM12B. That’s a lot of waiting when you plug in such a system for the first time, eager to see whether it works. It also means that the firmware in the ATtiny has to be very careful at all times with the limited energy available to it.
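
That half hour matches a quick back-of-the-envelope estimate, by the way – assuming the supply delivers roughly 1 mA into the 0.47 F supercap:

    t = C · V / I = 0.47 F × 3.3 V / 1 mA ≈ 1550 s, i.e. roughly 25 minutes

(and the effective charge current tapers off as the cap voltage rises, so in practice it takes a bit longer).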

Still, I’m tempted to try this. What’s half an hour in the grand scheme of things anyway?

CC-RT: Initial requirements

In AVR, Hardware, Software on Oct 14, 2011 at 00:01

Let’s get going with the CC-RT series and try to define the Reflow Timer in a bit more detail. In fact, let me collect a wish list of things I’d like to see in there:

The Reflow Timer should…

  • support a wide range of ovens, grills, toasters, and skillets
  • be self-contained and safe to build and operate
  • include some buttons and some sort of indicator or display
  • be created with through-hole parts as much as possible
  • (re-) use the same technologies as other JeeLabs products
  • be built on a custom-designed printed circuit board
  • use a convenient and robust mechanical construction
  • be very low-cost and simple to build

To start with that last point: the aim is to stay under € 100 as end-user price, including a simple toaster and whatever else is needed to control it. That’s a fairly limiting goal, BTW.

I’m sticking to “the same technologies” to make my life easy, both in terms of design and to simplify inventory issues later, once the Reflow Timer is in the shop. That translates to: an Arduino-like design with an ATmega328, and (for reasons to be explained next) an RFM12B wireless module.

Safety is a major concern, since controlling a heater tied to 220 V definitely has its risks. My solution to controlling an oven of up to 2000 W is the same as what I’ve been doing so far: use a commercially available and tested power switch, controlled via an RF signal. KAKU or FS20 come to mind, since there is already code to send out the proper signals through an RFM12B module. Range will not be an issue, since presumably everything will be within a meter or so from each other.

With wireless control, we avoid all contact with the mains power line. I’ll take it one step further and make the unit battery-operated as well. There are two reasons for this: if we’re going to use a thermocouple, then leakage currents and transients can play nasty games with sensors. These issues are gone if there is no galvanic connection to anything else. The second reason is that having the AC mains cable of a power supply running near a very hot object is not a great idea. Besides, I don’t like clutter.

Having said this, I do not want to rule out a couple of alternatives, just in case someone prefers those: controlling the heater via a relay (mechanical or solid-state), and powering the unit from a DC wall wart. So these should be included as options if it’s not too much trouble.

To guard against heat & fire problems, a standard heater will be used with a built-in thermostat. The idea being that you set the built-in thermostat to its maximum value, and then switch the entire unit on and off via the remote switch. Even in the worst scenario where the switch fails to turn off, the thermostat will prevent the heater from exceeding its tested and guaranteed power & heat levels. One consequence of this is that the entire reflow process needs to unfold quickly enough, so that the thermostat doesn’t kick in during normal use. But this is an issue anyway, since reflow profiles need to be quick to avoid damaging sensitive components on the target board.

On the software side, we’ll need some sort of configuration setup, to adjust temperature profiles to leaded / unleaded solder for example, but also to calibrate the unit for a specific heater, since there are big differences.

I don’t think a few LEDs will be enough to handle all these cases, so some sort of display will be required. Since we’ve got the RFM12B on board anyway, one option would be to use a remote setup, but that violates the self-contained requirement (besides, it’d be a lot less convenient). So what remains is a small LCD unit, either character-based or graphics-based. A graphic LCD would be nice because it could display a temperature graph – but I’m not sure it’ll fit in the budget, and to be honest, I think the novelty of it will wear off quickly.

On the input side, 2 or 3 push buttons are probably enough to adjust everything. In day-to-day operation, all you really need is start/stop.

So this is the basic idea for the Reflow Timer so far:

JC s Doodles page 18

Ok, what else. Ah, yes, an enclosure – the eternal Achilles’ heel of every electronics project. I don’t want anything fancy, just something that is robust, making it easy to pick up and operate the unit. I’ve also got a somewhat unusual requirement, which applies to everything in the JeeLabs shop: it has to fit inside a padded envelope.

Enclosures are not something you get to slap on at the end of a project. Well, you could, but then you lose the opportunity of fitting its PCB nicely and getting all the mounting holes in the best position. So let’s try and get that resolved as quickly as possible, right?

Unfortunately, it’s not that easy. We can’t decide on mechanical factors before figuring out exactly what has to be in the box. Every decision is inter-dependent with everything else.

Welcome to the world of agonizing trade-offs, eh, I mean… product design!

AC measurement status

In AVR, Software on Oct 12, 2011 at 00:01

Before messing further with this AC current measurement stuff, let me summarize what my current setup is:

JC s Doodles page 17

Oh, and a debug LED and 3x AA battery pack, which provides 3.3 .. 3.9 V with rechargeable EneLoop batteries.

I don’t expect this to be the definitive circuit, but at least it’s now documented. The code I used on the ATtiny85 is now included as tiny50hz example sketch in the Ports library, eh, I mean JeeLib. Here are the main pieces:

Screen Shot 2011 10 07 at 00 32 58

Nothing fancy, though it took a fair bit of datasheet reading to get all the ADC details set up. This sketch compiles to 3158 bytes of code – lots of room left.

This project isn’t anywhere near finished:

  • I need to add a simple RC low-pass filter for the analog signal
  • readout on an LCD is nice, but a wireless link would be much more useful
  • haven’t thought about how to power this unit (nor added any power-saving code)
  • the ever-recurring question: what (safe!) enclosure to use for such a setup
  • and most important of all: do I really want a direct connection to AC mains?

To follow up on that last note: I think the exact same setup could be used with a current transformer w/ burden resistor. I ought to try that, to compare signal levels and to see how well it handles low-power sensing. The ATtiny’s differential inputs, the 20x programmable gain, and the different AREF options clearly add a lot of flexibility.


Capturing some test data

In Software on Oct 2, 2011 at 00:01

(Whoops, looks like I messed up the correct scheduling of this post!)

Coming soon: a bit of filtering to get better AC current readouts.

There are many ways to do this, but I wanted to capture some test measurements from the AC shunt first, to follow up on the 220V scope test the other day. That way I don’t have to constantly get involved with AC mains, and I’ll have a repeatable dataset to test different algorithms on. Trouble is, I want to sample faster and more data than I can get out over wireless. And a direct connection to AC mains is out of the question, as before.

Time to put some JeePlugs from my large pile to use:


That’s a 128 Kbyte Memory Plug and a Blink Plug. The idea is to start sampling at high speed and store it in the EEPROM of the Memory Plug, then power off the whole thing, connect it to a BUB and press a button to dump the saved data over the serial USB link.

Here’s the sketch I have in mind:

Screen Shot 2011-10-02 at 01.31.52.png

Note that saving to I2C EEPROM takes time as well, so there will be gaps in the measurement cycle with this setup. Which is why I’m sampling in bursts of 512. If that doesn’t give me good enough readings, I’ll switch to an interrupt driven mechanism to do the sampling.

Hm… there’s a fatal flaw in there. I’ll fix that and report the results tomorrow.

Direct connection summary

In Hardware on Sep 20, 2011 at 00:01

The different circuits described in the past few days all had problems (the first ACS Hall-effect hookup worked reasonably well, but was not sensitive enough for my purposes).

Here’s the circuit as it is now:

DSC 2631

Bottom side:

DSC 2632

(a slightly different layout would have avoided the overhang, had I known the complete setup in advance)

There are 3 independent circuits on there connected in series (so the same current passes through each of ’em):

JC s Doodles page 14

As you can see in the first image, there’s a little screw-less terminal block on the board. It lets me short out any combination of these setups – this was useful until everything had been built up, and also lets me rule out interference while focusing on a specific setup. Note also that this is an epoxy-based board, not the pressed-cardboard type which breaks easily and isolates less.

Let me just repeat that messing with AC mains voltages like this can be extremely dangerous. This circuit has all the wires completely exposed, and that’s intentional: I prefer to be reminded of the risks, instead of getting a false sense of security because it all looks ok! One way to deal with these risks is to completely stand back from the entire hookup and only transfer measurement data over wireless whenever applying power to this stuff.

Tomorrow I’ll describe a simpler setup to get to the bottom of the measurement anomalies I’ve been seeing so far. With many thanks for all your comments – there’s clearly a lot more to it than with simple logic level signals!

RFM12B Command Calculator

In Software on Sep 18, 2011 at 00:01

I wanted to learn a bit more about how to implement non-trivial forms in HTML + JavaScript. Without writing too much HTML, preferably…

There are a few versions of a tool floating around the web, which help calculate the constants needed to configure the RFM12B wireless module – such as this recent one by Steve (tankslappa).

Putting one and one together – it seemed like an excellent exercise to try and turn this into a web page:

Screen Shot 2011-09-17 at 15.47.21.png

If only I had known what I had gotten myself into… it took more than a full day to get all the pieces working!

In fairness, I was also using this as a way to explore idioms for writing this sort of stuff. There is a lot of repetitiveness in there, mixed with a frustrating level of variation in the weirdest places.

Due to some powerful JavaScript libraries, the result is extremely dynamic. There is no “refresh”. There is not even a “submit”. This is the sort of behavior I’m after for all of JeeMon, BTW.

The calculator has been placed on a new website. The page is dynamically generated, but you can simply save a copy of it on your own computer because all the work happens in the browser. The code has been (lightly) tested with WebKit-based browsers (i.e. Safari and Chrome) and with FireFox. I’d be interested to hear how it does on others.

If you want to look inside, just view the HTML code, of course.

As I said, it’s all generated on-the-fly. From a page which is a pretty crazy mix of HTML, CSS, JavaScript, and Tcl. With a grid system and a view model thrown in.

You can view the source code here. I’ve explored a couple of ways of doing this, but for now the single-source-file-with-everything-in-it seems to work best. This must be the craziest software setup I’ve ever used, but so far it’s been pretty effective for me.

Why use a single programming language? Let’s mash ’em all together!

SMD current transformer

In Hardware on Sep 16, 2011 at 00:01

Here’s another attempt to detect whether an appliance is on and consuming power.

This time I’m going to use a tiny SMD transformer (DigiKey part# 811-1193-1-ND). It has a 1:100 turn ratio, with 0.007 Ω resistance in the primary “coil” (I think it’s just a single loop), and rated up to 10 A.

The problem with this part, which rules it out for serious use, is that it’s only meant for frequencies in the range 50 kHz to 500 kHz. Whoops, that’ll be off by three orders of magnitude when used with 50 Hz AC mains…

Oh well, since I had one lying around, least I can do is try it – right? Here’s the setup:


The hookup is a bit odd, but will become clear in future posts. The SMD transformer is only a few millimeters square, between the green jumper block and the black mini-breadboard.

I’m using the current transformer (CT) as follows:

JC's Doodles, page 12.png

… except that this schematic is for another experiment, still pending. In this case, there is no 0.1 Ω resistor on the primary side (it wouldn’t make a difference, next to the primary’s 7 mΩ), but there’s a 100 Ω “burden resistor” on the secondary side. This matches the data sheet for getting a 1 V/A sensitivity. For info on CT’s and burden resistors, see this OpenEnergyMonitor page.
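
The arithmetic behind that sensitivity figure is straightforward:

    secondary current = primary current / 100        (1:100 turn ratio)
    output voltage    = (I_primary / 100) × 100 Ω = 1 V per A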

Here are some first results, using the same sketch as in yesterday’s post:

    OK 17 164 19 0 0
    OK 17 17 16 0 0
    OK 17 66 16 0 0
    OK 17 42 19 0 0
    OK 17 241 163 0 0   <- 100 W light bulb
    OK 17 107 95 0 0
    OK 17 49 93 0 0
    OK 17 77 94 0 0
    OK 17 134 94 0 0
    OK 17 244 93 0 0
    OK 17 7 94 0 0
    OK 17 151 93 0 0
    OK 17 27 93 0 0
    OK 17 124 90 0 0
    OK 17 119 24 0 0    <- off
    OK 17 125 24 0 0
    OK 17 36 16 0 0
    OK 17 246 15 0 0
    OK 17 109 16 0 0
    OK 17 33 18 0 0
    OK 17 50 16 0 0

These are raw bytes, i.e. a 32-bit long int, sent once a second and received via wireless. I’m powering the JeeNode with an AA Power Board and standing back during testing, the 220 VAC is now a bit too close for (my) comfort…

The average measurement value with no current is now about 16 x 256 ≈ 4100, which corresponds to roughly 4 mV.

The high value when the light has been turned on is about 94 x 256 ≈ 24,000, which corresponds to roughly 24 mV – a 20 mV increase.

Not bad… but it’s less sensitive than the ACS714 Hall-effect sensor. I didn’t even try it with a 1 W load. This is a far cry from the claimed 1 V/A sensitivity, no doubt because the 50 Hz signal is so far off from the design specs of this SMD current transformer. Saturation effects, or something…

I’ll continue these experiments in the next few days with another current transformer which is designed to work at 50 Hz, as well as with a reversed 220 : 6.3 V PCB transformer. Stay tuned!

Sending RF12 packets over UDP

In Software on Sep 13, 2011 at 00:01

As I mentioned in a recent post, the collectd approach fits right in with how wireless RF12 broadcast packets work.

Sounds like a good use for an EtherCard + JeeNode pair (or any other ENC28J60 + RFM12B combo out there):


The idea is to pass all incoming RF12 packets to Ethernet using UDP broadcasts. By using the collectd protocol, many different tools could be used to further process this information.

What I did was take the etherNode sketch in the new EtherCard library, and add a call to forwardToUDP() whenever a valid RF12 packet comes in. The main trick is to convert this into a UDP packet format which matches collectd’s binary protocol:

Screen Shot 2011-09-12 at 17.50.37.png

The sendUdp() code has been extended to recognize “multicast” destinations, which is the whole point of this: with multicast (a controlled form of broadcasting), you can send out packets without knowing the destination.

The remaining code can be found in the new JeeUdp.ino sketch (note that I’m now using Arduino IDE 1.0beta2 and GitHub for all this).

It took some work to get the protocol right. So before messing up my current collectd setup on the LAN here at JeeLabs, I used port 25827 instead of 25826 for testing. With JeeMon, it’s easy to create a small UDP listener:

And sure enough, eventually it all started to work, with RF12 packets getting re-routed to this listener:

Screen Shot 2011-09-12 at 20.04.08.png

Now the big test. Let’s switch the sketch to port 25826 and see what WireShark thinks of these packets:

Screen Shot 2011-09-12 at 17.15.16.png

Yeay – it’s workin’ !

The tricky part is the actual data. Since these are raw RF12 packets, with a varying number of bytes, there’s no interpretation of the data at this level. What I ended up doing, is sending the data bytes in as many collectd 64-bit unsigned int “counter values” as needed. In the above example, two such values were needed to represent the data bytes. It will be up to the receiver to get those values and convert them to meaningful readings. This decoding will depend on what the nodes are sending, and can be different for each sending JeeNode.

I’ve left the original web interface in as well. Here is the “JeeUdp” box, as seen through a web browser:

Screen Shot 2011-09-12 at 18.02.28.png

(please ignore the old name in the title bar, the name is now “JeeUdp”)

It’s not as automatic out of the box as I’d really like. For one, you have to figure out which IP address this unit gets from DHCP – one way to do so is to connect it to USB and open the serial console. The other bit is that you need to configure the unit to set its name, the UDP port, and the RF12 settings to use. There’s a “Configure” link on the web page to do this – at some point, I’d like to make JeeMon aware of this, so it can do the setup itself (via http). And the last missing piece of the puzzle is to hook this into the different drivers and decoders to interpret the data from these UDP packets in the same way as with a JeeLink on USB.

Ultimately, I’d like to make this work without any remote setup:

  • attach Ethernet and power to this box (any number of them)
  • each box starts reporting its status via UDP (including its IP address)
  • a central JeeMon automatically recognizes these units
  • you can now give a name to each box and fill in its RF12 configuration
  • packets start coming in, so now you can specify the type of each node
  • decoders kick in to pick up the raw data and generate meaningful “readings”
  • that’s it – easy deployment of JeeNode-based networks is on the horizon!

Not there yet, but all the essential parts are working!

Sensing power consumption

In Hardware on Sep 11, 2011 at 00:01

A small exploration of a circuit which went nowhere…

I’ve been looking for a low-cost way to detect whether an electrical appliance is drawing current. There would be several uses for that, from power-saving master/slave power strips, to helping figure out what’s going on in the house by measuring the central meter, as I’m doing already.

Here’s my original (naïve) line of thought:

  • optocouplers are a great way to stay out of AC mains electrical trouble
  • many units will still work with currents down to well under 1 mA
  • all I want is detection – is a device ON or OFF?
  • the rest can be done with JeeNodes, wirelessly, as usual

Here’s the (pretty silly) first step, in series between power outlet and appliance:

JC's Doodles, page 11.png

A resistor to limit the current, and the opto-coupler to “light up” and produce a signal in a galvanically isolated manner. But don’t get your hopes up, this setup definitely won’t work!

  • First of all, the appliance will stop working, if the resistor is in the 100 kΩ range.
  • With resistance values much lower, the resistor will become a heater and self-destruct.
  • Besides, even a 100 W load will draw 0.5 A or so… way too much for the LED.
  • There’s also a small problem with the voltage being AC, not DC, but that’s easily solved with a diode.

It’s a pretty useful mental exercise to work through this and understand how voltage, current, and power interact. If you’re totally lost, you could check out this series of posts and this link.

Sooo… could this approach be made to work?

In this case, an indirect demonstration that it can’t possibly work is actually easier to argue: suppose there were a circuit with the opto-coupler’s LED in series between outlet and load. It only needs to cause a voltage drop of about 1.5V to get the IR led inside to light up. Now suppose the load draws 1000 W, i.e. nearly 5 A. Big problem: (due to P = E x I) the power consumed by the opto-coupler would be 7.5 W … an excessive amount of heat for such a tiny device (apart from the sheer waste!).

Even some form of magical “power reduction” would be tricky, since I do need to detect the power-OFF case, i.e. power levels down to perhaps 0.5 W of standby power consumption for modern appliances (some even less). For 220V, 0.5 W corresponds to an average current of only 2.2 mA.

Maybe there’s a clever way to do this, but I haven’t found it. Something with a MOSFET, mostly conducting, but opening up just a little perhaps 1000x per second, to very briefly allow the LED in an opto-coupler to detect the current (with an inductor to throttle it). Heck, an ATtiny could be programmed to drive that MOSFET, but now it’s all becoming an active circuit, which needs power, several components, etc.

The trouble with AC mains is those extremes – on the one hand you need decent sensitivity, on the other you need to deal with peak loads of perhaps 2000 W, with their hefty currents, and where heat can become a major issue. We’re talking about power levels ranging over more than three orders of magnitude!

Of course there is the current transformer. But those are a bit too expensive to use all over the place.

Another option would be to use a regular transformer in reverse, as in this article. With a little 220 => 6.3V transformer, the ratio would be 35:1. With 5A max, a 0.1 Ω resistor would dissipate 2.5 W of heat and create a 0.5V difference, amplified up to 17.5 V. For a 2.2 mA “off” current (0.5 W standby), that voltage would drop to 7.7 mV, which is probably too low to measure reliably with the ATmega ADC. Still… might be worth a try.

An interesting low-cost sensor is based on the Hall-effect to detect magnetic fields. Because wherever there is alternating current, there is a magnetic field.

With hall-effect sensors, the problem is again at the low end. Keep in mind that the goal is reliable detection of current, not measurement. Now, you could increase the sensitivity by increasing the intensity of the magnetic field – i.e. by creating a coil around the sensor, for example. Trouble is: since those wires need to also handle large power consumption levels, they are by necessity fairly thick. Thick wires make lousy coils, and are impossible to get close to the sensor.

There are other ways. Such as measuring light levels, in case of a lamp. Or heat in case of a TV (would be a bit sluggish). Or vibration in case of a fridge.

Maybe a hand-made 100-turn current transformer would work? That’s a matter of winding some thin enameled wire around a single mains-carrying insulated copper wire.

Hm… might give that a try.

The challenge is to find something reasonably universal, which does not require a direct galvanic connection to AC mains. It seems so silly: an ATmega can detect nanowatt levels of energy, yet somehow there is no practical path from AC power ON/OFF detection to that ATmega?

PS. A related (and more difficult) problem is “stealing” some ambient magnetic energy to power a JeeNode. You don’t need much while it’s asleep, but transmitting even just one tiny packet is another story.

JeeNode with a 32 KHz crystal

In AVR, Hardware on Jun 28, 2011 at 00:01

Another experiment: running a JeeNode on its internal 8 MHz RC clock while using the crystal input to run timer 2 as an RTC. This would allow going into a very low-power mode while still maintaining a much more accurate sense of time.

To this end, I modified a JeeNode SMD, by replacing the 16 MHz resonator with a 32 KHz crystal:

Dsc 2599

I didn’t even bother adding capacitors – these are probably not needed with this crystal (same as on the RTC Plug), since there should be enough parasitic capacitance already.

The tricky part is the code, since the ATmega is now running in a not-so-Arduino-like mode. Could almost have used OptiBoot with this setup, but the only internal RC clock build for it is the LilyPad, which has an ATmega168.

I ended up using the ISP programmer. IOW, I now compile for “LilyPad w/ 328” and then bypass the boot loader and serial upload. Less convenient, but it works.

Here’s a quick test sketch which writes a dot on the serial line exactly once a second:

Screen Shot 2011 06 24 at 15.51.49
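
The essence of that setup is something like this – reconstructed from the timer 2 description in the datasheet rather than copied from the sketch above, so treat it as a sketch (the baud rate is arbitrary):

    #include <avr/interrupt.h>

    volatile byte tick;

    ISR(TIMER2_OVF_vect) { tick = 1; }      // fires exactly once a second

    void setup () {
        Serial.begin(57600);
        ASSR |= bit(AS2);                   // clock timer 2 from the 32 KHz crystal
        TCCR2A = 0;                         // normal mode
        TCCR2B = bit(CS22) | bit(CS20);     // prescaler 128: 32768 / 128 / 256 = 1 Hz
        while (ASSR & (bit(TCN2UB) | bit(TCR2AUB) | bit(TCR2BUB)))
            ;                               // wait for the async registers to settle
        TIMSK2 = bit(TOIE2);                // enable the overflow interrupt
    }

    void loop () {
        if (tick) {
            tick = 0;
            Serial.print('.');
        }
    }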

So this setup is working. It draws 4.10 mA, consistent with the recent current measurements: slightly less than with the 16 MHz resonator pre-scaled by two.

In idle mode, current use drops to 1.71 mA, not bad!

Now let’s power down with just the timer running as real time clock. There’s a special “power save” mode which does just that. The difference is that timer 0 will now be off, so there won’t be interrupts waking up the ATmega every 1024 µs (as side-effect, the millis() function will start to lose track of time):

Screen Shot 2011 06 24 at 16.15.04

A small adjustment is needed to make sure the serial port is finished before we go into low-power mode, hence the call to delay(3).

Hm, power consumption is still 0.67 mA – quite a bit, given that we’re really powering down most of the time.

Ah, wait. I forgot to turn off the radio. Doing that brings the reading down to 0.15 mA – and I forgot to turn off the ADC and other sub-systems. Now we’re down to … 0.05 mA, while still printing dots:

Screen Shot 2011 06 24 at 16.39.15

Note that these 1 second interrupts are very accurately timed, more so even than with the standard 16 MHz resonator. This could be used to perform time-domain tricks on the wireless side, i.e. waking up just in time whenever a “scheduled” packet is expected to come in – as described in yesterday’s post.

There’s probably more left to try. The delay is running on full power, waiting for the serial output to clear the USART. It could be done while in idle mode, for example. Anyway… that entire delay becomes superfluous when we stop sending out debugging output over the serial port.

So there you have it – a JeeNode running at ≈ 8 MHz, with a precise 32,768 Hz pulse feeding timer 2, in a way which supports low-power sketches while maintaining an accurate sense of time.

Wanna make a clock?

Time-controlled transmissions

In Hardware, Software on Jun 27, 2011 at 00:01

Receiving data on remote battery-powered JeeNodes is a bit of a dilemma: you can’t just leave the receiver on, because it’ll quickly drain the battery. Compare this to sending, where nodes can easily run for months on end.

The difference is that with a remote node initiating all transmissions, it really only has to enable the RFM12B wireless module very briefly. With a 12..23 mA current drain, brevity matters!

So how do you get data from a central node, to remote and power-starved nodes?

One way is to poll: let the remote node ask for data, and return that data in an ACK packet as soon as asked. This will indeed work, and is probably the easiest way to implement that return data path towards remote nodes. One drawback is that if all nodes start polling a lot, the band may become overloaded and there will be more collisions.

Another approach is to agree on when to communicate. So now, the receiver again “polls” the airwaves, but now it tracks real time and knows when transmissions for it might occur. This is more complex, because it requires the transmitter(s) and receiver(s) to be in sync, and to stay in sync over time.

Note that both approaches imply a difficult trade-off between power consumption and responsiveness. Maximum responsiveness requires leaving the receiver on at all times – which just isn’t an option. But suppose we were able to stay in sync within 1 ms on both sides. The receiver would then only have to start 1 ms early, and wait up to 2 ms for a packet to come in. If it does this once a second, then it would still be on just 0.2% of the time, i.e. a 500-fold power saving.
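
To put a number on that, assuming a receive current somewhere in the middle of the 12..23 mA range mentioned above:

    on-time per second:       ≈ 2 ms, i.e. a duty cycle of ≈ 0.2 %
    average receive current:  ≈ 0.002 × 15 mA ≈ 30 µA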

Let’s try this out. Here’s the timedRecv.pde sketch (now in the RF12 library):

Screen Shot 2011 06 24 at 20.54.37

It listens for incoming packets, then goes into low-power mode for 2 seconds, then it starts listening again. The essential trick is to report two values as ACK to the sender: the time the receiver started listening (relative to that receiver’s notion of time), and the number of milliseconds it had to wait for the packet to arrive.

There’s no actual data processing – I’m just interested in the syncing bit here.

The sender side is in the timedSend.pde sketch:

Screen Shot 2011 06 24 at 20.58.17

This one tries to send a new packet each time the receiver is listening. If done right, the receiver will wake up at just the right time, and then go to sleep again. The ACK we get back in the sender contains valuable information, because it lets us see how accurate our timing was.

Here’s what I get when sending a new packet exactly 2,096 milliseconds after an ACK comes in:

Screen Shot 2011 06 24 at 20.39.33

Not bad, one ack for each packet sent out, and the receiver only has to wait about 6 milliseconds with its wireless receiver powered up. I’ve let it run for 15 minutes, and it didn’t miss a beat.

For some reason, send times need to be ≈ 2.1 s instead of the expected 2.0 s.

Now let’s try with 2,095 milliseconds:

Screen Shot 2011 06 24 at 20.37.17

Something strange is happening: there’s consistently 1 missed packet for each 5 successful ones!

My hunch is that there’s an interaction with the watchdog timer on the receiver end, which is used to power down for 2000 milliseconds. I suspect that when you ask it to run for 16 ms (the minimum), it won’t actually synchronize its timer, but will fit the request into what is essentially a free-running counter.

There may also be some unforeseen interaction due to the ACKs which get sent back, i.e. there’s a complete round-trip involved in the above mechanism.

Hmm… this will need further analysis.

I’m using a standard JeeNode on the receiving end, i.e. running at 16 MHz with a ceramic resonator (specs say it’s 0.5% accurate). On the sender side, where timing is much more important, I’m using a JeeLink which conveniently has an accurate 16 MHz crystal (specs say it’s 10 ppm, i.e. 0.001% accurate).

But still – even this simple example illustrates how a remote can receive data while keeping its wireless module off more than 99% of the time.

Complete web client demo

In AVR, Software on Jun 17, 2011 at 00:01

After yesterday’s addition of DHCP to the EtherCard library, it’s only a small step to create a sketch which does everything needed for convenient stand-alone use on a local LAN.

Here’s a webClient.pde demo sketch which sets itself up via DHCP, then does a DNS lookup to find a server by name, then does a web request every 5 seconds and displays the first part of the result:

Screen Shot 2011 06 15 at 09.04.57

Sample output:

Screen Shot 2011 06 15 at 09.05.51

The total sketch is under 10 Kb, so there’s still lots of room to add the RF12 wireless driver, as well as a fair amount of application logic.

Who says a little 8-bit processor can’t be part of today’s internet revolution, eh?

Better resolution

In Hardware on Jun 13, 2011 at 00:01

Yesterday’s high-side DC power control circuit was not able to measure current in a very exact way. Each ADC step is about 3.3 mV, and with a 0.1 Ω sense resistor, that translates to 33 mA steps (a bit less, now that the MOSFET turns out to have a slightly higher “on” resistance).

So let’s add an op-amp, to amplify the voltage:

Screen Shot 2011 06 05 at 18.27.58

That’s a standard way to amplify an input voltage by a factor of 11, so a 0.3V input should generate a 3.3V output voltage, which is nicely full-scale for the ATmega’s analog input when running at 3.3V.
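
As a quick sanity check on the numbers (the resistor values are just an example – the actual ones are in the schematic above):

    non-inverting gain:  1 + Rf / Rg = 1 + 100 kΩ / 10 kΩ = 11
    ADC step:            3.3 V / 1024 ≈ 3.2 mV
    current resolution:  3.2 mV / 11 / 0.1 Ω ≈ 3 mA per step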

But there’s a little mistake in this setup…

The voltage we’re measuring is not a small voltage above 0V, but a small voltage below 3.3V, so we’re in fact feeding the op-amp a voltage between about 3.0V and 3.3V. Amplified by 11 you get… something the op-amp can’t handle when powered from 3.3V, so it’ll simply return 3.3V all the time. Overflow!

Simple to fix though. Instead of tying that lower resistor to ground, we tie it to 3.3V as reference level. And lo and behold, I’m seeing a roughly 11x larger reading with the same setup as yesterday:

Dsc 2563

Now, the input voltage swings between about 2.4V and 3.3V, which is just fine as analog input.

The one thing to watch out for is that we’re sailing very close to the edge, or in op-amp speak: “close to the rail”. This circuit is working with an input voltage which is very close to the +3.3V power supply “rail”, and the output of the op-amp also needs to be able to swing up to that same +3.3V level. This requires some care in selecting an “RRIO” type op-amp (i.e. Rail-to-Rail Input and Output) – the chip I used here is the TLV2373, a dual op-amp. It does fairly well, but the output can’t quite totally reach 0V or 3.3V. I suspect that most op-amps will have this problem: a tiny residual voltage on both sides of the output swing. Such is life, no op-amp is perfect.

Here is the test sketch I used for this experiment:

Screen Shot 2011 06 05 at 18.22.28

It’s set up for 4 channels, although this circuit only has one. This could be used as the basis for a 2-channel plug, since both the MOSFET and the op-amp are dual-channel.

The output measurements come in via wireless every 5 seconds, and a simple “1,21s” sent out to it (since this is node 21) will turn on the disk.

But this code doesn’t even come close to what I’d really like to implement: it needs to track power use, only switch off when a device is consistently below a preset level (could be different for each device), implement encryption to prevent unauthorized control, store settings in EEPROM, support configurable behavior after power loss, and power devices up in a staggered mode to reduce the load on the power supply, oh my – not there yet!

RF12 broadcasts and ACKs

In Software on Jun 10, 2011 at 00:01

In yesterday’s post, the general design of the RF12 driver was presented, and the format of the packets it supports.

The driver also supports broadcasting, i.e. sending packets to all interested nodes, and ACKs, i.e. sending a short “acknowledge” packet from receiver to transmitter to let the latter know that the packet was properly received.

Broadcasting and ACKs can be combined, with some care: only one node should send back the ACK, so the usefulness of ACKs with broadcasts is limited if the goal was to reliably get a packet across to multiple listeners.

Broadcasts and ACKs use the HDR byte in each packet:

Rf12 Packets

There are three bits: C = CTL, D = DST, and A = ACK, and there is a 5-bit node ID. Node ID 0 and 31 are special, so there can be 30 different nodes in the same net group.

The A bit (ACK) indicates whether this packet wants to get an ACK back. The C bit needs to be zero in this case (the name is somewhat confusing).

The D bit (DST) indicates whether the node ID specifies the destination node or the source node. For packets sent to a specific node, DST = 1. For broadcasts, DST = 0, in which case the node ID refers to the originating node.

The C bit (CTL) is used to send ACKs, and in turn must be combined with the A bit set to zero.

To summarize, the following combinations are used:

  • normal packet, no ACK requested: CTL = 0, ACK = 0
  • normal packet, wants ACK: CTL = 0, ACK = 1
  • ACK reply packet: CTL = 1, ACK = 0
  • the CTL = 1, ACK = 1 combination is not currently used

In each of these cases, the DST bit can be either 0 or 1. When packets are received with DST set to 1, then the receiving node has no other way to send ACKs back than using broadcasts. This is not really a problem, because the node receiving the ACK can check that it was sent by the proper node. Also, since ACKs are always sent immediately, each node can easily ignore an incoming ACK if it didn’t send a packet shortly before.

Note that both outgoing packets and ACKs can contain payload data, although ACKs are often sent without any further data. Another point to make is that broadcasts are essentially free: every node will get every packet (in the same group) anyway – it’s just that the driver filters out the ones not intended for it. A recent RF12 driver change: node 31 is now special, in that it will see packets sent to any node ID, not just its own.

It turns out that for Wireless Sensor Networks, broadcasts are quite useful. You just kick packets into the air, in the hope that someone will pick them up. Often, the remote nodes don’t really care who picked them up. For important events, a remote node can choose to request an ACK. In that case, one central node should always be listening and send ACKs back when requested. An older design of the Room Node sketch failed to deal with the case where the central node would be missing, off, or out of range, and would retry very often – quickly draining its own battery as a result. The latest code reduces the rate at which it resends an ACK, and stops asking for ACKs after 8 attempts. The next time an important event needs to be sent again, this process then repeats.
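
To make those header combinations a bit more concrete, here’s roughly what the two ends look like in code – node ID, band, and net group are arbitrary, and the RF12_* macros are the ones used by the RF12 example sketches (check RF12.h for the authoritative definitions):

    #include <RF12.h>

    byte payload[2];    // whatever the remote node wants to report

    // remote node: broadcast (DST = 0) and request an ACK (ACK = 1, CTL = 0)
    static void sendWithAck () {
        while (!rf12_canSend())
            rf12_recvDone();            // keep the driver's state machine going
        rf12_sendStart(RF12_HDR_ACK, payload, sizeof payload);
    }

    // central node: send an ACK back (CTL = 1, ACK = 0) whenever one is requested
    static void pollAndAck () {
        if (rf12_recvDone() && rf12_crc == 0 && RF12_WANTS_ACK)
            rf12_sendStart(RF12_ACK_REPLY, 0, 0);   // an empty ACK is fine
    }

    void setup () { rf12_initialize(22, RF12_868MHZ, 5); }
    void loop () { pollAndAck(); }      // or call sendWithAck() on the remote node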

RF12 packet format and design

In Software on Jun 9, 2011 at 00:01

The RF12 library contains the code to support the RFM12B wireless module in an Arduino-like environment. It’s used for JeeNodes but also in several projects by others.

Here’s the general structure of a packet, as supported by the RF12 driver:

I made quite a few design decisions in the RF12 driver. One of the goals was to make the communication work in the background, so the driver is fully interrupt-driven. An important choice was to limit the payload size to 66 bytes: that keeps RAM use low, yet still leaves just over 64 bytes – enough to send data in 64-byte chunks with a teeny bit of additional info.

Another major design decision was to support absolutely minimal packet sizes. This directly affects the power consumption, because longer packets take more time, and the longer the receiver and transmitter are on, the more precious battery power they will consume. For this same reason, the transmission bit rate is set fairly high (about 50 kbits/sec) – a higher rate means the same message can be sent in less time. A higher rate also makes it harder for the receiver to still pick up a good packet, so I didn’t want to push this to the limit.

This is how the RF12 driver can support really short packets:

  • the network group is one byte and also doubles as second SYN byte
  • the node ID is small enough (5 bits) to allow a few more header bits in the same byte
  • there are only three header bits, as described in more detail in tomorrow’s post
  • there is only room for either the source node ID or the destination node ID

That last decision is a bit unusual. It means an incoming packet can only inform the receiver where it came from, or define which receiver the packet is intended for – not both.

This may seem like a severe limitation, but it really isn’t: just add the missing info in the payload and agree on a convention so that the receiver can pick it up. All the RF12 does is enforce a truly minimal design, you can add any info you like as payload.
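As a concrete (made-up) example of such a convention, a sketch could simply reserve the first payload byte for the sender’s node ID whenever packets are addressed to a specific node:

// Hypothetical payload layout - not part of the RF12 driver, just a
// convention which both ends agree on.
typedef struct {
    byte origin;   // node ID of the sender, filled in by the sending sketch
    byte seq;      // example: a sequence number to detect missed packets
    int reading;   // example: some sensor value
} Payload;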

As a result, a minimal packet has the following format:

That’s 9 bytes, i.e. 72 bits – which means that a complete (but empty) packet can be sent out in less than 1.5 ms.

Tomorrow, I’ll describe the exact logic used in the HDR byte, and how to use broadcasts and ACKs.

Hard disk power control

In Hardware on May 30, 2011 at 00:01

There’s a server running 24 hours a day, 7 days a week here at JeeLabs. It has two internal hard drives, one of them used as hourly backup for the system partition on the other disk. It’s running a couple of VM’s with all sorts of services, and it’s been running flawlessly for several months. Draws 10..15W.

Now, I’d like to attach a couple of extra hard disks to this server. A pair of disks for off-site backups (yes, there is a daily cloud backup, but I want a second fall-back system for some files), and a disk with stuff I rarely need, but don’t want to throw away. Disks are cheap – in fact I’ve got enough disks, so disk storage is actually free here. And while I’m at it: maybe add a little NAS for private stuff, since it’s been lying around and collecting dust anyway.

But I don’t want to have everything on-line all the time, for safety reasons and to keep power consumption low.

Why not use a JeeNode to control the power to these devices, which all run off a 12V supply? And why not just use one beefy switching supply, instead of that endless collection of power bricks?

Here’s a first idea:

Screen Shot 2011 05 29 at 18.49.52

Only one of the two channels on the MOSFET Plug is used here. And instead of switching a power LED or LED strip with it, it’s being used to control the power to the external disk drive.

There’s a flaw in this design, though: it’ll only work with ONE hard disk…

Tomorrow I’ll go into this and explain what’s going on, and why it can’t work with multiple disk drives. Hint: this setup only works if the JeeNode is controlled by wireless.

Avoiding memory use

In Hardware on May 25, 2011 at 00:01

On an ATmega328, memory is a scarce resource, as I’ve mentioned recently. Flash memory is usually OK – I’ve yet to run into the 30..32 Kb limit on code. But the crunch comes in all other types of memory – especially RAM.

This becomes apparent with the Graphics Board, which needs a 1 Kb buffer for its display, and with the EtherCard, which can often not even fit a full 1.5 Kb Ethernet packet in RAM.

The Graphics Board limitation is not too painful, because there’s the “JeePU”, which can off-load the graphics display to another JeeNode via a wireless connection. Something similar could probably be done based on I2C or a serial interface.

But the EtherCard limitation is awkward, because this essentially prevents us from building more meaningful web interfaces, and richer web server functionality, for example.

The irony is that there’s plenty of unused RAM memory, just around the corner in this case: the ENC28J60 Ethernet controller chip has 8 Kb RAM, of which some 3.5 Kb could be used for further Ethernet packet buffers… if only the code were written differently!

In fact, we have to wonder why we need any RAM at all, given that the controller has so much of it.

The problem with the EtherCard library is that it copies an entire received frame to RAM before use, and that it has to build up an entire frame in RAM to send it out.

I’d like to improve on that, but the question is how.

A first improvement is already in the EtherCard library: using strings in flash memory. There’s also basic string expansion, which you can see in action in this code, taken literally from the etherNode.pde example sketch:

Screen shot 2011 05 24 at 22 57 26

The $D’s get expanded to integer values, supplied as additional arguments to buf.emit_p(). This simplifies generating web pages with values (and strings) inserted, but it doesn’t address the issue that the entire web page is still being constructed in a RAM buffer.
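For reference, the pattern looks roughly like this – the buffer setup and method names are quoted from memory, so treat this as a sketch and compare it with the EtherCard headers and the etherNode.pde example:

#include <EtherCard.h>

byte Ethernet::buffer[700];   // packet buffer, shared with the library
BufferFiller bfill;

// build a tiny reply page, with a value substituted for the $D placeholder
static word homePage (long uptimeSecs) {
    bfill = ether.tcpOffset();
    bfill.emit_p(PSTR(
        "HTTP/1.0 200 OK\r\n"
        "Content-Type: text/html\r\n\r\n"
        "<html><body>Uptime: $D seconds</body></html>"),
            (word) uptimeSecs);
    return bfill.position();   // number of bytes generated, for the reply
}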

Wouldn’t it be nice if we could do better than that, especially since the packet needs to end up inside the ENC28J60 controller anyway?

Two possibilities: 1) generate directly to the ENC28J60’s RAM, or 2) generate a description of the result, so that the real output data can be produced when needed.

For now, this is all just a mental exercise. It looks like option #1 could be implemented fairly easily. The benefit would be that only the MAC + IP header needs to stay in RAM, and that the payload would go directly into the controller chip. A huge RAM saving!

But that’s only a partial solution. The problem is that it assumes an entire TCP/IP response would fit in RAM. For simple cases, that would indeed be enough – but what if we want to send out a multi-packet file from some other storage, such as an external Memory Plug or an SD card?

The current EtherCard library definitely isn’t up to this task, but I’d like to think that one day it will be.

So option #2 might be a better way: instead of preparing a buffer with the data that needs to be sent, we prepare a set of instructions which describe what is needed to generate the buffer. This way, we could generate a huge “buffer” – larger than available RAM – and then produce individual packets as needed, i.e. as the TCP/IP session advances, packet by packet.

This way, we could have some large “file” in a Memory Plug, and its contents would be copied to the ENC28J60’s RAM on-demand, instead of all in advance.
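To sketch what option #2 might look like – everything below is purely hypothetical, nothing of the sort exists in the EtherCard library today:

// A "virtual buffer" described as a list of chunks, instead of actual bytes
// in RAM. Individual packets would later be produced by walking these
// segments and copying just the bytes needed for the current TCP segment
// straight into the ENC28J60's packet RAM.
enum Source { FROM_FLASH, FROM_EEPROM, FROM_MEMORY_PLUG };

typedef struct {
    byte source;   // one of the Source values above
    word offset;   // where the chunk starts in that storage
    word length;   // number of bytes in this chunk
} Segment;

Segment reply[8];
byte replyCount;

// total size of the described reply, e.g. for the Content-Length header
static word replyLength () {
    word total = 0;
    for (byte i = 0; i < replyCount; ++i)
        total += reply[i].length;
    return total;
}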

It seems like a lot of contortions to get something more powerful going for the EtherCard, but this approach is in fact a very common one, called Zero Copy. With “big” computers, the copying is done with DMA and special-purpose hardware, but the principle is the same: don’t copy bytes around more than strictly needed. In our case, that means copying bytes from EEPROM to ENC28J60, without requiring large intermediate buffers.

Hm, maybe it’s time to create a “ZeroCopy” library, sort of a “software based generic DMA” between various types of memory… maybe even to/from external I/O devices such as a serial port or I2C device?

It could perhaps be modeled a bit like Tcl’s “channels” and “fcopy” command. Not sure yet… we’ll see.

RF bootstrap design

In Software on May 24, 2011 at 00:01

After some discussion on the forum, I’d like to present a draft design for an over-the-air bootstrap mechanism, IOW: being able to upload a sketch to a remote JeeNode over wireless.

Warning: there is no release date. It’ll be announced when I get it working (unless someone else gets there first). This is just to get some thoughts down, and have a first mental design to think about and shoot at.

The basic idea is that each remote node contacts a boot server after power up, or when requested to do so by the currently running sketch.

Each node has a built-in unique 2-byte remote ID, and is configured to contact a specific boot server (i.e. RF12 band, group, and node ID).


First we must find out what sketch should be running on this node. This is done by sending out a wireless packet to the boot server and waiting for a reply packet:

  • remote -> server: initial request w/ my remote ID
  • server -> remote: reply with 12 bytes of data

These 12 bytes are encrypted using a pre-shared secret key (PSK), which is unique for each node and known only to that node and the boot server. No one but the boot server can send a valid reply, and no one but the remote node can decode that reply properly.

The reply contains 6 values:

  1. remote ID
  2. sketch ID
  3. sketch length in bytes
  4. sketch checksum
  5. extra sketch check
  6. checksum over the above values 1..5

After decoding this info, the remote knows:

  • that the reply is valid and came from a trusted boot server
  • what sketch should be present in flash memory
  • how to verify that the stored sketch is complete and correct
  • how to verify the next upload, if we decide to start one

The remote has a sketch ID, length and checksum stored in EEPROM. If they match with the reply and the sketch in memory has the correct checksum, then we move forward to step 3.

If no reply comes in within a reasonable amount of time, we also jump to step 3.
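Just to make that 12-byte reply a bit more concrete, here’s one possible layout – a strawman, nothing about it is final:

// Hypothetical layout of the (decrypted) 12-byte reply: six 16-bit values,
// matching the numbered list above.
typedef struct {
    word remoteID;    // 1. must match our own built-in remote ID
    word sketchID;    // 2. which sketch should be in flash memory
    word length;      // 3. sketch length in bytes
    word checksum;    // 4. checksum over the sketch contents
    word extraCheck;  // 5. extra sketch check
    word replyCheck;  // 6. checksum over values 1..5 above
} BootReply;          // 6 x 2 bytes = 12 bytes, as in the reply packet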


Now we need to update the sketch in flash memory. We know the sketch ID to get, we know how to contact the boot server, and we know how to verify the sketch once it has been completely transferred to us.

So this is where most of the work happens: send out a request for some bytes, and wait for a reply containing those bytes – then rinse and repeat for all bytes:

  • remote -> server: request data for block X, sketch Y
  • server -> remote: reply with a check value (X ^ Y) and 64 bytes of data

The remote node gets data 64 bytes at a time, and burns them to flash memory. The process repeats until all data has been transferred. Timeouts and bad packets lead to repeated requests.

The last reply contains 0..63 bytes of data, indicating that it is the final packet. The remote node saves this to flash memory, and goes to step 3.
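In other words, the remote’s fetch loop could be structured along these lines – the three helper functions are imaginary stand-ins for the real packet exchange and flash programming code, which don’t exist yet:

// Strawman of the block-fetch loop; sendRequest(), waitForReply() and
// burnToFlash() are placeholders, not real code.
static void sendRequest (word block, word sketch) { /* send RF12 request */ }
static byte waitForReply (word check, byte* buf, byte& len) { return 0; }
static void burnToFlash (word block, const byte* buf, byte len) { /* SPM */ }

static void fetchSketch (word sketchID) {
    byte buf[64], len;
    for (word block = 0; ; ++block) {
        do
            sendRequest(block, sketchID);           // ask for the next block
        while (!waitForReply(block ^ sketchID, buf, len));
        burnToFlash(block, buf, len);
        if (len < 64)       // a short (0..63 byte) reply marks the last block
            break;
    }
}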


Now we have the proper sketch, unless something went wrong earlier.

The final step is to verify that the sketch in flash memory is correct, by calculating its checksum and comparing it with the value in EEPROM.

If the checksum is bad, we set a watchdog timer to reset us in a few seconds, and … power down. All our efforts were in vain, so we will retry later.

Else we have the proper sketch and it’s available in flash memory, so we leave bootstrap mode and launch it.

That’s all!


This scheme requires a working boot server. If none is found or in range, then the bootstrap will not find out about a new sketch to load, and will either launch the current sketch (if valid), or hit a reset and try booting again a few seconds later.

Not only do we need a working boot server, that server must also have an entry for our remote ID (and our PSK) to be able to generate a properly encrypted reply. The remote ID of a node can be recovered if lost, by resetting the node and listening for the first request it sends out.

If the sketch hangs, then the node will hang. But even then a hard reset or power cycle of the node will again start the boot sequence, and allows us to get a better sketch loaded into the node. The only drawback is that it needs a hard reset, which can’t be triggered remotely (unless the crashing sketch happens to trigger the reset, through the watchdog or otherwise).

Errors during reception lead to a failed checksum at the end, which then leads to a reset and a new boot loading attempt. There is no resume mechanism, so such a case does mean we have to fetch all the data blocks again.


And then there’s security – this is the hard part. Nodes which end up running some arbitrary sketch have the potential to cause a lot of damage if they also control real devices (lights are fairly harmless, but thermostats and door locks aren’t!).

The first line of defense comes from the fact that it is the remote node which decides when to fetch an update. You can’t simply send packets and make remote nodes reflash themselves if they don’t want to.

You could interrupt AC mains and force a reset in mains-powered nodes, but I’m not going to address that. Nor am I going to address the case of physically grabbing hold of a node or the boot server and messing with it.

The entire protection is based on that initial reply packet, which tells each remote node what sketch it should be running. Only a boot server which knows the remote node’s PSK is able to send out a reply which the remote node will accept.

It seems to me that the actual sketch data need not be protected, since these packets are only sent out in response to requests from a remote node (which asks for a specific sketch ID). Bad packets of any kind will cause the final checksums to fail, and prevent such a sketch from ever being started.

As for packets flying around in a fully operational home network: that level of security is a completely separate issue. Sketches can implement whatever encryption they like, to secure day-to-day operation. In fact, the RF12 library includes an encryption mechanism based on XTEA for just that purpose – see this weblog post.

But for a bootstrap mechanism, which has to fit in 4 Kb including the entire RF12 wireless packet driver, we don’t have that luxury. Which is why I hope that the above will be enough to make it practical – and safe!

RFM12B range testing

In Hardware on May 15, 2011 at 00:01

There have been many questions and discussions about the range achievable with the RFM12B wireless modules. Usually, my answers have been: 1) should be about 100m outside, and 2) gets through about two walls inside the house. But the most accurate answer really is a resounding “it depends” …

Because it really does. RF range will depend on a huge number of factors. What works for me may not work for you, and what works today may not work tomorrow.

Triggered by some recent discussions on the forum, and with the help of Steve Evans (@TankSlappa) who wrote a good set of sketches and did some tests, I’ve come up with two sketches and a setup to report reception quality.

This setup evidently requires two RFM12B modules, plus an LCD connected via the LCD Plug. I re-used one of my Mystery Boxes – one of so many projects here at JeeLabs waiting to get finished.

My sending unit is a JeeNode with an AA Power Board on the back. The rfRangeTX.pde transmitter sketch is very simple, and sends out 1-byte packets 10 times per second:

Screen Shot 2011 05 14 at 15.00.18
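In case the screenshot doesn’t make it across, the transmitter boils down to something like this – my reconstruction, so the real rfRangeTX.pde may differ in the details (node ID and net group are arbitrary here):

#include <Ports.h>
#include <RF12.h>

static byte seq;    // 1-byte sequence counter, sent as the entire payload

void setup () {
    rf12_initialize(1, RF12_868MHZ, 33);    // node 1, 868 MHz, net group 33
}

void loop () {
    while (!rf12_canSend())
        rf12_recvDone();                    // keep the driver going
    ++seq;
    rf12_sendStart(0, &seq, sizeof seq);    // broadcast, 1 byte of payload
    delay(100);                             // i.e. 10 packets per second
}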

The receiver is based on a JeeNode USB with LCD and LiPo battery, so both units are portable / self-powered:

Dsc 2513

The code for the rfRangeRX.pde receiver sketch is too long to be shown in its entirety, but here’s an overview:

Screen Shot 2011 05 14 at 15.03.34

The display shows 4 fields:

  • top left = percentage of packets received in the last 5 seconds
  • top right = percentage of packets received in the last second
  • bottom left = sequence number of the last valid incoming packet
  • bottom right = history of last reception counts (10x 0.5s intervals)

Here, one packet was missed in the last second (98% means 1 packet out of 50 was missed, 90% means 1 out of 10):

Dsc 2509

And here, two packets were missed a few seconds ago:

Dsc 2511

Feel free to take these sketches as starting point for your own tests. You could do all sorts of funky range testing with this, from just seeing how much gets lost in a particular setup, to investigating the effect of different RFM12B baud rates, working in other bands, antenna optimization, identifying in house “cold spots”, and checking the effect of adding an extra pull-up resistor, as recently suggested here and here on the forum.

There are still quirks (i.e. bogus reports every 5s when no packets are coming in, due to byte wraparound).

Both sketches have been added to the RF12 library. There is probably a ton of neat stuff to add – please share your improvements and I’ll try to fold them in so others can use them too. By adding logic for two buttons (or a joystick as on the above RX unit), we could even configure the receiver in the field.

Many thanks to Steve E for coming up with the original idea and getting a first implementation going.

RF12 skeleton sketch

In Software on May 7, 2011 at 00:01

The RF12 library has all the code to drive an RFM12B wireless module, and supports fully interrupt-driven sending and receiving of arbitrary packets up to 66 bytes in length.

Interrupt drivers are fiendishly hard to debug and get 100% right, but often well worth the effort. The result is code which behaves almost as if it’s running in the background, i.e. it makes the ATmega appear to support multi-tasking, with all I/O happening all by itself.

In the case of the RFM12B, this is quite important, because there are some very strict timing requirements as to how and when to exchange data with the hardware. Once a driver is interrupt-driven, the rest of the code doesn’t have to be as strict – all critical timing requirements are dealt with, even if you don’t poll the driver regularly.

But the logic of all this stuff can be a bit overwhelming at first. So, to help out, and prompted by a recent discussion on the forum, I’ve set up an example of how to write a sketch which can read and send packets:

#include <Ports.h>
#include <RF12.h>

MilliTimer sendTimer;
typedef struct { ... } Payload;    // fill in your own payload fields
Payload inData, outData;
byte pendingOutput;

void setup () {
    // call rf12_initialize() or rf12_config()
}

static void consumeInData () {
    // handle the freshly received contents of inData here
}

static byte produceOutData () {
    // fill outData with the values to send, return 1 if there is a packet
    return 1;
}

void loop () {
    if (rf12_recvDone() && rf12_crc == 0 && rf12_len == sizeof inData) {
        memcpy(&inData, (byte*) rf12_data, sizeof inData);
        // optional: rf12_recvDone(); // re-enable reception right away
        consumeInData();
    }

    if (sendTimer.poll(100))
        pendingOutput = produceOutData();

    if (pendingOutput && rf12_canSend()) {
        rf12_sendStart(0, &outData, sizeof outData, 2);
        // optional: rf12_sendWait(2); // wait for send to finish
        pendingOutput = 0;
    }
}
You’ll need to do a few things to get this going, which are all common sense really:

  • define a proper struct for the Payload contents you want to send/receive
  • set up the RF12 driver with the proper configuration settings
  • fill in the code to handle incoming data in inData
  • fill in the code to save new outgoing data to outData

This sketch will also work with an RFM12B Board and an Arduino.

One crucial detail is that you can’t just send data whenever you feel like it – you have to throttle the outgoing sends a bit using sendTimer, and you have to ask the RF12 driver for permission to send using rf12_canSend(). Failure to do this will “mess up the air waves” and severely interfere with RF communication between any nodes, even those that play nice.

To write a sketch which only sends, leave consumeInData() empty – don’t throw out the first “if”, because those rf12_recvDone() calls are still essential.

To write a sketch which only receives, simply make produceOutData() return 0. Removing the last two if’s is also ok, in this case.

Once you have your sketch working, you can start adding tricks to reduce power consumption: turning the RFM12B on and off, running at lower clock speeds, putting the ATmega into a low-power sleep state, etc.

OOK relay, revisited

In Software on Feb 3, 2011 at 00:01

With the modded RFM12B receiving 868 MHz signals, and the new OOK 433 Plug doing the same for the 433 MHz band, the new OOK relay is coming in sight.

Just a lousy bit of code. Elementary – I thought…

Except it wasn’t. Software always seems to take a lot more time (and concentration) than hardware. Silly!

Still, I think I managed to collect all the pieces lying around here from earlier experiments in that area, and combine them into a new ookRelay2.pde sketch.

It’s fairly elaborate and too long to show here, but I’ll pick out some pieces:

  • all the decoders live in the decoders.h file
  • since they all share common logic, each is derived from a common “DecodeOOK” class
  • the protocol for each decoder is the same: feed pulse widths to nextPulse(), and it will return true whenever a valid packet has been decoded – then call getData() to get a pointer and byte count (see the sketch right after this list)
  • the ookRelay2 sketch includes a variety of decoders, I hope we can improve/extend/add-more over time
  • there are two pulse sources: the 868 MHz receiver and the 433 MHz receiver
  • for each, a “DecoderInfo” table is defined with decoders to use for them
  • the runPulseDecoders() function does what the name says: evaluate each of the decoders in turn
  • when a decoder succeeds, data is added to an outgoing buffer (and optionally, printed to serial)
  • in this example, I send the accumulated data off to the RF12 wireless network, but Ethernet or any other transport mechanism could be used as well
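Here’s the gist of how one decoder gets driven, as mentioned in the list above. The DecodeOOK shown here is only a minimal stand-in for the real class in decoders.h, which has a bit more to it – this just illustrates the calling pattern:

// Minimal stand-in for the DecodeOOK interface: feed pulse widths in, pull
// decoded bytes out once nextPulse() reports a complete packet.
class DecodeOOK {
public:
    virtual bool nextPulse (word width) = 0;        // true = packet complete
    virtual const byte* getData (byte& count) = 0;  // decoded bytes
    virtual void resetDecoder () = 0;               // get ready for more
};

static void runDecoder (DecodeOOK& decoder, word width) {
    if (width != 0 && decoder.nextPulse(width)) {
        byte size;
        const byte* data = decoder.getData(size);
        // this is where ookRelay2 appends 'size' bytes at 'data' to the
        // outgoing packet buffer (and optionally prints them to serial)
        decoder.resetDecoder();
    }
}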

With this out of the way, you can probably, eh… decode the following lines at the top of the ookRelay2 sketch:

Screen Shot 2011 02 02 at 23.30.36

And here’s the main loop, which is keeping things going:

Screen Shot 2011 02 02 at 23.31.24

The hard part is doing this efficiently with accurate timings, even though a lot of stuff is happening. That’s why there are two interrupt routines, which trigger on changes in 868 MHz and 433 MHz signals, respectively:

Screen Shot 2011 02 02 at 23.33.22

I’m still debugging, and I need to analyze just how much leeway there is to run all the decoders in parallel. Earlier today I had the 433 MHz reception going, but right now it seems this code is only picking up 868 MHz signals:

Screen Shot 2011 02 02 at 23.34.46

Oh well, it’s a start. Feel free to check out the code, which lives as example in the RF12 library.

Update – Bug fixed, now 433 MHz decoding works.

Meet the RFM12B Board

In Hardware on Feb 2, 2011 at 00:01

With the RFM12B becoming a nice low-cost option for low-volume wireless communication, and the RF12 library proving to be a solid software driver for it, it’s time to generalize a bit further…

Say hello to the new RFM12B Board:

Dsc 2448

This board adds a voltage regulator and 3.3V/5V level conversions, to be able to use the RFM12B on 5V systems such as the various Arduino’s out there, the RBBB, … anything you want, really.

There are 8 pins on this board, of which the 8th is a regulated 3.3V supply which can be used in other parts of the circuit – the voltage regulator will be able to supply at least 100 mA extra on that supply pin.

The other 7 pins are:

  • +5V
  • Ground
  • SPI clock (SCK) – Arduino digital 13
  • SPI data out (SDO) – Arduino digital 12
  • SPI data in (SDI) – Arduino digital 11
  • SPI select (SEL) – Arduino digital 10
  • IRQ – Arduino digital 2

Just hook each of those up to an Arduino, and you can use the RF12 library as is!
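For example, a minimal test sketch on a stock Arduino could look like this (node ID, band, and net group are arbitrary choices – adjust to taste):

#include <Ports.h>
#include <RF12.h>

void setup () {
    Serial.begin(57600);
    rf12_initialize(1, RF12_868MHZ, 212);   // node 1, 868 MHz, net group 212
}

void loop () {
    // report each valid incoming packet on the serial port
    if (rf12_recvDone() && rf12_crc == 0) {
        Serial.print("OK, ");
        Serial.print((int) rf12_len);
        Serial.println(" bytes");
    }
}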

Sample output:

Screen Shot 2011 02 01 at 22.08.14

Look ma, just like a JeeNode or JeeLink!

With an 8-pin stacking header and a bit of bending, cutting, and soldering two wires (I used a jumper wire, cut in half), you can even stick this thing right onto an Arduino:

Dsc 2449

But of course using a normal solderless breadboard and some wire jumpers will work just as well.

Note that this board can also be tied to 3.3V systems – just use the bare PCB (and short out three solder jumpers), which then becomes a breakout board for the RFM12B. No need to mess with the 2.0 mm pin/pad distance on the RFM12B module itself.

Docs can be found in the Café, and the kit/pcb is now available in the shop, as usual.

Back-soon server

In Hardware, Software on Jan 31, 2011 at 00:01

Soon, I’m going to move the JeeLabs server to a new spot in the house. Out of sight, now that the setup is stable.

But to do so requires rerouting an ethernet cable to the internet modem downstairs.

To do it right, I’d like to have a “we will be back soon” surrogate server plugged into the internet modem while transitioning, so that the status is reported on-line:

Screen Shot 2011 01 30 at 16.36.21

I could plug in a temporary Linux box, of course, or a laptop. But I want to keep this option available at all times, so a dedicated solution would be more practical. That way I can easily take the server off-line at any moment.

Ah, but that’s easy, with an Ether Card and an RBBB:

Dsc 2434

This combination just needs a 5..6V power supply, and 6 wires between the RBBB and the Ether Card.

Here’s the backSoon.pde sketch, which I’ve added to the EtherCard library:

Screen Shot 2011 01 30 at 17.24.05
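In case that screenshot is hard to read, the sketch boils down to something like this. This is my reconstruction – the MAC/IP values are made up and the EtherCard calls are quoted from memory, so compare with the actual backSoon.pde before trusting the details:

#include <EtherCard.h>

static byte mymac[] = { 0x74,0x69,0x69,0x2D,0x30,0x31 };   // made-up MAC
static byte myip[]  = { 192,168,1,203 };                   // made-up IP

byte Ethernet::buffer[500];

const char page[] PROGMEM =
    "HTTP/1.0 503 Service Unavailable\r\n"
    "Content-Type: text/html\r\n"
    "Retry-After: 600\r\n\r\n"
    "<html><body>"
    "<h3>The JeeLabs server is temporarily off-line.</h3>"
    "<p>We will be back soon!</p>"
    "</body></html>";

void setup () {
    ether.begin(sizeof Ethernet::buffer, mymac);
    ether.staticSetup(myip);
}

void loop () {
    // wait for an incoming TCP request and always reply with the same page
    if (ether.packetLoop(ether.packetReceive())) {
        memcpy_P(ether.tcpOffset(), page, sizeof page);
        ether.httpServerReply(sizeof page - 1);
    }
}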

In this case, being able to configure the MAC address of the interface as well as the IP address is in fact quite convenient, because this way the modem needn’t notice the hardware switch.

Only needs about 6 Kb. Actually, I’ll probably add a wireless option and use a JeeNode instead, to report total hits every once in a while. But either way, such a “back-soon server” really doesn’t come any simpler than that!

So if one of these days you see that message while surfing at JeeLabs, you know where it’s coming from :)

PS. I’ve put the back-soon server on-line as a test, it can be reached at

New OOK and DCF relay

In Hardware on Jan 29, 2011 at 00:01

With all the pieces finally in place, and now that I’m getting a little bit more time to hack around again, this seemed like a good time to reconsider the OOKrelay.

So I combined a JeeNode USB, the new OOK 433 Plug, a Carrier Board with box, and half a Carrier Card:

Dsc 2429

In the top left I also added a DCF77 receiver from Conrad, attached to the Carrier Card prototyping board. It’s a bit hard to see, because the little receiver board is actually mounted upright. Here’s a better view:

Dsc 2430

A JeeNode USB was selected, because this thing will be powered permanently, and I chose to hide the connector inside the box to make it more robust. So all this needs is a little USB charger. The LiPo charge option might be useful if I decide to make this thing more autonomous one day (i.e. to record accurate power outage times).

Note that this is a modded JeeNode, as described here, to be able to receive 868 MHz OOK signals.

So what this thing can do – as far as the hardware goes – is listen for both 433 MHz and 868 MHz OOK signals at the same time, as well as pick up the DCF77 atomic clock signals from Frankfurt. Sending out 433/868 MHz OOK is possible too, but since the unit isn’t constantly listening for OOK packets, it’ll have to poll for such commands, which will introduce a small delay.

That’s the hardware, i.e. the easy part…

The software will be a lot more work. I’m going to adapt / re-implement the functionality from the OOKrelay sketch, i.e. this unit will decode and re-transmit all incoming data as RF12 packets, so that they can be picked up by a JeeLink hooked up to my PC/Mac. The clock signal will be useful to accurately time-stamp all receptions, and is really of more general use.

So far, the following I/O pins have been used:

  • one port for the OOK 433 Plug, i.e. one DIO and one AIO pin
  • one input pin for the modded JeeNode, to receive 868 MHz OOK signals
  • one input pin for the DCF77 signal

There is still lots of room left for expansion. A Pressure Plug perhaps, to track barometric pressure. Or a Memory Plug to save up the last data while the central receiver is unavailable. Or both, since these can be combined on a single I2C port.

Absent from all this, is a display. First of all, squeezing a 2×16 LCD in there would have been very tight, but more importantly, now that there is a JeePU, there really is no need. I’m already sending the info out by wireless, so a remote graphical display is definitely an option – without PC – or I could use a central server to get this info to the right place(s). This box is intended to be hidden out of sight, somewhere centrally in the house.

Only thing I might consider is a small LED near the USB cable, to indicate that all is well. Maybe… I’m not too fond of blinking LEDs everywhere in the house :)

OOK reception with RFM12B

In Hardware, Software on Jan 27, 2011 at 00:01

A while back, JGJ Veken (Joop on the forum) added a page on the wiki on how the RFM12B can receive OOK.

I never got around to trying it … until now. In short: if you’re not afraid of replacing an SMD capacitor on the RFM12B wireless module, then it’s trivial!

Here’s what needs to be done – the capacitor on the left is 4.7 nF:

Screen Shot 2011 01 25 at 14.16.36

Unsolder it and replace it with a cap in the range 150..330 pF (I used 220 pF).

This cap appears to determine the time constant w.r.t. how fast the RSSI signal adapts to varying RF carrier signal strengths. With 4.7 nF, it’s a bit too sluggish to detect an OOK signal – which is nothing other than a carrier being switched on and off (OOK stands for: On / Off Keying).

The next trick is to connect the FSK/DATA/nFSS pin of the RFM12B via a 100 Ω resistor to AIO1 (a.k.a. analog 0, a.k.a. PC0, a.k.a. ATmega pin 23 – phew!):

Dsc 2427

As far as I can tell, this is a digital signal, so connecting it to an analog pin is not really a requirement. It might be more practical to connect it to one of the B0/B1 pins on the SPI/ISP header. Perhaps I should add a jumper in a future revision of the JeeNode PCB?

And lastly, the RFM12B must be placed in a special mode to get the RSSI signal onto that pin – i.e. compared to the RSSI threshold, which is also configured into the RFM12B (-97 dBm).

All the pieces were there, and all I had to do was to follow the steps mentioned on the wiki page.
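For reference, turning that pin into pulse widths for the OOK decoders takes very little code – something along these lines, with the pin number assuming the AIO1 hookup shown above:

const byte ookPin = 14;    // AIO1 of port 1, i.e. PC0 / Arduino analog 0

// return the width of the last completed pulse in microseconds, or 0 if the
// signal has not changed since the previous call
static word measurePulse () {
    static byte lastLevel;
    static unsigned long lastEdge;
    byte level = digitalRead(ookPin);
    if (level == lastLevel)
        return 0;
    lastLevel = level;
    unsigned long now = micros();
    word width = (word) (now - lastEdge);
    lastEdge = now;
    return width;
}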

I made some changes to the code and added it as RF12MB_OOK.pde example sketch. Here is the main logic:

Screen Shot 2011 01 25 at 16.42.14

As you can see, all incoming data is forwarded using the normal RF12 mode packet driver.

Sample output:

Screen Shot 2011 01 25 at 16.56.39

It’s happily picking up FS20, EM10, S300, and KS300 packets, and the overall sensitivity seems to be excellent. And since it forwards all data as packets into the rest of the JeeNode network, I now have all the data coming in over a single JeeLink.

Sooo… with this “mod”, no separate OOK receiver is needed anymore for the 868 MHz frequency band!

PS. Haven’t done too many tests with this yet. Transmission is unaffected, as far as I can tell. Reception of packets with the RF12 driver still seems to work – it may be more susceptible to RF variations, but then again a “normal” packet uses FSK which is a constant carrier, so in principle this modification should not affect the ability of the RFM12B to receive standard FSK packets.

Easy Electrons – Transistor circuits #3

In Hardware on Jan 24, 2011 at 00:01

A third installment about transistors in this Easy Electrons series.

So far, I’ve shown how to get more current out of an I/O pin from an ATmega, since this will probably be the most common reason to use transistors in combination with a micro-controller. But these circuits all act as switches, i.e. they turn current on and off (or in the case of the voltage regulator: adjusting current flow to a certain value).

What if we wanted to control one or two DC motors for a little robot? Lots of fun stuff to do in that area, especially with wireless communication. To do this, we also need to be able to reverse the voltage placed on the motor, so we can make it turn forward or backward under software control. And if we want to make it a bit fancier, it would be nice if we could control the speed of the motor as well.

First things first. Reversing the direction of a motor can be done with a double-pole double-throw (DPDT) relay:

This low-tech solution will switch the +12V and the -12V poles to make the motor run clockwise or counter-clockwise. And if we were to use a transistor for the -12V (i.e. GND) side, we could also turn it on and off.

But that’s clunky! – let’s see if we can do differently. What we need is a way to place either a high or a low voltage on either side of the motor. Here’s a first (flawed!) attempt:

Look what happens when we put the proper voltages on A, B, C, and D:

  • with A high and B low, the left side of the motor is tied to “+”
  • with D low and C high, the right side of the motor is tied to “-“
  • it will start running

And now the other case:

  • with A low, B high, the left side of the motor is tied to “-“
  • with D high, C low, the right side of the motor is tied to “+”
  • it will start running in the opposite direction

And of course, when A = B = C = D = low, the motor will stop.

What the two transistors “on top” of each other do is create a sort of push-pull circuit, since you can tie the central connection to either the “+” or the “-” voltage rail. This type of circuit is called an H bridge, due to its shape.

(note that I’ve left out 4 protection diodes, i.e. one across each C-E junction – they do need to be added in a real-world setup with DC motors)

There are several serious problems with this particular design, though:

  • to pull A or D high, we have to apply 12V, since 3.3V won’t be high enough to raise the base 0.7V above the emitter voltage level
  • if we pull A and B high, then we’ve got ourselves a short-circuit, with huge currents through both transistors on the left!
  • same for C and D…
  • and lastly, this thing needs a whopping 4 I/O pins

Let’s tackle that last point first: we can halve the I/O pin count by tying A and C together, and by tying B and D together. Now three out of the possible combinations will get us just what we want: stop, turn clockwise, turn counter-clockwise. But with both signals high, we still get a short circuit. Not good – we don’t want a software error to be able to start a fire…

The bigger problem though, electrically speaking, is that the input voltages involved are no longer suitable for an ATmega. This can be solved by adding an extra NPN transistor on both sides, for a total of 6 transistors. Instead of explaining the whole setup in detail, let me point you to some articles I found on the web:

  • this one describes the basic idea using relays
  • this page uses 6 transistors (lots more interesting pages on that site)

As you can see, it takes quite a few components to drive one small motor. Fortunately there are lots of H-bridge driver IC’s with various voltage- and current ratings. Some of these are quite small – such as the TC4424A I used on the DC motor plug, which is why I was able to actually put 2 of them on a single plug.

The second task we’d like to be able to do is control the motor speed.

This turns out to be fairly easy. The trick is to use pulse-width modulation (PWM). This is just a fancy term for a simple concept: we generate a set of pulses, and we control the on-time vs. off-time ratio of these pulses. As it turns out, DC motors are far too slow to follow these pulse trains if you generate them at 100 Hz or more. Instead, they will tend to average out the 0/1 values sent to them. And sure enough, a pulse train which is 100% off will cause the motor to stop, and a pulse train which is 100% on will cause the motor to run at full speed. Everything in between will lead to a motor running at intermediate speeds – simple!
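On an ATmega, generating such a pulse train is essentially free, because the hardware timers behind analogWrite() do all the work. A minimal example, assuming the PWM pin drives the transistor (or the enable input of an H-bridge driver) described earlier:

const byte motorPin = 3;    // any PWM-capable pin will do

void setup () {
    pinMode(motorPin, OUTPUT);
}

void loop () {
    analogWrite(motorPin, 0);     // 0% duty cycle - motor stopped
    delay(2000);
    analogWrite(motorPin, 128);   // ~50% duty cycle - roughly half speed
    delay(2000);
    analogWrite(motorPin, 255);   // 100% duty cycle - full speed
    delay(2000);
}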

For completeness’ sake, let me mention that the on-off power control circuits I’ve been describing in these last posts often use MOSFETs nowadays, instead of the traditional BJT transistors. For simple experiments and small DC motors, BJT’s are fine though.

Now if you think transistors are so great… wait ’till you see what MOSFETs can do!

I’ll go into those next week. Enough electronics for now.

Poor Man’s Mesh Network

In Software on Jan 16, 2011 at 00:01

The recent packet relay and Extended node ID posts set the stage for a very simple way to create a fairly long-range (and very low-cost) little Wireless Sensor Network around the house.

Say hello to the Poor Man’s Mesh Network :)

Screen Shot 2011 01 14 at 13.54.50

(the main computer doesn’t have to be a PC/Mac, an embedded Linux box would also work fine)

Well, that’s a bit presumptuous. It’s not really “mesh” in the sense that there is no dynamic or adaptive routing or re-configuration involved at all. All the packet routes are static, so failure in one of the relays for example, will make everything “behind” it unreachable. In the world of wireless, that matters, because there are always unexpected sources of interference, and occasionally they completely wipe out the ability of a network to communicate on a given frequency band. For example, a couple of times a year, the data I’m collecting from my electricity and gas meters here at JeeLabs just stops getting in. Not sure what’s going on, but I’m not touching that setup in any way, and it’s unrelated to power outages. It might not be an RF issue, but who knows.

So what I’m referring to is not a super-duper, full-fledged, kitchen-sink-included type of wireless network. But given the extremely low cost of the nodes, and the fact that the software needs less than 4 kb of code, I think it’s a good example of how far you can get when using simplicity as guiding principle. Even an 8-bit MPU with 8 Kb of flash and 512 bytes of RAM would probably be sufficient for sensor nodes – so with the ATmega328’s used in JeeNodes, we really have oodles of spare capacity to implement pretty nifty applications on top.

Most of the details of how this works have already been presented in previous posts.

The one extra insight, is that the packet relay mechanism can be stacked. We can get data across N relays if we’re willing to limit our data packets to 66-N bytes. So by sacrificing a slight reduction in payload length we can extend the number of relays, and hence the total maximum range, as much as we like. Wanna get 10 times as far? No problem, just place a bunch of relays in the right spots along the whole path. Note that ACK round-trip delays will increase in such a setup.

The big design trade-off here is that all packet routing is static, i.e. it has to be set up manually. Each sensor node (the stars in that diagram) needs to have a netgroup which matches a relay or central node nearby, and within each netgroup each sensor node has to have a unique ID.

It’s not as bad as it may seem though. First of all, the range of the RFM12B on 433, 868, and 915 MHz is pretty good, because sub-GHz radio waves are far less attenuated by walls and concrete floors than units operating at 2.4 GHz. This means that in a small home, you wouldn’t even need a relay at all. I get almost full coverage from one centrally-placed node here at JeeLabs, even though the house is full of stone walls and reinforced concrete floors. As I mentioned before, I expect to reach the curb and the far end of our (small) garden with one or two relay hops.

Second, this star topology is very easy to adjust when you need to extend it or make changes – especially if all packet relays are one “hop” away from the central node, i.e. directly talking to it. You can turn one relay off, make changes to the nodes behind it, and then turn it back on, and the rest of the network will continue to work just fine during this change.

I’ve extended the groupRelay.pde sketch a bit further, to be able to configure all the parameters in it from the serial/USB side. These settings are saved in EEPROM, and will continue to work across power loss. This means that a relay node can now be as simple as this:

Dsc 2412

IOW, a JeeLink, plugged into a tiny cheapo USB power adapter. All you need to do is pre-load the groupRelay sketch on it (once), adjust its settings (as often as you like), and plug it in where you need it. How’s that for a maintenance-free solution? And you can add/drop/alter the netgroup structure of the entire network at any time, as long as you’re willing to re-configure the affected nodes. If some of them turn out to be hard to reach because they are at the limit of the range, just insert an extra relay and tell the central software about the topology change.

It doesn’t have to be a JeeLink of course. A JeeNode, or some home-brew solution would work just as well.

Now that this design has become a reality, I intend to sprinkle a lot more sensors around the house. There have been lots of little projects waiting for this level of connectivity, from some nodes outside near the entrance, to a node to replace one of the first projects I worked on at JeeLabs!

So there you go. Who needs complexity?

Nodes, Addresses, and Interference

In Software on Jan 14, 2011 at 00:01

The RF12 driver used for the RFM12B module on JeeNodes makes a bunch of assumptions and has a number of fixed design decisions built-in.

Here are a couple of obvious ones:

  • nodes can only talk to each other if they use the same “net group” (1..250)
  • nodes normally each have a unique ID in that netgroup (1..31)
  • packets must be 0..66 bytes long
  • packets need an extra 9 bytes of overhead, including the preamble
  • data is sent at approximately 50,000 baud
  • each byte takes ≈ 160 µs, i.e. a max-size packet can be sent in 12 milliseconds

So in the limiting case you could have up to 7,500 different nodes (250 net groups × 30 node IDs), as long as you keep in mind that they all share the same frequency and therefore should never transmit at the same time.

For simple signaling purposes that’s plenty, but it’s obvious that you can’t keep a serious high-speed datastream going this way, let alone multiple data streams, audio, or video.

On the 433 or 868 MHz bands, the situation is often worse than that – sometimes much worse, because OOK (a basic form of ASK) transmitters tend to completely monopolize those same frequency bands, and more often than not they don’t even wait for their turn, so they also disturb transmissions which are already in progress! Add to that the fact that OOK transmitters often operate at 1000 baud or less, and tend to repeat their packets a number of times, and you can see how that “cheap” sensor you just installed could mess up everything!

So if you’ve got a bunch of wireless weather sensors, alarm sensors, or remotely controlled switches, chances are that your RF12-based transmissions will frequently fail to reach their intended destination.

Which is why “ACKs” are so important. These make it possible to detect when packets get damaged or fail to arrive altogether. An ACK is just what the name says: an acknowledgement that the receiver got a proper packet. No more, no less. And the implementation is equally simple, at least in concept: an ACK is nothing but a little packet, sent the other way, i.e. back from the receiver to the original transmitter.

With ACKs, transmitters have a way to find out whether their packet arrived properly. What they do is send out the packet, and then wait for a valid reply packet. Such an “ACK packet” need not contain any payload data – it just needs to be verifiably correct (using a checksum), and the transmitter must somehow be able to tell that the ACK indeed refers to its original packet.

And this is where the RF12 driver starts to make a number of not-so-obvious (and in some cases even unconventional) design decisions.

I have to point out that wireless communication is a bit different from its wired counterpart. For one, everyone can listen in. Radio waves don’t aim, they reach all nodes (unless the nodes are at the limit of the RF range). So in fact, each transmission is a broadcast. Whether a receiver picks up a transmitted packet is only a matter of whether it decides to let it through.

This is reflected in the design of the RF12 driver. At the time, I was trying to address both cases: broadcasts, aimed at anyone who cares to listen, and directed transmissions which target a specific node. The former is accomplished by sending to pseudo node ID zero, the latter requires passing the “destination” node ID as first argument to rf12_sendStart().

For the ACK, we need to send a packet the other way. The usual way to do this is to include both source and destination node ID’s in the packet. The receiver then swaps those fields and voilà… a packet ready to go the other way!

But that’s in fact overkill. All we really need is a single bit, saying the packet is an ACK packet. And in the simplest case, we could avoid even that one bit by using the convention that data packets must have one or more bytes of data, whereas ACKs may not contain any data.

This is a bit restrictive though, so instead I chose to re-use a single field for either source or destination ID, plus a bit indicating which of those it is, plus a bit indicating that the packet is an ACK.

With node ID’s in the range 1..31, we can encode the address as 5 bits. Plus the src-vs-dest bit, plus the ACK bit. Makes seven bits.

Why this extreme frugality and trying to save bits? Well, keep in mind that the main use of these nodes is for battery-powered Wireless Sensor Networks (WSN), so reducing power usage is normally one of the most important design goals. It may not seem like much, but one byte less to send in an (empty) ACK packet reduces the packet length by 10%. Since the transmitter is a power hog, that translates to 10% less power needed to send an ACK. Yes, every little bit helps – literally!

That leaves one unused bit in the header, BTW. Whee! :)

I’m not using that spare bit right now, but it will become important in the future to help filter out duplicate packets (a 1-bit sequence “number”).

So here is the format of the “header byte” included in each RF12 packet:

Screen Shot 2011 01 13 at 23.35.02

And for completeness, here is the complete set of bytes sent out:

Screen Shot 2011 01 13 at 23.34.14

So what are the implications of not having both source and destination address in each packet?

One advantage of using a broadcast model, is that you don’t have to know where to send your packet to. This can be pretty convenient for sensor nodes which don’t really care who picks up their readings. In some cases, you don’t even care whether the data arrived, because new readings are periodically being sent anyway. This is the case for the Room Nodes, when they send out temperature / humidity / light-level readings. Lost one? Who cares, another one will come in soon enough.

With the PIR motion detector on Room Nodes, we do want to get immediate reporting, especially if it’s the first time that motion is being detected. So in this case, the Room Node code is set up to send out a packet and request an ACK. If one doesn’t come in very soon, the packet is sent again, and so on. This repeats a few times, so that motion detection packets reach their destination as quickly as possible. Of course, this being wireless, there are no guarantees: someone could be jamming the RF frequency band, for example. But at least we now have a node which tries very hard to quickly overcome an occasional lost packet.
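In sketch form, that send-and-retry pattern looks roughly like this. The RF12 calls are real, but the header macro name and the timing values are from memory, so double-check them against RF12.h and the actual Room Node code:

#include <Ports.h>
#include <RF12.h>

// send a payload as a broadcast with the ACK bit set, retry a few times
static byte sendWithAck (const void* data, byte len) {
    for (byte attempt = 0; attempt < 8; ++attempt) {
        while (!rf12_canSend())
            rf12_recvDone();
        rf12_sendStart(RF12_HDR_ACK, data, len);
        MilliTimer ackTimer;
        ackTimer.set(10);                           // wait up to ~10 ms
        while (!ackTimer.poll())
            if (rf12_recvDone() && rf12_crc == 0 && (rf12_hdr & RF12_HDR_CTL))
                return 1;                           // got our ACK
        delay(100 * (attempt + 1));                 // back off, then retry
    }
    return 0;                                       // give up for now
}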

All we need for broadcasts to work with ACKs, is that exactly one node in the same netgroup acts as receiver and sends out an ACK when it gets a packet which asks to get an ACK back. We do not want more than one node doing so, because then ACKs would come from different nodes at the same time and interfere with each other.

So normally, a WSN based on RFM12B’s looks like this:

Screen Shot 2011 01 13 at 23.35.09

The central node is the one sending back ACKs when requested. The other nodes should just ignore everything not intended for them, including broadcasts.

Note that it is possible to use more than one receiving node. The trick is to still use only a single one to produce the ACKs. If you’re using the RF12demo sketch as central receiver, then there is a convenient (but badly-named) “collect” option to disable ACK replies. Just give “1c” as command to the second node, and it’ll stop automatically sending out ACKs (“0c” re-enables normal ACK behavior). In such a “lurking” mode, you can have as many extra nodes listening in on the same netgroup as you like.

To get back to netgroups: these really act as a way to partition the network into different groups of nodes. Nodes only communicate with other nodes in the same netgroup. Nodes in other netgroups are unreachable, and data from those other nodes cannot be received (unless you set up a relay, as described a few days ago). If you want to have say hundreds of nodes all reporting to one central server, then one way to do it with RF12 is to set up a number of separate netgroups, each with one central receiving node (taking care of ACKs for that netgroup), and then collect the data coming from all the “central nodes”, either via USB, Ethernet, or whatever other mechanism you choose. This ought to provide plenty of leeway for home-based WSN’s and home-automation, which is what the RF12 was designed for.

So there you have it. There is a lot more to say about ACKs, payloads, and addressing… some other time.

Another topic worth a separate post, is using (slightly) different frequencies to allow multiple transmissions to take place at the same time. Lots of things still left to explore, yummie!

Packet relaying vs. storage

In Software on Jan 13, 2011 at 00:01

In yesterday’s post I introduced a groupRelay.pde sketch, which implements a packet relay.

This can be used to (approximately) double the range between sensor nodes and the central data-collecting node. I’ve got two uses for this myself:

  • To try and get through two layers of reinforced concrete here at JeeLabs, i.e. from the garage to the living room to the office floor where my central data-collecting node is. I can get through one floor just fine (easily, even with a few extra walls), but two is giving me trouble.

  • To have a simple way to work with multiple groups of JeeNodes around here for testing and experimentation, while still allowing me to “merge” one of the test groups with the main, eh, “production” group. This can easily be accomplished by turning a suitably-configured relay on or off.

Note that all traffic takes place in the same 868 MHz frequency band. This isn’t a way to double the amount of bandwidth – all the packets flying around here have to compete for the same RF air space. All it does is separate the available space into distinct logical groups, i.e. net groups, which can be used together.

To summarize from yesterday’s post, this is how the relay code works right now:

Screen Shot 2011 01 12 at 18.07.18

If you think of time as advancing from top to bottom in this diagram, then you can see how the packet comes in, then gets sent out, then the ACK comes in, and finally the ACK gets sent back to the originating node. Let’s call this the Packet pass-through (PPT) approach.

This is very similar to how web requests work across the internet. There is an “end-to-end” communication path, with replies creating one long “round trip”.

But that’s not the only way to do things. The other way is to use a Store-and-forward (SAF) mechanism:

Screen Shot 2011 01 12 at 18.07.30

In this case, the relay accepts the packet, stores it, and immediately sends back an ACK to the originating node. Then it turns around and tries to get the stored packet to its destination.

This is how email works, BTW. The SMTP servers on which email is built can all store emails, and then re-send those emails one step closer to their intended destination.

There are several differences between PPT and SAF:

  • with PPT, it takes longer for the originating node to get back an ACK
  • with SAF, you get an ACK right away, even before the destination has the data
  • with PPT, all failures look the same: no proper ACK is ever received
  • with SAF, you might have gotten an ACK, even though the destination never got the data
  • with PPT, the logic of the code is very simple, and little RAM is needed
  • with SAF, you need to store packets and implement timeouts and re-transmission

But perhaps most importantly for our purposes, PPT allows us to place payload data in the ACK packet, i.e. ACKs can contain replies, whereas with SAF, you can’t put anything in an ACK, because the originating node already got an empty ACK from the relay.

Since SAF is harder to implement, needs more storage, and can’t handle ACK reply data, it’s just an inferior solution compared to PPT, right?

Not so fast. The main benefit of SAF, is that it can deal with nodes which don’t have to be available at the same time. If the relay is always on, then it will always accept requests from originating nodes. But the destination nodes need not be available at that time. In fact, the destination node might use polling, and ask the intermediate relay node whether there is data waiting to be sent out to it. In effect, the SAF relay now becomes sort of a PO box which collects all incoming mail until someone picks it up.

The implications for battery-powered wireless networks are quite important. With an always-on relay node in the middle, all the other nodes can now go to sleep whenever they want, while still allowing any node to get data to any other node. The basic mechanism for this is that the low-power nodes sleep most of the time (yeay, micro power!) and then periodically contact the relay node in one of two ways:

  • sending out a packet they want to get to some other place
  • polling the relay to get data waiting for them back as ACK reply data

The “sleep most of the time” bit is an essential aspect of low-power wireless networks. They can’t afford to keep a node awake and listening for incoming wireless packets all the time. An RFM12B draws about 15 mA while in receive mode (more than an ATmega!), and keeping it on would quickly deplete any battery.

So if we want to create an ultra low-power wireless network, we will need a central relay node which is always on, and then all the other nodes can take control over when they want to send out things and ask for data from that central node whenever they choose to. Which means they could sleep 99.5% of the time and wake up for only a few milliseconds every second, for example. Which is of course great for battery life.

BTW, in case you hadn’t noticed: we’re now entering the world of mesh-networking…

But the drawbacks of SAF remain: more complex logic, and the need to be able to queue up a lot of packets. So we need one node which is always on, and has plenty of memory. Hmmm, ponder, ponder… I remember having seen something suitable.

Of course: the JeeLink! It draws power via USB and has a large DataFlash memory buffer. Whee, nice! :)

GLCD library

In Hardware, Software on Jan 5, 2011 at 00:01

There’s a new GLCD library to drive the 128×64 graphics LCD display on the Graphics Board. The library is called, wait for it… GLCDlib – with a wiki page and a web interface to the source code in subversion. There’s also a ZIP archive snapshot, but it probably won’t get updated with each future subversion change. For some notes about using subversion (“svn”), see this post.

The main class is “GLCD_ST7565”, it has the following members:

Screen Shot 2011 01 04 at 19.52.45

(some longer entries above were truncated, see the website for the full version)

The settings in this library have been hard-coded for use with the Graphics Board, which uses ports 1 and 4 to drive this display. If you want to use this with other I/O connections, you’ll need to change the #define’s at the top of the “GLCD_ST7565.cpp” source file in the library.

Here is the demo used in an earlier post, now included as “glcd_demo.pde” example sketch in the library:

Screen Shot 2011 01 04 at 19.49.49

This produces an output screen similar to this image. Note the use of flash-based string storage with “PSTR” to reduce RAM usage. It’s not an issue in this example, but more strings tend to rapidly consume RAM, leading to strange and hard-to-find bugs.
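For completeness, a minimal local-display sketch might look something like this. I’m writing it from memory, so the method names and the font header may well be slightly off – treat it purely as an illustration of the calling pattern and check the glcd_demo.pde example for the real thing:

#include <GLCD_ST7565.h>
#include "utility/font_clR6x8.h"

GLCD_ST7565 glcd;

void setup () {
    glcd.begin();
    glcd.backLight(255);
    glcd.setFont(font_clR6x8);
    glcd.drawString(0, 0, PSTR("Hello from GLCDlib"));
    glcd.refresh();    // nothing shows up until the buffer is pushed out
}

void loop () {}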

The nice thing about GLCDlib, is that you can also use it over wireless. There is a “GLCD_proxy” class, which sends all graphics commands out to another node. Each command is sent as a packet, complete with ACKs, retries, and resends to deal with lost packets.

The “JeePU.pde” example sketch implements the “host”, i.e. a JeeNode with Graphics Board, listening to incoming wireless requests. The “JeePU_demo.pde” sketch shows how to talk to such a remote JeePU node.

Because the transport layer (i.e. wireless or other comms mechanism) is separated out from the main graphics primitives, it is very easy to switch between a locally-connected GLCD and a remote one on a JeePU node. The magic is contained mostly in these lines:

Screen Shot 2011 01 04 at 20.02.40

The only other change needed to use a remote GLCD is to add these lines at the start of setup():

Screen Shot 2011 01 04 at 20.04.57

See the JeePU_demo.pde sketch for an example of how this can be used.

The JeePU node should be running in its own RF12 net group, because clients use broadcasts to send out the graphics commands. They do not need to know the node ID of the JeePU, just its net-group. This also means that multiple GLCD proxy clients can run at the same time, and each could be adjusting a specific part of the same JeePU display … whee, a multi-node status display!

One of the advantages of running the Graphics Board as a JeePU node, is that the other nodes don’t need to load the entire GLCDlib code; in particular, they avoid the 1 Kb RAM buffer needed to drive the display.

The graphics code is based on what used to be the ST7565 library by Limor Fried at AdaFruit, which was in turn derived from public domain code attributed to “”.

Several great extensions (and a couple of bug fixes) for the core graphics routines were written by Steve Evans (aka tankslappa on the wiki and forum). Steve also implemented the remote/proxy code and the JeePU “host” and JeePU_demo “client”.

I just hacked around a bit on all that, including renaming things and adding an ACK mechanism to the RF12 wireless layer.

This code is likely to change and extend further, as we come up with more things to do with the current implementation. But for now, enjoy :)

Update – all the code is now at

Rethinking the Arduino hardware interface

In AVR, Hardware, Software on Dec 18, 2010 at 00:01

It’s been almost two years since the first design was created from which the JeeNode was born. It went from this very first prototype:

… to this leaner-and-meaner design, which is the current JeeNode v5:

Jlpcb 105

As you can see, it’s still essentially based on the same layout.

The JeeNode has been the flagship product here at JeeLabs for quite some time. It has been expanded to include a JeeNode USB variant which includes a USB interface and a LiPo charger, as well as a USB “stick-like” JeeLink that ties nicely into the WSN use of JeeNodes. And then there’s the bare-bones JeeSMD, which doesn’t have a wireless module built-in, but which is pin-compatible with the other two members of the JeeNode family.

With all the end-of-year stories coming up, and new year’s resolutions to follow soon, it seems like a good time to present my reasons for doing things this way.

Rethinking the Arduino hardware interface
That’s – in a nutshell – the essence behind the JeeNode.

I stumbled upon the fascinating world of Physical Computing and “Arduino’s” over two years ago, around the time when I also discovered an interesting low-cost wireless module. Lots of things “clicked” right away, but a few didn’t. Given that the Arduino is simply an ATmega (hardware), plus an IDE (software), plus a set of conventions (shields), I quickly realized that there might be more ways to skin this cat, and something new was born (inspired by the RBBB) – as summarized here a year ago.

I don’t want to rehash those points, but let me simply state what the JeeNode is about, assuming you know what an Arduino is.

  • the JeeNode lowers the operating voltage to 3.3V (implications)
  • it includes a wireless radio module (with software)
  • it drops the concept of shields (hard to combine)
  • instead, it adds 4 interchangeable 6-pin Ports (layout)
  • each port includes two dedicated I/O pins as well as power
  • there are numerous Plugs using this port pinout (list)
  • about half the plugs use (software) I2C and can be daisy-chained
  • there are many interface classes and code examples (here and here)
  • the remaining I/O pins are on two extra headers (details, PDF)
  • JeeNodes can be mounted upside-down (CB, GB, POF)
  • … or used alongside a solderless breadboard (BB)
  • with extension cables to move plugs further away (EC)
  • … or a prototype board to re-use all the I/O pins differently (PB)
  • reduced cost by using a detachable / reusable USB-FTDI interface

All this, while remaining fully compatible with “the” Arduino’s software + firmware.

But perhaps the most interesting bit coming out of all this, is that the JeeNode has become a practical ultra-low-power platform, with battery lifetimes measured in months, almost a year even, so far. There have already been tons of posts about this topic. It even spawned a nice little add-on to run JeeNodes from a single AA or AAA cell.

You may or may not agree with all the choices, but this is what the JeeNode is about.

Update – the Redmine repository is no longer available, everything is now on GitHub.

The downside of success

In News on Dec 17, 2010 at 00:01

With over 2,500 units produced to date, it’s safe to call the JeeNode (Kit + USB + JeeLink) a success. A runaway success in fact, in my opinion.

Great, but…

This isn’t purely good news, alas. The demand for JeeLabs products has been increasing so much lately, that I’ve run into serious supply problems in the shop these past few weeks. I can assure you that I’m extremely unhappy with that state of affairs – and the pain isn’t over yet, with some parts taking weeks longer than expected to reach me. This shortage might last into January 2011 for items such as the Ether Card.

So it’s a bit awkward to talk about “success” at a time when there are still 50 back-orders in the shop (down from 90…), with probably quite a few people frustrated by the slow delivery times. A postal strike next week in the Netherlands and the extra delays due to the busy Christmas season are clearly not going to help one bit.

Summary: the JeeNode design has been working out very nicely, but my ability to make it properly available is lagging far behind. My only way out is to “get larger quantities – sooner”. Which is exactly what I’ve been doing lately, in collaboration with Modern Device, who have started carrying more and more JeeLabs products for the US and nearby regions. We’re both scaling up, while trying not to drive ourselves off the cliff…


The current “crunch” is with headers, with 10,000 of them waiting in customs (again) and holding up just about everything, and with Ether Cards, RTC Plugs, and 2×16 LCD displays. That last one is holding up the Wireless Starter Packs as well.

In principle, all packages are sent out when complete, or nearly so in some cases. I cannot speed up things, although I keep looking for alternative suppliers. If you would prefer to get two partial shipments (no extra cost), please get in touch. I will of course also honor any cancellation, if you decide that you’ve had enough of this.

Please bear with me as I try to get over these growing pains. I apologize for all the delays and inconvenience this is causing. As new deliveries are coming in, I am continuously going through my backlog to fulfill as many pending requests as possible.

Voltage: 3.3 vs 5

In AVR, Hardware on Dec 16, 2010 at 00:01

One of the decisions made early on for the JeeNode, was to make it run at 3.3V, instead of the 5V used by the standard Arduino.

The main reason for this was the RFM12B wireless module, which can only be used with supply voltages up to 3.8V, according to the specs. Running them at 5V seems to give varying results: I’ve never damaged one, but there have been reports of such failures. Given that the older RFM12 (no B) worked up to 5V, my hunch is that something in the design was found to give problems at the higher voltage. It’s just a guess on my part, though.

So what’s the deal with 3.3V vs 5V?

Well, the first thing to note, is that the ATmega328 used in a 3.3V JeeNode runs at the same 16 MHz frequency as a 5V Arduino does. This overclocking is “out of spec”:

You’re not supposed to do this, but in my experience the good folks at Atmel (the designers and manufacturers of ATmega’s and other goodies) have drawn up specifications which are clearly on the conservative side. So much so, that not a single case has been reported where this has caused problems in any of the several thousand JeeNodes produced so far. As I pointed out in a previous post, that doesn’t necessarily mean everything is 100% perfect over the entire temperature range. But again: no known problems to date. None.

This is good news for low-power uses, BTW. It means you can get the same amount of work done using less power, since power = voltage x current. Even more so because both voltage and current are lower at 3.3V than when running at 5V.

A second reason for running at 3.3V, is that you can use 3 AA batteries instead of 4 (either alkaline or rechargeable). And that you can also power 3.3V circuits with LiPo packs, which have this hugely convenient 3.5..4.2V range.

The third important reason to run JeeNodes at 3.3V, is that more and more neat sensor chips are only available for use in the 2.7 .. 3.6V range or so. By having the entire setup operate at 3.3V, all these sensors can be used without any tedious level converters.

Occasionally I’ve been bitten by the fact that I used a chip which doesn’t work as low as 3.3V, as in the first RTC Plug trial. But more often than not, it’s simply a matter of looking for alternative chip brands. One recent example was the 555 oscillator used on the Infrared Plug: the original NE555 needs at least 4.5V, but there’s an ICM7555 using CMOS technology which works down to 3V, making it a non-issue.

Mixing 3.3V and 5V devices

The trouble with these voltage differences, is not just that the power supply needs to be different. That’s the easy bit, since you can always generate 3.3V from a 5V supply with a simple voltage regulator and 2 little capacitors.

The real problem comes from the I/O interface. Placing a 5V signal on a chip running at 3.3V will cause problems, in the worst case even permanently damaging the chip. So each I/O pin connected is also affected by this.

Fortunately, there’s often a very simple workaround, using just an extra resistor of 1 kΩ or so in series. To see how this works, here’s the way many chips have their input signals hooked up, internally:

Screen Shot 2010 12 14 at 23.35.16

There’s a pair of diodes inside the chip, for each pin (not just the inputs), used for ESD protection, i.e. to protect the chip against static electricity when you pick it up.

These diodes “deflect” voltage levels which are above the VCC of the device or below GND level. They do nothing else in normal use, but if you were to place 5V on a pin of such a device powered by 3.3V, then that would lead to a (potentially large) current through the upper diode.

With electronics (as with humans, btw), it’s usually not the voltage itself which causes damage, but the current flow it leads to, and – in the case of sensitive electronic components – the heat produced from it.

By placing a 1 kΩ resistor in series, we limit the flow through the diode to under 2 mA, which most devices will handle without any problems:

Screen Shot 2010 12 14 at 23.42.27

Ok, so now we can hook up signals to a JeeNode, even if they swing in the 0..5V range. This works best with “slow” signals, BTW. The extra resistor has a bad effect on rise and fall times of the signal, so don’t expect this to work with signals which are in the 1 MHz range or higher. Then again, it’s unlikely you’ll need to tie such fast signals directly to an ATmega anyway…

How about the other direction?

What if you have a chip running at 5V which needs to receive signals from a chip running at 3.3V, i.e. signals going in the other direction?

Well, it turns out that this may or may not work by simply tying the two lines together. The 3.3V output signal will definitely not damage a chip running at 5V. The worst that can happen, is that the 5V side doesn’t consider the signal valid.

We need to look into logic levels to figure this one out, as specified in the datasheet of the chip. The easy part is logic “0”, i.e. a low level. Most chips consider anything between 0 and 0.8V a logic “low”. There will hardly ever be an issue when tying a 3.3V chip to a 5V chip.

The tricky part is logic “1”, i.e. a signal which is intended to represent a high level. Now it all depends on what the 3.3V chip sends out, and what the 5V chip requires.

Most CMOS chips, including the ATmega, send out nearly the full power line voltage to represent a logic “1” (when the load current is low), so you can expect output signals to be just about 3.3V on a JeeNode.

On the input side, there are two common cases. Some chips consider everything above 1.6V or so to be a logic one. These chips will be perfectly happy with the JeeNode signal.

The only case when things may or may not work reliably, is with chips which specify the minimum logic “1” voltage to be “0.7 x VCC” or something like that. On a 5V chip, that translates to a minimum value of 3.5V …

Note that datasheets usually contain conservative specs, meant to indicate limit values under all temperatures, load conditions, supply voltages, etc.

In practice, I find that even with “0.7 x VCC”, I can usually drive a 5V chip just fine from a JeeNode. The only exception being higher power chips, such as stepper motor drivers and such, which operate mostly at much higher voltage levels anyway. For these, you may have to use special “level translator” chips, or perhaps something like the I2C-based Output Plug, which can be powered with voltages up to 50V or so.

This post only addresses digital I/O signals. With analog I/O, i.e. varying voltage levels, you will need to carefully review what voltage ranges are generated and expected, and perhaps insert either a voltage divider or an op-amp to amplify voltages. That’s a bit more involved.

But all in all, living mostly in a 3.3V world is often more flexible than living mostly in a 5V world, nowadays.

Which is the fourth reason why I decided to run JeeNodes at 3.3V, BTW.

RF12 acknowledgements

In Software on Dec 11, 2010 at 00:01

The RFM12B wireless module is a transceiver, i.e. able to send and receive packets over wireless. This is an important advantage over simple sensor units which just send out what they measure, and things like RF-controlled power switches which only listen to incoming data but are not able to report their current state.

The only thing is… it’s a bit more work.

This is reflected in how the RF12 library works:

  • simple reception is a matter of regularly polling with rf12_recvDone()
  • simple transmission means you also have to call rf12_canSend() and rf12_sendStart()
  • the above are both essentially uni-directional, so packets can get lost

The second mechanism added to RF12 was a set of “easy transmission” functions, i.e. rf12_easyPoll() and rf12_easySend(). These look similar, but they send out data packets asking for an ACK (acknowledge) packet from the receiver to confirm that the packet was correctly received. If nothing comes in, they will re-send the packet (and repeat a few times, if needed). This mechanism greatly improves the chance of a message arriving properly at the destination. Losing an occasional packet is one thing, losing all retries is a lot less likely!

Note that packets can be damaged or get lost at any time. It may well be that the original packet arrived just fine, but the ACK got lost instead. The sender will resend, and then (probably) get the ACK which stops this retry cycle.

So with the easy transmission functions, note that very occasionally a packet might be received twice. If it is crucial to weed these out, you can include a counter in your data packets to help detect and ignore duplicates.

With RF12demo as receiver, ACK handling is automatic. It knows when the originating node wants to get an ACK, and will send it out as soon as possible. This is reported in the output as the text “-> ack”.

The code for this in RF12demo is horrendous:

Screen Shot 2010 12 10 at 18.27.08

This is silly, and overkill for simple cases. So let’s improve on it.

I’ve added two utility definitions to the RF12.h header, which can simplify the above code to:

Screen Shot 2010 12 10 at 19.58.02

That’s better, eh?

The rest is just there to deal with a special configuration setting in RF12demo.

So if all you want is to add logic in your own sketch to send back an empty ACK packet when requested, the above can be simplified even further to:

Screen Shot 2010 12 10 at 19.59.09

For completeness, here’s a complete processing loop for a receiving sketch which supports nodes using the easy transmission mechanism:

Screen Shot 2010 12 10 at 19.59.45

You have to send out the ACK after processing the packet, because the rf12_sendStart() call will re-use the same packet buffer and overwrite the incoming packet.

Also, RF12_WANTS_ACK and RF12_ACK_REPLY are defined as macros which access the global rf12_hdr variable, as set by rf12_recvDone(). IOW, the convenience comes for free, but it does depend on some fixed assumptions. I can’t think of a situation where this would lead to problems, given that RF12-based sketches are probably all structured in the same way, and that globals are part of the RF12 driver.
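
Putting those pieces together, such a receiving loop can end up looking more or less like this (a rough sketch: the Serial formatting and the node settings are just examples):

    #include <RF12.h>

    void setup () {
        Serial.begin(57600);
        rf12_initialize(30, RF12_868MHZ, 5);    // node ID, band, and group: examples only
    }

    void loop () {
        if (rf12_recvDone() && rf12_crc == 0) {
            // handle the payload first - rf12_data[] is still intact at this point
            for (byte i = 0; i < rf12_len; ++i) {
                Serial.print((int) rf12_data[i]);
                Serial.print(' ');
            }
            Serial.println();
            // then reply with an empty ACK, but only if the sender asked for one
            if (RF12_WANTS_ACK)
                rf12_sendStart(RF12_ACK_REPLY, 0, 0);
        }
    }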

For another example, see the blink_recv.pde sketch, which has also been simplified with these two macros.

Binary packet decoding – part 2

In AVR, Software on Dec 8, 2010 at 00:01

Yesterday’s post showed how to get a 2-byte integer back out of a packet when reported as separate bytes:

Unfortunately, all is not well yet. Without going into details, the above may fail on 32-bit and 64-bit machines when sending a negative value such as -12345. And it’s not so convenient with other types of data. For example, here’s how you would have to reconstruct a 4-byte long containing 123456789, reported as 4 bytes:

Screen Shot 2010 12 07 at 09.56.08

And what about floating point values and C structs? The trouble with these, is that the receiving party doing the conversion needs to know exactly what the internal byte representation of the ATmega is.

Here is an even more complex example, as used in the roomNode.pde sketch:

Screen Shot 2010 12 07 at 08.44.28

This combines different measurement values into a 4-byte C struct using bit fields. Note how the “temp” value crosses two bytes, but only uses specific bits in them.

Fortunately, there is a fairly simple way to deal with all this. The trick is to decode the bytes back into meaningful values on the receiving ATmega instead of on an attached PC. That way, we can re-use the same definition of the information on both ends. By using the same hardware and the same C/C++ compiler on both sides, i.e. the Arduino IDE, all internal byte representation details can be left to the compiler.

Let’s start with this 2-byte example again:

I’m going to rewrite it slightly, as:

Screen Shot 2010 12 07 at 08.57.23

No big deal. This sends out exactly the same packet. But now, we can rewrite the receiving sketch as follows:

Screen Shot 2010 12 07 at 09.00.14

The effect will be to send the following line to the serial / USB connection:

    MEAS 12345

The magic incantation is this line:

Screen Shot 2010 12 07 at 09.01.45

It uses a C typecast to force the interpretation of the bytes in the receive buffer into the “Payload” type. Which happens to be the same as the one used by the sending node.
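
Spelled out, the two halves of this 2-byte example look roughly as follows; the helper function names and the field name are just for illustration:

    #include <RF12.h>

    struct Payload { int measurement; };    // shared by both sketches

    // sending side, roughly:
    static void sendMeasurement (int value) {
        Payload payload;
        payload.measurement = value;
        while (!rf12_canSend())
            rf12_recvDone();
        rf12_sendStart(0, &payload, sizeof payload);
    }

    // receiving side, roughly:
    static void checkForMeasurement () {
        if (rf12_recvDone() && rf12_crc == 0 && rf12_len == sizeof (Payload)) {
            const Payload* p = (const Payload*) rf12_data;
            Serial.print("MEAS ");
            Serial.println(p->measurement);
        }
    }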

The benefit of doing it this way, is that the same approach can be used to transfer any type of data as a packet. Here is an example how a Room Node code sends out a 4-byte struct with various measurement results:

Screen Shot 2010 12 07 at 09.07.07

And here’s how the receiving node can convert the bytes in the packet back to the proper values:

Screen Shot 2010 12 07 at 09.10.55

The output will look like:

    ROOM 123 1 78 -15 0

Nice and tidy. Exactly the values we were after!

It looks like a lot of work, but it’s all very straightforward to implement. Most importantly, the correspondence between what happens in the sender and the receiver should now be obvious. It would be trivial to include more data. Or to change some field into a long or a float, or to use more or fewer bits for any of the bit fields. Note also that we don’t even need to know how large the packet is that gets sent, nor what all the individual bytes contain. Whatever the sender does to map values into a packet, will be reversed by the receiver.

This works, as long as the two struct definitions match. One way to make sure they match, is to place the payload definition in a separate header file, say “payload.h” and then include that file in both sketches using this line:

Screen Shot 2010 12 07 at 09.16.47
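
I.e. a little shared header along these lines, pulled into each sketch with that #include line (the contents shown here are purely an example):

    // payload.h - definitions shared by the sending and the receiving sketch
    struct Payload {
        int measurement;
    };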

The price to pay for this flexibility and “representation independence”, is that you have to write your own receiving sketch. The generic RF12demo sketch cannot be used as is, since it does not have knowledge of the packet structures used by the sending nodes.

This can become a problem if different nodes use different packets sizes and structures. One way to simplify this, is to place all nodes using the same packet layout into a single net group, and then have one receiver per net group, each implemented in the way described above. Another option is to have a single receiver which knows about the different types of packets, and which switches into the proper decoding mode depending on who sent the packet.

Enough for now. Hopefully this will help you implement your own custom WSN to match exactly what you need.

Update – Silly mistake: the “rf12_sendData()” call doesn’t exist – it should be “rf12_sendStart()”.

Binary packet decoding

In AVR, Software on Dec 7, 2010 at 00:01

The RF12 library used with the RFM12B wireless radio on JeeNodes is based on the principle of sending individual “packets” of data. I’ve described the reasons for this design choice in a number of posts.

Let me summarize what’s going on with wireless:

  • RFM12B-based nodes can send binary packets of 0..66 bytes
  • these packets can contain any type of data you want
  • a checksum detects transmission errors to let you ignore bad packets
  • dealing with packet loss requires an ACK + re-transmission mechanism

Packets have the nice property that they either arrive intact as a whole or not at all. You won’t get garbled or inter-mixed packets when multiple nodes happen to send at (nearly) the same time. Compare this to some other solutions where all the characters sent end up in one big “soup” if the sending happens (nearly) simultaneously.

But first: what’s a packet?

Well, loosely speaking, you could say that a packet is like one line of text. In fact, that’s exactly what you end up with when using the RF12demo sketch as central receiver: a line of text on the serial/USB connection for each received packet. Packets with valid checksums will be shown as lines starting with “OK”, e.g.:

    OK 3 128 192 1 0
    OK 23 79 103 190 0
    OK 3 129 192 1 0
    OK 2 25 99 200 0
    OK 3 130 192 1 0
    OK 24 2 121 163 0
    OK 5 86 97 201 0
    OK 3 131 192 1 0

Let’s examine how that corresponds with the actual data sent by the node.

  • All the numbers are byte values, shown as numbers in the range 0..255.
  • The first byte is a header byte, which usually includes the node ID of the sender, plus some extra info such as whether the sender expects an ACK back.
  • The remaining data bytes are an exact copy of what was sent.

There appears to be some confusion about how to deal with the binary data in such packets, so let me go into it all in a bit more detail.

Let’s start with a simple example – sending one byte:

Screen Shot 2010 12 06 at 22.32.26

I’m leaving out tons of details, such as calling rf12_recvDone() and rf12_canSend() at the appropriate moments. This code is simply broadcasting one value as a packet for anyone who cares to listen (on the same frequency band and net group). Let’s also assume this sender’s node ID is 1.

Here’s how RF12demo reports reception of this packet:

    OK 1 123

Trivial, right? Now let’s extend this a bit:

Screen Shot 2010 12 06 at 22.38.13

Two things changed:

  • we’re now sending a larger int, i.e. a 2-byte value
  • instead of passing length 2, the compiler calculates it for us with the C “sizeof” keyword

Now, the incoming packet will be reported as:

    OK 1 57 48

No “1”, “2”, “3”, “4”, or “5” in sight! What happened?

Welcome to the world of multi-byte values, as computers deal with them:

  • a C “int” requires 2 bytes to represent
  • bytes can only contain values 0..255
  • 12345 will be “encoded” in two bytes as “12345 divided by 256” and “12345 modulo 256”
  • 12345 / 256 is 48 – this is the “upper” value (the top 8 bits)
  • 12345 % 256 is 57 – this is the “lower” value (the low 8 bits)
  • an ATmega stores values in little-endian format, i.e. the least significant byte comes first
  • hence, as bytes, the int “12345” is represented as first 57 and then 48
  • and sure enough, that’s exactly what we got back from RF12demo

Yeah, ok, but why should we care about such details?

Indeed, on normal PC’s (desktop and mobile) we rarely need to. We just think in terms of our numbering system and let the computer do the conversions to and from text for us. That’s exactly what “Serial.print(12345)” does under the hood, even on an Arduino or a JeeNode. Keep in mind that “12345” is also a specific representation of the abstract quantity it stands for (and “0x3039” would be another one).

So we could have converted the number 12345 to the string “12345”, placed it into a packet as 5 bytes, and then we’d have gotten this message:

    OK 1 49 50 51 52 53

Hm. Still not quite what we were looking for. Because now we’re dealing with ASCII text, which itself is also an encoding!

But we could build a modified version of RF12demo which converts that ASCII-encoded result back to something like this:

    OKSTR 1 12345

There are however a few reasons why this is not necessarily a good idea:

  • sending would take 5 bytes instead of 2
  • string manipulation uses more RAM, which is scarce on an ATmega
  • lots of such little inefficiencies will add up, once more data is involved

There is an enormous gap in performance and availability of resources between a modern CPU (even on the simplest mobile phones) and this 8-bit few-bucks-chip we call an “ATmega”. Mega, hah!

But you probably didn’t really want to hear any of this. You just want your data back, right?

One way to accomplish this, is to keep RF12demo just as it is, and perform the proper transformation on the receiving PC. Given variables “a” = 57, and “b” = 48, you can get the int value back with this calculation:

Screen Shot 2010 12 06 at 23.11.04

Sure enough, 57 + 48 * 256 is… 12345 – hurray!

It’s obviously not hard to implement such a transformation in PHP, Python, C#, Delphi, VBasic, Java, Tcl… whatever your language of choice is.
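
In C-style notation, and sticking to plain integer arithmetic, that is simply:

    int a = 57, b = 48;         // the two bytes as reported by RF12demo
    int value = a + b * 256;    // -> 12345
    // or equivalently, using a shift:
    int value2 = a | (b << 8);  // -> 12345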

But there’s more to it, alas (hints: negative values, floating point, structs, bitfields).

Stay tuned… more options to deal with these representation details tomorrow!

Update – Silly mistake: the “rf12_sendData()” call doesn’t exist – it should be “rf12_sendStart()”.

GLCD on battery power

In Hardware on Nov 27, 2010 at 00:01

I’m going to switch to a different type of graphic LCD once the next batch of Graphic Boards arrives. Instead of a white-on-black type, the new GLCD will be a blue-on-sort-of-blueish color:

Dsc 2356

The reason for this is that it is still quite readable without the backlight:

Dsc 2358

There is a solder jumper on the Graphics Board, to choose between powering the backlight continuously via PWR, or having it powered under software control, via the not-often-used IRQ pin. This not only lets you turn the backlight on and off, but even dim it under software control, because the IRQ pin supports PWM, i.e. analogWrite().
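
For example, something along these lines should do the trick, assuming the IRQ line ends up on the ATmega’s INT1 pin, i.e. Arduino digital pin 3 (depending on how the jumper wires up the backlight, the PWM value may need to be inverted):

    const byte BACKLIGHT = 3;       // assumption: IRQ = INT1 = Arduino digital pin 3

    void setup () {
        pinMode(BACKLIGHT, OUTPUT);
        analogWrite(BACKLIGHT, 64); // dim to roughly 25% duty cycle
    }

    void loop () {}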

The above display draws about 280 µA without backlight right now, which means it will keep running for over a month on a single AA cell, using the AA Power Board. There are probably ways to reduce that current consumption further, as I’m not doing anything yet with the ST7565’s standby and other low-power modes.

And although it may not be practical for a permanent display, this definitely is useful for something I’d like to have around and use on a regular basis. By sending it stuff to display over wireless, this setup can support what is quickly becoming my favorite mode of operation here at Jee Labs: no wires cluttering my desk!!!

Room Node display

In Software on Nov 17, 2010 at 00:01

Now that there’s a Graphics Board, I thought I’d make a little display with the last few readings from a couple of room nodes around here. Ironically, it’s just an 8×21 character text display for now – no graphics in sight:

Dsc 2281

The information consists of:

  • a packet sequence number (only 4-byte packets are treated as room nodes)
  • the node ID
  • the temperature in °C
  • the relative humidity in %
  • the measured light intensity (0..255)

New readings get added at the bottom, with older readings scrolling upwards.

Unfortunately, the ST7565 library doesn’t have a normal print() & println() API, so the first thing I did was to create a new wrapper class:

Screen Shot 2010 11 16 at 12.42.57

One quirk about this code is that since we’re using a RAM buffer, the ST7565 screen contents need to be explicitly updated. I solved this by adding a poll() method which you need to call in the main loop. It’ll make sure that the display gets refreshed shortly after anything new has been “printed” (default is within 0.1 s).

Another thing this class does is to scroll the contents of the display one line up when the bottom is reached. It does this in a slightly lazy manner, i.e. the display is not scrolled immediately when a newline is sent to it but when the first character on the next line falls outside the display area – a subtle but important difference, because it lets you use println() calls and the display won’t constantly leave an empty line at the bottom.

Scroll support does require one change to the “ST7565.cpp” source code. This:

    static byte gLCDbuf[1024];

Has to be changed as follows, to make the RAM buffer accessible from other source files:

    byte gLCDbuf[1024];

(should be around line 42 in ST7565.cpp)
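
With the buffer made global, scrolling one text line up boils down to shifting everything one “page” (8 pixel rows) towards the top. A rough sketch, assuming the usual ST7565 buffer layout of 8 pages of 128 bytes each:

    #include <string.h>

    extern byte gLCDbuf[1024];      // the buffer made global by the patch above

    static void scrollOneLine () {
        memmove(gLCDbuf, gLCDbuf + 128, 1024 - 128);    // move pages 1..7 up
        memset(gLCDbuf + 1024 - 128, 0, 128);           // blank the bottom page
    }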

With that out of the way, here’s the glcdNode.pde sketch, which has been added to the RF12 library:

Screen Shot 2010 11 16 at 12.53.36

For debugging purposes, the same information shown on the display is also sent to the serial port:

Screen Shot 2010 11 15 at 02.24.28

Note that glcdNode is hard-coded to receive packets from net group 5 @ 868 MHz, as you can see in the call to rf12_initialize().

So now I have a battery-powered wireless gadget which lets me track what our house is trying to tell us!

Update – the ST7565 library needs to be changed a bit more, I now realize. Perhaps the easiest way to do so is to simply insert the following line somewhere near the top of ST7565.c:

    #define buffer gLCDbuf

That way, the buffer will have a more meaningful name when made global by the above-mentioned patch.

Update #2 – no more need for the ST7565 library, use the new GLCDlib instead. The glcdNode demo sketch has been adapted and moved over to it.

Meet the Graphics Board

In Hardware on Nov 15, 2010 at 00:01

Sometimes, 2×16 characters (or perhaps a few more) just don’t cut it, when I’d like to present things graphically, or use larger fonts for some of the info. That’s when a “GLCD” would be handy, one which lets you set individual pixels, that is. I was pointed to a nice low-power 128×64 display recently, and decided to create a pcb for it – so here’s the new Graphics Board:

Dsc 2258

Due to the size of the board, it’s not a plug or an add-on to a JeeNode, but the other way around: you can take a JeeNode, JeeNode USB, or JeeSMD, and push it onto this board to power and drive the display.

Ports 1 and 4 are used for the display, but ports 2 and 3 are available, and brought out in such a way that plugs can easily be added on. To make a clock, add an RTC Plug. To add a few touch buttons, add a Proximity Plug. To create a fancy Reflow Controller, add a Thermo Plug. Or just use it as is and add a sketch to display messages received over wireless. Endless possibilities…

Here’s the display in action, it’s pretty bright when driven from 5V:

Dsc 2278

It can display up to 8 lines of 21 characters, but it’s also fully graphical of course. BTW, in real life the display looks much more white-on-black.

I particularly like this setup, using the AA Power Board to make this thing completely self-powered:

Dsc 2245

The backlight resistor is chosen such that on 3.3V, this display draws ≈ 6 mA, and on 5V it draws ≈ 19 mA. It will last a few days with an AA Power Board, but when the 100 Ω resistor is replaced by 270 Ω or even 470 Ω, you can get well over a week of battery life on a single AA (assuming the JN itself is sleeping most of the time) – at the cost of dimming the backlight to a fairly low level.

There is a solder jumper on the board which normally connects the backlight to the PWR line. By connecting it to the IRQ line instead, that pin can turn off the backlight under software control (or dim it, using PWM).

The display is based on the ST7565 chip and can be driven by Limor Fried’s great ST7565 LCD library. Note that this library uses a 1 Kb RAM buffer.

Here’s the glcd_demo.pde sketch which generates the above display:

Screen Shot 2010 11 14 at 17.20.36

I included logic to put the ATmega and radio into power down mode to let me measure the display’s current consumption. While active, they draw another 5..30 mA.

The Graphics Board is now in the café and the shop.

Update – here is a copy of the ST7565 code I use.

JeeNode Experimenter’s Pack

In AVR, Hardware on Nov 8, 2010 at 00:01

Neat – it looks like JeeNodes are starting to become popular for workshops!

I’m not really surprised: it’s more fun than an Arduino, IMO, because you get to learn the basics of electronics and soldering, and of course every JeeNode comes with wireless connectivity built-in. As I’ve said before, making things happen by wireless is a bit like magic…

FWIW, there are a couple of workshops scheduled for this month (“in-house”, i.e. for a specific audience, and I’m only indirectly involved); all of them are based on either a JeeNode or an RBBB. Both are well suited for use on a solderless breadboard, which – if you ask me – is one of the greatest inventions ever for tinkering and learning electronics. Sure, soldering works best when the circuit is known and proven, but nothing beats a breadboard, some jumper wires, and a bunch of components and chips to try things out!

This gives me great satisfaction. No, not for the money side of it (it’s all discounted anyway), but because I find nothing more exciting than to see people try out new fun stuff and nurture their tech geek sides :)

Learning is great: if you’re young, it’ll make you wiser – if you’re old, it’ll make you younger … (and it’s fun!)

The big question is always: “what do I need to get started?” – and my answer is usually: “it depends on what kind of adventure you’re after”.

I’ve come up with a new “JeeNode Experimenter’s Pack” to create a baseline for the future. That way, it will be easier for me to set up and document my experiments, knowing that there is a common baseline on which to build. It’s also the logical step after this rubber-band concoction. It consists of the following key components:

Dsc 2221

In prose:

In addition, and only in combination with the rest of the Experimenter’s Pack, I’m throwing in a little 10×17 cm laser-cut wooden base and an LDR to get started with some sensor experiments:

Dsc 2220

The base turns the whole thing into a self-contained “project platform” for all sorts of experimentation. And the AA Power Board will supply 3.3V to make this setup portable and usable anywhere. The rest of the empty space is all yours. That’s where the fun happens :)

The AA Power Board is not just a gimmick, BTW: an AA battery will actually provide more power @ 3.3V than a 9V “block”! As someone pointed out recently, it’s also a great way to drain any half-used AA batteries you might still have lying around (who hasn’t?).

Here’s a recent example of use (which required a bit more power, so I hooked it up to a 12V supply):

Dsc 2215

It’s a little stepper motor tester (using an EasyDriver board). Nothing fancy, but it was a convenient way for me to try something like this out. That project board has now been cleared to make room for new experiments…

That’s the whole point, really – this self-contained setup is intended to provide a quick path for trying out new ideas. USB-connected or battery-powered, serial or wireless, local or remote, whatever.

Once it all works, you could:

  1. take the design and redo it as a more permanently soldered circuit
  2. design a pcb for it (which is what I did for the graphics board)
  3. put it all into a custom enclosure and keep using the setup
  4. stash it all away as is, for some other time
  5. put hot glue on it? (yikes!)

I’ve added this to the shop, as 1- and as 10-pack.

Reflow Timer software

In Software on Nov 4, 2010 at 00:01

Another episode in the reflow controller story…

Here is yesterday’s graph again, but manually annotated this time:

Annotated Reflow

Actually, I went ahead and extended the code to add those axis labels in there. I was concerned that they would overlap and distract from the graph data itself, but after seeing this… it clearly improves readability.

The trick is to get the PID control factors right, and these will be different for each setup. Right now, I just picked a couple of values which seem to be working ok on my particular grill. I’ve extended the JeeNode sketch to allow adjusting these values via a serial USB connection:

    <N> P       P factor (x1000)
    <N> I       I factor (x1000)
    <N> D       D factor (x1000)
    <N> L       I limit (x1000)

The PID calculation is:

    (Pfactor*Pval + Ifactor*Ival - Dfactor*Dval) / 1000

In other words, these factors are specified as a multiple of 0.001.

The result is brought into a range of 0..100. This in turn is used to determine when, how often, and how long to turn on the heater in the grill/oven/skillet.
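
In code, the calculation amounts to something like this (a rough sketch: the function and variable names are just for illustration, the real sketch keeps its factors in EEPROM as described below):

    // Pval = current error, Ival = accumulated error, Dval = rate of change
    static byte pidOutput (int Pval, long Ival, int Dval,
                           int Pfactor, int Ifactor, int Dfactor) {
        long out = ((long) Pfactor * Pval
                     + (long) Ifactor * Ival
                     - (long) Dfactor * Dval) / 1000;
        if (out < 0)
            out = 0;                // clip the result to the 0..100 range
        if (out > 100)
            out = 100;
        return (byte) out;          // percentage of time to keep the heater on
    }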

The reflow profile parameters are also adjustable from the serial link:

    <N> o       ON temperature (°C), default 70
    <N> w       minimum time in WARMUP phase (sec), default 60
    <N> p       temperature at end of PREHEAT phase (°C), default 140
    <N> s       temperature at end of SOAK phase (°C), default 170
    <N> m       maximum REFLOW temperature (°C), default 250
    <N> r       minimum time in REFLOW phase (sec), default 15
    <N> c       temperature at end of COOL phase (°C), default 150
    <N> f       temperature at end of FINAL phase (°C), default 50

Once the FINAL phase ends, the JeeNode will power itself down.

A few more parameters:

    <N> l       lower calibration temperature limit (°C), default 40
    <N> u       upper calibration temperature limit (°C), default 120
    <N> d       stable duration (sec), default 5
    <N> t       stable trigger gap (°C), default 25
    <N> a       number of temperature averages to take, default 250

Some parameters for reporting, which happens once per second:

    <N> i       wireless node ID (sending disabled if 0), default 8
    <N> b       wireless frequency (4=433, 8=868, 9=915), default 8
    <N> g       wireless net group, default 5
    <N> e       enable (1) or disable (0) serial reports, default 0

And finally, the parameters which control the FS20 remote switch:

    <N> H       house code to use for FS20, default 4660
    <N> h       device ID to use for FS20, default 1

All PID factors and other parameters are stored in EEPROM, so they will remain in effect until changed.

To get a summary of all the current settings, type a question mark: “?”.

To reset all parameters to their “factory” defaults, type an exclamation mark: “!”.

The code for the “reflowTimer.pde” sketch is here. The current code size is ≈ 14 Kb. I’ll probably be tweaking it a bit further in the coming days.

One thing I’d like to try adding to the current sketch is an easy way to self-calibrate and come up with a workable set of P/I/D factors, so that it can be used with a variety of electrical grills, toasters, skillets, ovens, barbecues, whatever – under the motto: if it can melt solder, we should try it!

The JeeMon script is here and is about 150 lines of code. If you save it as “application.tcl” next to JeeMon, it will automatically be picked up when JeeMon is launched. The code is still work-in-progress at this point: you will have to manually edit the “device” variable to refer to your attached JeeNode/JeeLink running RF12demo – you can also set it to a COM port (Windows) or tty device (Linux/Mac). Likewise, the “nodeID” variable should be set to match the current setting in the Reflow Timer sketch (“i” parameter):

    variable device   usb-A900ad5m    ;# which JeeNode/JeeLink to attach to
    variable nodeID   8               ;# which node ID to listen to

The frequency band + netgroup of the JeeNode/JeeLink are assumed to have been previously set in RF12demo.

Note that the script is an optional GUI front-end – you can launch it anytime, or you can ignore this whole JeeMon thing, since the sketch does not depend on it. It’ll drive the reflow process with or without the GUI.

If you try this out, or have suggestions about how to improve things, please let me know.

Update – I’ve adjusted the info above to match the latest code changes.

Conquering the thermocouple

In Hardware, Software on Nov 1, 2010 at 00:01

(No Halloween stuff on this side of the pond – I’ll defer to Seth Godin for some comments on that…)

A while back, I had to shelve my experiments with the reflow controller, because I couldn’t get a reliable temperature reading from the Thermo Plug when using a thermocouple.

Or rather, sometimes it worked, sometimes it didn’t: the physical computerer’s equivalent of a nightmare!

The thermocouple circuit is very sensitive to ground currents, apparently. The effect was that my setup would work fine on batteries, but jump all over the place when attached to the USB port. Not very convenient for development, obviously.

It still has some unexplained behavior, but I’ve been able to narrow it down, so there are two new pieces of good news: 1) it only misbehaves while data is actually being transferred over the USB port, and 2) with some averaging, the readout is rock solid, both on batteries and on USB. I still see a difference in readout when data is transferred over USB, but since this is a JeeNode, I can work around that in the final version: go wireless!

Here’s the readout code which produces good readings – all remaining jitter is now in 1/10’s of degrees Celsius:

Screen Shot 2010 10 31 at 18.48.10

The output is in 1/100’s of °C, because I’m trying to avoid floating point math in this sketch.
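
The gist of it is something along these lines. A rough sketch only: it assumes the AD597 output is read on analog pin 0, a 3.3V ADC reference, and the AD597’s 10 mV/°C scale, so the real code may differ in the details:

    // average a batch of ADC readings, then convert to 1/100's of °C
    static long readTempCenti () {
        long sum = 0;
        for (int i = 0; i < 250; ++i)
            sum += analogRead(0);               // AD597 output (assumption: A0)
        long avg = sum / 250;                   // 0..1023
        long millivolts = avg * 3300 / 1023;    // scale to the 3.3V reference
        return millivolts * 10;                 // 10 mV/°C  ->  1/100's of °C
    }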

And here is the measuring side of my new reflow setup:

Dsc 2200

The thermocouple is taped to the thin aluminium insert in the grill using heat-resistant Kapton tape. When I turn on the heater, I now see a clear rise in temperature within seconds – perfect!

Note that I’m using a 4x AA pack instead of 3x AA, because the AD597 needs at least 2V more on its supply line than the highest output voltage it is going to report. With 4x 1.2V (worst case, i.e. near-empty eneloops), the range will be at least (4.8 – 2) / 0.010 = 280°C, i.e. plenty!

And indeed, I’ve verified that at 250°, it reports valid temperatures on the attached LCD Plug w/ display.

The other plug you see in the lower left is a Blink Plug, with two pushbuttons and two LEDs.

Let’s see if this time around we can get the whole thing going properly!

Sending data TO remote nodes

In Software on Oct 31, 2010 at 00:01

Yesterday’s post described an easy way to get some data from remote battery-powered nodes to a central node. This is the most common scenario for a WSN, i.e. when reading sensors scattered around the house, for example.

Sometimes, you need more reliability, in which case the remote node can request an “ACK” and wait (briefly) until that is received. If something went wrong, the remote node can then retry a little later. The key point of ACKs is that you know for sure that your data packet has been picked up.

But what if you want to send data from the central node TO a remote node?

There are a couple of hurdles to clear. First of all, remote nodes running on batteries cannot continuously listen for incoming requests – the RFM12B receiver draws more than the ATmega at full power, and would drain the battery within a few days. There is simply no way a remote node can be responsive 100% of the time.

One solution is to agree on specific times, so that both sides know when communication is possible. Even just listening 5 ms every 500 ms would create a fairly responsive setup, and still take only 1% of the battery as compared to the always-on approach.

But this TDMA-like approach requires all parties to be (and remain!) in sync, i.e. they all need to have pretty accurate clocks. And you have to solve the initial sync when powered up as well as when reception fails for a prolonged period of time.

A much simpler mechanism is to let the remote take the initiative at all times: let it send out a “poll” packet every so often, so we can send an ACK packet with some payload data if the remote node needs to be signaled. There is a price: sending a packet takes even more power than listening for a packet, so battery consumption will be higher than with the previous option.

The next issue is how to generate those acks-with-payload. Until now, most uses of the RF12 driver required only packet reception or simple no-payload acks. This is built into RF12demo and works as follows:

Screen Shot 2010 10 30 at 20.17.37

That’s not quite good enough for sending out data to remote nodes, because the central JeeLink wouldn’t know what payload to include in the ACK.

The solution is the RF12demo’s “collect mode”, which is enabled by sending the “1c” command to RF12demo (you can disable it again with “0c”). What collect mode does, is to prevent automatic ACKs from being sent out to remote nodes requesting it. Instead, the task is delegated to the attached computer:

Screen Shot 2010 10 30 at 20.17.48

IOW, in collect mode, it becomes the PC/Mac’s task to send out an ACK (using the “s” command). This gives you complete control to send out whatever you want in the ACK. So with this setup, remote nodes can simply broadcast an empty “poll” packet using:

    rf12_sendStart(0, 0, 0);

… and then process the incoming ACK payload as input.
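
On the remote side, that can look something like the following rough sketch. The 16 ms listening window and the MilliTimer usage are just one way to do it, and pollCentral() is a made-up name:

    #include <Ports.h>
    #include <RF12.h>

    // broadcast an empty poll, then listen briefly for a reply with payload
    static byte pollCentral () {
        while (!rf12_canSend())
            rf12_recvDone();
        rf12_sendStart(0, 0, 0);            // the empty "poll" packet, as above
        MilliTimer replyTimer;
        while (!replyTimer.poll(16))        // listen ~16 ms for the ACK payload
            if (rf12_recvDone() && rf12_crc == 0 && rf12_len > 0)
                return 1;                   // rf12_data[] now holds the new data
        return 0;                           // nothing for us this time
    }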

It’s a good first step, since it solves the problem of how to get data from a central node to a remote node. But it too has a price: the way ACKs are generated, you need to take the round-trip time from JeeLink to PC/Mac and back into account. At 57600 baud, that takes at least a few milliseconds. This means the remote node will have to wait longer for the reply ACK – with the RFM12B still in receive mode, i.e. drawing quite a bit of current!

You can’t win ’em all. This simple setup will probably work fine with remotes set to wait for the ACK using Sleepy::loseSomeTime(16). A more advanced setup will need more smarts in the JeeLink, so that it can send out the ACK right away – without the extra PC round-trip delay.

I’ll explore this approach further when I get to controlling remote nodes. But that will need more work – such as secure transmissions: once we start controlling stuff by wireless, we’ll need to look into authorization (who may control this node?), authentication (is this packet really from who it says it is?), and being replay-proof (can’t snoop the packet and re-send it later). These are all big topics!

More on this some other time…

Simple RF12 driver sends

In AVR, Software on Oct 30, 2010 at 00:01

(Whoops… this post got mis-scheduled – fixed now)

Yesterday’s post illustrates an approach I’ve recently discovered for using the RF12 driver in a very simple way. This works in one very clear-cut usage scenario: sending wireless packets out periodically (without ACK).

Here’s the basic idiom:

Screen Shot 2010 10 29 at 13.01.24

What this does is completely ignore any incoming data: it just waits for permission to send when it needs to, and then waits for the send to complete by specifying “2” as the last arg to rf12_sendStart().

No tricky loops, no idle polling, everything in one place.
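
In other words, the heart of such a sketch ends up looking more or less like this (a sketch of the idiom, using the driver as it was at the time of writing; node ID, band, group, and payload contents are just examples):

    #include <RF12.h>

    struct { int counter; } payload;    // whatever needs to be sent

    void setup () {
        rf12_initialize(22, RF12_868MHZ, 5);    // example node ID, band, and group
    }

    void loop () {
        ++payload.counter;              // real measurements would go here

        while (!rf12_canSend())
            rf12_recvDone();            // ignore whatever comes in
        rf12_sendStart(0, &payload, sizeof payload, 2); // "2" = wait for completion

        delay(30000);                   // the low-power variants below replace this
    }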

With a few lines of extra code, the RFM12B module can be kept off while not used – saving roughly 15 mA:

Screen Shot 2010 10 29 at 13.07.11

And with just a few more lines using the Sleepy class, you get a low-power version which uses microamps instead of milliamps of current 99% of the time:

Screen Shot 2010 10 29 at 13.09.03

Note the addition of the watchdog interrupt handler, which is required when calling Sleepy::loseSomeTime().

The loseSomeTime() call can only be used with time ranges of 16..65000 milliseconds, and is not as accurate as when running normally. It’s trivial to extend the time range, of course – let’s say you want to power down for 10 minutes:

Screen Shot 2010 10 29 at 13.11.16
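
That boils down to a simple loop, something like this (snoozeTenMinutes() is just an illustrative name):

    #include <Ports.h>

    ISR(WDT_vect) { Sleepy::watchdogEvent(); }  // the watchdog handler mentioned above

    static void snoozeTenMinutes () {
        for (byte i = 0; i < 10; ++i)
            Sleepy::loseSomeTime(60000);        // 10 x 60,000 ms = 10 minutes
    }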

Another point to keep in mind with sleep modes, is that it isn’t always easy to keep track of time and allow other interrupts to wake you up again. See this recent post for a discussion about this.

But for simple Wireless Sensor Network node scenarios, the above idioms will give you a very easy way to turn your sketches into low-power versions which can support months of operation on a single set of batteries.

Update – it looks like the above rf12_sleep() arguments are completely wrong. They should be:

  • rf12_sleep(N) turns the radio off with a wakeup timer enabled if N is 1..127
  • rf12_sleep(0) turns the radio off
  • rf12_sleep(-1) turns the radio back on

This list matches what is documented on the wiki page.

Meet the new Opto-coupler Plug

In Hardware on Oct 29, 2010 at 00:01

The plug barrage continues…

This time it’s an update of the flawed Opto-coupler Plug, which only worked properly on one channel (unless you patched it up). So here’s the new Opto-coupler Plug v2:

Dsc 2185

Wait! There’s more! It’s actually two plugs in one now:

Dsc 2186

(note: the solder jumpers need to be shorted out to use this as an output board)

Since there was enough room on this board, I decided to make it more versatile: it can now be used as dual Opto-coupled input (as before) or as dual Opto-coupled output plug. In the latter case, the JeeNode drives the IR LEDs and the output is a phototransistor which can be used as low-power (polarized) switch for some external device. The choice is determined by how the IC socket and Opto-coupler are hooked up.

The connectors are the same as before: detachable screw terminals, these are very convenient because you can prepare the wiring elsewhere and then plug / unplug / swap channels as needed.

I have no immediate use for these plugs right now, but of course I had to make sure that everything is working properly, so I hooked both of them up:

Dsc 2190

On the back you can see how things were connected together:

Dsc 2191

Here’s the updated opto_demo.pde test sketch for that – I decided to send the results by wireless for a change:

Screen Shot 2010 10 28 at 18.11.23
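
The gist of it is something like this rough sketch; the port number, node ID, and one-second send interval are just assumptions, and the real opto_demo.pde may differ:

    #include <Ports.h>
    #include <RF12.h>

    Port opto (1);              // assumption: the input-side plug sits on port 1
    byte state[2];

    void setup () {
        rf12_initialize(17, RF12_868MHZ, 5);    // node ID, band, group: examples only
        opto.mode(INPUT);       // DIO pin = channel 1
        opto.mode2(INPUT);      // AIO pin = channel 2
        opto.digiWrite(1);      // enable the internal pull-ups, harmless if the
        opto.digiWrite2(1);     //   plug already has its own
    }

    void loop () {
        state[0] = opto.digiRead();
        state[1] = opto.digiRead2();
        while (!rf12_canSend())
            rf12_recvDone();
        rf12_sendStart(0, state, sizeof state);
        delay(1000);
    }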

Sample output on my central JeeLink:

Opto Output

Yippie – looks like it’s working exactly as intended! Note the inverted signals, BTW.

Docs and shop have been updated for this new plug version. If you have version 1, please let me know and I’ll send out an updated board to you.

Tracking time in your sleep

In AVR, Software on Oct 18, 2010 at 00:01

No, this isn’t a story about bio-rhythms :)

One of the challenges I’ll be up against with Room Nodes is how to keep track of time. The fact is that an ATmega is extraordinarily power efficient when turned off, and with it a JeeNode – under a few microamps if you get all the little details right. That leads to battery lifetimes which are essentially only determined by self-discharge!

But there are two problems with power off: 1) you need to be 100% sure that some external event will pull the ATmega out of this comatose state again, and 2) you can completely lose track of time.

Wireless packets are of no use for power-down mode: the RFM12B consumes many milliamps when turned on to receive packets. So you can’t leave the radio on and expect some external packets to tell you what time it is.

Meet the watchdog…

Fortunately, the ATmega has a watchdog, which runs on an internal oscillator. It isn’t quite as accurate, but it’ll let you wake up after 16 ms, 32ms, … up to 8 seconds. Accuracy isn’t such a big deal for Room Nodes: I don’t really need to know the temperature on that strict a schedule. Once every 4 .. 6 minutes is fine, who cares…

There’s a Sleepy class in the Ports library, which manages the watchdog. It can be used to “lose time” in a decently accurate way, and will use the slowest watchdog settings it can to get it out of power-down mode at just about the right time. To not disrupt too many activities, the “millis()” timer is then adjusted as if the clock had been running constantly. Great, works pretty well.

It’s not enough, though.

As planned for the next implementation, a Room Node needs to sleep one minute between wakeups to readout some sensors, but it also needs to wake up right away if the motion sensor triggers.

One solution would be to wake up every 100 ms or so, and check the PIR motion sensor to see whether it changes. Not perfect, but a 0.1s delay is not the end of the world. What’s worse though is that this node will now wake up 864,000 times per day, just to check a pin which occasionally might change. This sort of polling is bound to waste some power.

Meet the pin-change interrupt…

This is where pin-change interrupts come in. They allow going into full power-down, and then getting woken up by a change on any of a specified set of pins. Which is perfect, right?

Eh… not so fast:

Screen Shot 2010 10 17 at 22.05.30

Q: What time is it when the pin-change occurred?

A: No idea. Somewhere between the last watchdog and the one which will come next?

IOW, the trouble with the watchdog is that you still don’t really track time. You just know (approximately) what time it is when the watchdog fires again.

If the watchdog fires say every 8 seconds, then all we know at the time of a pin-change interrupt, is that we’re somewhere inside that 8s cycle.

We can only get back on track by waiting for that next watchdog again (and what if the pin change fires a second time?). In the mean time, our best bet is to assume the pin change happened at the very start of the watchdog cycle. That way we only need to move the clock forward a little once the watchdog lets us deduce the correct moment. FYI: everything is better than adjusting a clock backwards (timers firing again, too fast, etc).

Now as I said before, I don’t really mind losing track of time to a certain extent. But if we’re using 8-second intervals to get from one important measurement time to the next, i.e. to implement a 1-minute readout interval, then we basically get an 8-second inaccuracy whenever the PIR motion detector triggers.

That’s tricky. Motion detection should be reported right away, with an ACK since it’s such an important event.

So we’re somewhere inside that 8-second watchdog cycle, and now we want to efficiently go through a wireless packet send and an ACK cycle? How do you do that? You could set the watchdog to 16 ms and then start the receiver and power down. The reception of an ACK or the watchdog will get us back, right? This way we don’t spend too much time waiting for an ack with the receiver turned on, guzzling electrons.

The trouble is that the watchdog is not available at this point: we still want to let that original 8-second cycle complete to get our knowledge of time back. Remember that the watchdog was started to get us back out in 8 seconds, but that it got pre-empted by a pin-change.

Let me try an analogy: the watchdog is like throwing a ball straight up into the air and going to sleep, in the knowledge that the ball will hit us and wake us up a known amount of time from now. In itself a pretty neat trick to keep track of the passage of time, when you don’t have a clock. Well, maybe not for humans…

The scenario that messes this up is that something else woke us up before the ball came down. If we re-use that ball for something else, then we have lost our way to track time. If we let that ball bring us back into sync, fine, but then it’ll be unavailable for other timing tasks.

I can think of a couple of solutions:

  • Dribble – never use the watchdog for very long periods of time. Keep waking up very frequently, then an occasional pin-change won’t throw us off by much.

  • Delegate – get back on track by asking for an ack which tells us what time it is. This relies on a central receiving node (which is always on anyway), to tell us how to adjust our clock again.

  • Fudge it – don’t use the watchdog timer, but go into idle mode to wait for the ack, and make that idle period as short as possible – perhaps 2 ms. IOW, the ACK has to reach us within 2 milliseconds, and we’re not dropping into complete power-down during that time. We might even get really smart about this and require that the reply come back exactly 1 .. 2 ms after the send, and then turn off the radio for 1 ms, before turning it on for 1 ms. Sounds crazy, until you realize that 1 ms of radio time uses as much energy as 5 seconds of power-down time – which adds up over a year! This is a bit like what TDMA does, BTW.

All three options look practical enough to consider. Dribbling uses slightly more power, but probably not that much. Delegation requires a central node which plays along and replies with an informative ack (but longer packets take longer to receive, oops!). Fudging it means the ATmega will be in idle mode a millisec or two, which is perhaps not that wasteful (I haven’t done the math on that yet).
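
To make that third option a bit more concrete, here’s a rough sketch of a bounded ACK wait – the 10 ms limit and the node ID are placeholders, and the header test follows the usual RF12 ACK convention:

    #include <Ports.h>
    #include <RF12.h>

    #define ACK_TIME  10            // ms to wait for an ack (placeholder value)
    static byte myNodeId = 7;       // placeholder, must match rf12_initialize()

    // wait a limited amount of time for an ack addressed to this node,
    // instead of keeping the receiver on indefinitely
    static byte waitForAck () {
        MilliTimer ackTimer;
        while (!ackTimer.poll(ACK_TIME))
            if (rf12_recvDone() && rf12_crc == 0 &&
                    rf12_hdr == (RF12_HDR_DST | RF12_HDR_CTL | myNodeId))
                return 1;
        return 0;
    }

The sender would call this right after a rf12_sendStart() with the RF12_HDR_ACK bit set, and go back to sleep whether or not the ack actually arrived.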

So there you go. Low power stuff isn’t always trivial, once you start pushing for the real gains…

IR decoding with pin-change interrupts

In AVR, Hardware, Software on Oct 14, 2010 at 00:01

Yesterday’s post described a new “InfraredPlug” class which handles the main task of decoding IR pulse timings. The “irq_recv.pde” example sketch presented there depended on constant polling to keep the process going, i.e. there has to be a line like this in loop():

    ir.poll();

Worse, the accuracy of the whole process depends on calling this really often, i.e. at least every 100 µs or so. This is necessary to be able to time the pulse widths sufficiently accurately.

Can’t we do better?

Sure we can. The trick is to use interrupts instead of polling. Since I was anticipating support for pin-change interrupts, I already designed the class API for it. And because of that, the changes needed to switch to an interrupt-driven sketch are surprisingly small.

I’ve added a new irq_send_irq.pde sketch to the Ports library, which illustrates this.

The differences between using polling mode and pin-change interrupts in the code are as follows. First of all, we need to add an interrupt handler:

Screen Shot 2010 10 13 at 00.26.10

Second, we need to enable those interrupts on AIO2, i.e. analog pin 1:

Screen Shot 2010 10 13 at 00.26.44

And lastly, we can now get rid of that nasty poll() call in the loop:

Screen Shot 2010 10 13 at 00.27.51
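
In case the screenshots are hard to read, the three changes boil down to roughly this – assuming the InfraredPlug instance is called “ir”, and that AIO2 (Arduino analog pin 1, i.e. PC1 / PCINT9 on the ATmega328) is the pin being watched:

    #include <Ports.h>

    InfraredPlug ir (2);            // InfraredPlug on port 2 - adjust to match your setup

    // 1) the pin-change interrupt handler keeps the pulse timing going
    ISR(PCINT1_vect) {
        ir.poll();
    }

    void setup () {
        Serial.begin(57600);
        // 2) enable pin-change interrupts on AIO2, i.e. analog pin 1 (PC1)
        PCMSK1 |= bit(PCINT9);
        PCICR |= bit(PCIE1);
    }

    void loop () {
        // 3) no more poll() call needed here - results can be picked up
        // whenever it's convenient, even after a delay()
    }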

That’s all there is to it. Does it still work? Of course:

Screen Shot 2010 10 13 at 00.29.16

Note: I made some small changes in the InfraredPlug implementation to tolerate interrupts and avoid race conditions.

This all seems like an insignificant change, but keep in mind that this completely changes the real-time requirements: instead of having to poll several thousand times per second to avoid missing pulses or measuring them incorrectly, we can now check for results whenever we feel like it. Waiting too long would still miss data packets of course, but this means our code can now continue to do other lengthy things (or go into a low-power mode). Checking for incoming packets a few times a second is sufficient (IR remotes send out a packet every 100 ms or so while a button is pressed).

So the IR decoder now has the same background behavior as the RF12 driver: you don’t need to poll it in real-time, you just need to check once in a while to see whether a new packet has been received. Best of all, perhaps, is that you can continue to use calls to delay() even though they make the main loop less responsive.

There is another side effect of this change: if your code includes a call to “ir.send()”, then the receiver will see your own transmission, and report it as an incoming packet as well. Which shows that it’s running in the background. This could even be used for collision detection if you want to build a fancy IR wireless network on top of all this.

So there you go: an improved version of the InfraredPlug class, which lets you use either explicit polling or pin-change interrupts. The choice is yours…

JeeMon device discovery

In Software on Oct 12, 2010 at 00:01

Ok, I must admit that JeeMon has been a bit too ambitious in its original inception. It works quite nicely here at Jee Labs, but there are just too many hoops you have to jump through to make it happen on your own.

I’ll first explain why things are the way they are, and how that is supposed to work, and then I’ll present a different setup for the OOK Scope which jettisons all that machinery and lets you use the OOK Scope with nothing but a single source file and JeeMon.

The idea behind the “Serial” and “JeeSketch” rigs (code modules) in JeeMon, is that it should be possible to respond to changes in interfaces dynamically. So there’s a way to scan for USB interfaces periodically:

Serial periodicScan <cmd>

This will compare the list of USB devices it sees with the list it saw last time (once every 5 seconds by default). And then call the specified <cmd> whenever an interface is added or has gone away. Only FTDI interfaces are detected, by the way.

The next step is to decide what to do when a new USB device is attached. I’ve been using a convention for some time now, whereby every sketch starts by sending out its own name, with optional version and configuration details. For example, RF12demo will send out a string like this to the USB connection when it starts up:

[RF12demo.6] A i1 g5 @ 868 MHz
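
On the sketch side, this convention is nothing more than a greeting printed in setup() – something like this (illustrative only; RF12demo also appends its current configuration):

    void setup () {
        Serial.begin(57600);
        Serial.println("\n[mySketch.1]");   // sketch name + version, so JeeMon knows what's attached
    }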

The trick is to wait for such a string when JeeMon detects the device and opens it. This is handled by the “JeeSketch” rig. Once the sketch type is known, JeeMon then tries to locate a “host.tcl” driver for it. It does this by looking in a directory, as configured at start up. I’ve been placing all my drivers in a directory called “sketches”, so my startup includes the following line:

JeeSketch register ./sketches

When RF12demo connects, JeeMon checks whether “./sketches/RF12demo/host.tcl” exists, and runs it.

Similarly, when plugging in a JeeNode running the OOK Scope sketch, it announces itself as:


The script “./sketches/ookScope/host.tcl” then gets started, it creates a GUI window, takes over the communication link to process all incoming (binary) data bytes, and does its pulse histogram thing.

So far so good. It’s a nicely modular mechanism. I can add a new sketch, i.e. a “blah.pde” file for use in a JeeNode, and add a matching “host.tcl” script alongside to deal with the communication in any way it likes. Then, all I need to do is plug the device in and everything starts up. With a bit of care, everything is also shut down and cleaned up again when the device is removed.

Unfortunately, that’s not the end of the story. One of the important devices I want to support is a JeeNode or JeeLink running RF12demo. But how can JeeMon deal with remote nodes? After all, all they do is send packets to the central device. Each of these nodes will be running a sketch, and not all of them are necessarily the same. So we either need some sort of auto-discovery or some central configuration file. For a first implementation, I decided to use a configuration file to try and keep things, eh, simple. Which is why my startup code also contains these two commands:

set appDir [file dir [dict get [info frame 0] file]]
Config setup $appDir/config.txt

That’s a tricky way of making sure that the “config.txt” file will be picked up from the same directory as the source code, i.e. the “application.tcl” file.

I’ll refrain from describing the config.txt file in full detail. Let me just include an example which I’ve been using around here:

Screen Shot 2010 10 11 at 22.52.15

As with any such type of “registry”, you can see lots of little config details, for use in different modules and parts of the code. Even some obsolete stuff, in fact.

Does it work? Oh, sure, it works great and it’s very easy to extend for new modules and usage scenarios. Even node discovery works nicely, both coming on-line and dropping off-line, as seen in the voltmeter demo.

But there’s a problem with what I’ve described so far…

… there’s too much rope – to hang yourself. It’s brittle, it needs lots of documentation to use this stuff (unless you’re willing to dive into the JeeMon Tcl code), and it’s just too much trouble if you want to do something simple with JeeMon, like run the OOK Scope and nothing else. The entry curve is way too steep.

I can’t say I’ve figured out an alternative. There is a lot of ground to cover, and it is fairly hard to implement a system which dynamically adapts to interfaces getting plugged in and nodes coming (wirelessly) online.

But at the end of the day, all that extra baggage is unnecessary for simple cases.

Fortunately, JeeMon is flexible enough to adapt in any way I want. I don’t have to use any of the above machinery. So here’s the OOK Scope logic as a single “application.tcl” file. No scanning, no config, nothing:

Screen Shot 2010 10 11 at 23.05.06

The full code is available here. To run this version of the OOK Scope, download that file, make sure it is called “application.tcl”, and place it next to your JeeMon executable. Then launch JeeMon. Just make sure to have the JeeNode running ookScope.pde plugged in.

If more than one FTDI interface is found, you will be asked to pick one:

Screen Shot 2010 10 11 at 23.22.45

That’s it: ookScope.pde, application.tcl, plus JeeMon – should work on Windows, Mac OS X, and Linux.

Software PWM at 1 KHz

In AVR, Software on Oct 3, 2010 at 00:01

While pondering about some PWM requirements for a new project here, I was looking at the rgbAdjust.pde sketch again, as outlined in the Remote RGB strip control weblog post from a few months back. It does PWM in software, and plays some tricks to be able to do so on up to 8 I/O pins of a JeeNode, i.e. DIO and AIO on all 4 ports. The main requirement there, was that the PWM must happen fast enough to avoid any visible flickering.

The rgbAdjust sketch works as follows: prepare an array with 256 time slots, each indicating whether an I/O pin should be on or off during that time slot. Since each array element consists of one byte, there is room for up to 8 such bit patterns in parallel. Then continuously loop through all slots to “play back” the stored PWM patterns.

There is one additional refinement in that I don’t actually store the values, but only store a 1-bit during the change of values. That shaves off some overhead when rapidly changing I/O pins (see the Flippin’ bits post).

There are some edge cases (there always are, in software), such as dealing with full on and full off. Those two cases require no bit flipping, whereas normally there are always exactly two flips in the 256-cycle loop. But that’s about it. It works well, and when I simplified the code to support only brightness values 0..100 instead of the original 0..255, the PWM rate went up to over 250 Hz, removing all visible flicker.

So what rgbAdjust does, is loop around like crazy, keeping track of which pins to flip. ATmega’s are good at that, and because the RF12 driver is interrupt-driven, you can still continue to receive wireless data and control the RGB settings remotely.

But still, a bit complex for such a simple task. Isn’t there a simpler way?

As it turns out, there is… and it’ll even bump the PWM rate to 1 KHz. I have no idea what our cat sees, but I wouldn’t be surprised if cats turned out to be more sensitive than us humans. And since I intend to put tons of LED strips around the house, it better be pleasant for all its inhabitants!

What occurred to me, is that you could re-use a hardware counter which is always running in the ATmega when working with the Arduino libraries: the TIMER-0 millisecond clock!

It increments every 4 µs, from 0 to 255, and wraps around every 1024 µs. So if we take the current value of the timer as the current time slot, then all we need to do is use that same map as in the original rgbAdjust sketch to set all I/O pins!

Something like this, basically:

Screen Shot 2010 10 01 at 01.41.11

(assuming that the map[] array has been set up properly)
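
In plain code, the idea looks roughly like this. It’s an illustration of the technique rather than the actual rgbRemote.pde code: the array is called slots[] here to avoid clashing with Arduino’s map() function, and the pin assignment (digital 4..7 on PORTD) is made up:

    // one byte of pin on/off states for each of the 256 timer-0 slots
    static byte slots[256];

    // fill the slot table for one output bit with a 0..255 brightness level
    // (the full-on / full-off edge cases are ignored here, as discussed above)
    static void setLevel (byte mask, byte level) {
        for (int i = 0; i < 256; ++i)
            if (i < level)
                slots[i] |= mask;       // on during the first "level" slots
            else
                slots[i] &= ~mask;      // off for the remainder of the cycle
    }

    static void pwmUpdate () {
        // TCNT0 increments every 4 µs with the standard Arduino timer-0 setup,
        // wrapping every 1024 µs - use it directly as the current slot number
        byte slot = TCNT0;
        // made-up mapping: bits 0..3 of each slot drive digital pins 4..7
        PORTD = (PORTD & 0x0F) | (slots[slot] << 4);
    }

    void setup () {
        for (byte pin = 4; pin <= 7; ++pin)
            pinMode(pin, OUTPUT);
        setLevel(0x01, 128);            // e.g. 50% brightness on digital pin 4
    }

    void loop () {
        pwmUpdate();                    // call this as often as possible
        // short other tasks can go in between, e.g. checking rf12_recvDone()
    }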

No more complex loops. All we need to do is call this code really, really often. It won’t matter whether some interrupts occur once in a while, or whether some extra code is included to check for packet reception, for example. What might happen (in the worst case, and only very rarely) is that a pin gets turned on or off a few microseconds late. No big deal, and most importantly: no systematic errors!

It’s fairly easy to do some other work in between, as long as the main code gets called as often as possible:

Screen Shot 2010 10 01 at 01.51.18

I’ve applied this approach to an updated rgbRemote.pde sketch in the RF12 library, and sure enough, the dimming is very smooth for intensity levels 25..255. Below 25, there is some flickering – perhaps from the millis() timer? Furthermore, I’m back to being able to dim with full 24-bit accuracy, i.e. 8 bits on each of the RGB color controls. Which could be fairly important when finely adjusting the white balance!

So there you have it: simpler AND better! – all for the same price as before :)

Sending strings in packets

In Software on Sep 29, 2010 at 00:01

Prompted by a question on the forum, I thought it’d be a good idea to write an extra “PacketBuffer” class which makes it easy to fill a buffer with string data to send out over wireless.

This new packetBuf.pde sketch in the RF12 library shows how to set up a packet buffer, fill it with some string data, and send it to other nodes in the same net group:

Screen Shot 2010 09 27 at 11.23.09

If you want to use the PacketBuffer class in your own code, just copy the dozen lines or so from the above example. The code is very small, because all the heavy lifting is done by the standard Arduino “Print” class.
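
For reference, the class boils down to something like this – a rough reconstruction rather than a verbatim copy of packetBuf.pde (note that newer Arduino versions expect write() to return a size_t):

    #include <RF12.h>

    // a small Print-based buffer: all the formatting (print, println, numbers)
    // is inherited from the standard Arduino Print class
    class PacketBuffer : public Print {
    public:
        PacketBuffer () : fill (0) {}
        const byte* buffer () const { return buf; }
        byte length () const { return fill; }
        void reset () { fill = 0; }
        virtual size_t write (uint8_t ch) {
            if (fill >= sizeof buf)
                return 0;               // drop characters once the packet is full
            buf[fill++] = ch;
            return 1;
        }
    private:
        byte fill, buf[RF12_MAXDATA];
    };

    PacketBuffer payload;

    // example use: fill the buffer with text and broadcast it
    void sendReading (int value) {
        payload.reset();
        payload.print("temp ");
        payload.print(value);
        if (rf12_canSend())
            rf12_sendStart(0, payload.buffer(), payload.length());
    }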

The code includes logic to report incoming packets, which are also assumed to always contain text strings. To try this example, you’ll need at least two nodes, of course. Here’s output from the two I tried this with:

Screen Shot 2010 09 27 at 11.01.55

And the other side (not at the same time):

Screen Shot 2010 09 27 at 11.02.42

But having made this demo, I have to add that in general it’s not such a good idea to send out everything in string form. It takes (much) more code and RAM to deal with strings, and the receiver cannot easily convert everything back to numeric values. Then again, if you just want to send out strings to report or process on the PC, then the above may come in handy.


Assembling the JeeNode v5

In AVR, Hardware on Sep 26, 2010 at 00:01

New JeeNode means: new build instructions.

Here goes. A long description of how to go from this:

Dsc 1973

… to this:

Dsc 1970

Make sure you’ve got a nice soldering iron (not too hot, not too large) and some solder wick to remove solder in case you need to back up a bit to fix things. The JeeNode printed circuit is very sturdy and can handle a lot of abuse, but some traces are thin, so be careful and don’t apply too much force.

Components are soldered on from lowest to highest profile, because then you can turn over the board and push on it to get each component snugly against the board as you solder it. So let’s start with the resistor:

Dsc 1974 Dsc 1975

Once you’re satisfied with the soldering, turn it over and snip off the leads:

Dsc 1976 Dsc 1977



HomeSeer and JeeNode WSN’s

In Software on Sep 22, 2010 at 00:01

HomeSeer is a commercial Windows-based home automation software package. I’ve seen it demo’d a few times, but I’m not using it myself – nor do I have a license for it (I don’t have a permanent setup running Windows).

The following describes a great new development by Tijl van der Velden, who wrote a very interesting extension for HomeSeer to link into a WSN based on JeeNodes, Room Boards, plus a JeeLink on the PC side.

Tijl just finished his project, which was done as a student assignment for Computer Science at the University of Utrecht. He was so kind to send pictures of his setup and screen shots of the resulting application in HomeSeer.

I’m just the messenger in this case, but I’m very happy to be able to report about this on the weblog. For details about Tijl’s project, which has been released as open source on SourceForge, visit the JeeSeer page. The code running on Windows is VBScript, as used by HomeSeer – the software on the JeeLink and JeeNodes are C/C++ Arduino-type sketches.

Here’s his test setup:

Sam 0381

You can see the JeeNode with Room Board on top (and the temperature, humidity, light, and motion sensors), as well as an extra LED and small DC motor, driven by a transistor (with the inductive kickback protection diode).

The sketch running on the JeeNode includes some very interesting customizable “decision rules”, which can be configured from HomeSeer. Here’s the customization screen:

Decision Rules 2

The sketch running on the JeeNode looks fairly generic, allowing for different devices, so that you can use the port I/O pins for various purposes – both as inputs and as outputs. From a brief look, it reminds me a bit of Firmata.

The JeeLink is also running a custom sketch, to be able to pass these special requests and replies to a JeeNode and back. As with RF12demo, you can configure the JeeLink to listen on a specific frequency band and filter out a specific net group. This is also nicely configurable from the HomeSeer web interface:

Config 2

And here is what it’s all about – sensor results and device (i.e. LED + motor demo) control:

Status Page 2

Thank you Tijl, for completing this project and for sharing your results and your code on SourceForge. You are making it possible for others to learn from what you did, to plug what you made directly into HomeSeer, and to let people extend things further as the need arises.

It’s great to see the JeeNode – domotics integration becoming a reality!

Lots of AA’s

In Hardware on Sep 20, 2010 at 00:01

C’mon, admit it… you’ve got a pile of discarded AA’s somewhere in your drawers as well:

Dsc 1950

With all the JeeNodes and room nodes I’ve been trying out around here – and the modest results with the rooms.pde sketch w.r.t. battery life so far – I’ve gone through all these much faster than I would have liked to:

Dsc 1949

Went through over 60 here at Jee Labs, in the past year or so. So much for the environment!

Enough is enough. I’m switching to the Apple charger with the Eneloop NiMh’s. And with the new AA Power board, it looks like a single AA cell per node might be enough.

But wait! Are all those AA cells really empty? Time to find out!

Since the AA Power board is so efficient, I thought it’d be interesting to see how many of those “dead” AA cells are truly empty. Note that the AA Power board can pull juice out of a battery and generate 3.3V even when it’s supplying less than half its original voltage:

Dsc 1946

So let the battle begin: which cells really can no longer drive a JeeNode as wireless test node?

The result surprised me quite a bit. These 10 were completely dead:

Dsc 1948

But the rest – which is ALL the batteries shown in the first picture – still worked!

This doesn’t mean that any of these batteries will last very long. But still – they drive the JeeNode and its on-board RFM12B transmitter well enough to send out a fresh packet once a second. Which means that even with an output voltage of less than 1.1V, they are still able to briefly deliver an 80 mA peak current once a second (i.e. roughly 3x the current drawn at 3.3V, since the step-up converter has to conserve power across a 3:1 voltage ratio)!

Hmm, now what … charge an Eneloop with all that residual energy, perhaps? :)

AA power options

In Hardware on Sep 19, 2010 at 00:01

The new AA Power board is a pretty darn flexible little board, if I may say so myself. Its switching regulator draws very little current itself, 7..30 µA depending on the input voltage. It needs approx 1.0 .. 5.5V on its input to start up, but once running it will drain the power source all the way down to 0.4V if it can, pulling every last bit of juice out of regular AA batteries.

But the AA Power board is also capable of providing a pretty decent amount of current when needed. This is essential for the on-board wireless radio of the JeeNode, but it even works with heavier loads than that:

Dsc 1942

That’s an Arduino-compatible JeeNode with an LCD Plug and a 2×16 character LCD with backlight – all running off a single 1.2V AA NiMH battery!

The maximum current depends on the input voltage. It is guaranteed to be at least 60 mA from a 1V supply, going up to 140 mA from a 1.8V supply. Note that the input current can be much more than that – drawing 60 mA at 3.3V means the battery may have to deliver about 200 mA at 1V to make it happen. A clever little power regulator it is – a producer of energy it is not!

There are several ways to connect the AA Power board to a JeeNode:

  • inline, feeding the PWR pin with 3.3V (this will be dropped some 30..50 mV by the on-board regulator of the JeeNode itself, with no ill effects)
  • in piggy-back mode, with an AA cell inserted in the battery clips (alkaline or NiMh)
  • as a shield on top of the JeeNode, again with an AA cell inserted

There’s also a fourth way to use this board: leave off the battery clips, and connect a battery between any PWR and GND pins on the JeeNode itself (or use the battery holes) – this requires a solder jumper on the AA Power board.

The flexibility of the regulator means that you can connect any power source between 1.0 and 5.5V to PWR and GND, just as you would with a stand-alone JeeNode. Whatever it is, the +3V pin will carry the essential 3.3V level.

There is one issue to beware of: when PWR is connected to BAT+ via the solder jumper, then do not hook up a second power source at the same time. The most common case is probably: when a battery is connected to PWR, do not connect an FTDI adapter such as the BUB, because it’ll put 5V on the PWR pin … and the battery (or the BUB!) probably won’t like it.

The PWR pin can in fact be used in four different modes:

  • normal – it’s higher than 3.3V and the on-board regulator brings it down to 3.3V for the +3V pin – this would be the case with 3 to 8 AA’s, for example (no need for an AA Power board)
  • boosted – it’s lower than 3.3V and it’s used to feed the AA Power board – in this case the on-board regulator does nothing (and could be omitted)
  • parallel – the PWR pin is connected directly to the +3V pin – this can be used with the AA Power board to make sure the PWR pin also carries power (always 3.3V), in case some plugs expect a supply voltage on the PWR pin
  • floating – the PWR carries no power – this is the case when the AA Power board is used without solder jumper (default case)

The important point here is that the PWR pins do not necessarily carry a higher voltage than the +3V pins. It might be more (normal), less (boosted), the same (parallel), or none (floating). Not every JeePlug can be used with each mode of operation, so be careful to check.

Tomorrow, I’m going to fool around with a bunch of batteries :)

Long live the AA battery!

In Hardware on Sep 17, 2010 at 00:01

The AA Power board announced yesterday just arrived:

Dsc 1931

And it looks like it does indeed perform exactly as expected. Here’s the ripple:

Screen Shot 2010 09 16 at 13.41.07

That’s with the 1.6 mA LED load, i.e. a 75 µs cycle / roughly 13 KHz – this was as predicted: at light loads, the recharge frequency can reach down into the audible range. But it’s highly unlikely to be noticeable due to the tiny size of the inductor, which after all is not built to act as a loudspeaker :)

Here’s the “AAv1” fully mounted for powering a JeeNode via the FTDI connector:

Dsc 1934

(there’s no charge circuit here, I’m just using an externally recharged battery as power source)

And here’s the whole setup in actual use:

Dsc 1935

Works like a charm. Runs just fine with the “rooms” and “radioBlip” sketches, and wireless just works – as before.

Quiescent current draw is about 20 µA when powered this way. That goes down to 10 µA when used with two cells @ 2.4V, and down to an amazing 7 µA when powered from a 3V source (a CR2032 ought to work nicely!). Above 3.3V, the circuit becomes just a tad less efficient when it switches into step-down mode, drawing about 30 µA all the way up to 5.5V.

Great, now we’re starting to get into some serious low-power options.

Tomorrow, I’ll describe other ways to use this new AA Power board…

Modular nodes

In Software on Sep 13, 2010 at 00:01

Ok, so we have JeeNodes and JeePlugs, and it’s now possible to sense and hook up all sorts of fun stuff. In theory, it’s all trivial to use and easy to integrate with what you already have, right? Well… in practice there’s a lot of duplication involved – literally, in fact: for my experiments, I often take an existing sketch, make a fresh copy and start tweaking it. Shudder…

  • New plug. New bit of code to include in the sketch for that plug. New sketch.

  • New device connected. New bit of code to talk to that device. New sketch.

  • New idea. New logic to implement that idea. New sketch.

Yawn. Some of this WSN stuff sure is starting to become real tedious…

There are a couple of ways to deal with this. The traditional way is to modularize the source code as much as possible: create a separate header and implementation source file for each new plug, device, sensor, and function which might need to be re-used at some point. Then all you have to do is create a new sketch (again!) and include the bits and pieces you want to use in this sketch.

I have a big problem with that. You end up with dozens – if not hundreds – of tiny little files, all with virtually no code in them, since most interface definitions and implementations are trivial. My problem is not strictly the number of little files, but the loss of overview, and the inability to re-factor such code collections across the board. It just becomes harder and harder to simplify common patterns, which only show up after you’ve got a certain amount of code. The noise of the C/C++ programming itself starts to drown out the essence of all these (small & similar) bits of interface code.

The other serious problem with too fine-grained modularization of the source code, is that you end up with a dependency nightmare. Some of my sketches need the RF12 driver, others need the PortsI2C class, yet others use the MilliTimer.

At the opposite end of the spectrum is the copy-and-paste until you drop approach, whereby you take the code (i.e. sketches) you have, and make copies of it all, picking the pieces you want to re-use, and dropping everything else. I’ve been doing that a bit lately, because most of this code is so trivial, but it’s a recipe for disaster – not only do I end up with more and more slightly different versions of everything over time, it also becomes virtually impossible to manage bug fixes and fold them into all the affected sources.

A version control system such as subversion can help (I couldn’t live without it), but it just masks the underlying issue, really: some parts of the code deal with the essence of the interface, while other parts exist just to make the code into a compilable unit.

There is another alternative: go all out with C++ and OO, i.e. create some class hierarchies and make lots of members virtual. So that slight variations of existing code can be implemented as derived classes in C++, with only a bit of code for the pieces which differ from the previous implementation. This is not very practical on embedded microcontrollers such as the ATmega, however. V-tables (the technique used internally in C++ to implement such abstractions) tend to eat up memory when used for elaborate class hierarchies, and worse still, much of that memory will have to be in RAM.

There is a solution for this too, BTW: C++ templates. But I fear that the introduction of template programming (and meta-programming) is going to make the code virtually impenetrable for everyone except hard-core and professional C++ programmers. Already, my use of C++ in sketches is scaring some people off, from what I hear…

Is there a way to deal with a growing variety of little interface code snippets, in such a way that we don’t have to bring in a huge amount of machinery? Is there some way to plug in the required code in the same way as JeePlugs can be plugged in and used? Can we somehow re-use bits and pieces without having to copy and paste sketches together all the time?

I think there is…

The approach I’d like to introduce here is “code generation”. This technique has been around for ages, and it has been used (and abused) in a wide range of tasks.

The idea is to define a notation (a related buzzword is “DSL”) which is better suited for the specific requirements of Physical Computing, Wireless Sensor Nodes, and Home Automation. And then to generate real C/C++/sketch code from a specification which uses this notation to describe the bits and pieces involved:

Screen Shot 2010 09 12 at 17.16.38

To create a sketch for a JeeNode with the Room Board on it and using say an EtherCard as interface to the outside world, one could write something like the following specification:

Screen Shot 2010 09 12 at 18.29.39

The key point to make here is that this is not really a new language. The code you add is the same code you’d write if you had to create the sketch from scratch. But the repetitive stuff is gone. In a way, this is copy-and-paste taken to extremes: it is automated to the point that you no longer have to think of it as copying: all the pieces are simply there for immediate re-use.

Problems will not be gone simply by switching to a code generator approach. There will still be dependencies involved, for example. The “RoomBoard” device might well need the MilliTimer class to function properly. But it is no longer part of the code you write. It doesn’t show up in the source file, there’s no #include line as there would be in C/C++ or in a sketch. Which means it also no longer matters at this level whether the RoomBoard driver uses a MilliTimer class or not.

Code generation in itself also doesn’t solve the issue of having lots of little snippets of code. But what you can do, is combine lots of them together in one source file, and then have the generator pick what it needs each time it is used:

Define RoomBoard {
Define EtherCard {

The technique of code generation has many implications. For one, you have to go through one more step before the real code can be compiled and then uploaded – you have to first produce an acceptable sketch. And with mistakes, the system has to be able to point you to the error in the original specification file, not some very obscure C/C++ statement in the generated source code.

And of course it’s a whole new mechanism on top of what already exists. One more piece of the puzzle which has to be implemented and maintained (by someone – not necessarily you). And documented – because even if the specification files can reduce a large amount of boilerplate code to a single line, that one line still needs to be clearly documented.

So far, these notes are just a thought-experiment. I’ll no doubt keep on muddling along with new sketches and plugs and nodes for some time to come.

But wouldn’t it be nice if one day, something like this were a reality?

JeeNode goes solar

In Hardware on Sep 3, 2010 at 00:01

Now that ultra low-power options are coming into reach for JeeNodes, lots of new scenarios can be explored.

The most obvious one is probably a solar-powered JeeNode … so meet the latest new node #5 at Jee Labs:

Dsc 1875

It also uses a 0.47 F supercap, same as yesterday, but now hooked up to a small 4.5 V solar cell (which can only deliver a few mA in bright sunlight), and a Schottky diode between the solar cell and the capacitor.

Here’s the “power supply” in more detail:

Dsc 1876

As you can see, the solar cell is tiny. A few square cm’s only. In fact, it takes quite some time for it to charge the supercap to acceptable levels. I had to place the cell in moderately bright sunlight for about half an hour to get to a 4 Volt charge. It was inching along, taking several seconds per 0.01 V increase.

To avoid losing all that charge right away in the power-up cycle, I modified the ATmega’s fuses to start in 258 clock cycles after power down, and to start up within 4.1 msec after reset. That way it will start up as quickly as possible at all times. The 258 CK setting is particularly nice, because it means the ATmega can get out of total power down within about 16 µs, fast enough to respond to a byte RX/TX interrupt from the RFM12B!

Does it work? Check it out: after connecting the JeeNode with the “radioBlip.pde” sketch pre-loaded… away it went – sending one packet every 60 seconds as node 5:

    OK 5 1 0
    OK 5 2 0
    OK 5 3 0
    OK 5 4 0

While exposed to the current partly-sunny / partly-cloudy light levels, the voltage on the supercap is still increasing. This is good – it means there’s a surplus of solar energy, even with these transmissions going on. That extra energy will be crucial if this thing is to last through the night…

If everything works out, this little Arduino-compatible bugger could well be the first JeeNode to become completely autonomous and transmit wirelessly… forever!

Time will tell :)

Pulling data from an EtherNode

In Software on Aug 17, 2010 at 00:01

Last month’s EtherNode sketch was an example of a simple web server which allows viewing incoming packets received by the RFM12B. Here’s a sample web page again:

Screen Shot 2010 07 13 at 231929

If JeeMon could access and pick up that data without requiring an extra JeeLink or JeeNode, then you could place the EtherNode wherever reception is best while running JeeMon on your desktop machine, or anywhere else.

In response to a request on the forum for just that, I started writing a little demo “application.tcl” for JeeMon to do this sort of web-scraping. Here’s what I came up with (code):

Screen Shot 2010 08 16 at 10.35.49

Sample console output:

Screen Shot 2010 08 16 at 10.42.48

The point here, is that it needs to periodically poll the EtherNode, get a web page from it, and skip the readings it has already seen before. That’s what most of the code in “EtherNodePull” does. Each packet that remains will be sent to the “GotPacket” proc, which just logs it on the console.

But that’s just one half of the required solution…

The bigger challenge is to also make JeeMon decode these packets, as if they came in through a serial USB link. There is quite a bit of logic in sketches/central/host.tcl to do that for a JeeNode or JeeLink running the “central” sketch (which is almost identical to RF12demo).

The reason this is more complicated, is that I want to be able to decode each packet in different ways, depending on the sketch running on the remote (sending) node. My network has more than just room nodes, and will be extended with many more node types in the future.

One workaround would be to collect all nodes of the same type in their own group, i.e. net group 1 for room nodes, net group 2 for the ookRelay, etc. And yes, that would work – but it’s not very convenient, and I’d need separate EtherNodes to pick up the packets from each net group. Messy.

The approach I have used so far, is to maintain a config section for JeeMon, with information about the type of each node, organized by frequency band, net group, and node id:

Screen Shot 2010 08 16 at 10.52.23

It’s not automatic, but this way I just need to adjust one list whenever a new wireless node is brought online.

The current code in sketches/central/host.tcl is all about picking up packets, and mapping them through this configuration section to know what is what. It does this by setting up a pseudo “connection” whenever packets come in for the first time and includes logic to tear down this connection again when no new packets are received within a certain amount of time.

To use this approach with an EtherNode as data collection node, I need to re-factor the existing code and make the core mechanism independent of the Serial implementation. I also need to bring more of the code from central/host.tcl into the JeeMon code, so it can be re-used for EtherNodes.

Re-factoring is my middle name – I’ll update this post when the code changes are complete.

What a year it’s been…

In Musings on Jul 13, 2010 at 00:01

One year ago, the first serious PCB designs were “taped out” (heh, if that isn’t an anachronism by now!) – this is when the first batch of JeeNode v3 boards was produced, with all the ports and pins that have by now become a standard around here.

One year later, there are 4 JeeNode variants and over 20 “plugs” / add-ons – all part of a happy JeeFamily :)

What’s next? Well, I don’t have a crystal ball. But I do know what’s coming next because of some recent projects behind the scenes … and I can tell you that there will be several new plugs starting mid-August.

Another announcement I’d like to make now, is that after the summer more of the production will be out-sourced (here in the Netherlands), to free my time for work on new hardware and software development.

As you probably know, Jee Labs is just me, moi, and myself – with a few great people helping out behind the scenes. The major difference with traditional companies is that I’m neither driven by a boss, nor (primarily) by revenue, but by interest. Which means that you can have a considerably larger influence on where Jee Labs is going than you might think… all you need to do is speak up, preferably in the discussion forum, and point out neat / useful / practical stuff. I won’t guarantee that I’ll follow everyone’s lead, but I’m as keen as anyone to go where the neat stuff is regarding physical computing.

Speaking of neat stuff…

Franz Achatz sent in a great email today, describing what he’s been doing, complete with pictures and screen dumps. Here’s the latest addition to his RFM12B-based WSN – a fridge sensor (posted with permission):


All in a neat little box, with the GSM-type antenna sticking out:


The sensor is a 1-wire Dallas sensor, to allow tracking the current temperature inside the fridge.

And here’s the software side of it, all created by Franz with the current JeeMon software:

Screen Shot Small

(Click here for the full-size image)

Given how young JeeMon currently is, I’m amazed to see just how much it can already be made to do…

The story I’d like you to take home from this is not how great JeeNodes or JeeMon are (they’re not, they are still far too young and simplistic), but how much freedom you have when everything is open source, hardware as well as software.

It’s time for me to start winding down (with 30..39°C of humid heat, it’s almost a necessity, even…). There will be one or two more queued-up posts on the weblog, and then it’ll be set to read-only mode. In fact, all of internet will become read-only a few days from now, as far as I’m concerned. I’ll be away only part of this summer, but even when I’m in I won’t respond to emails – sorry.

If you ever get bored, there are now 550 posts on this weblog – feel free to browse around, and enjoy :)

TTYL, as they say!

Serial communication vs packets

In Hardware, Software on Jul 12, 2010 at 00:01

When you hook two devices up via wires, you’ve got essentially two options: parallel, i.e. one wire for each bit you want to transmit & receive (example: memory cards inside a PC). Or serial, where information gets sent across bit by bit over only a few wires (examples: ethernet, USB, I2C). Parallel can achieve very high speeds with little circuitry, but serial is more convenient and cheaper for large distances.

Serial communication is very common. The model even carries through to the way we think about the “command line” – a stream of characters typed in, followed by a stream of output characters. Not surprising, since terminals used to be connected via RS232 serial links.

Wireless connections are also essentially serial: you rapidly turn a transmitter on and off (OOK), or you change its frequency of operation (FSK), to get the individual bits across.

But there’s a lot more to it than that.

With two devices connected together, you get a peer-to-peer setup with a link which is dedicated for them. This means they can send whenever they please and things will work. The same can be done with wireless: as long as only two devices are involved, one device can send whenever it likes and the other will receive the signal just fine (within a certain range, evidently).

With such a peer-to-peer setup, the serial nature of the communication channel is obvious: A sends some characters, and B will receive them, in the same order and (almost) at the same time.

But what if you’ve got more than two devices? Ah, now it gets interesting…

With wires, you could do this:

Screen Shot 2010 07 11 at 11.20.41

It’s easy to set up, but it’s pretty expensive: lots of wires all over the place (N x (N-1) / 2 for N devices) plus lots of interfaces on each device (N-1). With 10 devices, that would be 45 wires and 90 interfaces!

Worse still, this is very hard to use with wireless, where each “wire” would need to be a dedicated frequency band.

The solution is to share a single wire – called multi-drop:

Screen Shot 2010 07 11 at 11.24.58

Now there’s one wire, a couple of “taps”, and one interface per device. Much cheaper!

Trouble is, you’ve now created a “channel” which is no longer dedicated to each device (or “node” as it is usually called in such a context). They can’t just talk whenever they like anymore!

A whole new slew of issues opens up now. How do you find out when the channel is available? What do you do when you can’t send something right away – save it up? How long? How much can you save up? What if someone else hijacked the channel and never stops transmitting? What if all nodes want to send more than the channel can handle? How do you get your information out to a specific node? Can all nodes listen to everything?

Welcome to the world of networking.

All of a sudden, simple one-on-one exchanges become quite complex. You’ll need more software to play nice on the channel. All nodes need the same software revision. And you’ve got to deal with being told “not now”.

Note that these issues apply to wired solutions sharing the same channel (RS485, CAN bus, USB, Ethernet) as well as all wireless networks.

Simple OOK transmitters used in weather station sensors just ignore the issue. They send whenever they want to, in an après moi le déluge fashion… (“what the heck, I don’t care whether my message arrives”). This usually works fairly well when transmissions are short, and when lost transmissions are no big deal – they’ll send out a new reading a few minutes later anyway.

Another aspect of this shotgun approach is that it’s a broadcast mechanism. The sending node transmits its messages into the air without interest as to who receives them, or whether there’s anyone listening even. All it needs to do is include a unique code, so that the receiver(s) will be able to tell who sent the message.

For weather sensors, the above is ok. For security / alarm purposes, it’s a bit unfortunate – missing an intrusion alert is not so great. So what the simplest systems do is to yell a bit louder: repeat the alert message many times, in the hope that at least one will arrive intact. No guarantees, yet some very common security systems seem to be happy with that level of reliability.

For more robust setups, you really need bi-directional communication, even if the payload only flows in one direction. Then each receiver can let the transmitter know when it got a packet intact.

There’s a lot more (software) complexity involved to use a channel effectively, to get data across reliably with “ACK” packets, to detect new and lost nodes, to deal with “congestion” and external causes of bad reception, etc.

With JeeNodes and wireless comms via the RFM12B module, the basic RF12 driver is somewhere in the middle between unchecked uni-directional transmission and fully checked self-adapting configurations.

So what does this all mean for the “end user” ?

Well, first of all: wireless communication can fail. A node can be out of range, or a badly-behaved machine can be sending out RF interference to such an extent that nothing gets across no matter what nodes do. Wireless communication can fail, it’s as simple as that! But with bi-directional communication, at least all nodes can find out whether things work or not.

The second key property of communication via a shared channel, is that you can’t just send whenever you like. You have to be able to either save things up until later, or discard messages to let future ones through.

This means that treating a wireless channel as a serial link is really a very bad idea. Keep in mind that the baudrate can drop to zero – this means you must be prepared to buffer an unbounded amount of data for re-transmission. And the more you intend to re-transmit later, the longer you’re going to need that channel when it becomes available. That will frustrate all the other nodes trying to do the same thing.

One way around this, is to use a RF link with very high data rates. That way there will be a lot of slack when nodes want to catch up. But then you still need to be able to buffer all that data in the first place. Not a great idea for limited devices such as an ATmega…

The better way is to design the system to work well with occasional loss of packets. Take an energy meter, for example: don’t send the pulse or rotation trigger, but keep a count and send the current count value. That way, lost packets will not affect the accuracy of the results, they will merely be updated less frequently when the RF link is down.
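
Here’s a minimal sketch of that idea with the RF12 driver – the node/group numbers, the 3-second interval, and the pulse-detection part are all placeholders:

    #include <Ports.h>
    #include <RF12.h>

    static unsigned long pulseCount;    // running total, bumped by the pulse detection code
    static MilliTimer sendTimer;
    static struct { unsigned long count; } payload;

    void setup () {
        rf12_initialize(22, RF12_868MHZ, 5);    // node 22 in net group 5 (example values)
    }

    void loop () {
        // ... pulse detection would increment pulseCount here ...
        rf12_recvDone();                        // keep the RF12 driver going
        if (sendTimer.poll(3000) && rf12_canSend()) {
            payload.count = pulseCount;         // always send the running total
            rf12_sendStart(0, &payload, sizeof payload);
        }
    }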

The RF12 driver used in JeeNodes was designed for the context of a little data, sent on a periodic basis. The difference with a serial link, is that you don’t get garbled text on the other side, but packets (i.e. chunks of data). All you need to keep in mind is that occasionally an entire packet won’t make it.

This design also deals with multiple nodes. Each incoming packet can have a “node ID” so receivers can tell everything apart. Packets never get mixed up or combined or split in any way. Each packet is a verified and consistent amount of data.
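
On the receiving side, that looks roughly like this – a minimal sketch using the driver’s globals, with the printing purely as an example:

    #include <RF12.h>

    void setup () {
        Serial.begin(57600);
        rf12_initialize(1, RF12_868MHZ, 5);     // example: node 1 in net group 5
    }

    void loop () {
        if (rf12_recvDone() && rf12_crc == 0) { // a complete packet with a valid CRC
            byte nodeId = rf12_hdr & RF12_HDR_MASK;
            Serial.print("got ");
            Serial.print((int) rf12_len);       // payload size in bytes
            Serial.print(" bytes from node ");
            Serial.println((int) nodeId);
            // rf12_data points to the payload bytes
        }
    }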

Couldn’t we implement a virtual serial link?

Well, yes – there are well-known techniques to implement a virtual circuit on top of a packet-based communication channel.

But doing so would be a bad idea, for reasons which have hopefully become clear from the above. A virtual circuit would either have to act as perfect channel (not feasible with finite data storage) or drop characters in unpredictable places. It is far more practical to impose a packet / chunk structure on the sender, and then be allowed to drop chunks with clearly-defined boundaries when the RF link is out of service or overloaded.

The moral of the story: think in packets when using JeeNode wireless comms – you’ll get a lot more done!

Update – see some good comments by John M below, about IP, UDP, TCP, and the OSI model which describes all the levels of abstraction involved with networking, and all the standard terminology.

RF12 communication

In Hardware on Jul 11, 2010 at 00:01

(This weblog post seems to accidentally have escaped into the wild a few days early…)

The RFM12(B) wireless radio modules from HopeRF, as used on the JeeNode, use “Frequency Shift Keying” (FSK) as the way to get information across a wireless channel on the 433, 868, or 915 MHz band.

With wireless, there’s always a trade-off between speed and range. More speed, i.e. a higher data rate, lets you either get more data across in the same time or the same amount of data in less time, which might reduce battery consumption. But higher data rates require a larger frequency shift in the transmitter for the receiver to still be able to detect all the bit transitions reliably. A larger frequency shift wastes more power though (I think…). And a larger frequency shift means that the receiver has to accept more bandwidth to catch all the signal details.

Btw, another way to extend the range is to improve the antennas – see this discussion.

I’ll leave the narrow-band vs. wide-band details to the EE’s and amateur radio experts in this world, along with all the RF / HF calculations, because frankly I’m at the limit of my knowledge on these topics.

But what the above comes down to is that we’ve got essentially three parameters to fool around with when using RFM12B’s for wireless networking:

  • the data rate, which needs to be identical on both sides
  • the frequency shift on the transmitter side
  • the bandwidth on the receiver to filter out unwanted signals

Here are the relevant sections from the HopeRF RF12B datasheet:

Data rate

Screen Shot 2010 07 08 at 00.58.29

Frequency shift

Screen Shot 2010 07 08 at 01.00.43


Screen Shot 2010 07 08 at 01.01.15

Screen Shot 2010 07 08 at 01.01.37

The challenge is to find “good” settings, which really depends on what you’re after. The settings used in the RF12 v3 driver are as follows:

  • Data rate = 49.142 KHz (see this discussion)
  • Frequency shift is set to 90 KHz
  • Bandwidth is set to 134 KHz

This was chosen partly based on what I found around the web, and partly by pushing the data rate up while verifying that the range in the home would be sufficient for my own purposes (i.e. to reach the office from the electricity meter: a few concrete walls and floors).

It can probably be improved, but since such changes affect all the nodes in a net group, I never bothered after the initial tests were “good enough”.

The RF12 library now includes a new rf12_control() function, which allows making changes to these parameters. It’s a low-level option, but you could easily add some wrappers and an API to adjust parameters in a more intuitive way.
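
A hedged example of what such a wrapper might do – the hex values are raw RFM12B command words worked out from the datasheet tables shown above, so double-check them before relying on them (and remember that the data rate has to be changed on all nodes at once):

    #include <RF12.h>

    void setup () {
        rf12_initialize(1, RF12_868MHZ, 5);     // example node and net group

        // push in alternative low-level settings (values are examples only):
        rf12_control(0xC623);   // data rate divider: roughly 9.6 kbps
        rf12_control(0x94C2);   // receiver bandwidth: 67 KHz (other bits as in the default 0x94A2)
        rf12_control(0x9820);   // transmitter frequency shift: 45 KHz, max output power
    }

    void loop () {}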

As mentioned in the forum, there’s a (Windows-only) tool to do the conversion to hex parameter settings. That ought to make it fairly easy to tweak these things, if you want to push the envelope.

Some people will no doubt be quite interested in such optimizations, so if you’ve found an interesting new combination of parameters, please consider sharing your findings in this forum discussion.

Having said all this, please keep in mind that these settings will still lead to fairly low data rates. The default data rate corresponds to ≈ 6 Kbytes/second of one-way data, assuming perfect conditions and 100% utilization (“hogging”) of the frequency band. With the official ISM rules imposed on the 868 MHz frequency band in Europe, each node is allowed to use only 1% of that rate – i.e. about 60 bytes per second of throughput! (there are no such restrictions @ 433 and 915 MHz – but 915 is not allowed in Europe).

There are alternate bands in the 860’ish MHz range, but I’ve never quite figured out what works where, so for now I’m sticking to this simple 1% rule. For day-to-day sensing and signaling purposes around the house, it’s actually plenty.

To put things into perspective: the IEEE 802.15.4 standard used by XBee’s has up to 16 channels of 250 Kbaud each at its disposal, when operated at 2.4 GHz. It’s a whole different ball game. And a different range: 2.4 GHz gets absorbed far more by walls than the sub-GHz bands (which is why mesh networking becomes a lot more important, which requires more resources, making it harder to keep overall battery consumption low, etc).

So you see: speed, range, complexity, cost – lots of tradeoffs, as with everything in this world!

Update – just got an email with a lot more info about the 868 MHz regulations (for the Netherlands, but I assume it’ll be the same across Europe). Looks like cordless headphones get 40 channels to pick from with 100% utilization (in the 864..868 MHz range), so you could pick one of those channels to avoid the 1% rule.

Six lousy pins!

In Hardware on Jul 3, 2010 at 00:01

This has been a somewhat frustrating week. Just when June ended up being the biggest month ever for the shop (no doubt propelled by the special June discount), Mr. Murphy strikes again.

I’ve been waiting for several weeks now for a batch of pin headers. Had lots of them when I ordered, but since the 6-pin headers are included in just about everything, I still ended up running out sooner than expected.

This is the culprit – the 6-pin female port header connector:

Dsc 1764

It’s not easy to find alternatives, because I insist on having a clean 6-pin header, not some cut-off-from-a-long-strip header with very rough plastic edges on both sides.

All would have been well, if the shipment had gone its normal route. But it’s been in this country, at Schiphol Airport, for over a week now! And some lazy bum in customs seems to be sitting on it. The irony is that they do this to charge me 19% VAT – which I then get back (being a company) a few months later. More paperwork.

I’ve started shipping some packages without this item, but often even that is not an option. I can’t really send out new JeeNode kits without these headers, for example – it really wouldn’t solve anyone’s problem.

So there you go. If you’re among the several dozen people waiting for the goodies from Jee Labs to get sent to you: I’m just as eager as you to get it all resolved! – probably more, because here, all those frustrations tend to accumulate by proxy :(

Affected are: Plug Headers, JeePlug Packs, JeeNode kits, Wireless Starter Packs, and probably a few more.

(A separate issue is that the JeeNode USB won’t be back in stock for at least another week)

What can I do? Order lots and lots of them, evidently. Which is what I did (many thousands) – in the hope that this silly state of affairs won’t happen again. And offer my sincere apologies to everyone waiting.

Six lousy pins! And in two days I’ll probably be drowning in them…

Update – Just got a letter from the postal service: no invoice included. Need to supply info. By mail. So there is light at the end of the tunnel, but it’s going to take several more days. My apologies. All for some VAT, which gets refunded later. What a silly world.

TwitLEDs robot, part 3

In AVR, Hardware, Software on Jun 24, 2010 at 00:01

To continue on yesterday’s post, here is how we got a little autonomous robot going.

The main part was solved by picking the Asuro robot kit. It’s really low-end, but it has just enough functionality for this project, and at €50, it’s very affordable. In fact, Myra built a spare one because the first unit broke down. In the end, I was still able to fix it: a burnt out transistor (both H-bridges are done with discrete components!). So now we have two Asuro’s:

Dsc 1750

The nice bit about the Asuro is that it has an odometer on each wheel. IOW, there are IR LEDs + sensors to count the number of steps (8 per rev) made by the wheel, and the C library code includes logic to adjust the speed of the motor. It’s a bit crude, but because of this the Asuro can drive fairly straight. As we found out later, it has a bit more trouble doing so while driving slowly, so it’s still a bit wiggly.

But it works. Two small DC motors, some simple gears, motor axles soldered to the PCB (what a great low-cost solution), and room for an extension board. To give you an idea of how crude this thing really is: there is no on-board voltage regulator. When used with 4 alkaline AA batteries, you have to remove a jumper so the extra voltage drop over a diode gets the supply voltage down to under 5.5V …

The Asuro is full of such nifty cost-cutting tricks. It even includes a bidirectional IR link, over which new code can be uploaded. The IR link is very short-range, so it would have been insufficient for our purposes – but for quick co