Computing stuff tied to the physical world

Archive for the ‘Software’ Category

Wrapping up

In AVR, Hardware, News, Software, Musings on Oct 6, 2013 at 00:01

I’m writing this post while one of the test JeeNode Micros here at JeeLabs is nearing its eighth month of operation on a single coin cell:


It’s running the radioBlip2 sketch, sending out packets with an incrementing long integer packet count, roughly once every minute:


The battery voltage is also tracked, using a nice little trick which lets the ATtiny measure its own supply voltage. As you can see, the battery is getting weaker, dropping in voltage after each 25 mA transmission pulse, but still recovering very nicely before the next transmission:


Fascinating stuff. A bit like my energy levels, I think :)

But this post is not just about reporting ultra low-power consumption. It’s also my way of announcing that I’ve decided to wrap up this daily weblog and call it quits. There will be no new posts after this one. But this weblog will remain online, and so will the forum & shop.

I know from the many emails I’ve received over the years that many of you have been enjoying this weblog – some of you even from the very beginning, almost 5 years ago. Thank you. Unfortunately, I really need to find a new way to push myself forward.

This is post # 1400, with over 6000 comments to date. Your encouragement, thank-you’s, insightful comments, corrections and additions – I’m deeply grateful for each one of them. I hope that the passion which has always driven me to explore this “computing stuff tied to the physical world” and to write about these adventures has helped you appreciate the creativity that comes with engineering and invention, and has maybe even tempted you to take steps to explore and learn beyond the things you already knew.

In fact, I sincerely hope that these pages will continue to encourage and inspire new visitors who stumble upon this weblog in the future. For those visitors, here’s a quick summary of the recent flashback posts, to help you find your way around on this weblog:

Please don’t ever stop exploring and pushing the boundaries of imagination and creativity – be it your own or that of others. There is infinite potential in each of us, and I’m certain that if we can tap even just a tiny fraction of it, the world will be a better place.

I’d like to think that I’ve played my part in this and wish you a lot of happy tinkering.

Take care,
Jean-Claude Wippler

PS. For a glimpse of what I’m considering doing next, see this page. I can assure you that my interests and passions have not changed, and that I’ll remain as active as ever w.r.t. research and product development. The whole point of this change is to allow me to invest more focus and time, and to take the JeeLabs projects and products further, in fact.

PPS. Following the advice of some friends I highly respect, I’m making this last weblog post open-ended: it’ll be the last post for now. Maybe the new plans don’t work out as expected after all, or maybe I’ll want to reconsider after a while, knowing how much joy and energy this weblog has given me over the years. So let’s just call this a break, until further notice :)

Update Dec 2013 – Check out the forum for the latest news about JeeLabs.

Flashback – Dive Into JeeNodes

In AVR, Hardware, Software, Linux on Oct 4, 2013 at 00:01

Dive Into JeeNodes (DIJN) is a twelve-part series, describing how to turn one or more remote JeeNodes, a central JeeLink, and a Raspberry Pi into a complete home monitoring setup. Well, ok, not quite: only a first remote node is described, with an LDR as light sensor, but all the steps needed to make the pieces work together are covered.

More visually, DIJN describes how to get from here:


… to here:


This covers a huge range of technologies, from embedded Arduino stuff on an ATmega-based JeeNode, to setting up Node.js and the HouseMon software on a Raspberry Pi embedded Linux board. The total cost of a complete but minimal setup should be around €100. Less than an Xbox and far, far more educational and entertaining, if you ask me!

It’s all about two things really: 1) describing the whole range of technologies and getting things working, and 2) setting up a context which you can explore, learn from, and tinker with in numerous ways.

If you’re an experienced Linux developer but want to learn about embedded hardware, wireless sensors, physical computing and such, then this offers a way to hook up all sorts of things on the JeeNode / Arduino side of things.

If you’re familiar with hardware development or have some experience with the Arduino world, then this same setup lets you get familiar with setting up a self-contained low-power Linux server, and try out the command line, shell tools, and the many programming languages available on Linux.

If you’ve set up a home automation system for yourself in the past, with PHP as web server and MySQL as back end, then this same setup will give you an opportunity to try out rich client-side internet application development based on AngularJS and Node.js – or perhaps simply hook things together so you can take advantage of both approaches.

With the Dive Into JeeNode series, I wanted to single out a specific range of technologies as an example of what can be accomplished today with open source hardware and software, while still covering a huge range of the technology spectrum – from C/C++ running on a chip to fairly advanced client / server programming using JavaScript, HTML, and CSS (or actually: dialects of these, called CoffeeScript, Jade, and Stylus, respectively).

Note that this is all meant to be altered and ripped apart – it’s only a starting point!

Flashback – Ports and I2C in JeeLib

In AVR, Software on Oct 2, 2013 at 00:01

One of the small innovations in the JeeNode is the use of “Ports”:


Most µC boards, including the Arduino, brought out all the I/O pins on one or more large headers, meaning that you had to pick and route individual pins for each case where some component or sensor was being connected. While flexible, it prevented adding stuff in a plug-and-play manner: hook up some sensors, use matching bits of C/C++ code, and done.

I also always kept running out of VCC and GND connection pins while experimenting. So instead of bundling all I/O pins, a decision was made in 2009 to bring out a couple of (nearly) identical “ports”, to any of which the same sensor could be attached. That meant you could connect up to four sensors, each independent of the others.

As it turns out, there are enough spare I/O pins on an ATmega to provide one digital and one analog pin to each port. Add GND, VCC (+3.3V) and PWR (the input voltage, could be up to about 13V), and you end up with a fairly general-purpose setup. One extra spare pin was provided as “IRQ” to allow connected sensors to draw attention to themselves via an interrupt, although this feature has admittedly so far not been used much on JeeNodes.

But the real improvement is the fact that the two I/O pins can be used as I2C bus. All of a sudden, each port is no longer limited to a single sensor. The I2C bus is so useful because many types of low-cost chips include an I2C interface in hardware, and it lets multiple chips operate on a single bus.

One of the first extensions added was the Expander Plug, an 8-bit general-purpose digital I/O interface. So now, with just two pins consumed on the ATmega, you could have up to 32 input/output pins (by daisy chaining 4 of these plugs and setting them each to a different I2C address). The Dimmer Plug takes this even further: up to 8 of them on one bus, each driving up to 16 LEDs – for a total of 128 dimmable LEDs on 2 I/O pins!

Officially, the I2C bus requires dedicated hardware with “slew control” to increase the accuracy of signals on the bus. And while the ATmega has such an interface, there is just one per ATmega. Fortunately, you can also “bit-bang” I2C in plain software, i.e. simulate the relatively slow pin changes with nothing but software on any 2 I/O pins. The speed is not quite as high, and the supported bus length is also limited to a few dozen centimeters, but for direct connection of a few plugs that still works out quite well.

And so the JeeNode has ended up having 4 “ports”, each of them individually capable of supporting either I2C, or one analog (AIO) and one digital (DIO) pin – or alternately 2 digital I/O’s, since each analog pin on the ATmega can also be used in digital mode.

Since then, lots of different types of “JeePlugs” have been created, some I2C, some not. Most of the plugs have a corresponding class and a demo sketch in the JeeLib library, making it very easy to hook them up and interface to them in software. With most plugs, you just have to define which port they are connected to – and in the case of I2C plugs, which bus address they have been set to.

The placement of these ports and the choice of very low-cost standard 6-pin headers took some experimentation, but I think it all turned out ok. Lots of expandability and flexibility.

Flashback – Batteries came later

In AVR, Hardware, Software on Sep 30, 2013 at 00:01

During all this early experimentation in 2008 and 2009, I quickly zoomed in on the little ATmega + RFM12B combo as a way to collect data around the house. But I completely ignored the power issue…

The necessity to run on battery power was something I had completely missed in the beginning. Everyone was running Arduinos off either a 5V USB adapter or – occasionally – off a battery pack, and rarely for much more than a few days. Being “untethered”, in many projects at that time, meant being able to do something for a few hours or a day – and swapping or recharging batteries at night was easy, right?

It took me a full year to realise that tying a wireless “node” to a wire just to keep it running for an extended period of time made no sense. Untethered operation also implies being self-powered:


Evidently, having lots of nodes around the house would not work if batteries had to be swapped every few weeks. So far, I just worked off the premise that these nodes needed to be plugged into a power adapter – but there are plenty of cases where that is extremely cumbersome. Not only do you need a power outlet nearby, you need fat power adapters, and you have to claim all those power outlets for permanent use. It really didn’t add up, in terms of cost, and especially since the data was already being exchanged wirelessly!

Thus started the long and fascinating journey of trying to run a JeeNode on as little power as possible – something most people probably know this weblog best for. Over the years, it led to some new (for me) insights, such as: transmission draws a “huge” 25 mA, but it’s still negligible because the duration is only a few milliseconds. By far the most important parameter to optimise for is sleep-mode power consumption of the entire circuit.

In September 2010, i.e. one year after starting on this low-power journey, the Sleepy class was added to JeeLib, as a way to make it easy to enter low-power mode:

class Sleepy {
public:
    /// start the watchdog timer (or disable it if mode < 0)
    /// @param mode Enable watchdog trigger after "16 << mode" milliseconds
    ///             (mode 0..9), or disable it (mode < 0).
    static void watchdogInterrupts (char mode);

    /// enter low-power mode, wake up with watchdog, INT0/1, or pin-change
    static void powerDown ();

    /// Spend some time in low-power mode, the timing is only approximate.
    /// @param msecs Number of milliseconds to sleep, in range 0..65535.
    /// @returns 1 if all went normally, or 0 if some other interrupt occurred
    static byte loseSomeTime (word msecs);

    /// This must be called from your watchdog interrupt code.
    static void watchdogEvent();
};
The main call was named loseSomeTime() to reflect the fact that the watchdog timer is not very accurate. Calling Sleepy::loseSomeTime(60000) gives you approximately one minute of ultra low-power sleep time, but it could be a few seconds more or less. To wait longer, you can call this code a few times, since 65,535 ms is the maximum value supported by the Sleepy class.

As a result of this little class, you can do things like put the RFM12B into sleep mode (and any other power-hungry peripherals you might have connected), go to sleep for a bit, and restore all the peripherals to their normal state. The effects can be quite dramatic, with a few orders of magnitude less power consumption. This extends a node’s battery lifetime from a few days to a few years – although you have to get all the details right to get there.

One important design decision in the JeeNode was to use a voltage regulator with a very low idle current (the MCP1700 draws 2 µA idle). As a result, when a JeeNode goes to sleep, it can be made to draw well under 10 µA.

Most nodes here at JeeLabs now keep on running for well over a year on a single battery charge. Everything has become more-or-less install and forget – very convenient!

Flashback – RFM12B wireless

In AVR, Hardware, Software on Sep 29, 2013 at 00:01

After the ATmega µC, the second fascinating discovery in 2008 was the availability of very low-cost wireless modules, powerful enough to get some information across the house:


It would take another few months until I settled on the RFM12B wireless module by HopeRF, but the uses were quickly falling into place – I had always wanted to track the energy consumption in the house, to try and identify the main energy consumers. That knowledge might then help reduce our yearly energy consumption – either by making changes to the house, or – as it turned out – by simply adjusting our behaviour a bit.

Here is the mouse trap which collected energy metering data at JeeLabs for several years:


This is also when I found the Really Bare Bones Board (RBBB) Arduino clone by Paul Badger of Modern Device – all the good stuff of an Arduino, without the per-board FTDI interface, and with a much smaller form factor.

Yet another month would pass to get a decent interrupt-driven driver working, and some more tweaks to make transmission interrupt-based as well. The advantage of such a design is that you get the benefits of a multi-tasking setup without all the overhead: the RF12 driver does all its time-critical work in the background, while the main loop() can continue to use delay() calls and blocking I/O (including serial port transmission).

In February 2009, I started installing the RF12demo code on each ATmega, as a quick way to test and try out wireless. As it turned out, that sketch has become quite useful as central receiving node, even “in production” – and I still use it as interface for HouseMon.

In April 2009, a small but important change was made to the packet format, allowing more robust use of multiple netgroups. Without this change, a bit error in the netgroup byte will alter the packet in a way which is not caught by the CRC checksum, making it a valid packet in another netgroup. This is no big deal if you only use a single netgroup, but will make a difference when multiple netgroups are in use in the same area.

Apart from this change, the RF12 driver and the RFM12B modules have been remarkably reliable, with many nodes here communicating for years on end without a single hiccup.

I still find it pretty amazing that simple low-power wireless networking is available at such a low cost, with very limited software involvement, and suitable for so many low-speed data collection and signalling uses in and around the house. To me, wireless continues to feel like magic after all these years: things happening instantly and invisibly across a distance, using physical properties which we cannot sense or detect…

LevelDB, MQTT, and Redis

In Software on Sep 27, 2013 at 00:01

To follow up on yesterday’s post, here are some more interesting properties of LevelDB, in the context of Node.js and HouseMon:

  • The levelup wrapper for Node supports streaming, i.e. traversing a set of entries for reading, as well as sending an entire set of PUTs and/or DELs as a stream of requests. This is a very nice match for Node.js and for interfaces which need to be sent across the network (including to/from browsers).

  • Events are generated for each change to the database. These can easily be translated into a “live stream” of changes. This is really nothing other than publish / subscribe, but again in a form which matches Node’s asynchronous design really well.

  • Standard encoding conventions can be defined for keys and/or values. In the case of HouseMon, I’m using UTF-8 as key encoding default and JSON as value encoding. The latter means that the default “stuff” stored in the database is simply a JavaScript object, and this is also what you get back on access and traversal. Individual cases can still override these defaults.

All these features and conventions end up playing together very nicely. One particularly interesting aspect is the “persistent pub-sub” model you can implement with this. This is an area where MQTT (“Message Queue Telemetry Transport”) has been gaining a lot of attention, for things like home automation and the “Internet of Things” (IoT).

MQTT is a message-based system. There are channels, named in a hierarchical manner, e.g. “/location/device/sensor”, to which one can publish measurement values (“publish /kitchen/roomnode/temp 17”). Other devices or software can subscribe to such channels, and get notified each time a value is published.

One interesting aspect of MQTT is its “RETAIN” functionality: for each channel, the last value can be retained, meaning that anyone subscribing to a channel will immediately be sent the last retained value, as if it just happened. This turns an ephemeral message-based setup into a state-based mechanism with change propagation, because you can subscribe to a channel after devices have published new values on it, yet still see what the best known value for that channel is, currently. With RETAIN, the MQTT system becomes a little database, with one value per channel. This can be very convenient after power-up as well.

LevelDB comes from the other side, i.e. a database with change tracking, but the effect is exactly the same: you can subscribe to a set of keys, get all their latest values, and be notified from that point on whenever any key in this range changes. This makes LevelDB and MQTT functionally equivalent, although LevelDB can handle much larger key spaces (note also that MQTT has features which LevelDB lacks, such as QoS).

It’s also interesting to compare this with the Redis database: Redis is another key-value store, but with more data structure options. The main difference with LevelDB, is that Redis is memory-based, with periodic snapshots to disk, whereas LevelDB is disk based, with caching in memory (and thus able to handle datasets which far exceed available RAM). Redis, LevelDB, and MQTT all support pub-sub in some form.

One reason for me to select LevelDB for HouseMon, is that it’s an in-process embedded database. This means that there is no need for a separate process to run alongside Node.js – and hence no need to install anything extra, or keep a background daemon running.

By switching to LevelDB, the HouseMon installation procedure has been simplified to:

  • download a recent version of HouseMon from GitHub
  • install Node (which includes the “npm” package manager)
  • run “npm install” to get everything needed by HouseMon

And that’s it. At this point, you just start up Node.js and everything will work.

But it’s not just convenience. LevelDB is also so compact, performant, and scalable, that HouseMon can now store all raw data as well as a complete set of aggregated per-hour values without running into resource limits, even on a Raspberry Pi. According to my rough estimates, it should all end up using less than 1 GB per year, so everything will easily fit on an 8 or 16 GB SD card.

The LevelDB database

In Software on Sep 26, 2013 at 00:01

In recent posts, I’ve mentioned LevelDB a few times – a database with some quite interesting properties.

Conceptually, LevelDB is ridiculously simple: a “key-value” store which stores all entries in sorted key order. Keys and values can be anything (including empty). LevelDB is not a toy database: it can handle hundreds of millions of keys, and does so very efficiently.

It’s completely up to the application how to use this storage scheme, and how to define key structure and sub-keys. I’m using a simple prefix-based mechanism: some name, a “~” character, and the rest of the key, of which each field can be separated with more “~” characters. The implicit sorting then makes it easy to get all the keys for any prefix.

LevelDB is good at only a few things, but these work together quite effectively:

  • fast traversal in sorted and reverse-sorted order, over any range of keys
  • fast writing, regardless of which keys are being written, for add / replace / delete
  • compact key storage, by compressing common prefix bytes
  • compact data storage, by a periodic compaction using a fast compression algorithm
  • modifications can be batched, with a guaranteed all-or-nothing success on completion
  • traversal is done in an isolated manner, i.e. freezing the state w.r.t. further changes

Note that there are essentially just a few operations: GET, PUT, DEL, BATCH (of PUTs and DELs), and forward/backward traversal.

Perhaps the most limiting property of LevelDB is that it’s an in-process embedded library, and that therefore only a single process can have the database open. This is not a multi-user or multi-process solution, although one could be created on top by adding a server interface to it. This is the same as for Redis, which runs as one separate process.

To give you a rough idea of how the underlying log-structured merge-tree algorithm works, assume you have a datafile with a sorted set of key-value pairs, written sequentially to file. In the case of LevelDB, these files are normally capped at 2 MB each, so there may be a number of files which together constitute the entire dataset.

Here’s a trivial example with three key-value pairs:


It’s easy to see that one can do fast read traversals either way in this data, by locating the start and end position (e.g. via binary search).

The writing part is where things become interesting. Instead of modifying the files, we collect changes in memory for a bit, and then write a second-level file with changes – again sorted, but now not only containing the PUTs but also the DELs, i.e. indicators to ignore the previous entries with the same keys.

Here’s the same example, but with two new keys, one replacement, and one deletion:


A full traversal would return the entries with keys A, AB, B, and D in this last example.

Together, these two levels of data constitute the new state of the database. Traversal now has to keep track of two cursors, moving through each level at the appropriate rate.

For more changes, we can add yet another level, but evidently things will degrade quickly if we keep doing this. Old data would never get dropped, and the number of levels would rapidly grow. The cleverness comes from the occasional merge steps inserted by LevelDB: once in a while, it takes two levels, runs through all entries in both of them, and produces a new merged level to replace those two levels. This cleans up all the obsolete and duplicate entries, while still keeping everything in sorted order.

The consequence of this process, is that most of the data will end up getting copied, a few times even. It’s very much like a generational garbage collector with from- and to-spaces. So there’s a trade-off in the amount of writing this leads to. But there is also a compression mechanism involved, which reduces the file sizes, especially when there is a lot of repetition across neighbouring key-value pairs.

So why would this be a good match for HouseMon and time series data in general?

Well, first of all, in terms of I/O this is extremely efficient for time-series data collection: a bunch of values with different keys and a timestamp, getting inserted all over the place. With traditional round-robin storage and slots, you end up touching each file in some spot all the time – this leads to a few bytes written in a block, and requires a full block read-modify-write cycle for every single change. With the LevelDB approach, all the changes will be written sequentially, with the merge process delayed (and batched!) to a different moment in time. I’m quite certain that storage of large amounts of real-time measurement data will scale substantially better this way than any traditional RRDB, while producing the same end results for access. The same argument applies to indexed B-Trees.

The second reason, is that data sorted on key and time is bound to be highly compressible: temperature, light levels, etc – they all tend to vary slowly in time. Even a fairly crude compression algorithm will work well when readings are laid out in that particular order.

And thirdly: when the keys have the time stamp at the end, we end up with the data laid out in an optimal way for graphing: any graph series will be a consecutive set of entries in the database, and that’s where LevelDB’s traversal works at its best. This will be the case for raw as well as aggregated data.

One drawback could be the delayed writing. This is the same issue with Redis, which only saves its data to file once in a while. But this is no big deal in HouseMon, since all the raw data is being appended to plain-text log files anyway – so on startup, all we have to do is replay the end of the last log file, to make sure everything ends up in LevelDB as well.

Stay tuned for a few other reasons on why LevelDB works well in Node.js …

In search of a good graph

In Software on Sep 24, 2013 at 00:01

I’ve been using the DyGraphs package for some time now in HouseMon, but it’s not perfect. Here is a generated graph for this morning, comparing two different ways to measure the same solar power generation:


The green series comes from a pulse counter which generates a pulse every 0.5 Wh. These values then get sent to HouseMon every 3 seconds. A fairly detailed set of readings, as you can see – this is a typical morning in autumn on a cloudy day.

The blue series comes from the SMA inverter, which I’m reading every 5..6 minutes. It might be more accurate, but there is far less data coming in.

The key is that the area (i.e. Wh) under these two lines should be identical, if both devices are accurately calibrated. But this doesn’t quite seem to be the case here. (see update)

One known problem is that DyGraphs is plotting these “step charts” incorrectly: the value of the step should be at the end of the step, but DyGraphs can only plot steps given a starting value. So the way to read these blue steps, I think, is to keep the points, but to think of the step lines as being drawn horizontally preceding that point, not trailing it. I’m not fully convinced that this explains all the differences in the above examples, but it would seem to be a better match.

I’ve been looking at Highcharts and Highstocks as an alternative graphing library to use. They seem to support steps-with-end-values. But this sort of use really has quite a few other requirements as well, such as: being able to draw large datasets efficiently, support for multiple independent time series (DyGraphs was a bit limited here as well), interactive and intuitive zooming, good automatic choice of axis scales and fonts.

It would also be nice to be able to efficiently redraw a graph when new data points come in. Graphs used to be live in HouseMon, but at some point I ran into a snag and never bothered to fix it. For me, right now, live updating is less important than being able to generate accurate graphs. What’s the point of showing results on screen after all, if those results are not really represented truthfully…

I’ll stay on the lookout for a better graphing solution. All suggestions welcome!

Update – See @ward’s comments below for the proper explanation of what’s going on here. It’s not a graphing problem after all (or rather: a different one…).

Animation in the browser

In Software on Sep 23, 2013 at 00:01

If you’ve programmed for Mac OS X (my actual experience in that area is fairly minimal), then you may know that it has a feature called Core Animation – as a way to implement all sorts of flashy smooth UI transitions – sliding, fading, colour changes, etc.

The basic idea of animation in this context is quite simple: go from state A to state B by varying one or more values gradually over a specified amount of time.

It looks like web browsers are latching onto this trend as well, with CSS Animations. It has been possible to implement all sorts of animations for some time, using timed JavaScript loops running in the background to simulate this sort of stuff, as jQuery has been doing. Then again, getting such flashy features into a whole set of independently-developed (and competing) web browsers is bound to take more time, but yeah… it’s happening. Here’s an illustration of what text can do in today’s browsers.

In Angular, browser-side animation has recently become quite a bit simpler.

I’m no big fan of attention-drawing screen effects. There was a time when HTML’s BLINK tag got so much, ehm, attention that every page tried to look like a neon sign. Oh, look, a new toy tag, and it’s BLINKING… neat, let’s use it everywhere!

Even slide transitions (will!) get boring after a while.

But with dynamic web pages updating in real time, it seems to me that there is a case to be made for subtle hints that something has just happened. As a trial, I used a fade transition (from jQuery) in the previous version of HouseMon to briefly change the background of a changed row to very light yellow, which then fades back to white:


The effect seems to work well – for me anyway: it’s easy to spot the latest changes in a table, without getting too distracted when looking at some specific data.

In the latest version of HouseMon, I wanted to try the same effect using the new Angular animation feature, and I must say: the basic mechanism is really simple – you make animations happen by defining the CSS start settings, the finish settings, and the time period plus transition style to use. In the case of adding a row to a table or deleting a row, it’s all built-in and done by just adding some CSS definitions, as described here.

In this particular case, things are slightly more involved because it’s not the addition or deletion of a row I wanted to animate, but the change of a timestamp on a row. This requires defining a custom Angular “directive”. Directives are the cornerstone of Angular: they can (re-) define the behaviour of HTML elements or attributes, and let you invent new ones to do just about anything possible in the DOM.

Unfortunately, directives can also be a bit obscure at times. Here is the one I ended up defining for HouseMon:

ng.directive 'highlightOnChange', ($animate) ->                                  
  (scope, elem, attrs) ->                                                        
    scope.$watch attrs.highlightOnChange, ->                                     
      $animate.addClass elem, 'highlight', ->                                    
        attrs.$removeClass 'highlight'   

With it, we can now add a custom highlightOnChange attribute to an element to implement highlighting whenever a value on the page changes.

Here is how it gets used (in Jade) on the status table rows shown in yesterday’s post:


In prose: when row.time changes, start the highlight transition, which adds a custom highlight class and removes it again at the end of the animation sequence.

The CSS then defines the actual animation (using Stylus notation):

.highlight-add
  transition 1s linear all
  background lighten(yellow, 75%)

  background white

I’m leaving out some finicky details, but that’s the essence of it all: set up an $animate call to trigger on the proper conditions, and then define what is going to happen in CSS.

Note that Angular picks the name suffixes for the CSS classes during the different phases of an animation, and the same mechanism can be used for numerous types of animation: you pick the CSS setting(s) at start and finish, and the browser’s CSS engine will automatically “run” the selected animation – covering motion, colour changes, and lots more (could fonts be emboldened gradually, or even be morphed in more sophisticated ways? I haven’t tried it).

The result of all this is a table with subtle row highlights wherever a new reading comes in. Triggered by some Angular magic, but the actual highlighting is a (not-yet-100%-standardised) browser effect.

Working with a CSS grid

In Software on Sep 22, 2013 at 00:01

As promised yesterday, here is some information on how you can get a page layout off the ground real quick with the Zurb Foundation 4 CSS framework. If you’re more into Twitter Bootstrap, see this excellent comparison and overview. They are very similar.

It took a little while to sink in, until I read this article about how it’s supposed to be used, and skimmed through a 10-part series on webdesign tuts+. Then the coin dropped:

  • you create rows of content, using a div with class row
  • each row has 12 columns to put stuff in
  • you put the sections of your page content in divs with class large-6 and columns
  • each section goes next to the preceding one, so 6 and 6 puts the split in the middle
  • any number from 1 to 12 goes, as long as you use at most 12 columns per row

So far so good, and pretty trivial. But here’s where it gets interesting:

  • each column can have more rows, and each row again offers 12 columns to lay out
  • for mobile use, you can add small-N classes as well, with other column allocations
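To make the above concrete, here is what such a nested grid could look like in Jade (hypothetical content, just illustrating the row/column classes):

```jade
.row
  .large-8.columns
    h1 Main content
  .large-4.columns
    .row
      .large-6.columns
        p Left half of the sidebar
      .large-6.columns
        p Right half of the sidebar
```

The outer row splits 8:4, and the second column is subdivided again into two halves of its own 12-column grid.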

I haven’t done much with this yet, but this page was trivial to generate:

Screen Shot 2013-09-20 at 00.57.16

The complete definition of that inner layout is in app/status/view.jade:

h3 Status
table
  tr
    th Driver
    th Origin
    th Name
    th Title
    th Value
    th Unit
    th Time
  tr(ng-repeat='row in status | filter:statusQuery | orderBy:"key"')
    td {{row.type}}
    td {{row.tag}}
    td {{}}
    td {{row.title}}
    td(align='right') {{row.value}}
    td {{row.unit}}
    td {{row.time | date:"MM-dd &nbsp; HH:mm:ss"}}

In prose:

  • the information shown uses the first 9 columns, (the remaining 3 are unused)
  • in those 9 columns, we have a header row and then the actual data table as row
  • the header is a h1 header on the left and an input on the right, split as 2:1
  • the table takes the full width (of those 9 columns in which it resides)
  • the rest is standard HTML in Jade notation, and an Angular ng-repeat loop
  • there’s also live row filtering in there, using some Angular conventions
  • the alternating row backgrounds and general table style are Foundation’s defaults

Actually, I left out one thing: the entries automatically get highlighted whenever the time stamp changes. This animation is a nice way to draw attention to updates – stay tuned.

Responsive CSS layouts

In Software on Sep 21, 2013 at 00:01

One of the things today’s web adds to the mix, is the need to address devices with widely varying screen sizes. Seeing a web site made for an “old” 1024×768 screen layout squished onto a mobile phone screen is not really great if everything just ends up smaller on-screen.

Enter grid-based CSS frameworks and responsive CSS.

There’s a nice site to quickly try designs at different sizes. Here’s how my current HouseMon screen looks on two different simulated screens:

Screen Shot 2013-09-19 at 23.45.02

Screen Shot 2013-09-20 at 00.17.54

It’s not just reflowing CSS elements (the bottom line stacks vertically on a small screen), it also completely alters the behaviour of the navigation bar at the top. In a way which works well on small screens: click in the top right corner to fold out the different menu choices.

Both Zurb Foundation and Twitter Bootstrap are “CSS frameworks” which provide a huge set of professionally designed UI elements – and now also do this responsive thing.

The key point here is that you don’t need to do much to get all this for free. It takes some fiddling and googling to set all the HTML sections up just right, but that’s it. In my case, I’m using Angular’s “ng-repeat” to generate the navigation bar menu entries on-the-fly (in Jade):

nav(ng-controller='NavCtrl')
          | &nbsp; {{}}
      li.myNav(ng-repeat='nav in navbar')
        a(ng-href='{{nav.route}}') {{nav.title}}
        a(href='/admin') Admin

Yeah, ok, pretty tricky stuff. But it’s all documented, and nowhere near the effort it’d take to do this sort of thing from scratch. Let alone get the style and visual layout consistent and working across all the different browsers and versions.

Tomorrow, I’ll describe the way grid layouts work with Zurb Foundation 4 – it’s sooo easy!

In praise of AngularJS

In Software on Sep 20, 2013 at 00:01

This is how AngularJS announces itself on its home page:

HTML enhanced for web apps!

Hm, ok, but not terribly informative. Here’s the next paragraph:

HTML is great for declaring static documents, but it falters when we try to use it for declaring dynamic views in web-applications. AngularJS lets you extend HTML vocabulary for your application. The resulting environment is extraordinarily expressive, readable, and quick to develop.

That’s better. Let me try to ease you into this “framework” on which HouseMon is based:

  • Angular (“NG”) is client-side only. It doesn’t exist on the server. It’s all browser stuff.
  • It doesn’t take over. You could use NG for parts of a page. HouseMon is 100% NG.
  • It comes with many concepts, conventions, and terms you must get used to. They rock.

I can think of two main stumbling blocks: “Dependency Injection” and all that “service”, “factory”, and “controller” stuff (not to mention “directives”). And Model-View-Something.

It’s not hard. It wasn’t designed to be hard. The terminology is pretty clever. I’d say that it was invented to produce a mental model which can be used and extended. And like a beautiful clock, it doesn’t start ticking until all the little pieces are positioned just right. Then it will run (and evolve!) forever.

Dependency injection is nothing new, it’s essentially another name for “require” (Node.js), “import” (Python), or “#include” (C/C++). Let’s take a fictional piece of CoffeeScript code:

value = 123
myCode = (foo, bar) -> foo * value + bar

console.log myCode(100, 45)

The output will be “12345”. But with dependency injection, we can do something else:

value = 123
myCode = (foo, bar) -> foo * value + bar

define 'foo', 100
define 'bar', 45
console.log inject(myCode)

This is not working code, but it illustrates how we could define things in some more global context, then call a function and give it access to that context. Note that the “inject” call does something very magical: it looks at the names of myCode’s arguments, and then somehow looks them up and finds the values to pass to myCode.

That’s dependency injection. And Angular uses it all over the place. It will often call functions through a special “injector” and get hold of all args expected by the function. One reason this works is because closures can be used to get local scope information into the function. Note how myCode had access to “value” via a normal JavaScript closure.

In short: dependency injection is a way for functions to import all the pieces they need. This makes them extremely modular (and it’s great for test suites, because you can inject special versions and mock objects to test each piece of code outside its normal context).

The other big hurdle I had to overcome when starting out with Angular, is all those services, providers, factories, controllers, and directives. As it turns out, that too is much simpler than it might seem:

  • Angular code is packaged as “modules” (I actually only use a single one in HouseMon)
  • in these modules, you define services, etc – more on this in a moment
  • the whole startup is orchestrated in a specific way: boot, config, run, fly
  • ok, well, not “fly” – that’s just my name for it…

The startup sequence is key – to understand what goes where, and what gets called when:

  • the browser starts by loading an HTML page with <script> tags in it (doh)
  • all the code needed on the browser side has to be loaded up front
  • what this code does is define modules, services, and so on
  • nothing is really “running” at this point, these are all function definitions
  • the last step is to call angular.bootstrap(document, ['myApp']);
  • at this point, all the config sections defined in the modules are run
  • once that is over, all the run sections are run, this is the real application start
  • when that is done, we in essence enter The Big Event Loop in the browser: lift off!
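The ordering above can be modelled with a toy sketch in plain JavaScript (my own illustration, not Angular’s code): modules merely collect config and run functions, and nothing executes until bootstrap is called:

```javascript
// Loading code only *defines* modules; bootstrap() then executes all config
// blocks first, and all run blocks after that - mirroring Angular's startup.
const modules = [];

function module_(name) {           // underscore avoids Node's own "module"
  const m = {
    name,
    configs: [],
    runs: [],
    config(fn) { m.configs.push(fn); return m; },
    run(fn) { m.runs.push(fn); return m; },
  };
  modules.push(m);
  return m;
}

function bootstrap() {
  const log = [];
  for (const m of modules)
    for (const fn of m.configs) log.push(fn());  // config phase
  for (const m of modules)
    for (const fn of m.runs) log.push(fn());     // run phase
  return log;
}

module_('myApp').config(() => 'config phase').run(() => 'run phase');
console.log(bootstrap()); // → [ 'config phase', 'run phase' ]
```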

So on the client side, i.e. the browser, the entire application needs to be structured as a set of definitions. A few things need to be done early on (config), a few more once everything has been defined and Angular’s “$rootScope” has been put in place (read: the main page context is ready, a bit like the DOM’s “document.onload”), and then it all starts doing real work through the controllers tied to various HTML elements in the DOM.

Providers, factories, services, and even constants and values, are merely variations on a theme. There is an excellent page by Matt Haggard describing these differences. It’s all smoke and mirrors, really. Everything except directives is a provider in Angular.

Angular is like the casing of a watch, with a powerful pre-loaded spring already in place. You “just” have to figure out the role of the different pieces, put a few of them in place, and it’ll run all by itself from then on. It’s one of the most elegant frameworks I’ve ever seen.

With HouseMon, I’ve put the main pieces in place to show a web page in the browser with a navigation top bar, and the hooks to tie into Primus and RPC. The rest is all up to the modules, i.e. the subfolders added to the app/ folder. Each file is an Angular module. HouseMon will automatically load them all into the browser (via a /primus/primus.js file generated on-the-fly by Primus). This has turned out to be extremely modular and flexible – and it couldn’t have been done without Angular.

If you’re looking for a “way in” for the HouseMon client-side code, then the place to start is app/index.jade, which uses Jade’s extend mechanism to tie into main/layout.jade, so that would be the second place to start. After that, it’s all Angular-structured code, i.e. views, view templates with controllers, and the services they call.

I’m quite excited by all this, because I’ve never before been able to develop software in such a truly modular fashion. The interfaces are high-level, clean (and clear) and all future functionality can now be added, extended, replaced, or even removed again – step by step.

Ground control to… browser

In Software on Sep 19, 2013 at 00:01

After yesterday’s RPC example, which makes the server an easy-to-use service in client code, let’s see how to do things in the other direction.

I should note that there is usually much less reason to do this, since at any point in time there can be zero, one, or occasionally more clients connected to the server in the first place. Still, it could be useful for a server to collect client status info, or to periodically save some state. The benefit of server-initiated requests is that they turn authentication around, assuming we trust the server.

Ok, so here’s some silly code on the client which I’d like to be able to call from the server:

ng = angular.module 'myApp'

ng.config ($stateProvider, navbarProvider, primus) ->
  $stateProvider.state 'view1',
    url: '/'
    templateUrl: 'view1/view.html'
    controller: 'View1Ctrl'
  navbarProvider.add '/', 'View1', 11
  primus.client.view1_twice = (x) ->
    2 * x

ng.controller 'View1Ctrl', ->

This is the “View 1” module, similar to the Admin module, exposing code for server use:

  primus.client.view1_twice = (x) ->
    2 * x

On the server side, the first thing to note is that we can’t just call this at any point in time. It’s only available when a client is connected. In fact, if more than one client is connected, then there will be multiple instances of this code. Here is app/view1/

module.exports = (app, plugin) ->
  app.on 'running', (primus) ->
    primus.on 'connection', (spark) ->
      spark.client('view1_twice', 123).then (res) ->
        console.log 'double', res

The output on the server console, whenever a client connects, is – rather unsurprisingly:

    double 246

What’s more interesting here, is the logic of that server-side code:

  • the app/view1/ exports one function, accepting two args
  • when loaded on startup, it will be called with the app and Primus plugin object
  • we’re in setup mode at this point, so this is a good time to define event handlers
  • the app.on 'running' handler gets called when all server setup is complete
  • now we can set up a handler to capture all new client connections
  • when such a connection comes in, Primus will hand us a “spark” to talk to
  • a spark is essentially a WebSocket session, it supports a few calls and streaming
  • now we can call the ‘view1_twice’ code on the client via RPC
  • this creates and returns a promise, since the RPC will take a little bit of time
  • we set it up so that when the promise is fulfilled, console.log ... gets called

And that’s it. As with yesterday’s setup, all the communication and handling of asynchronous callbacks is taken care of behind the scenes.

There is some asymmetry in naming conventions and the uses of promises, but here’s the summary:

  • to define a function ‘foo’ on the host, define it as = …
  • to call that function on the client, use host ‘foo’, …
  • the result is an Angular promise, use “.then …” to pick it up if needed
  • to define a function ‘bar’ on the client, define it as primus.client = …
  • to call that function from the server, use spark.client ‘bar’, …
  • the result is a q promise, use “.then …” to pick it up

Ok, so there are some slight differences between the two promise API’s. But the main benefit of this all is that the passing of time and networking involved is completely abstracted away. Maybe I’ll be able to unify this further, some day.

Promise-based RPC

In Software on Sep 18, 2013 at 00:01

Or maybe this post should have been titled: “Taming asynchronous RPC”.

Programming with asynchronous I/O is tricky. The proper solution, IMO, is to have full coroutine or generator support in the browser and in the server. This has been in the works for some time now (see ES6 Harmony). In short: coroutines and generators can “yield”, which means something that takes time can suspend the current stack state until it gets resumed. A bit like green (non-preemptive) threads, and with very little overhead.

But we ain’t there yet, and waiting for all the major browsers to reach that point might be a bit of a stretch (see the “Generators (yield)” entry on this page). The latest Node.js v0.11.7 development release does seem to support it, but that’s only half the story.

So promises it is for now. On the server side, this is available via Kris Kowal’s q package. On the client side, AngularJS includes promises as $q. And as mentioned before, I’m also using q-connection as a promise-based Remote Procedure Call (RPC) mechanism.

The result is really neat, IMO. First, RPC from the client, calling a function on the server. This is app/admin/ for the client side:

ng = angular.module 'myApp'

ng.config ($stateProvider) ->
  $stateProvider.state 'admin',
    url: '/admin'
    templateUrl: 'admin/view.html'
    controller: 'AdminCtrl'

ng.controller 'AdminCtrl', ($scope, host) ->
  $scope.hello = host 'admin_dbinfo'

This Angular boilerplate defines an Admin page with app/admin/view.jade template:

h3 Storage use
table
  tr
    th Group
    th Size
  tr(ng-repeat='(group,size) in hello')
    td {{group}}
    td(align='right') ≈ {{size}} b

The key is this one line:

$scope.hello = host 'admin_dbinfo'

That’s all there is to doing RPC with a round-trip over the network. And here’s the result, as shown in the browser (formatted with the Zurb Foundation CSS framework):

Screen Shot 2013-09-17 at 13.37.33

There’s more to it than meets the eye, but it’s all nicely tucked away: the host call does the request, and returns a promise. Angular knows how to deal with promises, so it will fill in the “hello” field when the reply arrives, asynchronously. IOW, the above code looks like synchronous code, but does lots of things at another point in time than you might think.

On the server, this is the app/admin/ code:

Q = require 'q'

module.exports = (app, plugin) ->
  app.on 'setup', -> = ->
      q = Q.defer()
      app.db.getPrefixDetails q.resolve
      q.promise

It’s slightly more involved because promises are more exposed. First, the RPC call is defined on application setup. When called, it creates a new promise, calls getPrefixDetails with a callback, and immediately returns this promise to fill in the result later.

At some point, getPrefixDetails will call the q.resolve callback with the result as argument, and the promise becomes fulfilled. The reply is then sent to the client by the q-connection package.
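The same deferred pattern can be written with plain JavaScript promises; here is a sketch, with getPrefixDetails reduced to a stand-in for the real database call:

```javascript
// Wrap a callback-style call so it returns a promise: the promise's resolve
// function simply *is* the callback handed to the asynchronous operation.
function getPrefixDetails(cb) {
  // stand-in for the real LevelDB scan: deliver a fake result asynchronously
  setImmediate(() => cb({ readings: 1024, status: 256 }));
}

function adminDbinfo() {
  return new Promise((resolve) => {
    getPrefixDetails(resolve);  // resolve doubles as the completion callback
  });
}

adminDbinfo().then((info) => console.log(info));
// → { readings: 1024, status: 256 }
```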

Note that there are three asynchronous calls involved in total:

    rpc call -> network send -> database action -> network reply -> show
                ============    ===============    =============

Each of the underlined steps is asynchronous and based on callbacks. Yet the only step we had to deal with explicitly on the server was that custom database action.

Tomorrow, I’ll show RPC in the other direction, i.e. the server calling client code.

Flow – the inside perspective

In Software on Sep 17, 2013 at 00:01

This is the last of a four-part series on designing big apps (“big” as in not embedded, not necessarily many lines of code – on the contrary, in fact).

The current 0.8 version of HouseMon is my first foray into the world of streams and promises, on the host server as well as in the browser client.

First a note about source code structure: HouseMon consists of a set of subfolders in the app folder, and differs from the previous SocketStream-based design in that dependency info, client-side code, client-side layout files + images, and host-side code are now all grouped per module, instead of strewn across client, static, and server folders.

The “main” module starts the ball rolling, the other modules mostly just register themselves in a global app “registry”, which is simply a tree of nested key/value mappings. Here’s a somewhat stripped down version of app/main/

module.exports = (app, plugin) ->
  app.register 'nodemap.rf12-868,42,2', 'testnode'
  app.register 'nodemap.rf12-2', 'roomnode'
  app.register 'nodemap.rf12-3', 'radioblip'
  # etc...

  app.on 'running', ->
    Serial = @registry.interface.serial
    Logger = @registry.sink.logger
    Parser = @registry.pipe.parser
    Dispatcher = @registry.pipe.dispatcher
    ReadingLog = @registry.sink.readinglog

    app.db.on 'put', (key, val) ->
      console.log 'db:', key, '=', val

    jeelink = new Serial('usb-A900ad5m').on 'open', ->

      jeelink # log raw data to file, as timestamped lines of text
        .pipe(new Logger) # sink, can't chain this further

      jeelink # log decoded readings as entries in the database
        .pipe(new Parser)
        .pipe(new Dispatcher)
        .pipe(new ReadingLog app.db)

This is all server-side stuff and many details of the API are still in flux. It’s reading data from a JeeLink via USB serial, at which point we get a stream of messages of the form:

{ dev: 'usb-A900ad5m', msg: 'OK 2 116 254 1 0 121 15' }

(this can be seen by inserting “.on('data', console.log)” in the above pipeline)

These messages get sent to a “Logger” stream, which saves things to file in raw form:

L 17:18:49.618 usb-A900ad5m OK 2 116 254 1 0 121 15

One nice property of Node streams is that you can connect them to multiple outputs, and they’ll each receive these messages. So in the code above, the messages are also sent to a pipeline of “transform” streams.

The “Parser” stream understands RF12demo’s text output lines, and transforms it into this:

{ dev: 'usb-A900ad5m',
  msg: <Buffer 02 74 fe 01 00 79 0f>,
  type: 'rf12-868,42,2' }

Now each message has a type and a binary payload. The next step takes this through a “Dispatcher”, which looks up the type in the registry to tag it as a “testnode” and then processes this data using the definition stored in app/drivers/

module.exports = (app) ->

  app.register 'driver.testnode',
    in: 'Buffer'
    out:
      batt:
        title: 'Battery status', unit: 'V', scale: 3, min: 0, max: 5

    decode: (data) ->
      { batt: data.msg.readUInt16LE 5 }

A lot of this is meta information about the results decoded by this particular driver. The result of the dispatch process is a message like this:

{ dev: 'usb-A900ad5m',
  msg: { batt: 3961 },
  type: 'testnode',
  tag: 'rf12-868,42,2' }

The data has now been decoded. The result is one parameter in this case. At the end of the pipeline is the “ReadingLog” write stream which timestamps the data and saves it into the database. To see what’s going on, I’ve added an “on ‘put'” handler, which shows all changes to the database, such as:

db: reading~testnode~rf12-868,42,2~1379353286905 = { batt: 3961 }

The ~ separator is a convention with LevelDB to partition the key namespace. Since the timestamp is included in the key, every entry ends up stored separately, i.e. this acts as historical storage.
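A quick sketch of that key convention (a hypothetical helper, not part of HouseMon): because the parts are joined with ~, LevelDB’s lexicographic key order groups readings by type and tag, sorted by timestamp:

```javascript
// Build a LevelDB key from its parts, '~'-separated, timestamp last.
function readingKey(type, tag, time) {
  return ['reading', type, tag, time].join('~');
}

console.log(readingKey('testnode', 'rf12-868,42,2', 1379353286905));
// → reading~testnode~rf12-868,42,2~1379353286905
```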

This was just a first example. The “StatusTable” stream takes a reading such as this:

db: reading~roomnode~rf12-868,5,24~1379344595028 = {
  "light": 20,
  "humi": 68,
  "moved": 1,
  "temp": 143

… and stores each value separately:

db: status~roomnode/rf12-868,5,24/light = {
// and 3 more...

Here, the information is placed inside the message, including the time stamp. Keys must be unique, so in this case only the last value will be kept in the database. In other words: this fills a “status” table with the latest value of each measurement value.

As last example, here is a pipeline which replays a saved log file, decodes it, and treats it as new readings (great for testing, but useless in production, clearly):

createLogStream = @registry.source.logstream

  # .pipe(new Replayer)
  .pipe(new Parser)
  .pipe(new Dispatcher)
  .pipe(new StatusTable app.db)

Unlike the previous HouseMon design, this now properly deals with back-pressure.

The “Replayer” stream is a fun one: it takes each message and delays passing it through (with an updated timestamp), so that this stream will simulate a real-time feed with new messages trickling in. Without it, the above pipeline processes the file as fast as it can.

The next step will be to connect a change stream through the WebSocket to the client, so that live status information can be displayed on-screen, using exactly the same techniques.

As you can see, it’s streams all the way down. Onwards!

Flow – the application perspective

In Software on Sep 16, 2013 at 00:01

This is the third of a four-part series on designing big apps (“big” as in not embedded, not necessarily many lines of code – on the contrary, in fact).

Because I couldn’t fit it in three parts after all.

Let’s talk about the big picture, in terms of technologies. What goes where, and such.

A decade or so ago, this would have been a decent model of how web apps work, I think:


All the pages, styles, images, and some code are fetched as HTTP requests and rendered on the server, which sits between the persistent state on the server (files and databases), and the network-facing end.

A RESTful design would then include a clean structure w.r.t. how that state on the server is accessed and altered. Then we throw in some behind-the-scenes logic with Ajax to make pages more responsive. This evolved into general-purpose client-side JavaScript libraries like jQuery, to get lots of the repetitiveness out of the developer’s workflow. Great stuff.

The assumption here is that servers are big and fast, and that they work reliably, whereas clients vary in capabilities, versions, and performance, and that network connection stability needs to stay out of the “essence” of the application logic.

In a way, it works well. This is the context in which database-backed websites became all the rage: not just a few files and templating to tweak the website, but the essence of the application is stored in a database, with “widgets”, “blocks”, “groups”, and “panes” sewn together by an ever-more elaborate server-side framework – Ruby on Rails, Django, Drupal, etc. WordPress and Redmine too, which I gratefully rely on for the JeeLabs sites.

But there’s something odd going on: a few days ago, the WordPress server VM which runs this daily weblog here at JeeLabs crashed on an out-of-memory error. I used to reserve 512 MB RAM for it, but had scaled it back to 256 MB before the summer break. So 256 MB is not enough apparently, to present a couple of weblog pages and images, and handle some text searches and a simple commenting system.

(We just passed 6000 comments the other day. It’s great to see y’all involved – thanks!)

Ok, so what’s a measly 512 MB, eh?

Well, to me it just doesn’t add up. Ok, so there’s a few hundred MB of images by now. Total. The server is running off an SSD disk, so we should be able to easily serve images with say 25 MB of RAM. But why does WP “need” hundreds of MB’s of RAM to serve a few dozen MB’s of daily weblog posts? Total.

It doesn’t add up. Self-inflicted overhead, for what is 95% a trivial task: serving the same texts and images over and over again to visitors of this website (and automated spiders).

The Redmine project setup is even weirder: currently consuming nearly 700 MB of RAM, yet this ought to be a trivial task, which could probably be served entirely out of say 50 MB of RAM. A tenfold resource consumption difference.

In the context of the home monitoring and automation I’d like to take a bit further, this sort of resource waste is no longer a luxury I can afford, since my aim is to run HouseMon on a very low-end Linux board (because of its low power consumption, and because it really doesn’t do much in terms of computation). Ok, so maybe we need a bit more than 640 KB, but hey… three orders of magnitude?

In this context, I think we can do better. Instead of a large server-side byte-shuffling engine, we can now do this – courtesy of modern browsers, Node.js, and AngularJS:


The server side has been reduced to its bare minimum: every “asset” that doesn’t change gets served as is (from files, a database, whatever). This includes HTML, CSS, JavaScript, images, plain text, and any other “document” type blob of information. Nginx is a great example of how a tiny web server based on async I/O and written in C can take care of any load we down-to-earth netizens are likely to encounter.

Let me stress this point: there is no dynamic “templating” or “page generation” on the server side. This takes place in the browser – abstracted away by AngularJS and the DOM.

In parallel, there’s a “Real-Time Server” running, which handles the application logic and everything that needs to be dynamic: configuration! status! measurements! commands! I’m using Node.js, and I’ve started using the fascinating LevelDB database system for all data persistence. The clever bit of LevelDB is that it doesn’t just fetch and store data, it actually lets you keep flows running with changes streamed in and out of it (see the level-livefeed and multilevel projects).

So instead of copying data in and out of a persistent data store, this also becomes a way of staying up to date with all changes on that data. The fusion of a database and pubsub.

On the client side, AngularJS takes care of the bi-directional flow between state (model) and display (view), and Primus acts as generic connection library for getting all this activity across the net – with connection keep-alive and automatic reconnect. Lastly, I’m thinking of incorporating the q-connection library for managing all asynchronicity. It’s fascinating to see network round trip delays being woven into the fabric of an (async / event-driven) JavaScript application. With promises, client/server apps are starting to feel like a single-system application again. It all looks quite, ehm… promising :)

Lots of bits that need to work together. So far, I’m really delighted by the flexibility this offers, and the fact that the server side is getting simpler: just the autonomous data acquisition, a scalable database, a responsive WebSocket server to make the clients (i.e. browsers) come alive, and everything else served as static data. State is confined to the essence of the app. The rest doesn’t change (other than during development of course), needs no big backup solution, and the whole kaboodle should easily fit on a Raspberry Pi – maybe something even leaner than that, one day.

The last instalment tomorrow will be about how HouseMon is structured internally.

PS. I’m not saying this is the only way to go. Just glad to have found a working concoction.

Flow – the developer perspective

In Software on Sep 15, 2013 at 00:01

This is the second of a four-part series on designing big apps (“big” as in not embedded, not necessarily many lines of code – on the contrary, in fact).

Software development is all about “flow”, in two very different ways:

I’ve been looking at lots of options on how to address that first bit, i.e. how to set up an environment which can help me through the edit-run-debug cycle as smoothly and quickly as possible. I didn’t find anything that really fits my needs, so as any good ol’ software developer would do, I scratched this itch recently and ended up creating a new tool.

It’s all about flow. Iterating through the coding process without pushing more buttons or keys than needed. Here’s how it works – there are three kinds of files I need to deal with:

  • HTML – app pages and docs, in the form of Jade and Markdown text files
  • CSS – layout and visual design, in the form of Stylus text files
  • CODE – the logic of it all, host- and client-side, in the form of CoffeeScript text files

None of these is the real thing, in the sense that they are all dialects which need to be translated to HTML, CSS, and JavaScript, respectively. But I don’t want a solution which introduces lots of extra steps into that oh-so treasured edit-run-debug cycle of software development. No temporary files, please – let’s generate everything as needed on-the-fly, and let’s set up a single task to take care of it all.

The big issue here is when to do all that pre- (and re-) processing. Again, since staying in the flow is the goal, that really should happen whenever needed and as quickly as possible. The workflow I’ve found to suit me well is as follows:

  • place all the development source files into one nested area (doh!)
  • have the system watch for changes to any files in this area
  • if it’s a HTML file change: reload the browser
  • if it’s a CSS file change: update the browser (no need to reload)
  • if it’s a CODE file change: restart the server and reload the browser

The term for this is “live reload”. It’s been in use for ages by web developers. You save a file, and BAM … the browser updates itself. But that’s only half the story with a client + server setup, as you may also have to restart the server. In the past, I’ve done so using the nodemon utility. It works, but along with other tools to watch for changes and pre-compile to the desired format, it all got a bit clunky.

In the new HouseMon project, it’s all done behind the scenes in a single process. Well, two actually: when in development mode, HouseMon forks itself into a supervisor and a worker child. The supervisor just restarts the worker whenever it exits. And the worker watches for file changes and either reloads or adjusts the browser, or exits. The latter becomes a way for the worker to quickly restart itself from scratch due to a code change.

The result works phenomenally well: I edit in MacVim, multiple files even, and nothing happens. I can bring up the Dash reference guides to look up something. Still nothing happens. But when I switch away from MacVim, it’s set up to save all changes, and everything then kicks in on auto-pilot. Instantly or a few seconds later (depending on the change), the latest changes will be running. The browser may lose a little of its context, but the URL stays the same, so I’m still looking at the same page. The console is open, and I can see what’s going on there. Without ever switching to the browser (ok, I need to switch away from MacVim to something else, but often that will be the command line).

It’s not yet the same as live in-app development (as was possible with Tcl), but it’s getting mighty close, because the app returns to where it was before without pushing any buttons.

There’s one other trick in this setup which is a huge time-saver: whenever the supervisor is launched (“node .”), it scans through all the folders inside the main ./app folder, and collects npm and bower package files as well as Makefile files. All dependencies and requirements are then processed as needed, making the use of 3rd party packages virtually effortless – on the server and in the client. Total automation. Look ma, no hands!

None of this machinery needs to be present in production mode. All this reloading and package dependency handling stuff can then be ignored. Deployment just needs a bunch of static files, served through Node.js, Nginx, Apache, whatever – plus a light-weight Node.js server for the host-side logic, using WebSockets or some fallback mechanism.

As for the big picture on how HouseMon itself is structured… more coming, tomorrow.

Flow – the user perspective

In Software on Sep 14, 2013 at 00:01

This is the first of a four-part series on designing big apps (“big” as in not embedded, not necessarily many lines of code – on the contrary, in fact).

Ok, so maybe what follows is not the user perspective, but my user perspective?

I’m not a fan of clicking buttons. It’s a computer invention, and it sucks.

Interfaces (I’m not keen on the term user interfaces either, but it’s more to the point than just “interfaces”) – anyway, interfaces need interaction to indicate what information you want to see, and to perform a real action, i.e. something that has effect on some permanent state. From setting up preferences or a configuration, to turning on a light.

But apart from that, interaction to fetch or update something is tedious, unwanted, silly, and a waste of time. Pushing a button to see the time is 2,718,281 times more inconvenient than looking at the time. Just like asking for something involves a lot more interaction (and social subtleties) than having the liberty to just do it.

When I look outside the window, I expect to see the actual weather. And when I see information on a screen, I expect it to be up-to-date. I don’t care if the browser does constant refreshes to pull changes from a server or whether it’s set up to respond to push notifications coming in behind the scenes. When I see “22.7°C”, it needs to be a real reading, with a known and acceptable lag and accuracy – unless it’s historical data.

This goes a lot further than numbers and graphs. When I see a button, then to me that is a promise (and invitation) that some action will be taken when I push it. Touch on mobile devices has the advantage here, although clicking via the mouse or typing a power-user keyboard shortcut is often perfectly acceptable – even if slightly more indirect.

What I don’t want is a button which does nothing, or generates an error message. If that button isn’t intended to be used, then it should be disabled or it shouldn’t be there at all:

Screen Shot 2013-09-13 at 11.56.54

Screen Shot 2013-09-13 at 11.57.17

That’s the “Graphs” install page in HouseMon (the wording could have been better).

When a list or graph of readings is shown on-screen, this needs to auto-update in real time. That’s the whole point of HouseMon – access to specific information (that choice is a preference, and will require interaction) to see what’s going on. Now – or within the last few minutes, in the case of sensors which are periodically sending out their readings. A value without an indication of its “up-to-date-ness” is useless. That could be either implicit by displaying it in some special way if the information is stale (slowly turning red?), or explicit, by displaying a time stamp next to it (a bit ugly and tedious).

Would you want it any other way when doing online banking? Would you accept seeing your account balance without a guarantee that it is recent? Nah, me neither :) – so why do we put up with web pages that show copies of some information, at some point in time, getting obsolete the very moment that page is shown?

When “on the web” – as a user – I want to deal with accurate information. When I see something tagged as “updated 2 minutes ago”, then I want to see that “2” change into a “3” within the next minute. See GitHub for an example of this. It works, it makes sense, and it makes the whole technology get out of the way: the web should be an intermediary between some information and me. All the stuff in between is secondary.
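To illustrate, here's a tiny hand-rolled JavaScript sketch of such a live "updated N minutes ago" label (the function name is made up, and real apps would use a library for the formatting):

```javascript
// Hypothetical sketch: format a reading's age as a relative-time label.
function relTime(ageMs) {
  var mins = Math.floor(ageMs / 60000);
  if (mins < 1) return 'updated just now';
  return 'updated ' + mins + ' minute' + (mins === 1 ? '' : 's') + ' ago';
}

// re-rendering periodically is what keeps the "2" turning into a "3":
// setInterval(function () {
//   label.textContent = relTime( - lastReadingTime);
// }, 30000);

console.log(relTime(30 * 1000));     // updated just now
console.log(relTime(2 * 60 * 1000)); // updated 2 minutes ago
```

A trivial amount of code, in other words – the hard part is the mindset of treating every displayed value as something with an age.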

Information needs to update – we need to stop copying it into a web page, and send that copy off to the browser. All the lipstick in the world won’t help if what we see is a copy of what we’re looking for. As developers, we need to stop making web sites fancy – we should make them live first and then make them gorgeous. That’s what RIA‘s and SPA‘s are about.

One more thing – not only does information need to flow, these flows should be visualised:

Screen Shot 2013-09-13 at 11.02.15

Such a user interface can be used during development to manage data flows and to design software which ties each result shown on-screen back to its origin. In tomorrow’s apps, every change should propagate – all the way to the pixels shown on-screen by the browser.

Now let’s go back to static web servers to make it happen. Stay tuned…

Gearing up for the new web

In Software on Sep 13, 2013 at 00:01

Well… new for me, anyway. What a fascinating world in motion we live in:

  • first there was the “pre-web”, with email, BBS’es and very clunky modem links
  • then came Netscape Navigator – and the world would never be the same again
  • hey, we can make the results dynamic, let’s generate pages through CGI
  • and not just outgoing, we can have forms and page refreshes to interact!
  • we need lots of ways to generate page variations, let’s add modules to the server
  • nah, we can do better than that: let’s add PHP / Python / Ruby inside the server!
  • and so the first major web explosion of the internet was born…

It took a decade, and another decade to make fast internet access mainstream, which is where we are today. There are lots and lots and LOTS of “web frameworks” in existence now, for any programming language you care about.

The second major wave of evolution came with a whole bunch of new acronyms, such as RIA and SPA. This is very much Google’s turf, and this is the technology which powers Gmail, for example. Rich and responsive interactivity. It’s going everywhere by now.

It was made possible by the super-trio of HTML, CSS, and JS (JavaScript). They’ve all evolved to define the structure, style, and logic of everything you see in the browser.

It’s hard to keep up, but as you know, I’ve picked two very active technologies for doing all my new software development: AngularJS and Node.js. Both by Google, by the way – but also both freely available as wide-open source (here and here). They’ve changed my life :)

It’s been a steep learning curve. I don’t mean just getting something going. I mean getting comfortable with it all. Understanding the idioms and quirks (there always are), and figuring out how to debug stuff (async is nasty, especially when not using promises). Can’t say I’m in the clear yet, but the fog is lifting and the fun level is rising!

I started coding HouseMon many months ago and it’s been running here for a long time now, giving me a chance to let it all sink in. The reality is: I’ve been doing it all wrong.

That’s not necessarily a bad thing, by the way: ya gotta learn stuff by doin’, right?

Ok, so I’ve been using Events as the main decoupling mechanism in HouseMon, i.e. publish and subscribe inside the app. The beauty of this is that it lets you keep the functionality of the application highly modular. Each subsystem subscribes to what it is interested in, and publishes results when they are available. No need to consider who needs this, or even how many subscribers there will be for what gets published.

It sounds great, and in a way it is, but it is a bit like a loose cannon: events fire all over the place, with little insight into what is going on and when. For some apps, this is probably great, but with fairly stable continuous processing flows as in a home monitoring setup, it’s more chaotic than need be. It all works fine “in the small”, i.e. when writing little modules (“Briqs” in HouseMon-speak) and adding minor drivers / decoders, but I constantly felt lost with respect to the big picture. And then there’s this nasty back pressure dilemma.

But now the pieces are starting to fall into place. I’ve figured out the “big” structure of the HouseMon application as well as the “little” inner organisation of it all.

It’s all about flow. Four stories coming up – stay tuned…

My development setup – utilities

In Software on Sep 12, 2013 at 00:01

As promised, one final post about my development setup – these are all utilities for Mac OSX. There are no doubt a number of equivalents for other platforms. Here’s my top 10:

  • Alfred – Keyboard commands for what normally requires the mouse. I have shortcuts such as cmd-opt-F1 for iTerm, cmd-opt-F2 for MacVim. It launches the app if needed, and then either brings it to the front or hides it. Highly configurable, although I don’t really use that many features – just things like “f blah” to see the list of all folder names matching “blah”. Apart from that, it’s really just an alternative for Spotlight (local disk searches) + Google (or DuckDuckGo) + app launcher. Has earned the #1 keyboard shortcut spot: cmd+space. Top time-saver.

  • HomeBrew – This is the package manager which handles (almost) anything from the Unix world. Installing node.js (w/ npm) is brew install node. Installing MacVim is brew install macvim. Updates too. The big win is that all these installs are compartmentalised, so they can be uninstalled individually as well. Empowering.

  • Dash – Technical reference guides at your fingertips. New guides and updates added a few times a week. Looking for the Date API in JavaScript is a matter of typing cmd-opt-F8 to bring up Dash, and then “js:date”. My developer-pedia.

  • GitHub – Actually, it’s called “GitHub for Mac” – a tool to let you manage all the daily git tasks without ever having to learn the intricacies of git. It handles the 90% most common actions, and it does those well and safely. Push early, push often!

  • SourceTree – Since I’m no git guru, I use SourceTree to help me out with more advanced git actions, with the GUI helping me along and (hopefully) preventing silly mistakes. Every step is trackable, and undoable, but when you don’t know what you’re doing with git, it can become a real mess in the project history. This holds my hand.

  • Dropbox – I’m not into iCloud (too much of a walled garden for me), but for certain things I still need a way to keep things in sync between different machines, and with a few people around the world. Dropbox does that well (though BitTorrent Sync may well replace it one day). Mighty convenient.

  • 1Password – One of the many things that get synced through Dropbox are all my passwords, logins, bank details, and other little secrets. Securely protected by a strong password of course. Main benefit of 1P is that it integrates well with all the browsers, so logging in anywhere, or even coming up with new arbitrary passwords is a piece of cake. Also syncs with mobile. Indispensable.

  • DevonThink – This is my heavyweight external brain. I dump everything in there which needs to be retained. Although plain local search is getting so good and fast that DT is no longer an absolute must, it partitions and organises my files in separate databases (all plain files on disk, so no extra risk of data loss). Intelligent storage.

  • CrashPlan – Nobody should be without (automated, offline) backups. CrashPlan takes care of all the machines around the house. I sleep much better at night because of it. Its only big drawback is that there’s always a 500 MB background process to make this happen (fat Java worker, apparently), and that it burns some serious CPU cycles when checking, or catching up, or cleaning up – whatever. Still, essential.

  • Textual – Yeah, well, ok, this one’s an infrequent member on this list: IRC chat, on the #jeelabs channel for example. Not used much, but when it’s needed, it’s there to get some collaboration done. I’m thinking of hooking this into some sort of personal agents, maybe even have a channel where my house chats about the more public readings coming in. Social, on demand.

The point of all the things I’ve been mentioning in these past few posts, is that it’s all about forming habits, so that a different part of the brain starts doing some of the work for you. There is a perfect French term for getting tools to work for you that way:

  • prolongements du corps – literally “extensions of the body” (a great example of this is driving a car)

This is my main motivation for adopting touch typing and in particular for using the command-line shell and a keyboard-command based text editor. It (still) doesn’t come easy, but every day I notice some little improvement. As always: Your Mileage May Vary.

Ok, that wraps it up for now. Any other tools you consider really useful? Let me know…

My development setup – software

In Software on Sep 10, 2013 at 00:01

To follow up on yesterday’s post, here are the different software packages in my laptop-based development setup:

  • MacVim is my editor of choice these days. Now that I’ve switched to touch-typing mode, and given that I’ve used vi(m) for as long as I can remember, this is now growing on me every day. It’s all about muscle memory, and off-loading activities to have more spare brain cycles left for the task at hand.

    I chose MacVim over pure vim in the command line for one reason: it can be set to save all files on focus loss, i.e. when switching another app to the foreground. Tried a dark background for a while, but settled on a light one in the end. Dark backgrounds caused trouble with screen reflections when sitting with my back to a window.

  • TextMate is the second editor I use, but exclusively in browse mode. It can be fired up from the command line (e.g. cd housemon && mate .) and makes it very easy to skim and search across large source collections. Having two entirely different contexts means I can keep vim and its history focused on the code I’m actually working on. I only started using this dual-editor approach a few months ago, but by now it’s become a habit to start them both up – on different areas of the screen.

  • Google Chrome is my main development web browser. I settled on it (over Safari) for mostly one reason: it supports having the developer console pane open to the side of the main screen. There are plenty of dev tools included, from browsing the DOM structure to various performance graphs. Using a dedicated browser for development lets me flush the cache and muck with settings without disrupting other web activities.

  • Safari is my secondary browser. This is the one which keeps history, knows how to log into sites, and ends up having lots of tabs and windows open to keep track of all the distractions on the web. Reference docs, searching for information, and all the other time sinks the web has to offer. It drops nicely to the background when I want to get back to the real thing, but it’s there alongside MacVim whenever I need it.

  • iTerm 2 is what I use as command line. The only reason I’m using this instead of the built-in Terminal application, is that it supports split panes (I don’t use screen or tmux much). It’s set up to start up with two windows side-by-side, because tabs are not always convenient: sometimes you really need to see a few things at the same time.

    I do use tabs in all these apps, dragging them around between windows when needed. Sometimes, I’ll open a new iTerm window and make it much larger (all from the keyboard), to see all the output of a ps or grep command, for example.

  • nvAlt is a fork of Notational Velocity – a note-taking app which stores each note as a plain text file. Its main feature is that it starts up and searches everything instantly. Really. In the blink of an eye, I can find all the notes I wrote about anything. I use it for logging how I installed some tricky software and for lots and lots of lists with design ideas. Plain text. Instant. Clickable URLs. Shared through Dropbox. Perfect.

There’s a lot more to it, but these are the six tools I now use day in day out for software development. Even for embedded work, I’ll use the Arduino IDE only as compile + upload step, editing code in MacVim nowadays – because it has so much more functionality.

All of the above are either included in Mac OSX or open source software. I’m inclined to use OSS as a way to keep my learning investments safe. If ever one of these tools ends up going nowhere, or in an unwanted direction, then chances are that some developer(s) will fork the project to keep things on track (as has already happened with TextMate, iTerm, and nvAlt). In the long term, these core tools are simply too important…

My development setup

In Software on Sep 9, 2013 at 00:01

The tools I use for software development have evolved greatly over the years. Some have been replaced, others have simply gotten better, much better, over time. I’d like to describe the setup a bit in the next few posts – and why / how I made those choices. “YMMV”, as they say, but I think it could be of interest to other developers, whether soon-to-be or seasoned switchers. Context: I develop in (for?) JavaScript (server- and browser-side).

Ok, here goes… the main tools I use are:

  • 2 editors – yes, two…
  • 2 browsers – yes, two…
  • 2 command-line windows – yes, two…
  • a note-taking app (sorry, just one :)

Here is a snapshot of all the windows, taken at a fairly arbitrary moment in time:


And this is the setup in actual use:


Tomorrow, I’ll go over the different bits and pieces, and how they work out for me.

My software

In Software on Sep 7, 2013 at 00:01

Most of the software I write, and most of what I’ve written in the past, has gradually been making its way into GitHub. It’s a fantastic free resource for managing code (with no risk of lock-in – ever – since all git history gets copied), and it’s becoming more and more effective at collaboration as well, with a superb issue tracker and excellent searching and statistics facilities. Over the years, each developer home page has evolved into an excellent intro on what he/she has been up to lately:

Screen Shot 2013-09-03 at 14.58.03

As of recently, I’ve turned things up to 11 by giving everyone who submits a pull request full developer access (it’s a manual process, but effortless). As a result, if you submit a pull request, then whenever I merge it into the official repository, I’ll also add you as developer so that you no longer have to fork to get changes in. Just clone the original repo, and push changes back in.

It may sound crazy, but it really isn’t (and it’s not my idea, although I forgot where I first read about it). The point being that contributors are scarce, and good contributors who actually help take a project forward are scarce as gold. Not because there aren’t any, but because few people take the time to actually contribute and participate in a project created by someone else (shame on me for often being like that!). Time is the new scarce resource, so helping others become more productive by not having to go through the pull-request tango to add and tweak stuff really makes sense to me. Let alone giving real credit to people, by not only accepting their gifts but also giving back visibility and responsibility.

I don’t think there is a risk. Every serious developer will check with the team members before committing a contentious or disruptive change, and even then, all it takes is to create a branch and do the work there (while still letting others see what it’s about).

The other change is that I recently discovered the DocumentUp site, which can generate a very nice front page for any project on GitHub, from just its README page – like this one.

This is so convenient that I’ve set up a website with pointers for most of my code:

Screen Shot 2013-09-03 at 13.45.33

There! That URL ought to be fairly easy to remember :)

Another site (service?) which presents (remixes?) GitHub issues in a neat way is

Screen Shot 2013-09-06 at 00.16.16

There’s a lot more going on with GitHub. Did you know that all issue trackers support live chat? Just keep an issue page open, and you’ll see new comments come in – in real time! Perfect for a quick back-and-forth discussion about some intricate problem. Adding images like screen shots is equally easy: just drag and drop the image into the discussion page.

Did you know that each GitHub project can manage releases, either as is or with detailed information per release? Either way, you get a list of versioned ZIP and TAR archives, simply by adding public tags in Git. And even without tagged releases, there’s always a “Download ZIP” link on the right side of each project page, so you don’t even have to install or use git at all to obtain the latest version of that code.

Screen Shot 2013-09-03 at 15.23.37

There are some 4 million public projects on GitHub. Not all exciting, but tons of them are!

Decoding bit fields – part 2

In Software on Sep 6, 2013 at 00:01

To follow up on yesterday’s post, here is some code to extract a bit field. First, a refresher:


The field marked in blue is the value we’d like to extract. One way to do this is to “manually” pick each of the pieces and assemble them into the desired form:


Here is a little CoffeeScript test version:

buf = new Buffer [23, 125, 22, 2]

for i in [0..3]
  console.log "buf[#{i}] = #{buf[i]} = #{buf[i].toString(2)} b"

x = buf.readUInt32LE 0
console.log "read as 32-bit LE = #{x.toString 2} b"

b1 = buf[1] >> 7
b2 = buf[2]
b3 = buf[3] & 0b111
w = b1 + (b2 << 1) + (b3 << 9)
console.log "extracted from buf: #{w} = #{w.toString 2} b"

v = (x >> 15) & 0b111111111111
console.log "extracted from int: #{v} = #{v.toString 2} b"

Or, if you prefer JavaScript, the same thing:

var buf = new Buffer([23, 125, 22, 2]);

for (var i = 0; i <= 3; ++i) {
  console.log("buf["+i+"] = "+buf[i]+" = "+buf[i].toString(2)+" b");
}

var x = buf.readUInt32LE(0);
console.log("read as 32-bit LE = "+x.toString(2)+" b");

var b1 = buf[1] >> 7;
var b2 = buf[2];
var b3 = buf[3] & 0x7;
var w = b1 + (b2 << 1) + (b3 << 9);
console.log("extracted from buf: "+w+" = "+w.toString(2)+" b");

var v = (x >> 15) & 0xfff;
console.log("extracted from int: "+v+" = "+v.toString(2)+" b");

The output is:

    buf[0] = 23 = 10111 b
    buf[1] = 125 = 1111101 b
    buf[2] = 22 = 10110 b
    buf[3] = 2 = 10 b
    read as 32-bit LE = 10000101100111110100010111 b
    extracted from buf: 1068 = 10000101100 b
    extracted from int: 1068 = 10000101100 b

This illustrates two ways to extract the data: “w” was extracted as described above, but “v” used a trick: first use built-in logic to extract an integer field which is too big (but with all the bits in the right order), then use bit shifting and masking to pull out the required field. The latter is much simpler, more concise, and usually also a little faster.

Here’s an example in Python, illustrating that second approach via “unpack”:

import struct

buf = struct.pack('BBBB', 23, 125, 22, 2)
(x,) = struct.unpack('<l', buf)

v = (x >> 15) & 0xFFF

And lastly, the same in Lua, another popular language:

require 'struct'
require 'bit'

buf = struct.pack('BBBB', 23, 125, 22, 2)
x = struct.unpack('<I4', buf)

v =, 15), 0xFFF)

So there you go, lots of ways to extract useful data from the RF12demo sketch output!

Decoding bit fields

In Software on Sep 5, 2013 at 00:01

Prompted by a recent discussion on the forum, and because it’s a recurring theme, I thought I’d go a bit (heh…) into how bit fields work.

One common use-case in Jee-land, is with packets coming in from a roomNode sketch, which are reported by the RF12demo sketch on USB serial as something like:

OK 3 23 125 22 2

The roomNode sketch tightly packs its results into a packet using this C structure:

struct {
    byte light;     // light sensor: 0..255
    byte moved :1;  // motion detector: 0..1
    byte humi  :7;  // humidity: 0..100
    int temp   :10; // temperature: -500..+500 (tenths)
    byte lobat :1;  // supply voltage dropped under 3.1V: 0..1
} payload;

So how do you get the values back out, given just those 4 bytes (plus a header byte)?

To illustrate, I’ll use a slightly more complicated example. But first, some diagrams:



That’s 4 consecutive bytes in memory, and two ways a 32-bit long int can be mapped to it:

  • little-endian (LE) means that the first byte (at #00) points to the little end of the int
  • big-endian (BE) means that the first byte (at #00) points to the big end of the int

Obvious, right? On x86 machines, and also the ATmega, everything is stored in LE format, so that’s what I’ll focus on from now on (PIC and ARM are also LE, but MIPS is BE).
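Node.js buffers make the difference easy to see. Here's a quick check using the same four example bytes as in these posts (Buffer.from is the modern spelling; older code used new Buffer):

```javascript
// Same four bytes, read as a 32-bit integer in both byte orders:
var buf = Buffer.from([23, 125, 22, 2]); // 0x17, 0x7D, 0x16, 0x02

console.log(buf.readUInt32LE(0).toString(16)); // 2167d17  (i.e. 0x02167D17)
console.log(buf.readUInt32BE(0).toString(16)); // 177d1602 (i.e. 0x177D1602)
```

In the LE reading, the first byte (0x17) ends up as the least significant part of the result; in the BE reading, it ends up as the most significant part.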

Let’s use this example, which is a little more interesting (and harder) to figure out:

struct {
    unsigned long before :15;
    unsigned long value :12;  // the value we're interested in
    unsigned long after :3;
} payload;

That’s 30 bits of payload. On an ATmega the compiler will use the layout shown on the left:


But here’s where things start getting hairy. If you had used the definition below instead, then you’d have gotten a different packed structure, as shown above on the right:

struct {
    unsigned int before :15;
    unsigned int value :12;  // the value we're interested in
    unsigned int after :3;
} payload;

Subtle difference, eh? The reason for this is that the declared size (long vs int) determines the unit of memory in which the compiler tries to squeeze the bit field. Since an int is 16 bits, and 15 bits were already assigned to the “before” field, the next “value” field won’t fit so the compiler will restart on a fresh byte boundary! IOW, be sure to understand what the compiler does before trying to decode stuff. One way to figure out what’s going on, is to set the values to some distinctive bit patterns and see what comes out as bytes:

payload.before = 0b1;  // bit 0 set
payload.value = 0b11;  // bits 0 and 1 set
payload.after = 0b111; // bits 0, 1, and 2 set

Tomorrow, I’ll show a couple of ways to extract this bit field.

Back pressure

In Software on Sep 4, 2013 at 00:01

No, not the medical term, and not this carambolage either (though it’s related):


What I’m talking about is the software kind: back pressure is what you need in some cases to avoid running into resource limits. There are many scenario’s where this can be an issue:

  • In the case of highway pile-ups, the solution is to add road signalling, so drivers can be warned to slow down ahead of time – before that decision is taken from them…
  • Sending out more data than a receiver can handle – this is why traditional serial links had “handshake” mechanisms, either software (XON/XOFF) or hardware (CTS/RTS).
  • On the I2C bus, hardware-based clock stretching is often supported as mechanism for a slave to slow down the master, i.e. an explicit form of back pressure.
  • Sending out more packets than (some node in) a network can handle – which is why backpressure routing was invented.
  • Writing out more data to the disk than the driver/controller can handle – in this case, the OS kernel will step in and suspend your process until things calm down again.
  • Bringing a web server to its knees when it gets more requests than it can handle – crummy sites often suffer from this, unless the front end is clever enough to reject requests at some point, instead of trying to queue more and more work which it can’t possibly ever deal with.
  • Filling up memory in some dynamic programming languages, where the garbage collector can’t keep up and fails to release unused memory fast enough (assuming there is any memory to release, that is).

That last one is the one that bit me recently, as I was trying to reprocess my 5 years of data from JeeMon and HouseMon, to feed it into the new LevelDB storage system. The problem arises, because so much in Node.js is asynchronous, i.e. you can send off a value to another part of the app, and the call will return immediately. In a heavy loop, it’s easy to send off so much data that the callee never gets a chance to process it all.

I knew that this sort of processing would be hard in HouseMon, even for a modern laptop with oodles of CPU power and gigabytes of RAM. And even though it should all run on a Raspberry Pi eventually, I didn’t mind if reprocessing one year of log files would take, say, an entire day. The idea being that you only need to do this once, and perhaps repeat it when there is a major change in the main code.

But it went much worse than I expected: after force-feeding about 4 months of logs (a few hundred thousand converted data readings), the Node.js process RAM consumption was about 1.5 GB, and Node.js was frantically running its garbage collector to try and deal with the situation. At that point, all processing stopped with a single CPU thread stuck at 100%, and things locked up so hard that Node.js didn’t even respond to a CTRL-C interrupt.

Now 1.5 GB is a known limit in the V8 engine used in Node.js, and to be honest it really is more than enough for the purposes and contexts for which I’m using it in HouseMon. The problem is not more memory, the problem is that it’s filling up. I haven’t solved this problem yet, but it’s clear that some sort of back pressure mechanism is needed here – well… either that, or there’s some nasty memory leak in my code (not unlikely, actually).

Note that there are elegant solutions to this problem. One of them is to stop having a producer push data and calls down a processing pipeline, and switch to a design where the consumer pulls data when it is ready for it. This was in fact one of the recent big changes in Node.js 0.10, with its streams2 redesign.

Even on an embedded system, back pressure may cause trouble in software. This is why there is an rf12_canSend() call in the RF12 driver: because of that, you cannot ever feed it more packets than the (relatively slow) wireless RFM12B module can handle.

Soooo… in theory, back pressure is always needed when you have some constraint further down the processing pipeline. In practice, this issue can be ignored most of the time due to the slack present in most systems: if we send out at most a few messages per minute, as is common with home monitoring and automation, then it is extremely unlikely that any part of the system will ever get into any sort of overload. Here, back pressure can be ignored.

Electricity usage patterns

In Hardware, Software on Sep 3, 2013 at 00:01

Given that electricity usage here is monitored with a smart meter which periodically phones home to the electricity company over GPRS, this is the sort of information they get to see:

Screen Shot 2013-09-02 at 11.20.38

Consumption in blue, production in green. Since these are the final meter readings, those two data series will never overlap – ya can’t consume and produce at the same time!

I’m reading out the P1 data and transmitting it wirelessly to my HouseMon monitoring setup (be sure to check the develop branch, which is where all new code is getting added).

There’s a lot of information to be gleaned from that. The recurring 2000+ W peaks are from a 7-liter kitchen boiler (3 min every 2..3 hours). Went out for dinner on Aug 31st, so no (inductive) home cooking, and yours truly burning lots of midnight oil into Sep 1st. Also, some heavy-duty cooking on the evening of the 1st (oven dish + stove).

During the day, it’s hard to tell if anyone is at home, but evenings and nights are fairly obvious (if only by looking at the lights lit in the house!). Here’s Sep 2nd in more detail:

Screen Shot 2013-09-02 at 11.23.50

This one may look a bit odd, but that double high-power blip is the dish washer with its characteristic two heating cycles (whoops, colours reversed: consumption is green now).

Note that when there is more sun, there will be fewer consumption cycles visible in the meter data, and hence less information to glean from this single graph. But by matching it up with other households nearby, you’d still get the same sort of information out, i.e. by comparing known solar power against the returned power from this household. Cloudy patterns will still match up across a small area (you can even determine the direction of the clouds!).

I don’t think there’s a concern for (what little) privacy (we have left), but it’s quite intriguing how much can be deduced from this.

Here’s yet more detail, now including the true house usage and solar production values, as obtained from some pulse counters, this hardware and the homePower driver:

Screen Shot 2013-09-02 at 11.52.21

There is a slight lag in smart meter reporting (a value on the P1 port every 10s). This is not an issue of the smart meter though: the DyGraphs package is only able to plot step lines with values at the start of the step, even though these values pertain to the past 10 seconds.

Speaking of which – there was a problem with the way data got stored in Redis. This is no longer an issue in this latest version of HouseMon, because I’m switching over to LevelDB, a fascinating time- and space-efficient database engine.

FanBot Registration System

In Software, Linux on Jun 29, 2013 at 00:01

Nearly 400 FanBots have already been built – see this page and the KekBot weblog.

To finish this little series, I wanted to describe the little setup I built and how it was done. Nothing spectacular, more an example of how you can create a software cocktail / mashup to get a one-off short-lived project like this going fast.

First the basic task at hand: allow kids to register their FanBot and print out a “passport” with a picture of their creation and some information about when and how to get their FanBot back after the RoboCup 2013 event. At the same time, it would be nice to have a monitor display the current count and show images of the last few FanBot pictures taken.


I decided to use 3 Raspberry Pi’s as “slaves”, each with a webcam, an LCD screen (thanks David, for the laser-cut frame!), and Ethernet + power. The code running on them is on github. It would be nice if each slave could continue to accept registrations even with the network connection temporarily down, so I went for the following approach:

  • collect registrations locally, using rsync to push new ones to the master
  • everything is based on a few simple shell scripts for quick adjustment
  • two C/C++ programs were needed: to read the FanBot data and to drive the LCD
  • do a git pull on power-up and re-compile as needed before starting operation
  • periodic webcam readout and periodic rsync, using loops and sleep calls

For the master, I thought about using the Odroid/U2, but decided against it as there would be too much setup involved at the last minute. Next idea was to use an Acer Aspire One Linux netbook, but it turned out to be way too slow. For some reason, Node.js was consuming 100% CPU (I suspect a bad nodemon disk change polling issue on the slow SSD). No time to figure that out, so I ended up using my second Mac laptop instead.

The master logic started out as follows:

  • an rsync server was set up to collect files from all the slaves in one spot
  • the display is presented via a browser in full-screen mode and Node.js
  • the original plan was to also allow outside access, but we never got to that
  • for live updating, I decided to use SocketStream (same as in HouseMon)
  • a PDF was generated for each image by convert, which is part of ImageMagick
  • an AppleScript then prints each PDF to the USB-attached laser-printer

These choices turned out to be very convenient: by having the rsync folder inside the Node.js application area, and using nodemon, refresh became automatic (nodemon + socketstream will force a refresh on each client whenever a file system change is detected). So all I had to do was write a static Node.js app to display the count and the latest images.

So far so good. A couple of glitches, such as an erratic printer (new firmware fixed it), an overheating RPi (fixed by clocking at 700 MHz again), and all the slave rsyncs fighting when the same registration was present on more than one (fixed by adding the “-u” flag).

During installation, we found out that there was no wired ethernet w/ DHCP, as I had assumed. Oops. So the Mac laptop was turned into a WiFi -> wired bridge, since WiFi was available most of the time. Whoops: “most” is not good enough – the RPi’s have no clock, so they really need an NTP server, especially since the whole system was based on rsync and file modification times! We hacked in a fixed DNS – but the internet connection still has to be there.

And then the fun started – various little wishes came up after the fact:

  • can we get images + passports off-site during the event? sure: rsync them to Dropbox
  • could we have a page with all images? sure: on my server at JeeLabs, I made an rsync loop from one Dropbox area to another, and made a little photo-album using Dropbox
  • could we show a live count, even though the monitor website was not accessible? sure: I simply added another shell loop on my server again, to generate an image with the count in it, and made it show up as the first one in the photo album.

Here is the shell script for that last feature:

cd ~/Dropbox/FanBots
while :
do
  N=$(echo `ls *.jpg | wc -l`)
  if [ "$N" != "$P" ]
  then
    echo $N - `date`
    convert -background black -fill red -size 480x640 \
      -gravity center "label:$N\n\nFanBots" 0-teller.png
    P=$N
  fi
  sleep 10
done
This can run anywhere, since Dropbox is taking care of all updates in both directions!

The point of all this is not which languages I used (your choice will definitely be superior), nor what hardware was selected for this (it changed several times). The point is that:

  1. it all got done in a very limited amount of time
  2. the basic rsync approach made things resilient and self-recovering
  3. github pulls made it possible to apply last-minute tweaks
  4. some nice changes were possible very late in the game
  5. dropbox integration made it possible to add features remotely

Happy ending: all the kids are having a good time … including me :)

PS. Tomorrow (Sunday) is the RoboCup 2013 finals. See you there?

Packaged but not virtualised

In Hardware, Software, Linux on Jun 26, 2013 at 00:01

Since VMware started virtualising x86 hardware many years ago, many uses have been found for this approach of creating fully self-contained software environments. I’ve used VMware for many years, and was pleased when Apple switched the Macs to “Intel inside”.

Nowadays, I can run numerous variants of Windows as well as Linux on a single laptop. I switched to Parallels as VM core, but the principle remains the same: a large file is used as virtual hard disk, with an operating system installed, and from then on the whole environment simply “runs” inside the Mac host (in my case).

Nowadays, we all take this for granted, and use virtual environments as ultra-low cost web servers without even giving it any thought. I have a number of virtual machines running on a Mac Mini for all the JeeLabs sites in fact (and some more). One piece of hardware, able to suspend and resume N completely independent systems at the click of a mouse.

But there’s a drawback: you end up duplicating a lot of the operating system and main services in each VM, and as a result this stuff not only consumes disk space but also leads to a lot of redundant backup data. I keep the VM’s very lean and mean here, but they still all eat up between 3 and 10 gigabytes of disk space. Doesn’t sound like much, but I’m pretty sure my own data uses less than 10% of that space…

Both Parallels and VMware (and probably also VirtualBox) support snapshotting, which means that in principle you could set up a full Linux environment with a LAMP stack, and then use snapshots to only store what’s different for each VM. But that’s not useful in the long run, because as soon as you get system updates, they’ll end up being duplicated again.

It’s a bit like git “forks”: set up a state, create a fork to create a variant, and then someone decides to add some features (or fix some bugs) to the original which you’d also like to adopt. For git, the solution is to merge the original changes into your fork, and use “fast-forwarding” (if I understand it correctly – it’s all still pretty confusing stuff).

So all in all, I think VM’s are really brilliant… but not always convenient. Before you know it, you end up maintaining and upgrading N virtual machines every once in a while.

I’ve been using TurnkeyLinux as the basis of all my VM’s, because they not only maintain a terrific set of ready-to-run VM images, but also make them self-updating out of the box, and have added a very convenient incremental daily backup mechanism which supports restarting from the back-up. You can even restart in the cloud, using Amazon’s EC2 VM “instances”.

And then someone recently told me about Docker – “The Linux container engine”.

This is a very interesting new development. Instead of full virtualisation, it uses “chroot” and overlay file systems to completely isolate code from the rest of the 64-bit Linux O/S it is running on. This lets you create a “box” which keeps every change inside.

My view on this, from what I understand so far, is that Docker lets you play git at the operating system level: you take a system state, make changes to it, and then commit those changes. Not once, but repeatedly, and nested to any degree.

The key is the “making changes” bit: you could take a standard Linux setup as starting point (Docker provides several base boxes), install the LAMP stack (if someone hasn’t done this already), commit it, and then launch that “box” as web server. Whereby launching could include copying some custom settings in, running a few commands, and then starting all the services.

The way to talk to these Docker boxes is via TCP/IP connections. There’s also a mechanism to hook into stdin/stdout, so that you can see logs and such, or talk to an interactive shell inside the box, for example.

The difference with a VM, is that these boxes are very lightweight. You can start and stop them as quickly as any other process. Lots of ’em could be running at the same time, all within a single Linux O/S environment (which may well be a VM, by the way…).

Another example would be to set up a cross compiler for AVR’s/Arduino’s (or ARMs) as a box in Docker. Then a compile would be: start the box, feed it the source files, get the results out, and then throw the box away. Sounds a bit drastic, but it’s really the same as quitting an application – which throws all the state in RAM away. The point is that now you’re also throwing away all the uninteresting side-effects accumulated inside the box!

I haven’t tried out much so far. Docker is very new, and I need to wrap my mind around it to really understand the implications, but my hunch is that it’s going to be a game changer. With the same impact on computation and deployment as VMware had at the time.

As with VM’s, Docker boxes can be saved, shared, moved around, and launched on different machines and even different environments.

For more info, see this page which draws an analogy with freight containers, and how one standard container design has revolutionised the world of shipping and freight / cargo handling. I have this feeling that the analogy is a good one… a fascinating concept!

Update – Here’s a nice example of running a Node.js app in Docker.

FanBots at RoboCup 2013

In Hardware, Software, Linux, ARM on Jun 25, 2013 at 00:01

There’s an event coming up in a few days in Eindhoven, called the RoboCup 2013:


This is the world championship of robotic football. For all ages, and with various levels of sophistication. Some robots are tiny cars, trying to work together to play a match, some robots are little human-like machines consisting of lots of servos to make them walk on two feet, and some robots are very sophisticated little “towers” on three wheels, zipping along at impressive speeds and able to catch, hold, and kick a standard-size football.

I’m not directly involved in this, but I saw some of the preparations and it really promises to be a fantastic event. Lots of geeks with laptops, soldering irons, and mechanical stuff. The kick-off is next Wednesday and the event lasts for five days, with the finals on Sunday.

On the side, there is a gorgeous (not-so) little project going on, whereby kids of ages 8 to 12 can visit a workshop to build their own FanBot (for free). The FanBot can blink its eyes and mouth, and wave its arms – part of the exercise is to create a little “program” of things it does at the press of a button, so this will also be a first encounter with programming!

There’s a Dutch page about the project and a general page in English mentioning FanBots.

But the really exciting goal will be to create a huge stand for all the FanBots and make them cheer the football-playing robots in the main event hall. The aim is to get up to 1000 of them, all blinking and waving – and all orchestrated and controlled from a central system. It’s all being designed and built by Peter Brier and by Marieke Peelen of KekkeTek.

One task in the project is to register all the FanBots and print out a little “passport” as last step – which the kids can then use to pick up their own FanBot once the RoboCup 2013 event is over. This is the part I volunteered to help with.

I’ll describe the setup in more detail soon, but that’s where the Raspberry Pi’s w/ LCD screen and webcams will be used. It’ll be fun to see whether we can get 1000 kids to build and sign up their little robots during three days – there’s quite a bit of logistics involved!

Admission is free, so if you’re in the neighbourhood: don’t miss it – it’s going to be fun!

Fast I/O on a Raspberry Pi

In Hardware, Software, Linux on Jun 15, 2013 at 00:01

This is a follow-up to yesterday’s post about WiringPi, and how it was used to display an image on a colour LCD screen attached to the RPi via its GPIO pins.

A quick calculation indicated that the WiringPi library is able to toggle the GPIO pins millions of times per second. Now this might not seem so special if you are used to an ATmega328 on a JeeNode or other Arduino-like board, but actually it is!

Keep in mind that the RPi code is running under Linux, a multi-tasking 32-bit operating system which not only does all sorts of things at (nearly) the same time, but which also has very solid memory and application protection mechanisms in place:

  • each application runs in its own address space
  • the time each application runs is controlled by Linux, not the app itself
  • applications can crash, but they cannot cause the entire system to crash
  • dozens, hundreds, even thousands of applications can run next to each other
  • when memory is low, Linux can swap some parts to disk (not recommended with SD’s)

So while the LCD code works well, and much faster than one might expect for all the GPIO toggling it’s doing, this isn’t such a trivial outcome.

The simplest way to deal with the RPi’s GPIO pins is to manipulate the files and directories under the /sys/class/gpio/ “device tree”. This is incredibly useful because you can even manipulate it via a plain shell script, using nothing but the “echo”, “cat”, and “ls” commands. Part of the convenience is the fact that these manipulations can take place entirely in ASCII, e.g. writing the string “1” or “0” to set/reset GPIO pins.
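Here’s a sketch of what that looks like from C – a hypothetical helper, with the base directory passed in as a parameter so it can be tried against any directory; on a real RPi the base would be “/sys/class/gpio”, after exporting the pin:

```c
#include <stdio.h>

// Write the ASCII string "1" or "0" to <base>/gpio<pin>/value -
// the same effect as: echo 1 > /sys/class/gpio/gpio17/value
int gpio_write (const char *base, int pin, int on) {
    char path[128];
    snprintf(path, sizeof path, "%s/gpio%d/value", base, pin);
    FILE *f = fopen(path, "w");
    if (f == NULL)
        return -1;              // pin not exported, or no permission
    fputs(on ? "1" : "0", f);
    return fclose(f);
}
```

Opening, writing, and closing a file for every single pin change is exactly why this route is so much slower than WiringPi’s memory-mapped approach.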

But the convenience of the /sys/class/gpio/ virtual file system access comes at a price: it’s not very fast. There is too much involved to deal with individual GPIO pins as files!

WiringPi uses a different approach, called “memory mapped files”.

Is it fast? You bet. I timed the execution of this snippet of C code:

int i;
for (i = 0; i < 100000000; ++i) {
  digitalWrite(1, 1);
  digitalWrite(1, 0);
}
return 0;

Here’s the result:

    real    0m19.592s
    user    0m19.490s
    sys     0m0.030s

That’s over 5 million pulses (two toggles) per second.
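The arithmetic behind those figures can be spelled out (100 million loop iterations, two digitalWrite() calls each, in 19.592 s of wall-clock time):

```c
// Derive the per-call and per-pulse figures from the timing run above:
// 100,000,000 loop iterations, two digitalWrite() calls per iteration.
double nsPerCall (double seconds, double calls) {
    return seconds / calls * 1e9;   // nanoseconds per single call
}

double pulsesPerSecond (double seconds, double iterations) {
    return iterations / seconds;    // one full pulse (high+low) per iteration
}
```

That works out to roughly 98 ns per digitalWrite() call, and just over 5.1 million full pulses per second.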

This magic is possible because the I/O address space (which is normally completely inaccessible to user programs) has been mapped into a section of the user program address space. This means that there is no operating system call overhead involved in toggling I/O bits. The mapping is probably virtualised, i.e. the kernel will kick in on each access, but this is an interrupt straight into protected kernel code, so overhead is minimal.

How minimal? Well, 19.592 s for 200 million calls works out to just under 100 ns per call to digitalWrite(), so even when running at the RPi’s maximum 1 GHz rate, that’s under 100 machine cycles.

Note how the RPi can almost toggle an I/O pin as fast as an ATmega running at 16 MHz. But the devil is in the details: “can” is the keyword here. Being a multi-tasking operating system, there is no guarantee whatsoever that the GPIO pin will always toggle at this rate. You may see occasional hick-ups in the order of milliseconds, in fact.

Is the RPi fast? Definitely! Is it guaranteed to be fast at all times? Nope!

What it took to generate that LCD image

In Hardware, Software, Linux on Jun 14, 2013 at 00:01

As shown yesterday, it’s relatively easy to get a bitmap onto an LCD screen connected to a few I/O pins of the Raspberry Pi.

But there were some gotcha’s:

  • the image I wanted to use was 640×480, but the LCD is only 320×240 pixels
  • the WiringPi library has code to put a bitmap on the screen, but not JPEGs
  • this is easy to fix using the ImageMagick “convert” command
  • but the result has 24-bit colour depth, whereas the LCD wants 16-bit (5-6-5)

The “convert” command makes things very easy, but first I had to install ImageMagick:

    sudo apt-get install imagemagick

Then we can run “convert” from the command-line (long live the Unix toolkit approach!):

    convert snap.jpg -geometry 320x240 -normalize snap.rgb

This handles both the rescaling and the conversion to a file with raw (R,G,B) triplets, one byte per colour. As expected, the resulting image file is 320 x 240 x 3 = 230,400 bytes.

I didn’t see a quick way to convert the format to the desired 5-6-5-bit coding needed for the LCD, and since the code to write to the LCD is written in C anyway (to link to WiringPi), I ended up doing it all in a straightforward loop:

#define PIXELS (320*240)
uint8_t rgbIn24 [PIXELS][3];
unsigned rgbOut15 [PIXELS];

int x;
for (x = 0; x < PIXELS; ++x) {
  uint8_t r = rgbIn24[x][0] >> 3;
  uint8_t g = rgbIn24[x][1] >> 2;
  uint8_t b = rgbIn24[x][2] >> 3;
  rgbOut15[x] = (r << 11) | (g << 5) | b;
}

Note that this is a C program compiled and running under Linux on the RPi, and that it can therefore easily allocate some huge arrays. This isn’t some tiny embedded 8-bit µC with a few kilobytes – it’s a full-blown O/S running on a 32-bit CPU with virtual memory!

The other gotcha was that the bitmap supplied to the LCD library on top of WiringPi was expected to store each pixel in a 32-bit int. A total waste of space, and probably the result of a rushed port from the UTFT code (written to assume 16-bit ints) to the RPi. Which is why rgbOut15 is declared as an “unsigned” (int, i.e. uint32_t) array, although each pixel really only needs a uint16_t. Oh well, what’s an extra 150 kB, eh?
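Since bit packing like this is easy to get wrong, the conversion can also be expressed as a tiny self-contained helper, using the uint16_t that one pixel actually needs – just an illustrative sketch, not the LCD library’s own code:

```c
#include <stdint.h>

// Pack one 24-bit RGB pixel into the 5-6-5 format the LCD expects:
// red in the top 5 bits, green in the middle 6, blue in the low 5.
uint16_t rgb565 (uint8_t r, uint8_t g, uint8_t b) {
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3);
}
```

The endpoints are easy to sanity-check: pure white must map to all bits set, pure black to all bits clear, and each primary colour to its own bit field.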

Anyway, it works. Perfection can wait.

Note that this sends out all the data pixel by pixel to the display, and that each pixel takes the bit-banging of 2 bytes (the interface to the LCD is 8-bit wide), so each pixel takes at least 20 calls to digitalWrite(): 8 to set the data bits and 2 to toggle a clock pin – times two for the 2 bytes per pixel. That’s over 1.5 million GPIO pin settings in half a second!

Tomorrow, I’ll show how WiringPi can make GPIO pins change in under 100 nanoseconds.

Idling in low-power mode

In AVR, Software on May 28, 2013 at 00:01

With a real-time operating system, going into low-power mode is easy. Continuing the recent ChibiOS example, here is a powerUse.ino sketch which illustrates the mechanism:

#include <ChibiOS_AVR.h>
#include <JeeLib.h>

const bool LOWPOWER = true; // set to true to enable low-power sleeping

// must be defined in case we're using the watchdog for low-power waiting
ISR(WDT_vect) { Sleepy::watchdogEvent(); }

static WORKING_AREA(waThread1, 50);

void Thread1 () {
  while (true)
    chThdSleepMilliseconds(1000);
}

void setup () {
  rf12_initialize(1, RF12_868MHZ);
  rf12_sleep(RF12_SLEEP); // the radio is not used, put it to sleep

  chBegin(mainThread);
}

void mainThread () {
  chThdCreateStatic(waThread1, sizeof (waThread1),
                    NORMALPRIO + 2, (tfunc_t) Thread1, 0);

  while (true)
    loop();
}

void loop () {
  if (LOWPOWER)
    Sleepy::loseSomeTime(16); // minimum watchdog granularity is 16 ms
  else
    delay(16);
}

There’s a separate thread which runs at slightly higher priority than the main thread (NORMALPRIO + 2), but is idle most of the time, and there’s the main thread, which in this case takes the role of the idling thread.

When LOWPOWER is set to false, this sketch runs at full power all the time, drawing about 9 mA. With LOWPOWER set to true, the power consumption drops dramatically, with just an occasional short blip – as seen in this current-consumption scope capture:


Once every 16..17 ms, the watchdog wakes the ATmega out of its power-down mode, and a brief amount of activity takes place. As you can see, most of these “blips” take just 18 µs, with a few excursions to 24 and 30 µs. I’ve left the setup running for over 15 minutes with the scope background persistence turned on, and there are no other glitches – ever. Those 6 µs extensions are probably the milliseconds clock timer.

For real-world uses, the idea is that you put all your own code in threads, such as Thread1() above, and call chThdSleepMilliseconds() to wait and re-schedule as needed. There can be a number of these threads, each with their own timing. The lowest-priority thread (the main thread in the example above) then goes into a low-power sleep mode – briefly and repeatedly, thus “soaking” up all unused µC processor cycles in the most energy-efficient manner, yet able to re-activate pending threads quickly.

What I don’t quite understand yet in the above scope capture is the repetition frequency of these pulses. Many pulses are 17 ms apart, i.e. the time Sleepy::loseSomeTime() goes to sleep, but there are also more frequent pulses, spread only 4..9 ms apart at times. I can only guess that this has something to do with the ChibiOS scheduler. That’s the thing with an RTOS: reasoning about the repetitive behavior of such code becomes a lot trickier.

Still… not bad: just a little code on idle and we get low-power behaviour almost for free!

ChibiOS for the Arduino IDE

In AVR, Software on May 25, 2013 at 00:01

A real-time operating system is a fairly tricky piece of software, even with a small RTOS – because of the way it messes with several low-level details of the running code, such as stacks and interrupts. It’s therefore no small feat when everything can be done as a standard add-on library for the Arduino IDE.

But that’s exactly what has been done by Bill Greiman with ChibiOS, in the form of a library called “ChibiOS_AVR” (there’s also an ARM version for the Due & Teensy).

So let’s continue where I left off yesterday and install this thing for use with JeeNodes, eh?

  • download a copy from this page on Google Code
  • unpack it and inside you’ll find a folder called ChibiOS_AVR
  • move it inside the libraries folder in your IDE sketches folder (next to JeeLib, etc)
  • you might also want to move ChibiOS_ARM and SdFat next to it, for use later
  • other things in that ZIP file are a README file and the HTML documentation
  • that’s it, now re-launch the Arduino IDE to make it recognise the new libraries

That’s really all there is to it. The ChibiOS_AVR folder also contains a dozen examples, each of which is worth looking into and trying out. Keep in mind that there is no LED on a standard JeeNode, and that the blue LED on the JeeNode SMD and JeeNode USB is on pin 9 and has a reverse polarity (“0” will turn it on, “1” will turn it off).

Note: I’m using this with Arduino IDE 1.5.2, but it should also work with IDE 1.0.x

Simple things are still relatively simple with an RTOS, but be prepared to face a whole slew of new concepts and techniques when you really start to dive in. Lots of ways to make tasks and interrupts work together – mutexes, semaphores, events, queues, mailboxes…

Luckily, ChibiOS comes with a lot of documentation, including some general guides and how-to’s. The AVR-specific documentation can be found here (as well as in that ZIP file you just downloaded).

Not sure this is the best place for it, but I’ve put yesterday’s example in JeeLib for now.

I’d like to go into RTOS’s and ChibiOS some more in the weeks ahead, if only to see how wireless communication and low-power sleep modes can be fitted in there.

Just one statistic for now: the context switch latency of ChibiOS on an ATmega328 @ 16 MHz appears to be around 15 µs. Or to put it differently: you can switch between multiple tasks over sixty thousand times a second. Gulp.

Blinking in real-time

In AVR, Software on May 24, 2013 at 00:01

As promised yesterday, here’s an example sketch which uses the ChibiOS RTOS to create a separate task for keeping an LED blinking at 2 Hz, no matter what else the code is doing:

#include <ChibiOS_AVR.h>

static WORKING_AREA(waThread1, 50);

void Thread1 () {
  const uint8_t LED_PIN = 9;
  pinMode(LED_PIN, OUTPUT);
  while (1) {
    digitalWrite(LED_PIN, LOW);
    chThdSleepMilliseconds(100);
    digitalWrite(LED_PIN, HIGH);
    chThdSleepMilliseconds(400);
  }
}

void setup () {
  chBegin(mainThread);
}

void mainThread () {
  chThdCreateStatic(waThread1, sizeof (waThread1),
                    NORMALPRIO + 2, (tfunc_t) Thread1, 0);
  while (true)
    loop();
}

void loop () {
  delay(1000);
}
There are several things to note about this approach:

  • there’s now a “Thread1” task, which does all the LED blinking, even the LED pin setup
  • each task needs a working area for its stack, this will consume a bit of memory
  • calls to delay() are forbidden inside threads, they need to play nice and go to sleep
  • only a few changes are needed, compared to the original setup() and loop() code
  • chBegin() is what starts the RTOS going, and mainThread() takes over control
  • to keep things similar to what Arduino does, I decided to call loop() when idling

Note that inside loop() there is a call to delay(), but that’s ok: at some point, the RTOS runs out of other things to do, so we might as well make the main thread similar to what the Arduino does. There is also an idle task – it runs (but does nothing) whenever no other tasks are asking for processor time.

Note that despite the delay call, the LED still blinks in the proper rate. You’re looking at a real multitasking “kernel” running inside the ATmega328 here, and it’s preemptive, which simply means that the RTOS can (and will) decide to break off any current activity, if there is something more important that needs to be done first. This includes suddenly disrupting that delay() call, and letting Thread1 run to keep the LEDs blinking.

In case you’re wondering: this compiles to 3,120 bytes of code – ChibiOS is really tiny.

Stay tuned for details on how to get this working in your projects… it’s very easy!

It’s time for real-time

In Software on May 23, 2013 at 00:01

For some time, I’ve been doodling around with various open-source Real-Time Operating System (RTOS) options. There are quite a few out there to get lost in…

But first, what is an RTOS, and why would you want one?

An RTOS is code which can manage multiple tasks in a computer. You can see what it does by considering what sort of code you’d write if you wanted to periodically read out some sensors, not necessarily all at the same time or equally often. Then, perhaps you want to respond to external events such as a button press or a PIR sensor firing, and let’s also try and report this on the serial port and throw in a command-line configuration interface on that same serial port…

Oh, and in between, let’s go into a low-power mode to save energy.

Such code can be written without an RTOS – in fact, that’s what I did with a (simpler) example for the roomNode sketch. But it gets tricky, and everything can become a huge tangle of variables, loops, and conditions – before you know it … you end up with spaghetti!

In short, the problem is blocking code – when you write something like this, for example:

void setup () {}

void loop () {
  digitalWrite(LED, HIGH);
  delay(100);
  digitalWrite(LED, LOW);
  delay(400);
}

The delay() calls will put the processor into a busy loop for as long as needed to make the requested number of milliseconds pass. And while this is the case, nothing else can be done by the processor, other than handling hardware interrupts (such as timer ticks).

What if you wanted to respond to button presses? Or make a second LED blink at a different rate at the same time? Or respond to commands on the serial port?

This is why I added a MilliTimer class to JeeLib early on. Let’s rewrite the code:

MilliTimer ledTimer;
bool ledOn;

void setup () {
  ledTimer.set(1); // start the timer
}

void loop () {
  if (ledTimer.poll()) {
    if (ledOn) {
      digitalWrite(LED, LOW);
      ledTimer.set(400);
    } else {
      digitalWrite(LED, HIGH);
      ledTimer.set(100);
    }
    ledOn = ! ledOn;
  }
  // anything else can be done here ...
}
It’s a bit more code, but the point is that this implementation is no longer blocking: instead of stopping on a delay() call, we now track the progress of time through the MilliTimer, we keep track of the LED state, and we adjust the time to wait for the next change.

As a result, the comment line at the end gets “executed” all the time, and this is where we can now perform other tasks while the LED is blinking in the background, so to speak.
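The mechanism behind such a poll-based timer can be modeled in a few lines of portable C – hypothetical names, not JeeLib’s actual MilliTimer implementation, with the current time passed in explicitly so the logic is easy to follow (and to test) without an ATmega:

```c
#include <stdbool.h>
#include <stdint.h>

// A poll-based one-shot timer: set() arms it, poll() reports (once)
// whether the interval has elapsed - nothing ever blocks.
typedef struct { uint32_t due; bool armed; } PollTimer;

void timerSet (PollTimer *t, uint32_t now, uint32_t ms) {
    t->due = now + ms;
    t->armed = true;
}

bool timerPoll (PollTimer *t, uint32_t now) {
    // the signed difference also handles millisecond-counter wraparound
    if (!t->armed || (int32_t) (now - t->due) < 0)
        return false;   // not armed, or not due yet: keep going
    t->armed = false;   // fire exactly once per set()
    return true;
}
```

Because poll() returns immediately, the main loop stays free to check buttons, serial input, or other timers on every pass.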

You can get a lot done this way, but things do tend to become more complicated. The simple flow of each separate activity starts to become a mix of convoluted flows.

With an RTOS, you can create several tasks which appear to all run in parallel. Instead of calling delay(), you tell the RTOS to suspend your task for a certain amount of time (or until a certain event happens, which is the real magic sauce of RTOS’es, actually).

So in pseudo code, we can now write our app as:

  TASK 1:
    turn LED on
    wait 100 ms
    turn LED off
    wait 400 ms

  MAIN:
    start task 1
    do other stuff (including starting more tasks)

All the logic related to making the LED blink has been moved “out of the way”.

Tomorrow I’ll expand on this, using an RTOS which works fine in the Arduino IDE.

Tricky graphs – part 2

In Software on May 20, 2013 at 00:01

As noted in a recent post, graphs can be very tricky – “intuitive” plots are a funny thing!

Screen Shot 2013-05-14 at 10.35.43

I’ve started implementing the display of multiple series in one graph (which is surprisingly complex with Dygraphs, because of the way it wants the data points to be organised). And as you can see, it works – here with solar power generation measured two different ways:

  • using the pulse counter downstairs, reported up to every 3 seconds (green)
  • using the values read out from the SB5000TL inverter via Bluetooth (blue)

In principle, these represent the same information, and they do match more-or-less, but the problem mentioned before makes the rectangles show up after each point in time:

Screen Shot 2013-05-14 at 10.40.11

Using a tip from a recent comment, I hacked in a way to shift each rectangle to the left:

Screen Shot 2013-05-14 at 10.40.34

Now the intuition should be that the area in the coarser green rectangles should be the same as the blue area they overlap. But as you can see, that’s not the case. Note that the latter is the proper way to represent what was measured: a measurement corresponds to the area preceding it, i.e. that hack is an improvement.

So what’s going on, eh?

Nice puzzle. The explanation is that the second set of readings is incomplete: the smaRelay sketch only obtains a reading from the inverter every 5 minutes or so, but that’s merely an instantaneous power estimate, not an averaged reading since the previous readout!

So there’s not really enough data in the second series to produce an estimate over each 5-minute period. All we have is a sample of the power generation, taken once every so often.

Conclusion: there’s no way to draw the green rectangles to match our intuition – not with the data as it is being collected right now, anyway. In fact, it’s probably more appropriate to draw an interpolated line graph for that second series, or better still: dots.

Update – Oops, the graphs are coloured differently, my apologies for the extra confusion.

Measurement intervals and graphs

In Software on May 13, 2013 at 00:01

One more post about graphs in this little mini-series … have a look at this example:

Screen Shot 2013-05-10 at 19.33.51

Apart from some minor glitches, it’s in fact an accurate representation of the measurement data. Yet there’s something odd: some parts of the graph are more granular than others!

The reason for this is actually completely unrelated to the issues described yesterday.

What is happening is that the measurement is based on measuring the time between pulses in my pulse counters, which generate 2000 pulses per kWh. A single pulse represents 0.5 Wh of energy. So if we get one pulse per hour, then we know that the average consumption during that period was 0.5 W. And when we get one pulse per second, then the average (but near-instantaneous) power consumption over that second must have been 1800 W.

So the proper way to calculate the actual power consumption, is to use this formula:

    power = 1,800,000 / time-since-last-pulse-in-milliseconds

Which is exactly how I’ve been measuring power for the past few years here at JeeLabs. The pulses represent energy consumption (i.e. kWh), whereas the time between pulses represents estimated actual power use (i.e. W).

This type of measurement has the nice benefit of being more accurate at lower power levels (because we then divide by a larger and relatively more accurate number of milliseconds).

But this is at the same time also somewhat of a drawback: at low power levels, pulses are not coming in very often. In fact, at 100 W, we can expect one pulse every 18 seconds. And that’s exactly what the above graph is showing: less frequent pulses at low power levels.
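
The formula follows directly from the meter's resolution: 2000 pulses per kWh means each pulse stands for 0.5 Wh, i.e. 1800 joules. Here's that reasoning as a tiny helper of my own, just to sanity-check the figures quoted above:

```cpp
// 2000 pulses/kWh => 0.5 Wh = 1800 J per pulse, hence:
//   watts = 1800 J / seconds = 1,800,000 / milliseconds-since-last-pulse
double pulsePower (double msSinceLastPulse) {
    return 1800000.0 / msSinceLastPulse;
}
```

One pulse per second comes out as 1800 W, one pulse every 18 seconds as 100 W, and one pulse per hour as 0.5 W, matching the values in the text.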

Still, the graph is absolutely correct: the shaded area corresponds exactly to the energy consumption (within the counter’s measurement tolerances, evidently). And the line drawn as the boundary at the top of the area represents the best estimate we have of instantaneous power consumption across the entire time period.

Odd, but accurate. This effect goes away once aggregated over longer periods of time.

Tricky graphs

In Software on May 12, 2013 at 00:01

Yesterday’s post introduced the new graph style I’ve added to HouseMon. As mentioned, I am using step-style graphs for the power (i.e. Watt) based displays. Zooming in a bit on the one shown yesterday, we get:

Screen Shot 2013-05-10 at 19.21.18

There’s something really odd going on here, as can be seen in the blocky lines up until about 15:00. The blocks indicate that there are many missing data points in the measurement data (normally there should be several measurements per minute). But in themselves, these rectangular shapes are in fact very accurate: the area is Watt times duration, which is the amount of energy (usually expressed in Joules, Wh, or kWh).

So rectangles are in fact exactly right for graphing power levels: we want to represent each measurement as an area which matches the amount of energy used. If there are fewer measurements, the rectangles will be wider, but the area will still represent our best estimate of the energy consumption.

But not all is well – here’s a more detailed example:

Screen Shot 2013-05-10 at 19.17.03

The cursor in the screenshot indicates that the power usage is 100.1 W at 19:16:18. The problem is that the rectangle drawn is wrong: it should precede the cursor position, not follow it. Because the measurement is the result of sampling power and returning the cumulative power level at the end of that measurement period. So the rectangle which is 100.1 W high should really be drawn up to 19:16:18, i.e. starting from the time of the previous measurement.

I have yet to find a graphing system which gets this right.

Tomorrow: another oddity, not in the graph, but related to how power is measured…

New (Dy)graphs in HouseMon

In Software on May 11, 2013 at 00:01

I’ve switched to the Dygraphs package in HouseMon (see the development branch). Here’s what it now looks like:

Screen Shot 2013-05-10 at 19.15.08

(click for the full-scale resolution images)

Several reasons to move away from Flotr2, actually – it supports more line-chart modes, it’s totally geared towards time-series data (no need to adjust/format anything), and it has splendid cursor and zooming support out of the box. Oh, and swipe support on tablets!

Zooming in both X and Y directions can be done by dragging the mouse inside the graph:

Screen Shot 2013-05-10 at 19.16.19

You’re looking at step-style line drawing, by the way. There are some quirks in this dataset which I need to figure out, but this is a much better representation for energy consumption than connecting dots by straight lines (or what would be even worse: bezier curves!).

Temperatures and other “non-rate” entities are better represented by interpolation:

Screen Shot 2013-05-10 at 19.17.40

BTW, it’s surprisingly tricky to graphically present measurement data – more tomorrow…

New series – What If?

In Hardware, News, Software on Apr 23, 2013 at 00:01

Questions are very useful: “what would happen if…” is the foundation of science, after all.

Conjectures and Refutations is a famous book by the late philosopher Sir Karl Popper. I could not possibly summarise it (heck, I haven’t even read it), but what I take away from what I’ve read and heard about it, is that theories can be judged on their predictive value. A theory in itself is no more than an intellectual exercise, but its real value lies in being able to apply it to what-if questions. The stronger a theory, the better it should predict outcomes. The way to “refute” a theory, is to come up with an example where it fails. Rinse and repeat, and you’ve captured the essence of science.

Want to predict what will happen when you place a 100 Ω resistor across a 9V battery? That’s easy, given the proper theory: take Ohm’s Law (i.e. a theory which has stood the test of time), and apply it – a current of ≈ 90 mA will flow. Actually a bit less due to the internal resistance of the battery, which goes to show how strong theories can be refined further, leading to even more accurate predictions.
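
As a concrete instance of such a prediction, here is Ohm's law with the internal-resistance refinement as a two-line function (my own illustration, not tied to any code in this post):

```cpp
// I = V / R, optionally refined with the battery's internal resistance r:
// I = V / (R + r), which predicts a slightly lower current.
double currentAmps (double volts, double loadOhms, double internalOhms = 0.0) {
    return volts / (loadOhms + internalOhms);
}
```

currentAmps(9, 100) predicts 90 mA; assuming, say, a 1 Ω internal resistance drops that to about 89 mA.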

The what-if question is a great way to experiment, especially in electronics and electro-mechanics, because it lets you be prepared and avoid silly (and sometimes catastrophic) outcomes, such as a damaged component, a harmful burn, or even an explosion.

This approach lends itself to all sorts of practical questions:

  • What if I short out a 3x AA battery pack?
  • What if I connect my chip the wrong way around?
  • What if I have to use a 12V power supply instead of 5V?

But also issues as varied as:

  • What if I omit a certain component from my circuit?
  • What if I unplug the Raspberry Pi without shutting it down?
  • What if I wanted to use HouseMon in combination with MySQL?

Properly phrased, what-if questions are essential for practical experiments, and – by extension – also the key to building useful circuits and automated installations.

A useful variation of the what-if question is to help predict “bad” outcomes and estimate the risk of an experiment, such as: can shorting out my power supply cause real damage?

Starting tomorrow, I’m launching a new series on this weblog, titled “What-If Wednesday”. As far as I’m concerned, it can run as long as there are interesting questions I can answer, so please feel free to suggest lots of topics in the comments below. These weekly posts will be tagged What-If, and I’m also setting up a new wiki page to collect them all.

Sooo… please help me out folks, and send in some nice what-if questions!

(9)50 days and counting

In AVR, Hardware, Software on Apr 18, 2013 at 00:01

The new JeeNode Micro has joined the ranks of the test nodes running here to determine battery life in real time. The oldest one has been running over 2 and a half years now:


That LiPo battery has a capacity of 1300 mAh, and since it’s still running, this implies that the average current draw must be under 1300/24/950 ≈ 0.057 mA, i.e. 57 µA, including self-discharge.

The other two battery tests now running here are based on the new JeeNode Micro v3, i.e. this one – using a boost converter:


… and another one using a coin cell. Here’s a picture of a slightly older version:


The boost version is running off a rechargeable AA battery, of type Eneloop, which I use all over the house now. These batteries hold 1900 mAh, but there’s a substantial penalty for running off one AA cell with a boost converter:

  • energy is not free, i.e. drawing 10 µA at 3.3V means we will need to draw 30 µA at 1.1V, even if we could use a 100% efficient boost converter
  • real-world efficiency is more like 70..80% at these levels, so make that 40..45 µA
  • the boost converter has some idle current consumption, probably 10..20 µA, so this means we’ll draw 50..65 µA even if the JeeNode Micro only uses up 10 µA at 3.3V (actually, it’s 3.0V in the latest JNµ)

This would translate to 1900/0.065/24/365, i.e. ≈ 3-year life. Or perhaps 2, if we account for the Eneloop’s self-discharge (it retains about 85% of its charge after a year).

The coin cell option runs off a CR2032 battery, which is rated at about 225 mAh, i.e. considerably less than the above options. Still, since there are no boost converter losses, this translates to 225/0.010/24/365, i.e. ≈ 2.5 years of life if the JNµ draws only 10 µA.
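
All of these estimates boil down to the same division; here it is as a small helper of my own, using the figures quoted above:

```cpp
// Estimated battery life in years, from capacity in mAh and average draw in µA.
double lifeYears (double mAh, double microAmps) {
    double hours = mAh / (microAmps / 1000.0); // convert µA to mA first
    return hours / 24.0 / 365.0;
}
```

lifeYears(1900, 65) gives about 3.3 years for the boosted AA setup, and lifeYears(225, 10) about 2.6 years for the coin cell, before accounting for self-discharge.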

These figures look excellent, but keep in mind that 10 µA average power consumption is a very very optimistic value, particularly when there are sensors hooked up and packets are being sent out. I’d be quite happy with a 6-month battery life for a single AA or a coin cell.

For reference, here is an earlier post about all these power calculations.

Here are the current reports I’m getting via HouseMon:

Screen Shot 2013-04-17 at 11.12.08

That’s just about 50 days off a coin cell. Let’s see how it holds up. The nice bit in these tests is that the new nodes now report several different voltage level measurements as well (this also consumes some energy!). They haven’t dropped much yet, but when they do, I hope that we’ll be able to use the drop as a predictor.


Sharing Node ID’s

In Software on Mar 30, 2013 at 00:01

The RF12 driver has a 5-bit slot in its header byte to identify either the sender or the destination of a packet. That translates to 32 distinct values, of which node ID “0” and “31” are special (for OOK and catch-all use, respectively).

So within a netgroup (of which there can be over 250), you get at most 30 different nodes to use. Multiple netgroups are available to extend this, of course, but then you need to set up one central node “listener” for each netgroup, as you can’t listen to multiple netgroups with a single radio.

I’m using JeeNodes all over the place here at JeeLabs, and several of them have been in use for several years now, running on their own node ID. In fact, many have never been reflashed / updated since the day they were put into operation. So node ID’s are starting to become a bit scarce here…

For testing purposes, I use a different netgroup, which nicely keeps the packets out of the central HouseMon setup used for, eh… production. Very convenient.

But with the new JeeNode Micro, I actually would like to get lots more nodes going permanently. Especially if the coin cell version can be made to run for a long time, then it would be great to use them as door sensors, lots more room nodes, and also stuff outside the house, such as the mailbox, humidity sensors, and who knows what else…

Now the nice thing about send-only nodes such as door sensors, is that it is actually possible to re-use the same node ID. All we need is a “secondary ID” inside the packet.

I’ve started doing this with the radioBlip2 nodes I’m using to run my battery lifetime tests:

Screen Shot 2013-03-26 at 10.11.14

(BATT-0 is this unit, BATT-1 is a JNµ w/ coin cell, BATT-2 is a JNµ w/ booster on 1x AA)

This is very easy to do: add the extra unique secondary ID in the payload and have the decoder pick up that byte as a way to name the node as “BATT-<N>”. In this case, it’s even compatible with the original radioBlip node which does not have the ID + battery levels.

Here is the payload structure from radioBlip2, with secondary ID and two battery levels:

struct {
  long ping;      // 32-bit counter
  byte id :7;     // identity, should be different for each node
  byte boost :1;  // whether compiled for boost chip or not
  byte vcc1;      // VCC before transmit, 1.0V = 0 .. 6.0V = 250
  byte vcc2;      // battery voltage (BOOST=1), or VCC after transmit (BOOST=0)
} payload;

And here’s the relevant code snippet from the decoder in HouseMon:

  decode: (raw, cb) ->
    count = raw.readUInt32LE(1)
    result =
      tag: 'BATT-0'
      ping: count
      age: count / (86400 / 64) | 0
    if raw.length >= 8
      result.tag = "BATT-#{raw[5]&0x7F}"
      result.vpre = 50 + raw[6]
      if raw[5] & 0x80
        # if high bit of id is set, this is a boost node reporting its battery
        # this is ratiometric (proportional) w.r.t. the "vpre" just measured
        result.vbatt = result.vpre * raw[7] / 255 | 0
      else if raw[7]
        # in the non-boost case, the second value is vcc after last transmit
        # this is always set, except in the first transmission after power-up
        result.vpost = 50 + raw[7]
    cb result

In case you’re wondering: the battery voltage is encoded as 20 mV steps above 1.0V, allowing a single byte to represent values from 1.0 to 6.1V (drat, it’ll wrap when the boost version drops under 1.0V, oh well…).
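
That encoding scheme is easy to express as a pair of helpers (a sketch of the scheme just described, not the actual node or HouseMon code):

```cpp
#include <cstdint>

// One byte in 20 mV steps above 1.0 V: 0 => 1.0 V, 255 => 6.1 V.
uint8_t encodeVolts (double v) {
    return (uint8_t) ((v - 1.0) / 0.020 + 0.5); // round to nearest step
}

double decodeVolts (uint8_t b) {
    return 1.0 + 0.020 * b;
}
```

A 3.0 V reading encodes as 100, and the highest representable value is 1.0 + 255 × 0.02 = 6.1 V, which is why a boost node dropping under 1.0 V would wrap.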

Note that this trick works fine for send-only nodes, but it would need some additional logic in nodes which expect an ACK, since the ACK will be addressed to all nodes with the same ID. In many cases, that won’t matter, since presumably all the other nodes will be sound asleep, but for precise single-node ACKs, the reply will also need to include the extra secondary ID, and all the nodes will then have to check and compare it.

So now it’s possible to re-use a node ID for an entire “family” of nodes:

  • over 250 different netgroups are available for use
  • each netgroup can have a maximum of 30 different node ID’s
  • each node ID can be shared with over 250 send-only nodes

That’s over 1.8 million nodes supported by the RF12 driver. More than enough, I’d say :)
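
The arithmetic behind that total, spelled out as a quick check of my own:

```cpp
// netgroups x node IDs per group x secondary IDs per node ID
long totalNodes (long netgroups, long idsPerGroup, long secondaryIds) {
    return netgroups * idsPerGroup * secondaryIds;
}
// 250 netgroups * 30 node IDs * 250 secondary IDs = 1,875,000 send-only nodes
```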

Software development

In Software, Musings on Mar 28, 2013 at 00:01

As you probably know, I’m now doing all my software development in just two languages – C++ in the embedded world and JavaScript in the rest. Well, sort of, anyway:

  • The C++ conventions I use are very much watered down from where C++ has been going for some time (with all the STL machinery). I am deliberately not using RTTI, exceptions, namespaces, or templates, because they add far too much complexity.
  • Ehm… C++ templates are actually extremely powerful and would in fact be a big help in reducing code overhead on embedded processors, but: 1) they require a very advanced understanding of C++, 2) that would make my code even more impenetrable to others trying to understand it all, and 3) it can be very hard to understand and fix templating-related compiler error messages.
  • I also use C++ virtual functions sparingly, because they tend to generate more code, and what’s worse: VTables use up RAM, a scarce resource on the ATmega / ATtiny!
  • As for programming in JavaScript: I don’t, really. I write code in a dialect called CoffeeScript, which then gets compiled to JavaScript on-the-fly. The main reason is that it fixes some warts in the JavaScript syntax, and keeps the good parts. It’s also delightfully concise, although I admit that you have to learn to read it before you can appreciate the effect it has on making the essence of the logic stand out better.
  • There is an incredible book called CoffeeScript Ristretto by Reginald Braithwaite, which IMO is a must read. There is also a site which appears to have the entire content of that book online (although I find the PDF version more readable). Written in the same playful style as The Little Schemer, so be prepared to get your mind twisted by the (deep and valuable) concepts presented.
  • To those who dismiss today’s JavaScript, and hence CoffeeScript, on the basis of its syntax or semantics, I can only point to an article by Paul Graham, in which he describes The Blub Paradox. There is (much) more to it than meets the eye.
  • In my opinion, CoffeeScript + Node.js bring together the best ideas from Scheme (functions), Ruby (notation), Python (indentation), and Tcl (events).
  • If you’re craving for more background, check out How I Learned To Enjoy JavaScript and some articles it links to, such as JS: The Right Way and Idiomatic JavaScript.

I’m quite happy with the above choices now, even though I still feel frustratingly inept at writing CoffeeScript and working with the asynchronous nature and libraries of Node.js – but at every turn, the concepts do “click” – this really feels like the right way to do things, staying away from all the silliness of statically compiled languages and datatypes, threads, and blocking system calls. The Node.js community members have taken some very bold steps, adopted what people found worthwhile in Ruby on Rails and other innovations, and lived through the pain of the all-async nature by coming up with libraries such as Async, as well as other great ones like Underscore, Connect and Express, Mocha, and Marked.

I just came across a very nice site about JavaScript, called SuperHero, via this weblog post. Will be going through it soon, to try and improve my understanding and (hopefully) skills. If you like videos, check out PeepCode, i.e. this one on Node.js + Express + CoffeeScript.

As for the client-side JavaScript framework, I’m using AngularJS. There’s a nice little music player example here, with the JavaScript here, illustrating the general style quite well, IMO.

Isn’t it amazing how much knowledge and tools we have at our fingertips nowadays?

It seems to me that for software technologies and languages to gain solid traction and momentum, it will all come down to how “learnable” things are. Because we all start from scratch at some point. And we all only have so many hours in the day to learn things well. There is this never-ending struggle between picking the tools which are instantly obvious (but perhaps limited in the long run) vs. picking very advanced tools (which may take too much time to become proficient in). Think BASIC vs. Common Lisp, for example. It gets even worse as the world is moving on – how can anyone decide to “dive in” and go for the deep stuff, when it might all be obsolete by the time the paybacks really set in?

In my case, I want to not only build stuff (could have stayed with Tcl + JeeMon for that), but also take advantage of what others are doing, and – with a bit of luck – come up with something which some of you find attractive enough to build upon and extend. Time will tell whether the choices made will lead there…

One other very interesting new development I’d like to mention in this context, is the Markdown-based Literate Programming now possible with CoffeeScript: see this weblog post by Jeremy Ashkenas, the inventor & implementor of CoffeeScript. I’m currently looking into this, alongside more traditional documentation tools such as Docco and Groc.

Would it make sense to write code for HouseMon as a story? I have no idea, yet…

Avrdude in CoffeeScript

In AVR, Software on Mar 27, 2013 at 00:01

Here’s a fun project – rewriting avrdude in CoffeeScript, as I did a while back with Tcl:

{SerialPort} = require 'serialport'

pageBytes = 128

avrUploader = (bytes, tty, cb) ->
  serial = new SerialPort tty, baudrate: 115200

  done = (err) ->
    serial.close ->
      cb err

  timer = null
  state = offset = 0
  reply = ''

  states = [ # Finite State Machine, one function per state
    -> # get in sync with the boot loader
      ['0 ']
    -> # set the device programming parameters, i.e. the page size
      buf = new Buffer(20)
      buf.fill 0
      buf.writeInt16BE pageBytes, 12
      ['B', buf, ' ']
    -> # enter programming mode
      ['P ']
    -> # set the word address, or skip ahead once all bytes have been written
      state += 1  if offset >= bytes.length
      buf = new Buffer(2)
      buf.writeInt16LE offset >> 1, 0
      ['U', buf, ' ']
    -> # program one page of flash memory, then loop back to the address state
      state -= 2
      count = Math.min bytes.length - offset, pageBytes
      buf = new Buffer(2)
      buf.writeInt16BE count, 0
      pos = offset
      offset += count
      ['d', buf, 'F', bytes.slice(pos, offset), ' ']
    -> # leave programming mode
      ['Q ']
  ]

  next = ->
    if state < states.length
      serial.write x  for x in states[state++]()
      reply = ''
      timer = setTimeout (-> done state), 300
    else
      done()

  serial.on 'open', next

  serial.on 'error', done

  serial.on 'data', (data) ->
    reply += data
    if reply.slice(-2) is '\x14\x10'
      clearTimeout timer
      next()

And here’s a little test which uploads the RF12demo sketch into a JeeNode over serial:

fs = require 'fs'
hex = fs.readFileSync '/Users/jcw/Desktop/RF12demo.cpp.hex', 'ascii'

hexToBin = (code) ->
  data = ''
  for line in code.split '\n'
    count = parseInt line.slice(1, 3), 16
    if count and line.slice(7, 9) is '00'
      data += line.slice 9, 9 + 2 * count
  new Buffer(data, 'hex')

avrUploader hexToBin(hex), '/dev/tty.usbserial-AH01A0GD', (err) ->
  console.error 'err', err  if err
  console.log hexToBin(hex).length

Just copy it all into a file and run it with the coffee command.

It’s fairly dense code, but as you can see, the stk500v1 protocol requires very little logic!

It’s a software goose (again)

In AVR, Software on Mar 26, 2013 at 00:01

Yesterday’s weblog post documented my attempts to debug a problem while programming the JeeNode Micro with a Flash Board. I really thought it was a hardware bug.

The fact that the results changed slightly as I tried different things, yet were really consistent and repeatable when retrying exactly the same steps over and over again, should have caused an alarm bell to ring. Alas, it took me quite some time to figure it out: repeatable is a hint that the problem might be a software issue, not hardware.

So the next step was to use the data dump feature from the scope to collect the data written as well as the data being read back. These can be saved as CSV files, and with a bit of scripting, I ended up with this comparison:

Screen Shot 2013-03-19 at 10.47.35

This shows three editor windows super-imposed and lined up with, from left to right:

  • the data read back
  • the data written
  • the hex file with the compiled sketch

As you can see, the writes are exactly what they should be – it’s the read which gives junk. And it’s not even that far off – just a few bits!

Looking closer, it is clear that some bits are “0” where they should have been “1”.

Uh, oh – I think I know what’s going on…

It looks like the chip hasn’t gone through an erase cycle! Flash programming works as follows: you “erase” the memory to set all the bits to “1”, and then you “program” it to set some of them to “0”. Programming can only flip bits from “1” to “0”!
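
This one-way behaviour is easy to model: programming amounts to a bitwise AND with the cell's current contents, and only an erase sets all the bits back to one. A tiny model of my own, not actual flash code:

```cpp
#include <cstdint>

// A single flash cell: program() can only clear bits, erase() sets them all.
struct FlashCell {
    uint8_t value = 0xFF;                          // erased state: all ones
    void erase () { value = 0xFF; }
    void program (uint8_t data) { value &= data; } // bits only go 1 -> 0
};
```

Writing 0xCD over a cell still holding 0xAB leaves 0x89: exactly the kind of few-bits-off read-back seen on the scope. After an erase, the same write sticks.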

And indeed… that was the problem. The Arduino boot loaders are set up to auto-erase on request, so avrdude is called from the IDE with “bulk erase” disabled. That way, the IDE just sends the pages it wants to program, and the boot loader will make sure the erase is performed just before writing the page.

In this case, however, there is no boot loader: we are using the normal ISP conventions, whereby erasing and programming must be explicitly requested.

It was a matter of dropping the “-D” flag from the avrdude command in platform.txt:


And now it works, flawlessly! Doh – this “bug” sure has kept me busy for a while…

Wild goose chase

In AVR, Hardware, Software on Mar 25, 2013 at 00:01

Sometimes, when messing with stuff, you end up in a blind alley which makes no sense…

This happened to me a few days ago, as I was trying to get a good setup going for programming the new JeeNode Micro v3.

The code in the IDE was working, I was able to program the JNµ via avrdude on the command line, but for some crazy reason, the same thing just kept failing when launched through the new 1.5.2 Arduino IDE. Getting that to work would make all the difference in the world, since it means you don’t need to install anything other than the IDE (on any platform) and then simply drop in the ide-hardware folder.

Very odd: a Flash Board which works on one Mac with avrdude, but fails with the IDE, which also uses avrdude. Time to set up the scope in Logic Analyser mode! Lots of wires, lots of configuration to identify all the signals, and then the fun starts: locating the spot in the data stream which would allow me to see what was going on.

Here is a peek at the SPI bus, decoded where it is reporting the ATtiny84 signature bytes:


Looks ok. Next, instead of looking at the SPI bus, I looked at the 9600 baud serial data:


It’s a slower time scale, since the bit stream is not as fast, thus allowing more information to be captured (2 M samples are not that much when sampling at fixed time intervals, since you have to sample often enough to reliably decode the data stream).

I even suspected the power supply lines, and had a look at VCC as an analog trace:


And although it’s not very clean, this wasn’t the problem either. Drat…

Since the verification was failing, I decided to capture the data read-back in the second half of the programming process. The way to do this, is to set the trigger on the RESET line going low at the start, and then delaying the capture by about 4 seconds. As you can see in that last screenshot, this is the point where avrdude switches from writing data into flash memory to reading the data back.

And sure enough, the data read back was wrong. Not much, but just some bits flipped – consistently, and reproducibly!

It wasn’t a hardware problem after all! A wild goose chase: final episode tomorrow…

Programming the JNµ – at last!

In AVR, Hardware, Software on Mar 21, 2013 at 00:01

Yesterday’s post shows that the JNµ can easily be programmed using the standard Arduino IDE, if you get all the pieces right (isn’t that always the case?):

Screen Shot 2013-03-19 at 14.34.08

The easiest way to get the connections right, is to assemble a custom cable (more):


What about the software setup? Well, that too is now very simple:

  • download and install a copy of the Arduino IDE 1.5.2
  • create a folder called “hardware” inside your IDE’s “sketchbook” folder
  • download or – preferably – clone the new ide-hardware package from GitHub
  • rename it to “jeelabs” (!) and move it inside the “hardware” folder

The new jcw/ide-hardware project on GitHub was adapted from the arduino-tiny project on Google Code. I didn’t want to wait for that project (it hasn’t been updated for over 6 months), and decided to create a fork on GitHub instead. I did find two other such forks – rambo/arduino-tiny and james147/arduino-tiny, but neither of them appears to offer a major advantage – and since I had to rearrange things to make it work with the IDE 1.5.x structure, there does not seem to be much point in basing things off those forks.

The result of all these steps is that you now have the following new options:

  • use the Tools -> Programmer menu to select “Arduino as ISP”
  • select “JeeNode Micro” from the Tools -> Boards menu to build for that setup
  • use the Tools -> Burn Bootloader menu to set up the fuses
  • use the standard “Upload” button to upload a sketch using this ISP-based setup:

    Screen Shot 2013-03-20 at 16.42.05

Those two warnings are harmless, and can be ignored (I don’t know how to avoid them).

That’s it!

Update – Fixed a problem with setting fuses, make sure you use latest code from github.

Programming the JNµ v3 – part 2

In AVR, Software on Mar 7, 2013 at 00:01

It’s a long and winding road, that path to finding an easy way to program the JeeNode Micro v3, as mentioned two days ago. With emphasis on the “easy” adjective…

Let me at least describe the process details of what I’ve been using so far:

  1. A custom cable to connect the JNµ to an ISP programmer
  2. Installed copy of the “arduino-tiny” project from Google Code
  3. A trick to get the Arduino IDE compile the sketch into a HEX file
  4. Setting up the JeeNode + Flash Board as an ISP programmer
  5. Using avrdude directly, to set the fuses and upload the sketch into the JNµ

It works, in fact it works fine once you’ve set it all up – but easy? Hm…

Let’s go through each of these in turn.

1. Custom cable

This was described a few days ago. Let me just add that it’s very easy to take apart the connectors on the Extension Cable, once you know the trick:


It’s all a matter of gently lifting the plastic tabs, then the wires come sliding out. Note that this is the wrong end of the cable for our case – but the other end works the same.

2. Arduino-tiny

These files on Google Code contain all the necessary code to trick the Arduino IDE into compiling for ATtiny processors, including the ATtiny84 used in the JNµ.

It works with IDE 1.0.x as is, if you follow the instructions, but I’ve been reorganising it to meet the requirements of the new multi-architecture IDE 1.5.x series, which would support this as a straight drop-in. I’m still working on it, but it means you can then just press the “Upload” button – a lot simpler than the process being described in this post!

3. Hex files

Normally, the Arduino IDE takes care of everything: compilation as well as uploading. However, for this setup, we need to get hold of the raw HEX file that gets generated before uploading. There’s a preference in the IDE for that:

Screen Shot 2013-03-06 at 10.22.12

Now the nasty bit: you have to compile (“run” in Arduino IDE terms, go figure!) the sketch to see the output that gets generated, and then copy the full path from the output window:

Screen Shot 2013-03-06 at 10.23.36

In this case, it’s that “/var/folders/…/radioBlip2.cpp.hex” thing.

4. ISP programmer

To turn the JeeNode + Flash Board combo into an ISP programmer, upload the isp_flash.ino to it. Make sure you get the very latest version, which uses 9600 Baud for communication, as that makes it compatible with the “Arduino as ISP” programmer:

Screen Shot 2013-03-06 at 15.36.07

As mentioned before, you don’t really need the Flash Board at this stage – a couple of wires will also work fine (see these posts about all the ways this can be done).

5. Avrdude

The last step is to call the “avrdude” uploader from the command line, specifying all the arguments needed, and pasting in that full file name. In my case, it was something like:

avrdude -v -v -pattiny84 -cstk500v1 -P/dev/tty.usbserial-A600K04C \
  -b9600 -D -Uflash:w:/var/folders/.../radioBlip2.cpp.hex:i

Oh, and if this is the first time you’re programming this ATtiny84 chip, then you also have to first set the fuses properly, as follows:

avrdude -v -v -pattiny84 -cstk500v1 -P/dev/tty.usbserial-A600K04C \
  -b9600 -e -U lfuse:w:0xC2:m -U hfuse:w:0xDF:m -U efuse:w:0xFF:m

That’s the internal RC oscillator with fastest startup, and the brown-out level set to 1.8V.
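For reference, that low fuse byte can be picked apart bit by bit — here's a quick sketch in JavaScript, assuming the standard ATtiny84 low-fuse bit layout from the datasheet (CKDIV8, CKOUT, SUT1..0, CKSEL3..0):

```javascript
// Decode an ATtiny84 low fuse byte into its bit fields.
// Note: AVR fuse bits are "programmed" when 0, so a 1 means "off".
function decodeLowFuse(lfuse) {
  return {
    ckdiv8: (lfuse >> 7) & 1,    // 1 = clock NOT divided by 8
    ckout:  (lfuse >> 6) & 1,    // 1 = no clock output on pin
    sut:    (lfuse >> 4) & 0b11, // start-up time selection
    cksel:  lfuse & 0b1111,      // clock source selection
  };
}

const f = decodeLowFuse(0xC2);
console.log(f); // { ckdiv8: 1, ckout: 1, sut: 0, cksel: 2 }
// cksel = 0b0010 selects the 8 MHz internal RC oscillator,
// sut = 0b00 is the fastest start-up option for that source.
```

Nothing magical about the 0xC2 value, in other words — it's just those bit fields packed into one byte.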

As I said, this is the messy way to do things. I’ll try to have something simpler soon!

HouseMon 0.5.1

In Software on Feb 27, 2013 at 00:01

It feels a bit odd to give this piece of software a version number, since it’s all so early still, but I’m releasing it as HouseMon 0.5.1 anyway, to set a baseline for future development.

Note that the instructions so far were all about development. To get an idea of the true performance on a Raspberry Pi, you should start up as follows (make sure HouseMon is not running already): cd ~/housemon && SS_ENV=production node app.js – this’ll take a few minutes while SocketStream combines and “minifies” everything the first time around. After that, all web accesses will be a lot snappier than in development mode.

But let’s not get ahead of ourselves… tons of work to do before HM becomes useful!

I suspect that this project will lead to more questions than answers at this stage, but I guess that’s what you get when following the “release early, release often” approach of open source software. There seems to be no other way to go through this than to just bite the bullet, get a basic release out into the wild, and see how it holds up when installed in more places.

Here’s an attempt to map out what’s in the 0.5.x release:


The dotted lines indicate that logged and archived data is being saved but not used yet (logged data is the raw incoming text from serial interfaces, etc – archived data is per-sensor information, aggregated hourly).

This diagram is still too complicated for my tastes. Part of the process is getting to grips with complexity: multiple sensor values per reading, “de-multiplexing” different types of packets from the ookRelay, mapping node ID’s to locations, ignoring some repeated data which contains no extra information, and more real-world messiness…

There is currently very little error checking and error messages can (will!) be cryptic.

Which is all a long-winded way of saying that HouseMon is still in its very early stages. Let’s see how it evolves, and please do comment and hack on it if you are interested.

Note that HouseMon is a moving target. The “develop” branch is already moving ahead:

Screen Shot 2013-02-24 at 10.11.57

As you can see, it’s all made up of “briqs” which can be added and enabled as needed.

PS. I’ve added some documentation on how to track changes in the development branch.

Issues on GitHub

In News, Software on Feb 26, 2013 at 00:01

Since doing more and more with GitHub, I’ve been getting increasingly impressed with the features and workflow of that website. One of the things which really turned me around, was its issue tracker. This is already in use on projects such as EtherCard, as you can see:

Screen Shot 2013-02-19 at 22.39.00

The way things are implemented, including mail integration, keyboard shortcuts, and the fact that issue pages update in real time (making them little real-time chat environments) has convinced me that it really is the best place to deal with these things for all JeeLabs projects. This means I’m also not going to enable issue tracking in Redmine after all.

I’m not too concerned about lock-in, because all source code, its history, and even the entire issue tracking dataset is available for download from their site via a REST API.

I’ve also been using GitHub issues quite a bit for HouseMon lately, and have just started using its “milestones” feature, to help organise and plan the pile of work ahead:

Screen Shot 2013-02-19 at 22.20.22

Sooo… if you have a problem with JeeLib, EtherCard, etc. and want to report a bug, feel free to do so on the GitHub site. This doesn’t replace or interfere with the discussion forums on the JeeLabs “Café” in any way, as these continue to be the place for discussion and general questions and support. But for obvious bugs and clear-cut enhancement requests, GitHub is the place to be from now on…

As some people have started to discover, GitHub is also the fastest way to get patches and new features implemented – the mechanism for this is called “forking” and “submitting a pull request” (me pulling your changes into the official libraries, that is).

Here is a nice write-up about the process, written by the developers of “ThinkUp” – but it really applies to any project on GitHub.

As for release versioning, I’ll be using this approach for HouseMon from now on. This means that if you want to follow along and see the latest changes, you need to enter git checkout develop (once). The “master” branch will only update on the next release.

PS. Still struggling with all the terms and “features” of git. Thanks to @tht on the forum, I’m looking into SourceTree (for Mac), which acts as GUI wrapper around git commands, but I can’t say the coin has dropped quite yet. Things like merge vs rebase sound logical, but I’m sure I’ll get it wrong several more times before getting the hang of it…

Combined usage patterns

In Software on Feb 25, 2013 at 00:01

It’s going to be an interesting puzzle to try and deduce what all the different consumers are, just by looking at detailed electricity consumption graphs, such as this one:

Screen Shot 2013-02-18 at 16.03.20

I’ve already determined that the “double dip” pattern, such as the one from 15:41 to 15:52, is the thermostat + boiler “hunting” around to maintain its setpoint temperature.

The trouble, of course, is that this whole-house measurement approach doesn’t measure individual appliances – as the term says, it can only report the total of all appliance consumption patterns. In this particular graph, you can see a second consumer kicking in around 15:15 and switching off again at 15:56. Fortunately, this one is very characteristic as well: there’s a motor turning on, causing a brief, very sharp blip at the start. It’s the kitchen fridge, in fact – and as the graph shows, it draws about 80 W.

In the general sense, this is a knapsack-style problem, impossible to solve with certainty. But that doesn’t mean we can’t get good insight into what’s going on around the house anyway.

There are several more known sources and consumption patterns around here, and once HouseMon is a bit further along, I hope to be able to identify them all in the collected data, to obtain fairly good estimates of the monthly consumption for each one of them. Also, once these patterns are known with sufficient accuracy, it should be possible to subtract them from the graph, so to speak, and get a better view of the rest of the measurements.
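To give a flavour of what such matching could look like: here's a toy sketch, with made-up appliance names and wattages, of checking which combinations of known loads could explain a measured total. This is just an illustration of the idea (a small subset-sum search), not HouseMon code:

```javascript
// Given known appliance wattages, find which subsets could explain a
// measured total load, within a tolerance. Brute force is fine for a
// handful of appliances (2^n combinations). All names/wattages made up.
function explainLoad(appliances, totalWatts, tolerance = 5) {
  const names = Object.keys(appliances);
  const matches = [];
  for (let mask = 0; mask < (1 << names.length); mask++) {
    let sum = 0;
    const subset = [];
    names.forEach((name, i) => {
      if (mask & (1 << i)) { sum += appliances[name]; subset.push(name); }
    });
    if (subset.length && Math.abs(sum - totalWatts) <= tolerance)
      matches.push(subset);
  }
  return matches;
}

const known = { fridge: 80, boiler: 140, standby: 65 };
console.log(explainLoad(known, 220)); // → [ [ 'fridge', 'boiler' ] ]
```

In reality the switch-on blips and timing patterns carry far more information than the steady-state wattages alone, which is what makes the fridge so easy to spot.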

Who needs a database?

In Software on Feb 18, 2013 at 00:01

Yesterday’s post was about the three (3!) ways HouseMon manages data. Today I’ll describe the initial archive design in HouseMon. Just to reiterate its main properties:

  • archives are redundant – they can be reconstructed from scratch, using the log files
  • data in archives is aggregated, with one data point per hour, and optimised for access
  • each hourly aggregation contains: a count, a sum, a minimum, and a maximum value

The first property implies that the design of this stuff is absolutely non-critical: if a better design comes up later, we can easily migrate HouseMon to it: just implement the new code, and replay all the log files to decode and store all readings in the new format.

The second decision was a bit harder to make, but I’ve chosen to save only hourly values for all data older than 48 hours. That means today and yesterday remain accessible from Redis with every single detail still intact, and for things like weekly, monthly, and yearly graphs it always resolves to hourly values. The simplification over progressive RRD-like aggregation is that there really are only two types of data, and that archived data access is instant, based on time range: simple file seeking.

Which brings me to the data format: I’m going to use Plain Old Binary Data Files for all archive data. No database, nothing – although a filesystem is just as much a database as anything, really (as “git” also illustrates).

Note that this doesn’t preclude also storing the data in a “real” database, but as far as HouseMon is concerned, that’s an optional export choice (once the code for it exists).

Here’s a sample “data structure” of the archive on disk:


The “index.json” file is a map from parameter names to a unique ID (ids 1 and 2 are used in this example). The “p123” directory has the data for time slot 123. This is in multiples of 1024 hours, as each data file holds 1024 hourly slots. So “p123” would be unix time 123 x 1024 x 3600 = 453427200 (or “Tue May 15 00:00:00 UTC 1984”, if you want to know).

Each file has 1024 hourly data points and each data point uses 16 bytes, so each data file is 16,384 bytes long. The reason they have been split up is to make varying dataset collections easy to manage, and to allow optional compression. Conceptually, the archive is simply one array per parameter, indexed by the absolute hour since Jan 1st, 1970 at midnight UTC.
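That slot arithmetic is trivial to express in code. A sketch, assuming the layout just described — note that the per-parameter file naming inside each directory isn't spelled out in this post, so the `1.dat` convention here is my own invention:

```javascript
// Locate the archive slot for a given parameter id and unix time (seconds).
// Directories hold 1024-hour periods; each data point is 16 bytes.
function archiveSlot(paramId, unixTime) {
  const hour = Math.floor(unixTime / 3600); // hours since Jan 1st, 1970 UTC
  const period = Math.floor(hour / 1024);   // which "pNNN" directory
  const offset = (hour % 1024) * 16;        // byte offset within the file
  return { dir: `p${period}`, file: `${paramId}.dat`, offset };
}

console.log(archiveSlot(1, 453427200));
// → { dir: 'p123', file: '1.dat', offset: 0 }
```

No searching, no index lookups — any hour of any parameter is one seek away.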

The 16 bytes of each data point contain the following information in little-endian order:

  • a 4-byte header
  • the 32-bit sum of all measurement values in this hour
  • the 32-bit minimum value of all measurements in this hour
  • the 32-bit maximum value of all measurements in this hour

The header contains a 12-bit count of all measurement values (i.e. 0..4095). The other 20 bits are reserved for future use (such as a few extra bits for the sum, if we ever need ’em).

The average value for an hour is of course easy to calculate from this: the sum divided by the count. Slots with a zero count represent missing values.
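Here's what encoding and decoding such a 16-byte data point might look like, using Node's Buffer — a sketch of the layout as described above (the function names are mine, and the reserved header bits are simply left zero):

```javascript
// Encode/decode one 16-byte archive data point (little-endian).
// Header: low 12 bits = count, upper 20 bits reserved (left zero here).
function encodePoint(count, sum, min, max) {
  const buf = Buffer.alloc(16);
  buf.writeUInt32LE(count & 0x0FFF, 0); // header with 12-bit count
  buf.writeInt32LE(sum, 4);
  buf.writeInt32LE(min, 8);
  buf.writeInt32LE(max, 12);
  return buf;
}

function decodePoint(buf) {
  const count = buf.readUInt32LE(0) & 0x0FFF;
  return {
    count,
    sum: buf.readInt32LE(4),
    min: buf.readInt32LE(8),
    max: buf.readInt32LE(12),
    avg: count ? buf.readInt32LE(4) / count : null, // zero count = missing
  };
}

// E.g. one reading per minute of a value hovering around 230:
const p = decodePoint(encodePoint(60, 13800, 225, 235));
console.log(p.avg); // sum / count = 13800 / 60 = 230
```

Any language with basic binary I/O can read these files back, which is the whole point of keeping the format this dumb.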

Not only are these files trivial to create, read, modify, and write – it’d also be very easy to implement code which does this in any programming language. Simplicity is good!

Note that there are several trade-offs in this design. There can be no more than 4095 measurements per hour, i.e. slightly more than one per second, for example. And measurements have to be 32-bit signed ints. But I expect that these choices will work out just fine, and as mentioned before: if the format isn’t good enough, we can quickly switch to a better one. I’d rather start out really simple (and fast), than come up with a sophisticated setup which adds too little value and too much complexity.

As far as access goes, you can see that no searching at all is involved to access any time range for any parameter. Just open a few files, send the extracted data points to the browser (or maybe even entire files), and process the data there. If this becomes I/O bound, then per-file compression could be introduced – readings which are only a small integer or which hardly ever change will compress really well.

Time will tell. Premature optimisation is one of those pitfalls I’d rather avoid for now!

Data, data, data

In Software on Feb 17, 2013 at 00:01

If all good things come in threes, then maybe everything that is a triple is good?

This post is about some choices I’ve just made in HouseMon for the way data is managed, stored, and archived (see? threes!). The data in this case is essentially everything that gets monitored in and around the house. This can then be used for status information, historical charts, statistics, and ultimately also some control based on automated rules.

The measurement “readings” exist in the following three (3!) forms in HouseMon:

  • raw – serial streams and byte packets, as received via interfaces and wireless networks
  • decoded – actual values (e.g “temperature”), as integers with a fixed decimal point
  • formatted – final values as shown on-screen, formatted and with units, i.e. “17.8 °C”

For storage, I’ll be using three (3!) distinct mechanisms: log files, Redis, and archive files.

The first decision was to store everything coming in as raw text in daily log files. With rollover at midnight (UTC), and tagged with the interface and a time stamp in milliseconds. This format has been in use here at JeeLabs since 2008, and has served me really well. Here is an example, taken from the file called “20130211.txt”:

    L 01:29:55.605 usb-AH01A0GD OK 19 115 113 98 25 39 173 123 1
    L 01:29:58.435 usb-AH01A0GD OK 9 4 50 68 235 251 166 232 234 195 72 251 24
    L 01:29:58.435 usb-AH01A0GD DF S 6373 497 68
    L 01:29:59.714 usb-AH01A0GD OK 19 96 13 2 11 2 30 0

Easy to read and search through, but clearly useless for seeing the actual values, since these are the RF12 packets before being decoded. The benefit of this format is precisely that it is as raw as it gets: by storing this on file, I can improve the decoders and fix the inevitable bugs which will crop up from time to time, then simply re-parse the files and run them through the decoders again. Given that the data comes from sketches which change over time, and which can also contain bugs, the mapping of which decoders to apply to which packets is an important one, and will in fact depend on the timeline: the same node may have been re-used for a different sketch, with a different packet format (though this has rarely happened here once a node has been put to permanent use).
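Splitting such a log line back into its parts is a one-liner in any language. A minimal sketch — the field names are my own reading of the format (tag, timestamp, interface, raw packet text):

```javascript
// Parse one raw daily-log line, e.g.:
//   "L 01:29:55.605 usb-AH01A0GD OK 19 115 113 98 25 39 173 123 1"
function parseLogLine(line) {
  const [tag, time, iface, ...rest] = line.trim().split(/\s+/);
  return {
    tag,                  // "L" prefix on each line
    time,                 // HH:MM:SS.mmm timestamp
    iface,                // serial interface name
    text: rest.join(' '), // raw packet text, still undecoded
  };
}

const r = parseLogLine('L 01:29:59.714 usb-AH01A0GD OK 19 96 13 2 11 2 30 0');
console.log(r.iface, '->', r.text);
```

Replaying years of logs is then just a matter of streaming each file through this and handing the text to whichever decoder applies at that point in the timeline.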

In HouseMon, the “logger” briq now ties into serial I/O and creates such daily log files.

The second format is more meaningful. This holds “readings” such as:

    { id: 1, group:5, band: 868, type: 'roomNode', value: 123, time: ... }

… which might represent a 12.3°C reading in the living room, for example.

This is now stored in Redis, using the new “history” briq (a “briq” is simply an installable module in HouseMon). There is one sorted set per parameter, to which new readings (i.e. integers) are added as they come in. To support staged archive storage, the sorted sets are segmented per 32 hours, i.e. there is one sorted set per parameter per 32-hour period. At most two periods are needed to store full details of every reading from at least the past 24 hours. With two periods saved in Redis for each parameter, even a setup with a few hundred parameters will require no more than a few dozen megabytes of RAM. This is essential, given that Redis keeps all its data in memory.
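The 32-hour segmentation is again just integer arithmetic on the hour count. A sketch — the actual Redis key naming in HouseMon isn't given here, so the `hist:` prefix below is purely hypothetical:

```javascript
// Compute which 32-hour segment a reading falls into, and a (hypothetical)
// Redis sorted-set key for it. Only the two most recent segments per
// parameter need to stay in Redis; older ones get archived and dropped.
function segmentKey(param, unixTime) {
  const hour = Math.floor(unixTime / 3600);
  const segment = Math.floor(hour / 32);
  return `hist:${param}:${segment}`;
}

console.log(segmentKey('roomNode.temp', 453427200));
```

Keeping whole segments as units makes the periodic move to archival storage a simple matter of picking every segment older than the newest two.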

And lastly, there is a process which runs periodically, to move data older than two periods ago into “archival storage”. These are not round-robin databases, in the sense of a circular buffer which gets overwritten as new data comes in and wraps around, but they do use a somewhat similar format on disk. Archival storage can grow indefinitely; here I expect to end up with about 50..100 MB per year once all the log files have been re-processed. Less, if compression is used (to be decided once the speed trade-offs of the RPi have been measured).

The files produced by the “archive” briq have the following three (3!) properties:

  • archives are redundant – they can be reconstructed from scratch, using the log files
  • data in archives is aggregated, with one data point per hour, and optimised for access
  • each hourly aggregation contains: a count, a sum, a minimum, and a maximum value

I’ll describe the archive design choices and tentative file format in the next post.

HouseMon resources

In AVR, Hardware, Software, Musings, Linux on Feb 6, 2013 at 00:01

As promised, a long list of resources I’ve found useful while starting off with HouseMon:

JavaScript – The core of what I’m building now is centered entirely around “JS”, the language behind many sites on the web nowadays. There’s no way around it: you have to get to grips with JS first. I spent several hours watching most of the videos on Douglas Crockford’s site. The big drawback is the time it takes…

Best book on the subject, IMO, if you know the basics of JavaScript, is “JavaScript: The Good Parts” by the same author, ISBN 0596517742. Understanding what the essence of a language is, is the fastest way to mastery, and his book does exactly that.

CoffeeScript – It’s just a dialect of JS, really – and the way HouseMon uses it, “CS” automatically gets compiled (more like “expanded”, if you ask me) to JS on the server, by SocketStream.

The most obvious resource, the CoffeeScript home page, is also one of the best ways to understand it. Make sure you are comfortable with JS, even if not in practice, before reading that home page top to bottom. For an intriguing glimpse of how CS code can be documented, see this example from the CS compiler itself (pretty advanced stuff!).

But the impact of CS goes considerably deeper. To understand how Scheme-like functional programming plays a role in CS, there is an entertaining (but fairly advanced) book called CoffeeScript Ristretto by Reginald Braithwaite. I’ve read it front-to-back, and intend to re-read it in its entirety in the coming days. IMO, this is the book that cuts to the core of how functions and objects work together, and how CS lets you write on a high conceptual level. It’s a delightful read, but be prepared to scratch your head at times…

For a much simpler introduction, see The Little Book on CoffeeScript by Alex MacCaw, ISBN 1449321046. Also available on GitHub.

Node.js – I found the Node.js in Action book by Mike Cantelon, TJ Holowaychuk and Nathan Rajlich to be immensely useful, because of how it puts everything in context and introduces all the main concepts and libraries one tends to use in combination with “Node”. It doesn’t hurt that one of the most prolific Node programmers also happens to be one of the authors…

Another useful resource is the API documentation of Node itself.

SocketStream – This is what takes care of client-server communication, deployment, and it comes with many development conveniences and conventions. It’s also the least mature of the bunch, although I’ve not really encountered any problems with it. I expect “SS” to evolve a bit more than the rest, over time.

There’s a “what it is and what it does” type of demo tour, and there is a collection on what I’d call tech notes, describing a wide range of design docs. As with the code, these pages are bound to change and get extended further over time.

Redis – This is a little database package which handles a few tasks for HouseMon. I haven’t had to do much to get it going, so the README plus Command Summary were all I’ve needed, for now.

AngularJS – This is the most framework-like component used in HouseMon, by far. It does a lot, but the challenge is to understand how it wants you to do things, and although “NG” is not really an opinionated piece of software, there is simply no other way to get to grips with it than to take the dive and learn, learn, learn… Let me just add that I really think it’s worth it – NG can be magic on the client side, and once you get the hang of it, it’s in fact an extremely expressive way to create a responsive app in the browser, IMO.

There’s an elaborate tutorial on the NG site. It covers a lot of ground, and left me a bit overwhelmed – probably because I was trying to learn too much as quickly as possible…

There’s also a video, which gives a very clear idea of NG, what it is, how it is used, etc. Only downside is that it’s over an hour long. Oh, and BTW, the NG FAQ is excellent.

For a broader background on this sort of JS framework, see Rich JavaScript Applications by Steven Sanderson. An eye opener, if you’ve not looked into RIAs before.

Arduino – Does this need any introduction on this weblog? Let me just link to the Reference and the Tutorial here.

JeeNode – Again, not really much point in listing much here, given that this entire weblog is more or less dedicated to that topic. Here’s a big picture and the link to the hardware page, just for completeness.

RF12 – This is the driver used for HopeRF’s wireless radio modules, I’ll just mention the internals weblog posts, and the reference documentation page.

Vim – My editor of choice, lately. After many years of using TextMate (which I still use as code browser), I’ve decided to go back to MacVim, because of the way it can be off-loaded to your spine, so to speak.

There’s a lot of personal preference involved in this type of choice, and there are dozens of blog posts and debates on the web about the pros and cons. This one by Steve Losh sort of matches the process I am going through, in case you’re interested.

Best way to get into vim? Install it, and type “vimtutor”. Best way to learn more? Type “:h<CR>” in vim. Seriously. And don’t try to learn it all at once – the goal is to gradually migrate vim knowledge into your muscle memory. Just learn the base concepts, and if you’re serious about it: learn a few new commands each week. See what sticks.

To get an idea of what’s possible, watch some videos – such as the vim entries on the DAS site by Gary Bernhardt (paid subscription). And while you’re at it: take the opportunity to see what Behaviour Driven Design is like, he has many fascinating videos on the subject.

For a book, I very much recommend Practical Vim by Drew Neil. He covers a wide range of topics, and suggests reading up on them in whatever order is most useful to you.

While learning, this cheatsheet and wallpaper may come in handy.

Raspberry Pi – The little “RPi” Linux board is getting a lot of attention lately. It makes a nice setup for HouseMon. Here are some links for the hardware and the software.

Linux – Getting around on the command line in Linux is also something I get asked about from time to time. This is important when running Linux as a server – the RPi, for example.

I found a resource which appears to do a good job of explaining all the basic and intermediate concepts. It’s also available as a book, called “The Linux Command Line” by William E. Shotts, Jr. (PDF).

There… the above list ought to get you a long way with all the technologies I’m currently messing around with. Please feel free to add pointers and tips in the comments, if you think of other resources which can be of use to fellow readers in this context.

Dive Into JeeNodes

In AVR, Hardware, Software, Linux on Feb 1, 2013 at 00:01

Welcome to a new series of limited-edition posts from JeeLabs! Read ’em while they last!

Heh… just kidding. They’ll last forever of course, as does everything on this thing called internet. But what I’m going to describe in probably a dozen posts or so is the following:


Hm, that doesn’t quite explain it, I guess. Let me try again:

JC's Grid, page 63

So this is to announce a new “DIJN” series of weblog posts, describing how to set up your own Wireless Sensor Network with JeeNodes, as well as the infrastructure to report a measured light-level somewhere in your house, in real time. The end result will be fully automated and autonomous – you could take your mobile phone, point it to your web server via WiFi, and see the light level as it is that very moment, adjusting as it changes.

This is a far cry from a full-fledged home monitoring or home automation system, clearly – but on the other hand, it’ll have all the key pieces in place to explore whatever direction interests you: ready-made sensors, DIY sensors, your own firmware on the remote nodes, your own web pages and automation logic on the central server… it’s up to you!

Everything is open source, which in this context matters a lot, because that also means that you can really dive into any aspect of this to learn and explore the truly magical world of Physical Computing, Wireless Sensor Networks, environmental sensing and control, as well as state-of-the art web technologies.

The focus will be on describing every step needed to implement this from scratch. I’ll cover setting up all the necessary software and hardware, in such a way that if you know next to nothing about any of the domains involved, you can still follow along and try it out – whether your background is in software, electronics, wireless, or none of these.

If technology interests you, and if I can bring across even a small fraction of the fun there is in tinkering with this stuff and making new things up as you g(r)o(w) along, then that would be a very nice reward for everyone involved, as far as I’m concerned.

PS. “Dijn” is also old-Dutch for “your” (thy, to be precise). Quite a fitting name in my opinion, as this sort of knowledge is really yours for the taking – if you want it…

PPS. For reference: here is the first post in the series, and here is the overview.

Remote compilation

In Software on Jan 30, 2013 at 00:01

Now we’re cookin’ …

I’ve implemented the following with a new “compile” module in HouseMon – based on a command-line Arduino build setup (avr-gcc + avrdude, driven by make), which is also available for the Raspberry Pi via Debian apt:

JC's Grid, page 62

In other words, this allows uploading the source code of an Arduino sketch through a web browser, manage these uploaded files from the browser, and compile + upload the sketch to a server-side JeeNode / Arduino. All interaction is browser based…

Only proof of concept, but it opens up some fascinating options. Here’s an example of use:

Screen Shot 2013-01-29 at 21.54.13

(note that both avr-gcc + avrdude output and the exit status show up in the browser!)

The processing flow is fairly complex, given the number of different tasks involved, yet the way Node’s asynchronous system calls, SocketStream’s client-server connectivity, and Angular’s two-way webpage bindings work together is absolutely mind-blowing, IMO.

Here’s the main code, running on the server as briqs/

exports.info =
  name: 'jcw-compile'
  description: 'Compile for embedded systems'
  menus: [
    title: 'Compile'
    controller: 'CompileCtrl'
  ]

state = require '../server/state'
fs = require 'fs'
child_process = require 'child_process'
ss = require 'socketstream'

# TODO hardcoded paths for now
SKETCHDIR = switch process.platform
  when 'darwin' then '/Users/jcw/Tools/sketch'
  when 'linux' then '/home/pi/sketchbook/sketch'

# callable from client as: rpc.exec 'host.api', 'compile', path
ss.api.add 'compile', (path, cb) ->
  # TODO totally unsafe, will accept any path as input file
  wr = fs.createWriteStream "#{SKETCHDIR}/sketch.ino"
  fs.createReadStream(path).pipe wr
  wr.on 'close', ->
    make = child_process.spawn 'make', ['upload'], cwd: SKETCHDIR
    make.stdout.on 'data', (data) ->
      ss.api.publish.all 'ss-output', 'stdout', "#{data}"
    make.stderr.on 'data', (data) ->
      ss.api.publish.all 'ss-output', 'stderr', "#{data}"
    make.on 'exit', (code) ->
      cb? null, code

# triggered when bodyParser middleware finishes processing a file upload
state.on 'upload', (url, files) ->
  for file, info of files
    state.store 'uploads',
      key: info.path
      file: file
      size: info.size
      date: Date.parse(info.lastModifiedDate)

# triggered when the uploads collection changes, used to clean up files
state.on 'store.uploads', (obj, oldObj) ->
  unless obj.key # only act on deletions
    fs.unlink oldObj.key

The client-side code needed to implement all this is even shorter: see the controller code and the page definition on GitHub.

There’s no error handling in here yet, so evidently this code is not ready for prime time.

PS – Verified to also work on the Raspberry Pi – yippie!

Blink Plug meets NG

In Hardware, Software on Jan 29, 2013 at 00:01

Last month, I described how to hook up a JeeNode with a Blink Plug to Node.js via SocketStream (“SS”) and a USB connection. Physical Computing with a web interface!

That was before AngularJS (“NG”), using traditional HTML and CoffeeScript code.

With NG and SS, there’s a lot of machinery in place to interface a web browser with a central Node process. I’ve added a “blink-serial” module to HouseMon to explore this approach. Here’s the resulting web page (very basic, but fully dynamic in both directions):

Screen Shot 2013-01-28 at 19.17.40

The above web page is generated from this client/templates/blink.jade definition:

Screen Shot 2013-01-28 at 18.42.02

There are two more files involved – client/code/modules/ on the client:

Screen Shot 2013-01-28 at 18.43.41

… and briqs/, which drives the whole shebang on the server:

Screen Shot 2013-01-28 at 20.17.40

I’m not quite happy yet with the exact details of all this, but these 3 files are all there is to it, and frankly I can’t quite imagine how a bidirectional real-time interface could be made any simpler – given all the pieces involved: serial interface, server, and web browser.

The one thing that can hopefully be improved upon, is the terminology of it all and the actual API conventions. But that really is a matter of throwing lots of different stuff into HouseMon, to see what sticks and what gets messy.


PS – What’s your take on these screenshots? Ok? Or was the white background preferable?

PPS – Here’s another test, code inserted as HTML – suitable for copying and pasting:

# Define a single-page client called 'main'
ss.client.define 'main',
  view: 'index.jade'
  css: ['libs', 'app.styl']
  code: ['libs', 'app', 'modules']

Update – Yet another approach, using a WordPress plugin (no Jade / Stylus support?):

# Define a single-page client called 'main'
ss.client.define 'main',
  view: 'index.jade'
  css: ['libs', 'app.styl']
  code: ['libs', 'app', 'modules']

Learning vim

In Software on Jan 28, 2013 at 00:01

This post won’t be for everyone. In fact, it may well sound like an utterly ridiculous idea…

After yesterday’s post, it occurred to me that maybe some readers will be interested in what these so-called “programmer’s editors” such as vim and emacs are all about.

When developing software, you need a tool to edit text. Not just enter it, nor even correct it, but really edit it. Now perhaps one day we’ll all laugh about this concept of having a bunch of lines with characters on a screen, and working with tools to arrange those characters in files in a very very specific way. But for the past decades up to this very day, that’s how software development happens. Don’t blame me, I’m just stating a fact.

I can think of a couple of tools to use for editing text:

  • a non-programmer’s text editor or word processor (NotePad? TextEdit? Word?)
  • GUI-based programmer’s editors (UltraEdit, BBEdit, gedit, TextMate, Sublime Text)
  • an Integrated Development Environment, such as Visual Studio, Eclipse, Komodo
  • command-based programmer’s editor – the main ones are really only Vim and Emacs

Everything but the last option is GUI based, and while all of those tools have lots and lots of “keyboard shortcuts”, these are usually added for the “power users”, with the main idea being that everything should be reachable via clicking on menus, buttons, and text. Mouse-based, in other words.

There’s nothing wrong with using the mouse. But when it comes to text, the fact is that you’re going to end up using the keyboard anyway. Having to switch between the two is awkward. The other drawback of using a mouse for these tasks is that menus can only offer a certain amount of choice – for the full breadth and depth of a command-based interface, you’re going to have to navigate through a fairly large set of clever visual representations on the screen. Navigation means “pointing” in this case. And pointing requires eye-hand coordination.

As I said yesterday, I’m going back to keyboard-only. Within reason, that is. Switching windows and apps can be done through the keyboard, and so can many tasks – even in a web browser. But not all. The trick is to find a balanced work flow that works well… in the long run. Which is – almost by definition – a matter of personal preference.

Anyway. Let’s say you want to find out more about vim…

In a nutshell:

  • you type commands at it: simple keys, control keys, short sequences of keys
  • each of them means something, and you have to know them to be able to use them
  • luckily, a small subset of commands is all it takes for simple (albeit tedious) editing
  • it does a lot more than edit a file: jump around, see multiple files, start compiles, …
  • but also: integrate with syntax check, file search, symbol lookup, batch operations
  • it’s all about monospaced text files, and it scales to huge files without problems
  • vim (like emacs) is a world in itself, offering hundreds (thousands!) of commands
  • bonus: a stripped version of vim (or vi) is present on even the tiniest Linux systems

In vim, you’re either in insert mode (typing text), in normal mode (where plain keys act as commands), or in shell-like “:” command mode. Typing the same key means different things in each of these modes!

Make no mistake: vim is huge. Here’s the built-in help for just the CTRL+W command:

Screen Shot 2013-01-26 at 23.58.29

Yeah, vim also supports all sorts of color-scheme fiddling. But here’s the good news:

  • you only need to remember how to enter insert mode (“i” or “a”), how to exit it (ESC), how to enter command mode (“:”), how to save (“:w”+CR), and how to quit (“:q”+CR) – and most importantly: how to get to the help section (“:help”+CR).
  • a dozen or so letter combinations will get you a long way: just read up on what the letters “a” to “z” do in normal mode, and you’ll be editing for real in no time
  • there is logic in the way these commands are set up and named, there really is – even a few minutes spent on vim’s extensive built-in introductory help texts are worth it
  • some advanced features are awesome: such as being able to undo any change even after quits / relaunches, and saving complete session state per project (“:mks”)

If you try this for a few days (and fight that recurring strong urge to go back to the editor you’re so familiar with), you’ll notice that some things start to become automatic. That’s your spine taking over. This is what it’s really all about: delegating the control of the editor to a part of you which doesn’t need your attention.

When was the last time you had to think about how to get into your car? Or how to drive?

Probably the best advice I ever got from someone about learning vim: if you find yourself doing something over and over again and it feels tedious, then vim will have some useful commands to simplify those tasks. And that is the time to find them and try them out!

Vim (and emacs) is not about good looks. Or clever tricks. It’s about delegation.

Learning new tricks

In Software on Jan 27, 2013 at 00:01

There’s a reason why many people stick to a programming language and reuse it whenever possible: it’s hard work to learn another one! So once you’ve learned one well, and in due time really, really well, then you get a big payback in the form of a substantial productivity boost. Because – as I said – it’s really a lot of hard work to become familiar, let alone proficient, with any non-trivial technology – including a programming language or an OS.

Here’s me, a few weeks ago:


I was completely overwhelmed by trying to learn Node.js, AngularJS, SocketStream, CoffeeScript, and more. To the point of not being able to even understand how it all fits together, and knowing that whatever I’d try, I would stumble around in the dark for ages.

I’ve been through the learning curve of three programming languages before. I like to think that I got quite good at each of them, making code “sing” on occasion, and generally being able to spend all my time and energy on the problem, not the tools. But I also know that it took years of tinkering. As Malcolm Gladwell suggests in his book Outliers, it may take some 10,000 hours to become really good at something. Maybe even at anything?

Ten. Thousand. Hours. Of concentrating on the work involved. That sure is a long time!

But here’s the trick: just run, stumble, and learn from each recovery and discovery!

That doesn’t mean it’s easy. It’s not even easier than the last time, unfortunately: you can’t get good at learning new stuff – you still have to sweat it through. And there are dead ends. Tons of them. This is the corner I painted myself into a couple of days ago:


Let me explain:

  • I thought I was getting up to speed with CoffeeScript and the basics of AngularJS
  • … even started writing some nice code for it, with lots of debug / commit cycles
  • big portions were still clear as mud to me, but hey… I got some stuff working!

The trouble is: instead of learning things the way all these new tools and technologies were meant to be used, by people far more knowledgeable than me, I started fitting my old mental models on top of what I saw.

The result: a small piece, working, all wrapped into code that made it look nice to me.

That’s a big mistake. Frameworks such as AngularJS and SocketStream always have their own “mindset” (well, that of their designers, obviously) – and trying to ignore it is a sure path to frustration and probably even disaster, in the long term.

Ya can’t learn new things by just mapping everything to your old knowledge!

Yesterday, I realised that what I had written as code was partly just a wrapper around the bits of NG and SS which I started to understand. The result: a road block, sitting in the way of real understanding. My code wasn’t doing much, just re-phrasing it all!

So a few days ago, I tore most of it all down again (I’m talking about HouseMon, of course). It didn’t get the startup sequence right, and it was getting worse all the time.

Guess what? Went through a total, frustratingly disruptive, re-factoring of the code. And it ended up so much simpler and better… it blew me away. This NG + SS stuff is incredible.

Here’s me, after checking in the latest code a few hours ago:


No… no deep understanding yet. There’s no way to grasp it all in such a short time (yes, weeks is a short time) – not for me, anyway – but the fog is lifting. I’m writing less code again. And more and more finding my way around the web. The AngularFun project on GitHub and this post by Brian Ford are making so much more sense now.

I’ve also decided to try and change one more habit: I’m going back to the vim editor, or rather the MacVim version of it. I’ve used TextMate for several years (now v2, and open source), but the fact is that pointing at things sucks. It requires eye-hand coordination, whereas all command-line and keyboard-driven editing activity ends up settling itself in your muscle memory. Editors such as vim and emacs are amazing tools because of that.

Now I don’t know about you, but when I learn new things, the last thing on earth I’d want is to spend my already-strained mental energy on the process instead of the task at hand.

So these days, I’m in parallel learning mode: with head and spine working in tandem :)

JavaScript semantics

In Software on Jan 24, 2013 at 00:01

Some things are quite surprising in JavaScript / CoffeeScript:

    $ coffee
    > '1'+2
    '12'
    > '1'-2
    -1
    > 1 < null
    false
    > 1 > null
    true
    > 1 < undefined
    false
    > 1 > undefined
    false
    > a = [1,2,3]
    [ 1, 2, 3 ]
    > a.b = 4
    4
    > (x for x in a)
    [ 1, 2, 3 ]
    > (x for x of a)
    [ '0', '1', '2', 'b' ]
    > (x for x in 'abc')
    [ 'a', 'b', 'c' ]
    > (x for x of 'abc')
    [ '0', '1', '2' ]

It makes sense once you know it … but that’s the whole thing with being a newbie, eh?

That array-with-properties behaviour is actually very useful, because it lets you create collections which can be looped over, while still offering members in an object-like manner. Very Lua-ish. The same can be done with Object.defineProperty, but that’s more involved.
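For instance, in plain JavaScript (not CoffeeScript): a property added with Object.defineProperty is non-enumerable by default, which is exactly what keeps it out of the key loops shown above:

```javascript
// Plain assignment adds an enumerable property, which shows up
// when looping over keys; defineProperty hides it by default.
const a = [1, 2, 3];
a.b = 4;
Object.defineProperty(a, 'total', { value: 6 });  // enumerable: false by default

console.log(Object.keys(a));  // → [ '0', '1', '2', 'b' ]  ('total' stays hidden)
console.log(a.total);         // → 6
```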

For the full story on “array-like” objects, see this detailed page. It gets real messy inside – but then again, so does any other language. As long as it’s easy to find answers, I’m happy. And with JavaScript / CoffeeScript, finding answers and suggestions on the web is trivial.

On another note: Redis is working well for storing data, but there is a slight impedance mismatch with JavaScript. Storing nested objects is trivial by using JSON, but then you lose the ability to let Redis do a lot of nifty sorting and extraction. For now, I’m getting good mileage with two hashes per collection: one for key-to-id lookup, one for id-based storage of each object in JSON format. But I’m really using only a tiny part of Redis this way.
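The layout can be sketched with two plain Maps standing in for the two Redis hashes (a hypothetical helper with made-up names – the real thing would use HSET / HGET on a Redis client, but the shape is the same):

```javascript
// Sketch of the "two hashes per collection" layout: one hash maps
// a natural key to a numeric id, the other stores each object as
// JSON under that id.
function makeCollection() {
  const keyToId = new Map();   // stands in for e.g. a "readings:ids" hash
  const byId = new Map();      // stands in for e.g. a "readings:data" hash
  let nextId = 0;
  return {
    put(key, obj) {
      let id = keyToId.get(key);
      if (id === undefined) {
        id = ++nextId;
        keyToId.set(key, id);
      }
      byId.set(id, JSON.stringify(obj));  // nested objects survive as JSON
      return id;
    },
    get(key) {
      const id = keyToId.get(key);
      return id === undefined ? null : JSON.parse(byId.get(id));
    },
  };
}

const readings = makeCollection();
readings.put('living-room', { temp: 21.5 });
readings.get('living-room');  // → { temp: 21.5 }
```

The trade-off, as noted, is that Redis only sees opaque JSON blobs this way, so all sorting and field extraction has to happen on the Node.js side.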

Solar… again – first results

In Software on Jan 23, 2013 at 00:01

Sorry for keeping you in suspense after presenting yesterday’s code. I had to let this run for a few days to collect enough data worth graphing…

Some raw output:

L 15:01:00.818 usb-A40117UK OK 20 90 27 224 25 128 26 224 26
L 15:02:01.232 usb-A40117UK OK 20 68 27 224 25 128 26 224 26
L 15:03:01.663 usb-A40117UK OK 20 64 27 224 25 128 26 224 26
L 15:04:02.075 usb-A40117UK OK 20 64 27 222 25 127 26 224 26
L 15:05:02.486 usb-A40117UK OK 20 64 27 195 25 128 26 224 26
L 15:06:02.901 usb-A40117UK OK 20 64 27 192 25 123 26 224 26
L 15:07:03.314 usb-A40117UK OK 20 63 27 192 25 124 26 224 26
L 15:08:03.728 usb-A40117UK OK 20 38 27 192 25 111 26 224 26
L 15:09:04.140 usb-A40117UK OK 20 32 27 192 25 104 26 224 26
L 15:10:04.557 usb-A40117UK OK 20 32 27 169 25 99 26 224 26

And here some readings with DIO1 turned on:

L 16:01:24.894 usb-A40117UK OK 20 30 51 0 51 128 51 32 51
L 16:02:25.293 usb-A40117UK OK 20 159 51 128 51 0 52 160 51
L 16:03:25.677 usb-A40117UK OK 20 0 52 224 51 107 52 0 52
L 16:04:26.061 usb-A40117UK OK 20 64 52 55 52 192 52 96 52
L 16:05:26.461 usb-A40117UK OK 20 128 52 113 52 0 53 160 52
L 16:06:26.849 usb-A40117UK OK 20 192 52 160 52 61 53 192 52
L 16:07:27.224 usb-A40117UK OK 20 224 52 192 52 96 53 0 53
L 16:08:27.622 usb-A40117UK OK 20 0 53 237 52 128 53 32 53
L 16:09:28.011 usb-A40117UK OK 20 32 53 12 53 160 53 64 53
L 16:10:28.406 usb-A40117UK OK 20 64 53 32 53 192 53 96 53

In decimal, this is what I see:

a0: 13952 a1: 13914 a2: 14144 a3: 14017
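Decoding these lines is just a matter of pairing up the bytes after the node ID, low byte first. A little decoder (hypothetical helper, not part of any sketch):

```javascript
// Decode a logged line such as
//   "L 16:10:28.406 usb-A40117UK OK 20 64 53 32 53 192 53 96 53"
// into four 16-bit readings (little-endian byte pairs after the node ID).
function decodeLine(line) {
  const bytes = line.split(' ').slice(5).map(Number);  // skip "L <time> <port> OK <node>"
  const out = [];
  for (let i = 0; i + 1 < bytes.length; i += 2)
    out.push(bytes[i] + 256 * bytes[i + 1]);
  return out;
}

decodeLine('L 16:10:28.406 usb-A40117UK OK 20 64 53 32 53 192 53 96 53');
// → [ 13632, 13600, 13760, 13664 ]
```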

Here are the first eight hours after sunset – no energy coming in at all, just conditioning:


Top to bottom: orange = 2 MΩ, red = 20 MΩ, blue = 2-C 200 kΩ, green = 1-C 200 kΩ. Some variation probably due to resistor / supercap tolerances.

Since we’re measuring half the voltage of the supercaps, you can see that they have all reached around 2.8V at this point. The interesting bit will be to see whether the voltage ever rises in the time between DIO1 “on” cycles – indicating a charge surplus!

Here’s the next 24 hours, only the 2 and 20 MΩ loads benefit from this cloudy winter day:


Another day, some real sunlight, not enough yet to survive the next night, even w/ 20 MΩ:


Note that the only curves which hold any promise, are the ones which can permanently stay above that hourly on/off cycle, since that DIO-pin top-up won’t be there to bail us out when using these solar cells as a real power source.

I’ll leave it for now – and will collect data for some more days, or perhaps weeks if there’s any point with these tiny solar cells. Will report in due time … for now: patience!

PS. Note that these fingernail-sized CPC1824 chips are a lot more than just some PV material in an SOIC package. There appears to be a switching boost regulator in there!

Solar… again – the code

In Software on Jan 22, 2013 at 00:01

With the hardware ready to go, it’s time to take that last step – the software!

Here is a little sketch called slowLogger.ino, now in JeeLib on GitHub:

Screen Shot 2013-01-19 at 16.20.50

It’s pretty general-purpose, really – measure 4 analog signals once a minute, and report them as a wireless packet. There are a couple of points to make about this sketch:

  • The DIO1 pin will toggle approximately once every 64 minutes, i.e. one hour low / one hour high. This is used in the solar test setup to charge the supercaps to about 2.7V half of the time. The charge and discharge curves can provide useful info.

  • The analog channel is selected by doing a fake reading, and then waiting 100 ms. This sets the ADC channel to the proper input pin, and then lets the charge on the ADC sample-and-hold capacitor settle a bit. It is needed when the output impedance of the signal is high, and helps set up a more accurate voltage in the ADC.

  • The analog readings are done by summing 32 consecutive measurements. Dividing that sum by 32 would give an average, which is more accurate than a single reading if there is noise in the signal. By not dividing, we keep the maximum possible amount of information about the exact voltage level. Here, it helps get more stable readings.

As it turns out, the current values reported are almost exactly 10x the voltage in millivolts (the scale ends up as 32767 / 3.3), which is quite convenient for debugging.
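So converting a reported value back to millivolts is a one-liner, using the 32767 / 3.3 scale factor mentioned above (a hypothetical helper, as the server might implement it):

```javascript
// A summed reading (32 × 10-bit samples, 3.3 V reference) divided by
// (32767 / 3300) yields millivolts - which is why the reported values
// come out at roughly 10x the millivolt figure.
function toMillivolts(raw) {
  return raw * 3300 / 32767;
}

toMillivolts(13632);  // → ~1373 mV, i.e. about 1.37 V at the ADC pin
```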

Having said all this, you shouldn’t expect miracles. It’s just a way to get minute-by-minute readings, with hopefully not too many fluctuations due to noise or signal spikes.

Tomorrow, some early results!

Solar… again – setup

In Software on Jan 21, 2013 at 00:01

With the purpose out of the way, it’s time to start building.

It’s all a big hack, and since I didn’t plan the board layout, it all ended up being hard to hook up to a JeeNode, so I had to tack on a Proto Board:


Those solar cells are tiny! – if they don’t work, I’ll switch to more conventional bigger ones.

On the back, some resistors and diodes:


I used a JeeNode USB (about 60 µA consumption in power-down, due to the extra FTDI chip), but am powering it all via a 3x AA battery pack to get full autonomy. The idea is to place this thing indoors, facing south through a window:


Tomorrow, the code…

Who needs numbers?

In Software on Jan 19, 2013 at 00:01

I’ve always been intrigued by those “control panels” with lots and lots of numbers on them. It seems so much like old science-fiction movies with unmarked buttons all over the place.

That and the terror of the linear scale felt like a good reason to try out something new:

Screen Shot 2013-01-17 at 18.02.20   Screen Shot 2013-01-17 at 18.04.34   Screen Shot 2013-01-17 at 18.09.28   Screen Shot 2013-01-17 at 18.08.48

Ok, so here’s the idea:

  • you’re looking at four copies of the display next to each other, with different readings
  • the vertical scale is power production (top half) and power consumption (bottom half)
  • green is production (+), red is consumption (-), and blue is the difference (+ or -)
  • it’s logarithmic, top and bottom are 5000 W produced and consumed, respectively
  • the green and red dots are also log-proportional, the blue circle has a fixed diameter
  • the center has a ± 50 W dead zone, i.e. everything in that range is collapsed to a line

Now the examples, from left to right:

  1. end of the afternoon, no solar power, about 260 W consumption
  2. induction cooking, consuming about 2500 W
  3. night-time, house consumption is 80 W
  4. daytime, 1500 W production, 200 W consumption

Those last two readouts were simulated since I didn’t want to wait for a sunny winter day :)

The data is shown LIVE, and I’m going to keep it around to see whether this is an intuitive way of presenting this sort of information. It’s all implemented using the HTML5 canvas.

PS – I’ve since switched to “sqrt” instead of “log” for scaling. Looks nicer with large values.
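For the curious, the log variant of that mapping might look like this (hypothetical function name; ±50 W dead zone and ±5000 W end points as described above):

```javascript
// Map a power reading (watts, + = production, - = consumption) to a
// fraction of the display half-height: 0 inside the ±50 W dead zone,
// ±1 at ±5000 W, logarithmic in between.
function scaleReading(watts) {
  const sign = watts < 0 ? -1 : 1;
  const mag = Math.abs(watts);
  if (mag <= 50) return 0;  // dead zone collapses to the center line
  const f = Math.log10(mag / 50) / Math.log10(5000 / 50);
  return sign * Math.min(f, 1);
}

scaleReading(500);    // → ~0.5 (half-way up the production half)
scaleReading(-5000);  // → -1   (bottom of the consumption scale)
```

The canvas drawing code would then multiply this fraction by half the canvas height to get a y-offset from the center line.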

Still crude, but oh so cool…

In Software on Jan 18, 2013 at 00:01

Good progress on the HouseMon front. I’m still waggling from concept to concept like a drunken sailor, but every day it sinks in just a little bit more. Adding a page to the app is now a matter of adding a CoffeeScript and a Jade file, and then adding an entry to the “routes” list to make a new “Readings” page appear in the main menu:

Screen Shot 2013-01-17 at 01.01.00

The only “HTML” I wrote for this page is readings.jade, using Foundation classes:

Screen Shot 2013-01-17 at 01.03.14

And the only “JavaScript” I wrote is there to embellish the output slightly:

Screen Shot 2013-01-17 at 00.18.51

The rest is a “readings” collection, which is updated automatically on the server, with all changes propagated to the browser(s). There’s already quite a bit of architecture in place, including a generic RF12demo driver which connects to a serial port, and a decoder for some of the many packets flying around the house here at JeeLabs.

But the exciting bit is that the server setup is managed from the browser. There’s an Admin page, just enough to install and remove components (“briqlets” in HouseMon-speak):

Screen Shot 2013-01-17 at 00.14.09

In this example, I installed a fake packet generator (which simply replays one of my logfiles in real time), as well as a small set of decoders for things like room node sketches.

So just to stress what there is: a basic way of managing server components via the browser, and a basic way of seeing the live data updates, again in the browser. Oh, and all sorts of RF12 capturing and decoding… even support for the Announcer idea, described recently.

Did I mention that the data shown is LIVE? Some of this stuff sure feels like magic… fun!

Arduino sketches on RPi

In AVR, Software, Linux on Jan 17, 2013 at 00:01

There’s an arduino-mk package which makes it simple to build and upload sketches on a Raspberry Pi. Here’s how to set it up and use it with a JeeLink, for example:

Install the package:

  sudo apt-get install arduino-mk

That gets all the required software and files, but it needs a tiny bit of setup.

First, create the library area with a demo sketch in it:

  mkdir ~/sketchbook
  cd ~/sketchbook
  ln -s /usr/share/arduino/
  mkdir blink
  cd blink

Then create two files – the blink.ino sketch for a JeeLink or JeeNode SMD / USB:

// Blink - adapted for a JeeNode / JeeLink

void setup() {
  pinMode(9, OUTPUT);
}

void loop() {
  digitalWrite(9, LOW);
  delay(500);
  digitalWrite(9, HIGH);
  delay(500);
}

… and a Makefile which ties it all together:

BOARD_TAG    = uno
ARDUINO_DIR  = /usr/share/arduino

include ../

(that file in the sketchbook/ dir is well-commented and worth a read)

That’s it. You now have the following commands to perform various tasks:

  make             - no upload
  make upload      - compile and upload
  make clean       - remove all our dependencies
  make depends     - update dependencies
  make reset       - reset the Arduino by tickling DTR on the serial port
  make raw_upload  - upload without first resetting
  make show_boards - list all the boards defined in boards.txt

The first build is going to take a few minutes, because it compiles the entire runtime code. After that, compilation should be fairly snappy. Be sure to do a make clean whenever you change the board type, to re-compile for a different board. The make depends command should be used when you change any #include lines in your code or add / remove source files to the project.

This setup makes things run smoothly, without requiring the Arduino IDE + Java runtime.

Remote node discovery – code

In Software on Jan 16, 2013 at 00:01

Ok, time to try out some real code, after yesterday’s proposal for a simple node / sketch / packet format announcement system.

I’m going to use two tricks:

  • First of all, there’s no need for the node itself to actually send out these packets. I’ve written a little “announcer” sketch which masquerades as the real nodes and sends out packets on their behalf. So… fake info to try things out, without even having to update any nodes! This is a transitional state, until all nodes have been updated.

  • And second, I don’t really care about sending these packets out until the receiving end actually knows how to handle this info, so in debug mode, I’ll just report it over serial.

Note that packets get sent to “node 0”, which cannot exist – so sending this out is in fact harmless: today’s RF12 listeners will ignore such packets. Except node 31, which sees these new packets come in with header byte 64 (i.e. 0x40). No RF12 driver change needed!

Here’s the essence of the test sketch:

Screen Shot 2013-01-15 at 15.12.08

With apologies for the size of this screenshot – full code is on GitHub.

Note that this is just for testing. For the actual code, I’d pay far more attention to RAM usage and place the actual descriptive data in flash memory, for example.

And here’s some test output from it on the serial port:

Screen Shot 2013-01-15 at 15.09.51

Yeah, that looks about right. Next step will be to pick this up and do something with it. Please give me a few days for that, as the “HouseMon” server-side code isn’t yet up to it.

To be continued soon…

PS. Another very useful item type would be the maximum expected time between packets. It allows a server to automatically generate a warning when that time is grossly exceeded. Also a nice example of how self-announcement can help improve the entire system.

Remote node discovery – part 2

In Software on Jan 15, 2013 at 00:01

Yesterday’s post was about the desire to automate node discovery a bit more, i.e. having nodes announce and describe themselves on the wireless network, so that central receivers know something about them without having to depend on manual setup.

The key is to make a distinction between in-band (IB) and out-of-band (OOB) data: we need a way to send information about ourselves, and we want to send it by the same means, i.e. wirelessly, but recognisable in some way as not being normal packet data.

One way would be to use a flag bit or byte in each packet, announcing whether the rest of the packet is normal or special descriptive data. But that invalidates all the nodes currently out there, and worse: you lose the 100% control over packet content that we have today.

Another approach would be to send the data on a different net group, but that requires a second receiver permanently listening to that alternate net group. Not very convenient.

Luckily, there are two more alternatives (there always are!): one extends the use of the header bits already present in each packet, the other plays tricks with the node ID header:

  1. There are three header combinations in use, as described yesterday: normal data, requesting an ACK, and sending an ACK. That leaves one bit pattern unused in the header. We could define this as a special “management packet”.

  2. The out-of-band data we want to send to describe the node and the packet format is probably always sent out as broadcast. After all, most nodes just want to report their sensor readings (and a few may listen for incoming commands in the returned ACK). Since node ID zero is special, there is one combination which is never used: sending out a broadcast and filling in a zero node ID as origin field. Again, this combination could be used to mark a “management packet”.

I’m leaning toward the second because it’s probably compatible with all existing code.

So what’s a “management packet”, eh?

Well, this needs to be defined, but the main point here is that the format of management packets can be fully specified and standardised across all nodes. We’re not telling them to send/receive data that way, we’re only offering them a new capability to broadcast their node/packet properties through these new special packets.

So here’s the idea, which future sketches can then choose to implement:

  • every node runs a specific sketch, and we set up a simple registry of sketch type ID’s
  • each sketch type ID would define at least the author and name (e.g. “roomNode”)
  • the purpose of the registry is to issue unique ID’s to anyone who wants a bunch of ’em
  • once a minute, for the first five minutes after power-up, the sketch sends out a management packet, containing a version number, a sequence number, the sketch type ID, and optionally, a description of its packet payload format (if it is simple and consistent enough)
  • after that, this management packet gets sent out again about once an hour
  • management packets are sent out in addition to regular packets, not instead of

This means that any receiver listening to the proper net group will be able to pick up these management packets and know a bit more about the sketches running on the different node ID’s. And within an hour, the central node(s) will have learned what each node is.

Nodes which never send out anything (or only send out ACKs to remote nodes, such as a central JeeLink), probably don’t need this mechanism, although there too it could be used.

So the only remaining issue is how to describe packets. Note that this is entirely optional. We could just as easily put that description in the sketch type ID registry, and even add actual decoders there to automate not only node type discovery, but even packet decoding.

Defining a concise notation to describe packet formats can be very simple or very complex, depending on the amount of complexity and variation used in these data packets. Here’s a very simple notation, which I used in JeeMon – in this case for roomNode packets:

  light 8 motion 1 rhum 7 temp -10 lobat 1

These are pairs of name + number-of-bits, with a negative value indicating that sign extension is to be applied (so the temp field ranges from -512..+511, i.e. -51.2..+51.1°C).
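A description like that is enough to drive a fully generic decoder on the receiving server. Here’s a sketch in JavaScript (assuming fields are packed into the payload LSB-first, which is how AVR compilers typically lay out bit fields):

```javascript
// Generic decoder for "width-list" packet descriptions, e.g. a
// room node: [8, 1, 7, -10, 1] = light, motion, rhum, temp, lobat.
// A negative width means the field is sign-extended.
function decodeFields(bytes, widths) {
  let pos = 0;  // bit cursor into the payload, LSB-first
  return widths.map((w) => {
    const width = Math.abs(w);
    let value = 0;
    for (let i = 0; i < width; ++i, ++pos)
      value |= ((bytes[pos >> 3] >> (pos & 7)) & 1) << i;
    if (w < 0 && value & (1 << (width - 1)))
      value -= 1 << width;  // sign extension
    return value;
  });
}

// A 4-byte room node payload: light=100, motion=1, rhum=55,
// temp=-123 (i.e. -12.3°C), lobat=0.
decodeFields([0x64, 0x6F, 0x85, 0x03], [8, 1, 7, -10, 1]);
// → [ 100, 1, 55, -123, 0 ]
```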

Note that we need to fit a complete packet description in a few dozen bytes, to be able to send it out as management data, so probably the field names will have to be omitted.

This leads to the following first draft for a little-endian management packet format:

  • 3 bits – management packet version, currently “1”
  • 5 bits – sender node ID (since it isn’t in the header)
  • 8 bits – management packet number, incremented with each transmission
  • 16 bits – unique sketch type ID, assigned by JeeLabs (see below)
  • optional data items, in repeating groups of the form:
    • 4 bits – type of item
    • 4 bits – N-1 = length of following item data (1..16 bytes)
    • N bytes – item data

Item types and formats will need to be defined. Some ideas for item types to support:

  • type 0 – sketch version number (a single byte should normally be enough)
  • type 1 – globally unique node ID (4..16 bytes, fetched from EEPROM, could be UUID)
  • type 2 – the above basic bit-field description, i.e. for a room node: 8, 1, 7, -10, 1
  • type 3 – tiny URL of an on-line sketch / packet definition (or GitHub user/project?)

Space is limited – the total size of all item descriptions can only be up to 62 bytes.
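To make the draft concrete, here’s how such a management packet could be assembled (a hypothetical helper; the draft doesn’t pin down nibble order, so putting the item type in the high nibble is an assumption):

```javascript
// Build a draft management packet: 3-bit version + 5-bit node ID,
// a packet number byte, a 16-bit sketch type ID (little-endian),
// then repeating item groups: type / length-1 nibbles + item data.
function buildMgmtPacket(version, nodeId, seq, sketchId, items) {
  const out = [
    (version & 7) | ((nodeId & 31) << 3),
    seq & 255,
    sketchId & 255, (sketchId >> 8) & 255,
  ];
  for (const { type, data } of items) {
    out.push(((type & 15) << 4) | ((data.length - 1) & 15));
    out.push(...data);
  }
  return out;
}

// Version 1, node 5, packet #42, sketch type ID 100, one item:
// type 0 (sketch version) with a single data byte.
buildMgmtPacket(1, 5, 42, 100, [{ type: 0, data: [3] }]);
// → [ 41, 42, 100, 0, 0, 3 ]
```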

Sketch type ID’s could be assigned as follows:

  • 0..99 – free for local use, unregistered
  • 100..65535 – assigned through a central registry site provided by JeeLabs

That way, anyone can start implementing and using this stuff without waiting for such a central registry to be put in place and become operational.

Tomorrow, a first draft implementation…

Remote node discovery

In Software on Jan 14, 2013 at 00:01

The current use of the wireless RF12 driver is very basic. That’s by design: simple works.

All you get is the ability to send out 0..66 bytes of data, either as broadcast or directed to a specific node. The latter is just a little trick, since the nature of wireless communication is such that everything goes everywhere anyway – so point to point is simply a matter of receivers filtering out and ignoring packets not intended for them.
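In the RF12 header byte, 0x40 marks the node field as a destination and the low 5 bits hold the node ID, so that receiver-side filtering boils down to very little (a sketch of the idea, not the actual driver code):

```javascript
// RF12-style header: 0x40 = "node field is the destination",
// low 5 bits = node ID (assuming the JeeLib header layout).
function acceptPacket(hdr, myNodeId) {
  const isDest = (hdr & 0x40) !== 0;
  const nodeId = hdr & 0x1f;
  // broadcasts go to everyone; directed packets only to the addressee
  return !isDest || nodeId === myNodeId;
}

acceptPacket(0x05, 9);      // → true  (broadcast, origin node 5)
acceptPacket(0x40 | 9, 9);  // → true  (directed at us)
acceptPacket(0x40 | 3, 9);  // → false (directed elsewhere)
```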

The other thing you get with the RF12 driver, is the ability to request an acknowledgement of good reception, which the receiver deals with by sending an “ACK” packet back. Note that ACKs can contain data – so this same mechanism can also be used to request data from another node, if they both agree to use the protocol in this way, that is.

So there are three types of packets: data sent out as is, data sent out with the request to send an ACK back, and packets with this ACK. Each of them with a 0..66 byte payload.

But even though it all works well, there’s an important aspect of wireless sensor networks which has never been addressed: the ability for nodes to tell other nodes what/who they are. As a consequence, I always have to manually update a little table on my receiving server with a mapping from node ID to what sketch that node is currently running.

Trivial stuff, but still a bit inconvenient in the longer run. Why can’t each node let me know what it is and then I don’t have to worry about mixing things up, or keeping that table in sync with reality every time something changes?

Well… there’s more to it than a mapping of node ID’s – I also want to track the location of each node, and assign it a meaningful name such as “room node in the living room”. This is not something each node can know, so the need to maintain a table on the server is not going to disappear just by having nodes send out their sketch + version info.

Another important piece of information is the packet format. Each sketch uses its own format, and some of them can be non-trivial, for example with the OOK relay sending out all sorts of packets, received from a range of different brands of OOK sensors.

It would be sheer madness to define an XML DTD or schema and send validated XML with namespace info and the whole shebang in each packet, even though that is more or less the level of detail we’re after.

Shall we use JSON then? Protocol buffers? Bencode? ASN.1? – None of all that, please!

When power is so scarce that microcoulombs matter, you want to keep packet sizes as absolutely minimal as possible. A switch from a 1-byte to a 2-byte payload increases the average transmit power consumption of the node by about 10%. That’s because each packet has a fixed header + trailer byte overhead, and because the energy consumption of the transmitter is proportional to the time it is kept on.
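The arithmetic behind that 10% figure: assuming roughly 9 bytes of fixed framing per packet (preamble, sync, header, length, CRC – the exact count is an assumption here), the on-air time is proportional to overhead plus payload:

```javascript
// Relative on-air time (≈ transmit energy) per packet, assuming
// about 9 bytes of fixed framing around the payload.
const OVERHEAD = 9;  // preamble + sync + header + length + CRC (assumed)

function relativeCost(payloadBytes) {
  return OVERHEAD + payloadBytes;
}

// Going from a 1-byte to a 2-byte payload:
(relativeCost(2) / relativeCost(1) - 1) * 100;  // → ~10, i.e. about 10% more energy
```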

The best way to keep packet sizes minimal, is to let each node pick what works best!

That’s why I play lots and lots of tricks each time when coming up with a remote sketch. The room node gets all its info across in just 4 bytes of data. Short packets keep power consumption low and also reduce the chance of collisions. Short is good.

Tomorrow, I’ll describe an idea for node discovery and packet format description.

Test-driven design

In Software on Jan 10, 2013 at 00:01

Drat, I can’t stop – this Node.js and AngularJS stuff sure is addictive! …

I’m a big fan of test-driven development, an approach where test code is often treated as more important than the code you’re writing to actually do the work. It may sound completely nuts – the thought of writing tons of tests which can only get in the way of making changes to your newly-written beautiful code later, right?

In fact, TDD goes even further: the idea is to write the test code as soon as you decide to add a new feature or fix a bug, and only then start writing the real stuff. Weird!

But having used TDD in a couple of projects in the past, I can only confirm that it’s in fact a fantastic way to “grow” software. Because adding new tests, and then seeing them pass, constantly increases the confidence in the whole project. Ever had the feeling that you don’t want to mess with a certain part of your code, because you’re afraid it might break in some subtle way? Yeah, well… I know the feeling. It terrifies me too :)

With test scaffolding around your code, something very surprising happens: you can tear down the guts and bring them back up in a different shape, because the test code will help drive that process! And re-writing / re-factoring usually leads to a better architecture.

There’s a very definite hurdle in each new project to start using TDD (or BDD). It’s a painful struggle to spend time thinking about tests, when all you want to do is write the darn code and start using it! Especially at the start of a project.

I started coding HouseMon without TDD/BDD on my plate, because I first want to expose myself to coding in CoffeeScript. But I’m already checking out the field in the world of Node.js and JavaScript. Fell off my chair once again… so many good things going on!

Here’s an example, using the Mocha and should packages with CoffeeScript:

Screen Shot 2013-01-09 at 22.57.08

And here’s some sample output, in one of many possible output formats:

Screen Shot 2013-01-09 at 22.59.55

I’m using kind of a mix of TDD’s assertion-style testing, and BDD’s requirement-style approach. The T + F tricks suit me better, for their brevity, but the “should.throw” stuff is useful to handle exceptions when they are intentional (there’s also “should.not.throw”).

But that’s just the tip of the iceberg. Once you start thinking in terms of testing the code you write, it becomes a challenge to make sure that every line of that code has been verified with tests, i.e. code coverage testing. And there too, Mocha makes things simple. I haven’t tried it, but here is some sample output from its docs:

Screen Shot 2012-02-23 at 8.37.13 PM

On the right, a summary of the test coverage of each of the source files, and in the main window an example of a few lines which haven’t been traversed by any of the tests.

Tools like these sure make it tempting to start writing tests!

Ok, ok, I’ll stop.

Gotta start somewhere…

In Software on Jan 9, 2013 at 00:01

Ok, well, baby steps or not – there’s just no way to avoid it… I need to start coding!

I’ve set up a new project on GitHub, called HouseMon – to have a place to put everything. It does almost nothing, other than what you’ll recognise after yesterday’s weblog post:

Screen Shot 2013-01-08 at 23.34.09

This was started with the idea of creating the whole setup as installable “Briqs” modules, which can then be configured for use. And indeed, some of that will be unavoidable right from the start – after all, to connect this to a JeeNode or JeeLink, I’m going to have to specify the serial port (even if the serial ports are automatically enumerated one day, the proper one will need to be selected in case there is more than one).

However, this immediately adds an immense amount of complexity over simply hard-coding all the choices in the app for my specific installation. Now, in itself, I’m not too worried by that – but with so little experience with Node.js and the rest, I just can’t make these kinds of decisions yet. There’s too big a gap between what I know and understand today, and what I need to know to come up with a good modular system. The whole thing could be built completely out of npm packages, or perhaps even the lighter-weight Components mechanism. Too early to tell, and definitely too early to make such key decisions!

Just wanted to get this repository going to be able to actually build stuff. And for others to see what’s going on as this evolves over time. It’s real code, so you can play with it if you like. And of course make suggestions and point out things on the issue tracker. I’ll no doubt keep posting about this stuff here whenever the excitement gets too high, but this sort of concludes nearly two weeks of continuous immersion and blogging about this topic.

It’s time for me to get back to some other work. Never a dull moment here at JeeLabs! :)

Web page layouts

In Software on Jan 8, 2013 at 00:01

Wow – maybe I’m the last guy in the room to find out about this, but responsive CSS page layouts really have come a long way. Here are a bunch of screen shots at different widths:

Screen Shot 2013-01-07 at 21.32.15

Screen Shot 2013-01-07 at 21.32.53

Screen Shot 2013-01-07 at 21.33.44

Screen Shot 2013-01-07 at 21.34.00

Note that the relative scale of the above snapshots is all over the map, since I had to fit them into the size of this weblog, but as you can see, there’s a lot of automatic shuffling going on. With no effort whatsoever on my part!

Here’s the Jade definition to generate the underlying HTML:

Screen Shot 2013-01-07 at 21.47.21

How cool is that?

This is based on the Foundation HTML framework (and served from Node.js on a Raspberry Pi). It looks like this might be an even nicer, eh, foundation to start from than Twitter Bootstrap (here’s a factual comparison, with examples of F and B).

Sigh… so many choices!

Technology decisions

In Software on Jan 7, 2013 at 00:01

Phew! You don’t want to know how many examples, frameworks, and packages I’ve been downloading, trying out, and browsing lately… all to find a good starting point for further software development here at JeeLabs.

Driving this is my desire to pick modern tools, actively being worked on, with an ecosystem which allows me to get up to speed, leverage all the amazing stuff people keep coming up with, and yet stay firmly on the simple side of things. Because good stuff is what you end up with when you weed out the bad, and the result is purity and elegance … as I’ve seen confirmed over and over again.

A new insight I didn’t start out from, is that server-side web page generation is no longer needed. In fact, with clients being more performant than servers (i.e. laptops, tablets, and mobile phones served from a small Linux system), it makes more and more sense to serve only static files (HTML, CSS, JS) and generic data (JSON, usually). The server becomes a file system + a database + a relatively low-end rule engine for stuff that needs to run at all times… even when no client is connected.

Think about the change in mindset: no server-side templating… n o n e !

Anyway – here are all the pieces I intend to use:

Node.js – JavaScript on the server, based on the V8 engine which is available for Intel and ARM architectures. A high-performance standards-compliant JavaScript engine.

The npm package manager comes with Node.js and is a phenomenal workhorse when it comes to getting lots of different packages to work together. Used from the command line, I found npm help to be a great starting point for figuring out its potential.

SocketStream is a framework for building real-time apps. It wraps Socket.IO (now being replaced by the simpler Engine.IO) for bi-directional use of WebSockets between the clients and the server. Comes with lots of very convenient features for development.

AngularJS is a great tool to create very dynamic and responsive client-side applications. This graphic from a blog post says it all. The site has good docs and tutorials – this matters, because there’s quite a bit of – fantastic! – stuff to learn.

Connect is what makes the HTTP web server built into Node.js incredibly flexible and powerful. The coin dropped once I read an excellent introduction to its middleware concept. If you’ve ever tried to write your own HTTP web server, you’ll appreciate the elegance of small middleware such as its timeout module.

Redis is a memory-based key-value store, which also saves its data to file, so that restarts can resume where they left off. An excellent way to cache file contents and “live” data which represents the state of a running system. Keeping data in a separate process is great for development, especially with automatic server restarts when any source file changes. IOW, the basic idea is to keep Redis running at all times, so that everything else can be restarted and reloaded at will during development.

Foundation is a set of CSS/HTML choices to support uniform good-looking web pages.

CoffeeScript is a “JavaScript dialect” which gets transformed into pure JavaScript on the fly. I’m growing quite used to it already, and enjoy its clean syntax and conciseness.

Jade is a shorthand notation which greatly reduces the amount of text (code?) needed to define HTML page structures (i.e. elements, tags, attributes). Like CoffeeScript, it’s just a dialect – the output is standard HTML.

Stylus is again just a dialect, for CSS this time. Not such a big deal, but I like the way all these notations let me see the underlying structure and nesting at a glance.

That’s ten choices, for Ten Terrific Technologies. To recap:

Node.js (S), npm (S), SocketStream (S+C), AngularJS (C), Connect (S), Redis (S), Foundation (C), CoffeeScript (T), Jade (T), Stylus (T)

(S = server-side, C = client-side, S+C = both, T = translated on the server)

All as non-restrictive open source, and with all development taking place on GitHub.

It’s been an intense two weeks, trying to understand enough of all this to be able to make practical decisions. And now I can hardly wait to find out where this will take me!

Update – Bootstrap has been replaced by Foundation – mostly as a matter of taste.

Node.js on Raspberry Pi

In Software on Jan 6, 2013 at 00:01

After all this fiddling with Node.js on my Mac, it’s time to see how it works out on the Raspberry Pi. This is a running summary of how to get a fresh setup with Node.js going.


I’m using the Raspbian build from Dec 16, called “2012-12-16-wheezy-raspbian”. See the download page and directory for the ZIP file I used.

The next step is to get this image onto an SD memory card. I used a 4 GB card, of which over half will be available once everything has been installed. Plenty!

Good instructions for this can be found on the eLinux wiki – in my case the section titled Copying an image to the SD card in Mac OS X (command line). With a minor tweak on Mac OS X 10.8.2, as I had to use the following command in step 10:

sudo dd bs=1m if=~/Downloads/2012-12-16-wheezy-raspbian.img of=/dev/rdisk1

First boot

Next: insert the SD card and power up the RPi! Luckily, the setup comes with SSH enabled out of the box, so only an Ethernet cable and 5V power with USB-micro plug are needed.

When launched and logged in (user “pi”, password “raspberry” – change it!), it will say:

Please run 'sudo raspi-config'

So I went through all the settings one by one, overclocking set to “medium”, only 16 MB assigned to video, and booting without graphical display. Don’t forget to exit the utility via the “Finish” button, and then restart using:

$ sudo shutdown -r now

Now is a good time to perform all pending updates and clean up:

$ sudo -i
# apt-get update
# apt-get upgrade
# apt-get clean
# reboot

The reboot at the end helps make sure that everything really works as expected.

Up and running

That’s it, the system is now ready for use. Some info about my 256 MB RPi:

$ uname -a
Linux raspberrypi 3.2.27+ #250 PREEMPT \
                    Thu Oct 18 19:03:02 BST 2012 armv6l GNU/Linux
$ free
             total       used       free     shared    buffers     cached
Mem:        237868      51784     186084          0       8904      25972
-/+ buffers/cache:      16908     220960
Swap:       102396          0     102396
$ df -H
Filesystem      Size  Used Avail Use% Mounted on
rootfs          3.9G  1.6G  2.2G  42% /
/dev/root       3.9G  1.6G  2.2G  42% /
devtmpfs        122M     0  122M   0% /dev
tmpfs            25M  213k   25M   1% /run
tmpfs           5.3M     0  5.3M   0% /run/lock
tmpfs            49M     0   49M   0% /run/shm
/dev/mmcblk0p1   59M   18M   42M  30% /boot
$ cat /proc/cpuinfo 
Processor       : ARMv6-compatible processor rev 7 (v6l)
BogoMIPS        : 697.95
Features        : swp half thumb fastmult vfp edsp java tls 
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xb76
CPU revision    : 7

Hardware        : BCM2708
Revision        : 0004
Serial          : 00000000596372ab

So far, it’s just a standard boilerplate setup. Yawn…


On to Node.js! Unfortunately, the build included in Debian/Raspbian is 0.6.19, which is a bit old. I’d rather get started with the 0.8.x series, so here’s how to build it from source.

But first, let’s use this simple trick to get write permission in /usr/local as non-root:

$ sudo usermod -aG staff pi

Note: you have to logout and back in, or reboot, to get the new permissions.

With that out of the way, code can be built and installed as user “pi” – no need for sudo:

$ curl http://nodejs.org/dist/v0.8.16/node-v0.8.16.tar.gz | tar xz
$ cd node-v0.8.16
$ ./configure
$ make
(... two hours of build output gibberish ...)
$ make install

That’s it. A quick check that everything is in working order:

$ node -v
$ npm -v
$ which node
$ ldd `which node`
  (... the usual list of shared libraries, all resolving to /lib and /usr/lib paths for arm-linux-gnueabihf ...)

Oh, one more thing – the RPi doesn’t come with “git” installed, so let’s fix that right now:

$ sudo apt-get install git

There. Now you’re ready to start cookin’ with node and npm on the RPi. Onwards!

PS. If you don’t want to wait two hours, you can download my build, unpack with “tar xfz node-v0.8.16-rpi.tgz”, and do a “cd node-v0.8.16 && make install” to copy the build results into /usr/local/ (probably only works if you have exactly the same Raspbian image).

The key-value straightjacket

In Software, Musings on Jan 5, 2013 at 00:01

It’s probably me, but I’m having a hard time dealing with data as arrays and hashes…

Here’s what you get in just about every programming language nowadays:

  • variables (which is simply keyed access by name)
  • simple scalars, such as ints, floats, and strings
  • indexed aggregation, i.e. arrays: blah[index]
  • tagged aggregation, i.e. structs: blah.tag
  • arbitrarily nested combinations of the above

JavaScript structs are called objects and tag access can be blah.tag or blah['tag'].

It would seem that these are all one would need, but I find it quite limiting.

Simple example: I have a set of “drivers” (JavaScript modules), implementing all sorts of functionality. Some of these need only be set up once, so the basic module mechanism works just fine: require "blah" can give me access to the driver as an object.

But others really act like classes (prototypal or otherwise), i.e. there’s a driver to open a serial port and manage incoming and outgoing packets from say a JeeLink running the RF12demo sketch. There can be more than one of these at the same time, which is also indicated by the fact that the driver needs a “serial port” argument to be used.

Each of these serial port interfaces has a certain amount of configuration (the RF12 band/group settings, for example), so it really is a good idea to implement this as objects with some state in them. Each serial port ends up as a derived instance of EventEmitter, with raw packets flowing through it, in both directions: incoming and outgoing.

Then there are packet decoders, to make sense of the bytes coming from room nodes, the OOK relay, and so on. Again, these are modules, but it’s not so clear whether a single decoder object should decode all packets on any attached JeeLink or whether there should be one “decoder object” per serial interface object. Separate objects allow more smarts, because decoders can then keep per-sensor state.

The OOK relay, in turn, receives (“multiplexes”) data from different nodes (and different types of nodes), so this again leads to a bunch of decoders, each for a specific type of OOK transmitter (KAKU, FS20, weather, etc).

As you can see, there’s sort of a tree involved – taking incoming packet data and dissecting / passing it on to more specific decoders. In itself, this is no problem at all – it can all be represented as nested driver objects.

As a final step, the different decoders can all publish their readings to a common EventEmitter, which will act as a simple bus. Same as an MQTT broker with channels, with the same “nested key” strings to identify each reading.

So far so good. But that’s such a tiny piece of the puzzle, really.

Complexity sets in once you start to think about setup and teardown of this whole setup at run time (i.e. configuration in the browser).

Each driver object may need some configuration settings (the serial port name for the RF12demo driver was one example). To create a user interface and expose it all in the browser, I need some way of treating drivers as a generic collection, independent of their nesting during the decoding process.

Let’s call the driver modules “interfaces” for now, i.e. in essence the classes from which driver instances can be created. Then the “drivers” become instantiations of these classes, i.e. the objects which actually do the work of connecting, reading, writing, decoding, etc.

One essential difference is that the list of interfaces is flat, whereas a configured system with lots of drivers running is often a tree, to cope with the gradual decoding task described a bit above.

How do I find all the active drivers of a specific interface? Walk the driver tree? Yuck.

Given a driver object, how do I find out where it sits in the tree? Store path lists? Yuck.

Again, it may well be me, but I’m used to dealing with data structures in a non-redundant way. The more you link and cross-link stuff (let alone make copies), the more hassles you run into when adding, removing, or altering things. I’m trying to avoid “administrative code” which only keeps some redundant invariants intact – as much as possible, anyway.

Aren’t data structures supposed to be about keeping each fact in exactly one place?

PS. My terminology is still in flux: interfaces, drivers, devices, etc…

Update – I should probably add that my troubles all seem to come from trying to maintain accurate object identities between clients and server.

Plotting again, at last

In Software on Jan 4, 2013 at 00:01

The new code is progressing nicely. First step was to get a test table, updating in real time:

Screen Shot 2013-01-03 at 11.29.07


It was a big relief to figure out how to produce graphs again – e.g. power consumption:

Screen Shot 2013-01-03 at 10.01.55

The measurement resolution from the 2000 pulse/kWh counters is excellent. Here is an excerpt of power consumption vs solar production on a cloudy and wet winter morning:

Screen Shot 2013-01-03 at 12.00.02

There is a fascinating little pattern in there, which I suspect comes from the central heating – perhaps from the boiler and pump, switching on and off in a peculiar 9-minute cycle?

Here are a bunch of temperature sensors (plus the central heating set-point, in brown):

Screen Shot 2013-01-03 at 10.02.14

There is no data storage yet (I just left the browser running on the laptop collecting data overnight), nor proper scaling, nor any form of configurability – everything was hard-coded, just to check that the basics work. It’s a far cry from being able to define and configure such graphs in the browser, but hey – one baby step at a time…

Although the D3 and NVD3 SVG-based graphing packages look stunning, they are a bit overkill for simple graphing and consist of more code for the browser to load and run. Maybe some other time – for now I’m using the Flot package for these graphs, again.

Will share the code as soon as I can make up my mind about the basic app structure!

Processing P1 data

In Software on Jan 3, 2013 at 00:01

Last post in a series of three (previous posts here and here).

The decoder for this data, in CoffeeScript, is as follows:

Screen Shot 2012-12-31 at 15.28.22

Note that the API of these decoders is still changing. They are now completely independent little snippets of code which do only one thing – no assumptions on where the data comes from, or what is to be done with the decoded results. Each decoder takes the data, creates an object with decoded fields, and finishes by calling the supplied “cb” callback function.
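Since the API is still changing and the original snippet is only a screenshot, here’s a sketch of that decoder shape in JavaScript – the field names follow the sample output below, but the function name and callback convention are illustrative:

```javascript
// A decoder is given raw bytes plus a callback: it builds an object with
// decoded fields and hands it to cb – making no assumptions about where
// the data comes from or what happens to the result.
function decodeP1(rawBytes, cb) {
  const result = {
    usew: rawBytes[0],          // power drawn from the grid
    genw: rawBytes[1] * 10,     // power fed into the grid, stored as x10 W
  };
  cb(result);
}

decodeP1([0, 6], (out) => console.log(out));  // prints { usew: 0, genw: 60 }
```

Because the decoder only talks to its callback, the same snippet works whether the bytes arrive from a live serial port or from a replayed log file.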

Here is some sample output, including a bit of debugging:

Screen Shot 2012-12-31 at 13.43.10

As you can see, this example packet used 19 bytes to encode 10 values plus a format code.

Explanation of the values shown:

  • usew is 0: no power is being drawn from the grid
  • genw is 6: power fed into the grid is 10 × 6, i.e. ≈ 60 W
  • p1 + p3 is current total consumption: 8 + 191, i.e. 199W
  • p2 is current solar power output: 258W

With a rate of about 50 kbit/s using the standard RF12 driver settings – i.e. 20 µs per bit – and with some 10 bytes of packet header + footer overhead, this translates to (19 + 10) × 8 × 20 = 4,640 µs of “on-air” time once every 10 seconds, i.e. still under 0.05 % bandwidth utilisation. Fine.

This is weblog post #1200 – onwards! :)

Encoding P1 data

In Software on Jan 2, 2013 at 00:01

After yesterday’s hardware hookup, the next step is to set up the proper software for this.

There are a number of design and implementation decisions involved:

  • how to format the data as a wireless packet
  • how to generate and send out that packet on a JeeNode
  • how to pick up and decode the data in Node.js

Let’s start with the packet: there will be one every 10 seconds, and it’s best to keep the packets as small as possible. I do not want to send out differences, like I did with the otRelay setup, since there’s really not that much to send. But I also don’t want to put too many decisions into the JeeNode sketch, in case things change at some point in the future.

The packet format I came up with is one which I’d like to re-use for some of the future nodes here at JeeLabs, since it’s a fairly convenient and general-purpose format:

       (format) (longvalue-1) (longvalue-2) ...

Yes, that’s right: longs!

The reason for this is that the electricity meter counters are in Watt-hour, and will immediately exceed what can be stored as 16-bit ints. And I really do want to send out the full values, also for gas consumption, which is in 1000ths of a m³, i.e. in liters.

But for these P1 data packets that would be a format code + 8 longs, i.e. at least 25 bytes of data, which seems a bit wasteful. Especially since not all values need longs.

The solution is to encode each value as a variable-length integer, using only as many bytes as necessary to represent each value. The way this is done is to store 7 bits of the value in each byte, reserving the top-most bit for a flag which is only set on the last byte.

With this encoding, 0 is sent as 0x80, 127 is sent as 0xFF, whereas 128 is sent as two bytes 0x01 + 0x80, 129 is sent as 0x01 + 0x81, 1024 as 0x08 + 0x80, etc.

Ints up to 7 bits take 1 byte, up to 14 bits take 2, … up to 28 bits take 4, and full 32-bit values take 5 bytes.
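A sketch of the same encoding (and its decoder) in JavaScript – the real transmitter code is C on the ATmega, so function names here are made up:

```javascript
// Variable-length integer encoding: 7 bits per byte, most-significant
// group first, with the top bit set only on the final byte.
function encodeVarint(n) {
  const out = [];
  do {
    out.unshift(n & 0x7F);
    n = Math.floor(n / 128);
  } while (n > 0);
  out[out.length - 1] |= 0x80;   // flag the last byte
  return out;
}

function decodeVarint(bytes) {
  let n = 0;
  for (const b of bytes) n = n * 128 + (b & 0x7F);
  return n;
}
```

Running the examples from above: `encodeVarint(0)` gives `[0x80]`, `encodeVarint(128)` gives `[0x01, 0x80]`, and `encodeVarint(1024)` gives `[0x08, 0x80]`.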

It’s relatively simple to implement this on an ATmega, using a bit of recursion:

Screen Shot 2012-12-31 at 13.47.01

The full code of this p1scanner.ino sketch is now in JeeLib on GitHub.

Tomorrow, I’ll finish off with the code used on the receiving end and some results.

Playing back logfiles

In Software on Dec 31, 2012 at 00:01

After yesterday’s reading and decoding exploration, here’s some code which will happily play back my daily log files, of which I now have over 4 years’ worth:

Screen Shot 2012-12-29 at 15.55.11

Sample output:

Screen Shot 2012-12-29 at 15.59.52

As you can see, this supports scanning entire log files, both plain text and gzipped. In fact, JeeMonLogParser@parseStream should also work fine with sockets and pipes:

Screen Shot 2012-12-29 at 15.55.46

The beauty – again – is total modularity: both the real serial interface module and this log-replay module generate the same events, and can therefore be used interchangeably. As the decoders work independent of either one, there is no dependency (“coupling”) whatsoever between these modules.

Not to worry: from now on I won’t bore you with every new JavaScript / CoffeeScript snippet I come up with – just wanted to illustrate how asynchronous I/O and events are making this code extremely easy to develop and try out in small bites.

I wish you a Guten Rutsch ins neue Jahr and a safe, healthy, and joyful 2013!

Decoding RF12demo with Node.js

In Software on Dec 30, 2012 at 00:01

I’m starting to understand how things work in Node.js. Just wrote a little module to take serial output from the RF12demo sketch and decode its “OK …” output lines:

Screen Shot 2012-12-29 at 01.04.48

Sample output:

Screen Shot 2012-12-29 at 00.44.21

This quick demo only has decoders for nodes 3 and 9 so far, but it shows the basic idea.

This relies on the EventEmitter class, which offers a very lightweight mechanism of passing around objects on channels, and adding listeners to get called when such “events” happen. A very efficient in-process pub-sub mechanism, in effect!

Here is the “serial-rf12demo” module which does the rest of the magic:

Screen Shot 2012-12-29 at 00.46.13

And that’s really all there is to it – very modular!

Data wants to be dynamic

In Software on Dec 25, 2012 at 00:01

It’s all about dynamics, really. When software becomes so dynamic that you see the data, then all that complex code will vanish into the background:

Screen Shot 2012-12-21 at 23.34.20   DSC_4327

This is the transformation we saw a long time ago when going from teletype-based interaction to direct manipulation with the mouse, and the same is happening here: if the link between physical devices and the page shown on the web browser is immediate, then the checkboxes and indicators on the web page become essentially the same as the buttons and the LED’s. The software becomes invisible – as it should be!

That demo from a few days back really has that effect. And of course then networking kicks in to make this work anywhere, including tablets and mobile phones.

But why stop there? With Tcl, I have always enjoyed the fact that I can develop inside a running process, i.e. modify code on a live system – by simply reloading source files.

With JavaScript, although the mechanism works very differently, you can get similar benefits. When launching the Node.js based server, I use the nodemon utility instead of invoking node directly.


This not only launches a web server on port 3000 for use with a browser, it also starts watching the files in the current directory for changes. In combination with the logic of SocketStream, this leads to the following behavior during development:

  • when I change the main server file or any file inside the server/ directory, nodemon will stop and relaunch the server app, thus picking up all the changes – and SocketStream is smart enough to make all clients re-connect automatically
  • when changing a file anywhere inside the clients/ area, the server sends a special request via WebSockets for the clients, i.e. the web browser(s), to refresh themselves – again, this causes all client-side changes to be picked up
  • when changing CSS files (or rather, the Stylus files that generate it), the same happens, but in this case the browser state does not get lost – so you can instantly view the effects of twiddling with CSS

Let me stress that: the browser updates on each save, even if it’s not the front window!

The benefits for the development workflow are hard to overstate – it means that you can really build a full client-server application in small steps and immediately see what’s going on. If there is a problem, just insert some “console.log()” calls and watch the server-side (stdout) or client-side (browser console window).

There is one issue, in that browser state gets lost with client-side code changes (current contents of input boxes, button state, etc), but this can be overcome by moving more of this state into Redis, since the Redis “store” can just stay running in the background.

All in all, I’m totally blown away by what’s possible and within reach today, and by the way this type of software development can be done. Anywhere and by anyone.


Setting up a SocketStream app

In Software on Dec 24, 2012 at 00:01

As shown yesterday, the SocketStream framework takes care of a lot of things for real-time web-based apps. It’s at version 0.3 (0.4 coming up), but already pretty effective.

Here’s what I did to get the “ss-blink” demo app off the ground on my Mac notebook:

  • no wallet needed: everything is either free (Xcode) or open source (the rest)
  • make sure the Xcode command-line dev tools have been installed (gcc, make, etc)
  • install the Homebrew package manager using this scary-looking one-liner:
    ruby -e "$(curl -fsSkL"
  • using HomeBrew, install Node.js – brew install node
  • that happens to include NPM, the Node Package Manager, all I had to do was add the NPM bin dir to my PATH (in .bash_profile, for example), so that globally installed commands will be found – PATH=/usr/local/share/npm/bin:$PATH

Not there yet, but I wanted to point out that Xcode plus Homebrew (on the Mac – other platforms have their own variants) act as the base layer, with Node.js and NPM as the foundation for everything else. Once you have those installed and working smoothly, everything else is a matter of obtaining packages through NPM as needed and running them with Node.js – a truly amazing software combo. NPM can also handle uninstalls & cleanup.

Let’s move on, shall we?

  • install SocketStream globally – npm install -g socketstream
    (the “-g” is why PATH needs to be properly set after this point)
  • install the nodemon utility – npm install -g nodemon
    (this makes development a breeze, by reloading the server whenever files change)
  • create a fresh app, using – socketstream new ss-blink
  • this creates a dir called ss-blink, so first I switched to it – cd ss-blink
  • use npm to fetch and build all the dependencies in ss-blink – npm install
  • that’s it, start it up – nodemon app.js (or node app.js if you insist)
  • point a browser at the server’s port (3000 by default) and you should see a boilerplate chat app
  • open a second browser window on the same URL, and marvel at how a chat works :)

So there’s some setup involved, and it’s bound to be a bit different on Windows and Linux, but still… it’s not that painful. There’s a lot hidden behind the scenes of these installation tools. In particular npm is incredibly easy to use, and the workhorse for getting tons and tons of packages from GitHub or elsewhere into your project.

The way this works, is that you add one line per package you want to the “package.json” file inside the project directory, and then simply re-run “npm install”. I did exactly that – adding “serialport” as dependency, which caused npm to go out, fetch, and compile all the necessary bits and pieces.
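For instance, pulling in serialport comes down to one extra line in the “dependencies” section of package.json – the version ranges shown here are illustrative:

```json
{
  "name": "ss-blink",
  "dependencies": {
    "socketstream": "0.3.x",
    "serialport": "1.x"
  }
}
```

After saving this, a single `npm install` fetches and compiles whatever is missing.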

Note that none of the above require “root” privileges: no superuser == better security.

For yesterday’s demo, the above was my starting point. However, I did want to switch to CoffeeScript and Jade instead of JavaScript and HTML, respectively – which is very easy to do with the js2coffee and html2jade tools.

These were installed using – npm install -g js2coffee html2jade

And then hours of head-scratching, reading, browsing the web, watching video’s, etc.

But hey, it was a pretty smooth JavaScript newbie start as far as I’m concerned!

Connecting a Blink Plug to a web browser

In Hardware, Software on Dec 23, 2012 at 00:01

Here’s a fun experiment – using Node.js with SocketStream as web server to directly control the LEDs on a Blink Plug and read out the button states via a JeeNode USB:

JC's Grid, page 51

This is the web interface I hacked together:

Screen Shot 2012-12-21 at 23.34.20

The red background comes from pressing button #2, and LED 1 is currently on – so this is bi-directional & real-time communication. There’s no polling: signalling is instant in both directions, due to the magic of WebSockets (this page lists supported browsers).

I’m running blink_serial.ino on the JeeNode, which does nothing more than pass some short messages back and forth over the USB serial connection.

The rest is a matter of getting all the pieces in the right place in the SocketStream framework. There’s no AngularJS in here yet, so getting data in and out of the actual web page is a bit clumsy. The total code is under 100 lines of CoffeeScript – the entire application can be downloaded as ZIP archive.

Here’s the main client-side code from the client/code/app/ source file:

Screen Shot 2012-12-22 at 00.48.12

(some old stuff and weird coding in there… hey, it’s just an experiment, ok?)

The client side, i.e. the browser, can receive “blink:button” events via WebSockets (these are set up and fully managed by SocketStream, including reconnects), as well as the usual DOM events such as changing the state of a checkbox element on the page.

And this is the main server-side logic, contained in the server/rpc/ file:

Screen Shot 2012-12-22 at 00.54.07

The server uses the node-serialport module to gain access to serial ports on the server, where the JeeNode USB is plugged in. And it defines a “sendCommand” which can be called via RPC by each connected web browser.
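The gist of that server-side idea, re-sketched in plain JavaScript: one RPC-callable "sendCommand" which forwards short text commands to the serial port. The command names and the stubbed port object below are my own illustration, not the actual blink_serial protocol, and the real app would hand in a node-serialport instance instead of a fake:

```javascript
// Sketch only: a factory producing the RPC-callable "sendCommand", with the
// serial port abstracted to any object exposing write(). In the real app this
// would be the node-serialport connection to the JeeNode USB.
function makeSendCommand(port, allowedCommands) {
  return function sendCommand(cmd) {
    if (!allowedCommands.includes(cmd)) {
      throw new Error('unknown command: ' + cmd);
    }
    port.write(cmd + '\n'); // assume the JeeNode sketch reads line-based input
  };
}

// A fake port is enough to exercise the logic without hardware:
const written = [];
const fakePort = { write: (s) => written.push(s) };
const sendCommand = makeSendCommand(fakePort, ['L1', 'L0']);
sendCommand('L1');
```

Each connected browser then calls this one entry point over WebSockets, and validation stays in a single place on the server.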

Most of the work is really figuring out where things go and how to get at the different bits of data and code. It’s all in JavaScript / CoffeeScript on both client and server, but you still need to know all the concepts to get to grips with it – there is no magic pill!

Tomorrow, I’ll describe how I created this app, and how to run it.

Update – The code is now on GitHub.

Dynamic web pages

In Software on Dec 22, 2012 at 00:01

There are tons of ways to make web pages dynamic, i.e. have them update in real-time. For many years, constant automatic full-page refreshes were the only game in town.

But that’s more or less ignoring the web evolution of the past decade. With JavaScript in the browser, you can manipulate the DOM (i.e. the structure underlying each web page) directly. This has led to an explosion of JavaScript libraries in recent years, of which the most widespread one by now is probably jQuery.

In jQuery, you can easily make changes to the DOM – here is a tiny example:

Screen Shot 2012-12-20 at 23.40.23

And sure enough, the result comes out as:

Screen Shot 2012-12-20 at 23.40.32

But there is a major problem – with anything non-trivial, this style quickly ends up becoming a huge mess. Everything gets mixed up – even if you try to separate the JavaScript code into its own files, you still need to deal with things like loops inside the HTML code (to create a repeated list, depending on how many data items there are).

And there’s no automation – the more little bits of dynamic info you have spread around the page, the more code you need to write to keep all of them in sync. Both ways: setting items to display as well as picking up info entered via the keyboard and mouse.

There are a number of ways to get around this nowadays – with a very nice overview about seven of the mainstream solutions by Steven Sanderson.

I used Knockout for the RFM12B configuration generator to explore its dynamics. And while it does what it says, and leads to delightfully dynamically-updating web pages, I still found myself mixing up logic and presentation and having to think about template expansion more than I wanted to.

Then I discovered AngularJS. At first glance, it looks like just another JavaScript all-in-the-browser library, with all the usual expansion and looping mechanisms. But there’s a difference: AngularJS doesn’t mix concepts, it embeds all the information it needs in HTML elements and attributes.

AngularJS manipulates the DOM structure (better than XSLT did with XML, I think).

Here’s the same example as above, in Angular (with apologies for abusing ng-init a bit):

Screen Shot 2012-12-20 at 23.40.53

The “ng-app” attribute is the key. It tells AngularJS to go through the element tree and do its magic. It might sound like a detail, but as a result, this page remains 100% HTML – it can still be created by a graphics designer using standard HTML editing tools.

More importantly, this sort of coding can grow without ever becoming a mix of concepts and languages. I’ve seen my share of JavaScript / HTML mashups and templating attempts, and it has always kept me from using JavaScript in the browser. Until now.

Here’s a better example (live demo):

Screen Shot 2012-12-20 at 23.35.12

Another little demo I just wrote can be seen here. More physical-computing related. As with any web app, you can check the page source to see how it’s done.

For an excellent introduction about how this works, see John Lindquist’s 15-minute video on YouTube. There will be a lot of new stuff here if you haven’t seen AngularJS before, but it shows how to progressively create a non-trivial app (using WebStorm).

If you’re interested in this, and willing to invest some hours, there is a fantastic tutorial on the AngularJS site. As far as I’m concerned (which doesn’t mean much) this is just about the best there is today. I don’t care too much about syntax (or even languages), but AngularJS absolutely hits the sweet spot in the browser, on a conceptual level.

AngularJS is from Google, with MIT-licensed source on GitHub, and documented here.

And to top it all off, there is now also a GitHub demo project which combines AngularJS on the client with SocketStream on the server. Lots of reading and exploring to do!

JavaScript reading list

In Software on Dec 21, 2012 at 00:01

As I dive into JavaScript, and prompted by a recent comment on the weblog, it occurred to me that it might be useful to create a small list of resources, for those of you interested in going down the same rabbit hole and starting out along a similar path.

Grab some nice food and drinks, you’re gonna need ’em!

First off, I’m assuming you have a good basis in some common programming language, such as C, C++, or Java, and preferably also one of the scripting languages, such as Lua, Perl, Python, Ruby, or Tcl. This isn’t a list about learning to program, but a list to help you dive into JavaScript, and all the tools, frameworks, and libraries that come with it.

Because JavaScript is just the enabler, really. My new-found fascination with it is not the syntax or the semantics, but the fast-paced ecosystem that is evolving around JS.

One more note before I take off: this is just my list. If you don’t agree, or don’t like it, just ignore it. If there are any important pointers missing (of course there are!), feel free to add tips and suggestions in the comments.


There’s JavaScript (the language), and there are the JavaScript environments (in the browser: the DOM, and on the server: Node). You’ll want to learn about them all.

  • JavaScript: The Good Parts by Douglas Crockford
    2008, 1st ed, 176 pages, ISBN 0596517742
  • JavaScript: The Definitive Guide by David Flanagan
    2011, 6th ed, 1100 pages, ISBN 0596805527

Videos: again by Douglas Crockford, there’s an excellent list at Stack Overflow. Going through them will take many hours, but they are well worth the time. I watched all of these.

Don’t skim over prototypes, “==” vs “===”, and how “this” gets passed to functions.

Being able to understand every single notation in JavaScript is essential. Come back here if you can’t. Or google for stuff. Just don’t cut corners – it’s bound to bite you.
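Two of the gotchas just mentioned, in a few self-contained lines (my own illustration, not from the videos):

```javascript
// "==" coerces types before comparing, "===" does not:
const loose = (0 == '0');   // true: '0' is coerced to the number 0
const strict = (0 === '0'); // false: different types, no coercion

// "this" is bound at call time, not at definition time:
const counter = { n: 41, next() { return this.n + 1; } };
const viaObject = counter.next(); // 42: "this" is counter
const detached = counter.next;    // same function, but the receiver is gone;
                                  // calling detached() would not see counter.n
```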

If you want to dive in really deep, check out this page about JavaScript and Scheme.

In the browser

Next on the menu: the DOM, HTML, and CSS. This is the essence of what happens inside a browser. Can be consumed in small doses, as the need arises. Simply start with the just-mentioned Wikipedia links.

Not quite sure what to recommend here – I’ve picked this up over the years. Perhaps w3schools this or this. Focus on HTML5 and CSS3, as these are the newest standards.

On the server

There are different implementations of JavaScript, but on the server, by far the most common implementation seems to be Node.js. This is a lot more than “just some JS implementation”. It comes with a standard API, full of useful functions and objects.

Node.js is geared towards asynchronous & event-driven operation. Nothing blocks, not even a read from a local disk – because in CPU terms, blocking takes too long. This means that you tend to call a “read” function and give it a “callback” function which gets called once the read completes. Very very different frame of mind. Deeply frustrating at times, but essential for any non-trivial app which needs to deal with networking, disks, and other “slow” peripherals. Including us mortals.

  • Learning Node by Shelley Powers
    2012, 1st ed, 396 pages, ISBN 1449323073

See also this great (but fairly long) list of tutorials, videos, and books at Stack Overflow.


Note that JavaScript on the server replaces all sorts of widespread approaches: PHP, ASP, and such. Even advanced web frameworks such as Rails and Django don’t play a role here. The server no longer acts as a templating system generating dynamic web pages – instead it just serves static HTML, CSS, JavaScript, and image files, and responds to requests via Ajax or WebSockets (often using JSON in both directions).

The term for this is Single-page web application, even though it’s not about staying on a single page (i.e. URL) at all costs. See this website for more background – also as PDF.

The other concepts bound to come up are MVC and MVVM. There’s an article about MVC at A List Apart. And here’s an online book with probably more than you want to know about this topic and about JavaScript design patterns in general.

In a nutshell: the model is the data in your app, the view is its presentation (i.e. while browsing), and the controller is the logic which makes changes to the model. Very (VERY!) loosely speaking, the model sits in the server, the view is the browser, and the controller is what jumps into action on the server when someone clicks, drags, or types something. This simplification completely falls apart in more advanced uses of JS.


I am already starting to become quite a fan of CoffeeScript, Jade, and Stylus. These are pre-processors for JavaScript, HTML, and CSS, respectively. Totally optional.

Here are some CoffeeScript tutorials and a cookbook with recipes.

CoffeeScript is still JavaScript, so a good grasp of the underlying semantics is important.

It’s fairly easy to read these notations with only minimal exposure to the underlying language dialects, in my (still limited) experience. No need to use them yourself, but if you do, the above links are excellent starting points.

Just the start…

The above are really just pre-requisites to getting started. More on this topic soon, but let me just stress that good foundational understanding of JavaScript is essential. There are crazy warts in the language (which Douglas Crockford frequently points out and explains), but they’re a fact of life that we’ll just have to live with. This is what you get with a language which has now become part of every major web browser in the world.

Graphics, oh là là!

In Software on Dec 20, 2012 at 00:01

Graphs used to be made with gnuplot or RRDtool. Both generated on the server and then presented as images in the browser. This used to be called state of the art!

But that’s sooo last-century …

Then came JavaScript libraries such as Flot, which uses the HTML5 Canvas, allowing you to draw the graph in the browser. The key benefit is that these graphs can be made dynamic (updating through real-time data feeds) and interactive (so you can zoom in and show details).

But that’s sooo last-decade …

Now there is this, using the latest HTML5 capabilities and resolution-independent SVG:

Screen Shot 2012-12-18 at 22.15.15

See (click through on each one to get details).

That picture doesn’t really do justice to the way some of these tools adjust dynamically and animate on change. All in the web browser. Stunning – in features and in variety!

I’ve been zooming in a bit (heh) on tools such as Rickshaw and NVD3 – both with lots of fascinating examples. Some parts are just window dressing, but the dynamics and real-time behaviour will definitely help gain more insight into the underlying datasets. Which is what all the visualisation should be about, of course.

For an interesting project using SocketStream, Flot, and DataTables, see DaisyCentral. There’s a great write-up on the Architectural Overview, and another page on graphically setting up automation rules:

Screen Shot 2012-12-18 at 22.48.27

This editor is based on jsPlumb for drawing.

Another interesting project is Dashku, based on SocketStream and Raphaël. It’s a way to build a live dashboard – the essence only became clear to me after seeing this YouTube video. As you build and adjust it in edit mode, you can keep a second view open which shows the final result. Things automatically get synced, due to SocketStream.

Now, if only I knew how to build up my fu-level and find a way into all this magic…

Getting my feet wet

In Software on Dec 19, 2012 at 00:01

It all starts with baby steps. Let me just say that it feels very awkward and humbling to stumble around in a new programming language without knowing how things should be done. Here’s the sort of gibberish I’m currently writing:

Screen Shot 2012-12-18 at 17.49.18

This must be the ugliest code I’ve ever written. Not because the language is bad, but because I’m trying to convert existing code in a hurry, without knowing how to do things properly in JavaScript / CoffeeScript. Of course it’s unreadable, but all I care for right now, is to get a real-time data source up and running to develop the rest with.

I’m posting this so that one day I can look back and laugh at all this clumsiness :)

The output appears in the browser, even though all this is running on the server:

Screen Shot 2012-12-18 at 17.11.04

Ok, so now there’s a “feed” with readings coming in. But that’s just the tip of the iceberg:

  • What should the pubsub naming structure be, i.e. what are the keys / topic names?
  • Should readings be managed per value (temperature), or per device (room node)?
  • What format should this data have, since inserting a decimal point is locale-specific?
  • How to manage new values, given that previous ones can be useful to have around?
  • Are there easy choices to make w.r.t. how to store the history of all this data?
  • How to aggregate values, but more importantly perhaps: when to do this?

And that’s just incoming data. There will also need to be rules for automation and outgoing control data. Not to mention configuration settings, admin front-ends, live development, per-user settings, access rights, etc, etc, etc.

I’m not too interested yet in implementing things for real. Would rather first spend more time understanding the trade-offs – and learning JavaScript. By doodling as I’m doing now and by examining a lot of code written by others.

If you have any suggestions on what I should be looking into, let me know!

“Experience is what you get while looking for something else.” – Federico Fellini

Real-time out of the box

In Software on Dec 18, 2012 at 00:01

I recently came across SocketStream, which describes itself as “A fast, modular Node.js web framework dedicated to building single-page realtime apps”.

And indeed, it took virtually no effort to get this self-updating page in a web browser:

Screen Shot 2012-12-17 at 00.45.50

The input comes from the serial port, I just added this code:

Screen Shot 2012-12-17 at 15.48.50

That’s not JavaScript, but CoffeeScript – a dialect with a concise functional notation (and significant white-space indentation), which gets turned into JavaScript on the fly.

The above does a lot more than collect serial data: the “try” block converts the text to a binary buffer in the form of a JavaScript DataView, ready for decoding and then publishes each packet on its corresponding channel. Just to try out some ideas…
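In plain JavaScript, that text-to-binary step might look roughly like this. The 3-byte packet layout below is made up for illustration; the real channel naming and decoding were still just experiments:

```javascript
// Turn a serial line such as "OK 24 18 197" into a DataView and decode it.
function decodePacket(line) {
  const bytes = line.trim().split(/\s+/)
    .filter((t) => /^\d+$/.test(t))   // keep numeric tokens, drop the "OK" tag
    .map(Number);
  const view = new DataView(new Uint8Array(bytes).buffer);
  return {
    id: view.getUint8(0),             // first byte: a made-up packet id
    value: view.getUint16(1, true),   // next two bytes: little-endian value
  };
}

const packet = decodePacket('OK 24 18 197');
// packet.id === 24, packet.value === 18 + 197 * 256 === 50450
```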

I’m also using Jade here, a notation which gets transformed into HTML – on the fly:

Screen Shot 2012-12-17 at 15.53.56

And this is Stylus, a shorthand notation which generates CSS (yep, again on the fly):

Screen Shot 2012-12-17 at 15.54.25

All of these are completely gone once development is over: with one command, you generate a complete app which contains only pure JavaScript, HTML, and CSS files.

I’m slowly falling in love with all these notations – yeah, I know, very unprofessional!

Apart from installing SocketStream using “npm install -g socketstream”, adding the SerialPort module to the dependencies, and scratching my head for a few hours to figure out how all this machinery works, that is virtually all I had to do.

Development is blindingly fast when it comes to client side editing: just save any file and the browser(s) will automatically reload. With a text editor that saves changes on focus loss, the process becomes instant: edit and switch to the browser. Boom – updated!

Server-side changes require nodemon (as described in the faq). After that, server reload becomes equally automatic during development. Pretty amazing stuff.

The trade-off here is learning to understand these libraries and tools and playing by their rules, versus having to write a lot more yourself. But from what I’ve seen so far, SocketStream with Express, CoffeeScript, Jade, Stylus, SocketIO, Node.js, SerialPort, Redis, etc. takes a staggering amount of work off my shoulders – all 100% open source.

There’s a pubsub-over-WebSockets mechanism inside SocketStream. Using Redis.

Wow. Creating responsive real-time apps hasn’t been this much fun in a long time!

It’s all about MOM

In Software on Dec 17, 2012 at 00:01

Home monitoring and home automation have some very obvious properties:

  • a bunch of sensors around the house are sending out readings
  • with actuators to control lights and appliances, driven by secure commands
  • all of this within and around the home, i.e. in a fairly confined space
  • we’d like to see past history, usually in the form of graphs
  • we want to be able to control the actuators remotely, through control panels
  • and lastly, we’d like to automate things a bit, using configurable rules

In information processing terms, this stuff is real-time, but only barely so: it’s enough if things happen within say a tenth of a second. The amount of information we have to deal with is also quite low: the entire state of a home at any point in time is probably no more than a kilobyte (although collected history will end up being a lot more).

The challenge is not the processing side of things, but the architecture: centralised or distributed, network topology for these readings and commands, how to deal with a plethora of physical interfaces and devices, and how to specify and manage the automation rules. Oh, and the user interface. The setup should also be organic, in that it allows us to grow and evolve all the aspects of our system over time.

It’s all about state and messages: the state of the home, current, sensed, and desired, and the events which change that state, in the form of incoming and outgoing messages.

What we need is MOM, i.e. Message-oriented middleware: a core which represents that state and interfaces through messages – both incoming and generated. One very clean model is to have a core process which allows some processes to “publish” messages to it and others to “subscribe” to specific changes. This mechanism is called pubsub.

Ideally, the core process should be launched once and then kept running forever, with all the features and functions added (at least initially) as separate processes, so that we can develop, add, fix, refine, and even tear down the different functions as needed without literally “bringing down the house” at every turn.

There are a couple of ways to do this, and as you may recall, I’ve been exploring the option of using ZeroMQ as the core foundation for all message exchanges. ZeroMQ bills itself as “the intelligent transport layer” and it supports pubsub as well as several other application interconnect topologies. Now, half a year later, I’m not so sure it’s really what I want. While ZeroMQ is definitely more than flexible and scalable enough, it also is fairly low-level in many ways. A lot will need to be built on top, even just to create that central core process.

Another contender which seems to be getting a lot of traction in home automation these days is MQTT, with an open source implementation of the central core called Mosquitto. In MOM terms, this is called a “broker”: a process which manages incoming message traffic from publishers by re-routing it to the proper subscribers. The model is very clean and simple: there are “channels” with hierarchical names such as perhaps “/kitchen/roomnode/temperature” to which a sensor publishes its temperature readings, and then others can subscribe to say “/+/+/temperature” to get notified of each temperature report around the house, the moment it comes in.
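To make the channel idea concrete, here’s a toy matcher for the single-level “+” wildcard. MQTT’s real matching rules also include the multi-level “#” wildcard and other subtleties this sketch deliberately ignores:

```javascript
// Match an MQTT-style subscription pattern against a topic name, with '+'
// matching exactly one level. Simplified illustration, not the full spec.
function topicMatches(pattern, topic) {
  const p = pattern.split('/');
  const t = topic.split('/');
  if (p.length !== t.length) return false; // '+' never spans levels
  return p.every((seg, i) => seg === '+' || seg === t[i]);
}

const a = topicMatches('/+/+/temperature', '/kitchen/roomnode/temperature'); // true
const b = topicMatches('/+/+/temperature', '/kitchen/roomnode/humidity');    // false
```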

MQTT adds a lot of useful functionality, and optionally supports a quality-of-service (QoS) level as a way to handle messages that need reliable delivery (QoS level 0 messages use best-effort delivery, but may occasionally get dropped). The “retain” feature can hold on to the last message sent on each channel, so that when the system shuts down and comes back up or when a connection has been interrupted, a subscriber immediately learns about the last value. The “last will and testament” lets a publisher prepare a message to be sent out to a channel (not necessarily the same one) when it drops out for any reason.

All very useful, but I’m not convinced this is a good fit. In my perception, state is more central than messages in this context. State is what we model with a home monitoring and automation system, whereas messages come and go in various ways. When I look at the system, I’m first of all interested in the state of the house, and only in the second place interested in how things have changed until now or will change in the future. I’d much rather have a database as the centre of this universe. With excellent support for messages and pubsub, of course.

I’ve been looking at Redis lately, a “key-value store” which is not only small and efficient, but which also has explicit support for pubsub built in. So the model remains the same: publishers and subscribers can find each other through Redis, with wildcards to support the same concept of channels as in MQTT. But the key difference is that the central setup is now based on state: even without any publishers active, I can inspect the current temperature, switch setting, etc. – just like MQTT’s “retain”.

Furthermore, with a database-centric core, we automatically also have a place to store configuration settings and even logic, in the form of scripts, if needed. This approach can greatly simplify publishers and subscribers, as they no longer need local storage for configuration. Not a big deal when everything lives on a single machine, but with a central general-purpose store that is no longer a necessity. Logic can run anywhere, yet operate off the same central configuration.

The good news is that with any of the above three options, programming language choice is irrelevant: they all have numerous bindings and interfaces. In fact, because interconnections take place via sockets, there is not even a need to use C-based interface code: even the language itself can be used to handle properly-formatted packets.

I’ve set up a basic installation on the Mac, using Homebrew. The following steps are not 100% precise, but this is more or less all that’s needed on Windows, MacOSX, or Linux:

    brew install node redis
    npm install -g express
    redis-server               (starts the database server)
    express                    (creates a demo app)
    npm install connect-redis
    node app.js                (starts the demo app on port 3000)

There are many examples around the net on how to get started, such as this one, which is already a bit dated.

Let’s see where this leads to…


In Software on Dec 16, 2012 at 00:01

I learned to program in C a long time ago, on a PDP11 running Unix (one of the first installations in the Netherlands). That’s over 30 years ago and guess what… that knowledge is still applicable. Back in full force on all of today’s embedded µC’s, in fact.

I’ll spare you the list of languages I learned before and after that time, but C has become what is probably the most widespread programming language ever. Today, it is the #1 implementation language, in fact. It powers the gcc toolchain, the Linux operating system, most servers and browsers, and … well, just about everything we use today.

It’s pretty useful to learn stuff which lasts… but also pretty hard to predict, alas!

Not just because switching means you have to start all over again, but because you can become really productive at programming when spending years and years (or perhaps just 10,000 hours) learning the ins and outs, learning from others, and getting really familiar with all the programming language’s idioms, quirks, tricks, and smells.

C (and in its wake C++ and Objective-C) has become irreplaceable and timeless.

Fast-forward to today and the scenery sure has changed: there are now hundreds of programming languages, and so many people programming, that lots and lots of them can thrive alongside each other within their own communities.

While researching a bit how to move forward with a couple of larger projects here at JeeLabs, I’ve spent a lot of time looking around recently, to decide on where to go next.

The web and dynamic languages are here to stay, and that inevitably leads to JavaScript. When you look at GitHub, the most used programming language is JavaScript. This may be skewed by the fact that the JavaScript community prefers GitHub, or that people make more and smaller projects, but there is no denying that it’s a very active trend:

Screen Shot 2012-12-14 at 15.53.14

In a way, JavaScript went where Java once tried to go: becoming the de-facto standard language inside the browser, i.e. on the client side of the web. But there’s something else going on: not only is it taking over the client side of things, it’s also making inroads on the server end. If you look at the most active projects, again on GitHub, you get this list:

Screen Shot 2012-12-14 at 15.57.09

There’s something called Node.js in each of these top-5 charts. That’s JavaScript on the server side. Node.js has an event-based asynchronous processing model and is based on Google’s V8 engine. It’s also phenomenally fast, due to its just-in-time compilation for x86 and ARM architectures.

And then the Aha-Erlebnis set in: JavaScript is the next C !

Think about it: it’s on all web browsers on all platforms, it’s complemented by a DOM, HTML, and CSS which bring it into an ever-richer visual world, and it’s slowly getting more and more traction on the server side of the web.

Just as with C at the time, I don’t expect the world to become mono-lingual, but I think that it is inevitable that we will see more and more developments on top of JavaScript.

With JavaScript comes a free text-based “data interchange protocol”. This is where XML tried to go, but failed – and where JSON is now taking over.

My conclusion (and prediction) is: like it or not, client-side JavaScript + JSON + server-side JavaScript is here to stay, and portable / efficient / readable enough to become acceptable for an ever-growing group of programmers. Just like C.
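And that interchange really is free: a reading travels as text and comes back out as a structurally identical object, with no schema machinery in between (the field names here are invented for illustration):

```javascript
// A sensor reading, serialised for the wire and parsed back on arrival.
const reading = { node: 'roomnode', temp: 21.5, time: 1355702400 };
const wire = JSON.stringify(reading);  // plain text, safe across any socket
const received = JSON.parse(wire);     // a fresh object, same structure
```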

Node.js is implemented in C++ and can be extended in C++, which means that even special-purpose C libraries can be brought into the mix. So one way of looking at JavaScript, is as a dynamic language on top of C/C++.

I have to admit that it’s quite tempting to consider building everything in JavaScript from now on – because having the same language on all sides of a network configuration will probably make things a lot simpler. Actually, I’m also tempted to use pre-processors such as CoffeeScript, Jade, and Stylus, but these are really just optional conveniences (or gimmicks?) around the basic JavaScript, HTML, and CSS trio, respectively.

It’s easy to dismiss JavaScript as yet another fad. But doing so by ignorance would be a mistake – see the Blub Paradox by Paul Graham. Features such as list comprehensions are neat tricks, but easily worked around. Prototypal inheritance and lexical closures on the other hand, are profound concepts. Closures in combination with asynchronous processing (and a form of coding called CPS) are fairly complex, but the fact that some really smart guys can create libraries using these techniques and hide it from us mere mortals means you get a lot more than a new notation and some hyped-up libraries.

I’m not trying to scare you or show off. Nor am I cherry-picking features to bring out arguments in favour of JavaScript. Several languages offer similar – and sometimes even more powerful – features. Based on conceptual power alone, I’d prefer Common Lisp or Scheme, in fact. But JavaScript is dramatically more widespread, and very active / vibrant w.r.t. what is currently being developed in it and for it.

For more about JavaScript’s strengths and weaknesses, see Douglas Crockford’s page.

So where does this leave me? Easy: a JS novice, tempted to start learning from scratch!

Tomorrow, some new considerations for middleware…

Extracting data from P1 packets

In Software on Dec 1, 2012 at 00:01

Ok, now that I have serial data from the P1 port with electricity and gas consumption readings, I would like to do something with it – like sending it out over wireless. The plan is to extend the homePower code in the node which is already collecting pulse data. But let’s not move too fast here – I don’t want to disrupt a running setup before it’s necessary.

So the first task ahead is to scan / parse those incoming packets shown yesterday.

There are several sketches and examples floating around the web on how to do this, but I thought it might be interesting to add a “minimalistic sauce” to the mix. The point is that an ATmega (let alone an ATtiny) is very ill-suited to string parsing, due to its severely limited memory. These packets consist of several hundreds of bytes of text, and if you want to do anything else alongside this parsing, then it’s frighteningly easy to run out of RAM.

So let’s tackle this from a somewhat different angle: what is the minimal processing we could apply to the incoming characters to extract the interesting values from them? Do we really have to collect each line and then apply string processing to it, followed by some text-to-number conversion?

This is the sketch I came up with (“Look, ma! No string processing!”):

Screen Shot 2012 11 29 at 20 44 16

This is a complete sketch, with yesterday’s test data built right into it. You’re looking at a scanner implemented as a hand-made Finite State Machine. The first quirk is that the “state” is spread out over three global variables. The second twist is that the above logic ignores everything it doesn’t care about.

Here’s what comes out, see if you can unravel the logic (see yesterday’s post for the data):

Screen Shot 2012 11 29 at 20 44 49

Yep – that’s just about what I need. This scanner requires no intermediate buffer (just 7 bytes of variable storage) and also very little code. The numeric type codes correspond to different parameters, each with a certain numeric value (I don’t care at this point what they mean). Some values have 8 digits precision, so I’m using a 32-bit int for conversion.

This will easily fit, even in an ATtiny. The moral of this story is: when processing data – even textual data – you don’t always have to think in terms of strings and parsing. Although regular expressions are probably the easiest way to parse such data, most 8-bit microcontrollers simply don’t have the memory for such “elaborate” tools. So there’s room for getting a bit more creative. There’s always a time to ask: can it be done simpler?
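The idea translates directly to other languages, too. Here is the “no strings” approach re-sketched in JavaScript for clarity (the original is C on an ATmega, and this simplified version only extracts the numeric values, not the type codes; the emit callback and names are mine): process one character at a time, accumulate digits only while inside a “(…)” value, and skip the decimal point:

```javascript
// Character-at-a-time scanner: no line buffer, no string parsing.
// The entire state is one flag plus a numeric accumulator.
function makeScanner(emit) {
  let inValue = false, value = 0, digits = 0;
  return function feed(ch) {
    if (ch === '(') {                    // value starts: reset the accumulator
      inValue = true; value = 0; digits = 0;
    } else if (inValue && ch >= '0' && ch <= '9') {
      value = value * 10 + (ch - '0');   // '5' - '0' === 5 in JS as well
      digits++;
    } else if (inValue && ch === '.') {
      // skip the decimal point: 00123.456 accumulates as 123456
    } else if (inValue) {                // '*', ')' or anything else ends it
      if (digits > 0) emit(value);
      inValue = false;
    }
  };
}

const values = [];
const feed = makeScanner((v) => values.push(v));
for (const ch of '1-0:1.8.1(00123.456*kWh)\r\n') feed(ch);
// values is now [123456]
```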

PS. I had a lot of fun coming up with this approach. Minimalism is an addictive game.

Reducing the packet rate

In Software on Nov 22, 2012 at 00:01

One last optimisation, now that the OpenTherm relay sends nice and small 3-byte packets, is to reduce the number of packets sent.

The most obvious reduction would be to only send changed values:

Screen Shot 2012 11 11 at 14 40 15

This is a trivial change, but it has a major flaw: if packets are lost – which they will be, once in a while – then the receiving node will never find out the new value until it changes again.

There are several ways to solve this. I opted for a very simple mechanism: in addition to sending all changes, also send out unchanged values every few minutes anyway. That way, if a packet gets lost, at least it will be re-sent within a few minutes, allowing the receiver to resynchronise its state to the sender.

Here’s the main code, which was rewritten a bit to better match this new algorithm:

Screen Shot 2012 11 11 at 14 35 52

This still keeps the last value for each id in a “history” array, but now also adds a “resend” counter. The reason for this is that I only want to re-send id’s which have been set at least once, and not all 256 of them (of which many are never used). Also, I don’t really want to keep sending id’s for which nothing has been received for a long time. So I’m setting the re-send counter to 10 every time a new value is stored, and then counting them down for each actual re-send.

The final piece of the puzzle is to periodically perform those re-sends:

Screen Shot 2012 11 11 at 14 36 10

And in the main loop, we add this:

Screen Shot 2012 11 11 at 14 36 24

Here’s how it works: every second, we go to the next ID slot. Since there are 256 of them, this will repeat roughly every 4 minutes before starting over (resendCursor is 8 bits, so it’ll wrap from 255 back to 0).

When examining the id, we check whether its resend counter is non-zero, meaning it has to be sent (or re-sent). Then the counter is decremented, and the value is sent out. This means that each id value will be sent out at most 10 times, over a period of about 42 minutes. But only if it was ever set.

To summarise, as long as id values are coming in:

  • if the value changed, it will be sent out immediately
  • if it didn’t, it’ll be re-sent anyway, once every 4 minutes or so
  • … but not more than 10 times, if it’s never received again

And indeed, this reduces the packet rate, yet sends and re-sends everything as intended:

  L 13:48:48.073 usb-A40117UK OK 14 24 18 197
  L 13:48:49.080 usb-A40117UK OK 14 25 51 0
  L 13:48:50.072 usb-A40117UK OK 14 26 43 0
  L 13:48:52.072 usb-A40117UK OK 14 28 51 0
  L 13:49:20.075 usb-A40117UK OK 14 56 60 0
  L 13:49:36.548 usb-A40117UK OK 14 20 238 51
  L 13:50:37.549 usb-A40117UK OK 14 20 238 52
  L 13:51:33.532 usb-A40117UK OK 14 24 18 186
  L 13:51:37.563 usb-A40117UK OK 14 20 238 53
  L 13:52:34.537 usb-A40117UK OK 14 24 18 188
  L 13:52:38.553 usb-A40117UK OK 14 20 238 54
  L 13:52:40.088 usb-A40117UK OK 14 0 2 0
  L 13:52:41.096 usb-A40117UK OK 14 1 10 0
  L 13:52:46.087 usb-A40117UK OK 14 6 3 1
  L 13:52:54.101 usb-A40117UK OK 14 14 100 0
  L 13:52:56.101 usb-A40117UK OK 14 16 18 0
  L 13:53:00.100 usb-A40117UK OK 14 20 238 54
  L 13:53:04.099 usb-A40117UK OK 14 24 18 188
  L 13:53:05.090 usb-A40117UK OK 14 25 51 0
  L 13:53:06.098 usb-A40117UK OK 14 26 43 0
  L 13:53:08.098 usb-A40117UK OK 14 28 51 0
  L 13:53:34.538 usb-A40117UK OK 14 24 18 192

Time to close that OpenTherm Gateway box and move it next to the heater. Onwards!

Reducing the payload size

In Software on Nov 21, 2012 at 00:01

The OpenTherm relay sketch presented yesterday sends out 9-byte packets containing the raw ASCII text received from the gateway PIC. That’s a bit wasteful of bandwidth, so let’s reduce that to a 3-byte payload instead. Here is some extra code which does just that:

Screen Shot 2012 11 11 at 12 37 33

I’m using a very hacky way to convert hex to binary, and it doesn’t even check for anything. This should be ok, because the packet has already been verified to be of a certain kind:

  • marked as coming from either the thermostat or the heater
  • the packet type is either Write-Data or Read-Ack
  • every other type of incoming packet will be ignored

Note the shouldSend() implementation “stub”, to be filled in later to send fewer packets.

Here are the results of this change, as received by the central node:

  L 11:29:26.651 usb-A40117UK OK 14 16 18 0
  L 11:29:27.547 usb-A40117UK OK 14 20 236 30
  L 11:29:28.635 usb-A40117UK OK 14 28 45 0
  L 11:29:29.642 usb-A40117UK OK 14 0 2 0
  L 11:29:30.650 usb-A40117UK OK 14 25 46 0
  L 11:29:31.562 usb-A40117UK OK 14 1 10 0
  L 11:29:31.658 usb-A40117UK OK 14 1 10 0
  L 11:29:34.649 usb-A40117UK OK 14 25 46 0

Much better, although still a lot of duplication and far too many packets to send.

I’ll fix this tomorrow, in the final version of this otRelay.ino sketch.

OpenTherm relay

In Software on Nov 20, 2012 at 00:01

Now that the OpenTherm Gateway has been verified to work, it’s time to think about a more permanent setup. My plan is to send things over wireless via an RFM12B on 868 MHz. And like the SMA solar inverter relay, the main task is to capture the incoming serial data and then send this out as wireless packets.

First, a little adapter – with 10 kΩ resistors in series as 5V -> 3.3V “level converters”:

DSC 4253

(that’s an old JeeNode v2 – might as well re-use this stuff, eh?)

And here’s the first version of the otRelay.ino sketch I came up with:

Screen Shot 2012 11 11 at 00 26 19

The only tricky bit in here is how to identify each message coming in over the serial port. That’s fairly easy in this case, because all valid messages are known to consist of exactly one letter, then 8 hex digits, then a carriage return. We can simply ignore anything else:

  • if there is a valid numeric or uppercase character, and there is room: store it
  • if a carriage return arrives at the end of the buffer: bingo, a complete packet!
  • everything else causes the buffer to be cleared

This isn’t the packet format I intend to use in the final setup, but it’s a simple way to figure out what’s coming in in the first place.

It worked on first try. Some results from this node, as logged by the central JeeLink:

  L 10:38:07.582 usb-A40117UK OK 14 84 56 48 49 65 48 48 48 48
  L 10:38:07.678 usb-A40117UK OK 14 66 52 48 49 65 50 66 48 48
  L 10:38:08.558 usb-A40117UK OK 14 84 56 48 49 57 48 48 48 48
  L 10:38:08.654 usb-A40117UK OK 14 66 52 48 49 57 51 53 48 48
  L 10:38:09.566 usb-A40117UK OK 14 84 49 48 48 49 48 65 48 48
  L 10:38:09.678 usb-A40117UK OK 14 66 68 48 48 49 48 65 48 48
  L 10:38:10.574 usb-A40117UK OK 14 84 48 48 49 66 48 48 48 48
  L 10:38:10.686 usb-A40117UK OK 14 66 54 48 49 66 48 48 48 48
  L 10:38:11.550 usb-A40117UK OK 14 84 48 48 48 70 48 48 48 48
  L 10:38:11.646 usb-A40117UK OK 14 66 70 48 48 70 48 48 48 48
  L 10:38:12.557 usb-A40117UK OK 14 84 48 48 49 50 48 48 48 48

One of the problems with just relaying everything, apart from the fact that it’s wasteful to send it all as hex characters, is that there’s quite a bit of info coming out of the gateway:

Screen Shot 2012 11 10 at 22 44 51

Not only that – a lot of it is in fact redundant. There’s really no need to send the request as well as the reply in each exchange. All I care about are the “Read-Ack” and “Write-Data” packets, which contain actual meaningful results.

Some smarts in this relay may reduce RF traffic without losing any vital information.

OpenTherm data processing

In Software on Nov 19, 2012 at 00:01

Before going into processing the data from Schelte Bron’s OpenTherm Gateway, I’d like to point to OpenTherm Monitor, a multi-platform application he built and also makes freely available from his website.

It’s not provided for Mac OSX, but as it so happens, this software is written in Tcl and based on Tclkit, by yours truly. Since JeeMon is nothing but an extended version of Tclkit, I was able to extract the software and run it with my Mac version of JeeMon:

  sdx unwrap otmonitor.exe
  jeemon otmonitor.vfs/main.tcl
Heh – nothing beats “re-using” one’s own code in new and mysterious ways, eh?

Here’s the user interface which pops up, after setting up the serial port (it needed some hacking in the otmonitor.tcl script):

Screen Shot 2012 11 10 at 22 35 47

I left this app running for an hour (vertical lines are drawn every 5 minutes), while raising the room temperature in the beginning, and running the hot water tap a bit later on.

Note the high error count: looks like the loose wires are highly susceptible to noise and electrostatic fields. Even just moving my hand near the laptop (connected to the gateway via the USB cable) could cause the Gateway to reset (through its watchdog, no doubt).

Still, it looks like the whole setup works very nicely! There’s a lot of OpenTherm knowledge built into the otmonitor code, allowing it to extract and even control various parameters in both heater and thermostat. As the above window shows, all essential values are properly picked up, even though this heater is from a different vendor. That’s the whole point of OpenTherm, of course: letting products from different vendors inter-operate.

But here’s the thing: neither the heater nor the thermostat are near any serial or USB ports over here, so for me it would be much more convenient to transmit this info wirelessly.

Using a JeeNode of course! (is there any other way?) – stay tuned…

PS. Control would be another matter, since then the issue of authentication will need to be addressed, but as I said: that’s not on the table here at the moment.

The difference between 2 and 3

In AVR, Software on Nov 13, 2012 at 00:01

One. Ok, next post :)

I was curious about the difference between Power-down and Standby in the ATmega328p. Power-down turns off the main clock (a 16 MHz resonator in the case of JeeNodes), whereas Standby keeps it running. I was quite surprised by the outcome… read on.

There’s an easy way to measure this, as far as software goes, because the rf12_sendWait() function has an argument to choose between the two (careful: 2 = Standby, 3 = Power-down – unrelated to the values in the SMCR register!).

I tweaked radioBlip.ino a bit, and came up with this simple test sketch:

Screen Shot 2012 11 05 at 18 13 23

With this code, it’s just a matter of hooking up the oscilloscope again in current measurement mode (a 10 Ω resistor in the power line), and comparing the two.

Here’s the standby version (arg 2):


… and here’s the power-down version (arg 3), keeping the display the same:


I’ve zoomed in on the second byte transmission, and have dropped the baseline by 21 mA to properly zoom in, so what you’re seeing is the ATmega waking up to store a single byte into the RFM12B’s transmit register, and then going back to sleep again.

The thing to watch is the baseline level of these two scope captures. In the first case, it’s at about 0.5 mA above the first division, and the processor quickly wakes up, does its thing, and goes into power-save again.

In the second case, there’s a 40 to 50 µs delay and “stutter” as the system starts its clock (the 16 MHz resonator), does its thing, and then goes back to that ultra-low power level.

Bytes are going out once every 200 µs, so fast wakeup is essential. The trade-off is between keeping the processor in standby, drawing 0.5 mA more, versus a more sophisticated clock startup followed by a drop back to total shutdown.

What can also be gleaned from these pictures, is that the RF12 transmit interrupt code takes under 40 µs @ 16 MHz. This explains why the RF12 driver can work with a clock frequency down to 4 MHz.

The thing about power-down mode (arg 3), is that it requires a fuse setting different from what the standard Arduino uses. We have to select fast low-power crystal startup, in 258 cycles, otherwise the ATmega will require too much time to start up. This is about 16 µs, and looks very much like that second little hump in the last screen shot.

Does all this matter, in terms of power savings? Keep in mind that this is running while the RFM12B transmitter is on, drawing 21 mA. This case was about the ATmega powering down between each byte transmission. Using my scope’s math integral function, I measured 52.8 µC for standby vs 60.0 µC for power-down – so power-down actually uses almost 14 % more charge during transmit!

The explanation for this seemingly contradictory result is that the power-down version adds a delay before it actually sends out the first byte to transmit. In itself that wouldn’t really make a difference, but because of the delay, the transmitter stays on noticeably longer – wiping out the gains of shutting down the main clock. Check this out:


Standby is the saved white reference trace, power-down is yellow.

See? Ya’ can’t always predict this stuff – better measure before jumping to conclusions!

PS. While I’m at it – enabling the RF12’s “OPTIMIZE_SPI” mode lowers charge usage only a little: from 52.8 to 51.6 µC, i.e. using fast SPI @ 8 MHz instead of 2 MHz for sending to the RFM12B. Hardly worth the trouble.

Using watchdog resets

In AVR, Hardware, Software on Nov 9, 2012 at 00:01

The Bluetooth readout node running the “smaRelay” code is ready to go:

DSC 4237

As mentioned yesterday, I’ve decided to take the easy way out and completely power down the Bluetooth module and go through a full powerup/connect/readout cycle about once every 5 minutes.

The advantage for me of a battery-powered unit, is that I don’t have to locate this thing near a power outlet – it can be placed out of sight, unobtrusively doing its work.

I seem to have developed an allergy for power cables and wall warts all over the place…

The power on/off logic held a little surprise, about which I’ll report tomorrow.

Here’s the new part of the updated smaRelay.ino sketch:

Screen Shot 2012 11 07 at 14 27 33

Quite a different use of the watchdog here:

  • on powerup, go into low-power mode and wait for 5 minutes
  • prepare the watchdog to perform a reset in 8 seconds
  • power up Bluetooth, connect to the SMA, and read out some values
  • then power down Bluetooth and power up the RFM12 radio
  • send out a data packet over RF12
  • lastly, turn the radio off and power down
  • let the watchdog do the reset and make this sketch start over

This approach has the “benefit” that it’ll fail gracefully: even if anything goes wrong and things hang, the hardware watchdog will pull the ATmega out of that hole and restart it, which then starts off again by entering an ultra-low power mode for 5 minutes. So even if the SMA is turned off, this sketch won’t be on more than about 1% of the time.

Here’s the energy consumption measurement of an actual readout cycle:


The readings are a bit noisy – perhaps because I had to use 1 mV/div over a 1 Ω resistor (the 10 Ω used before interfered too much with this new power-switching code).

As you can see, the whole operation takes about 4 seconds. IOW, this node is consuming 153 milli-Coulombs every 300 seconds. That’s 0.5 mC/sec, or 0.5 mA on average. So the estimated lifetime on 3x AA 1900 mAh Eneloops is 3800 hours – some 5 months.

Update – The first set of batteries lasted until March 18th, 2013 – i.e. over 4 months.

Good enough for now. Deploy and forget. Onwards!

Relaying SMA data as RF12 packets

In Hardware, Software on Nov 7, 2012 at 00:01

Yesterday’s post shows how to read out the SMA solar PV inverter via Bluetooth. The idea was to install this on the Mac Mini JeeLabs server, which happens to be in range of the SMA inverter (one floor below). But that brings up a little dilemma.

Install a potentially kernel-panic-generating utility on the main JeeLabs server? Nah…

I don’t really care whether this issue gets fixed. I don’t want to have the web server go down for something as silly as this, and since it’s a kernel panic, there’s no point trying to move the logic into a Linux VM – the problem is more likely in Apple’s Bluetooth / WiFi stack, which will get used no matter how I access things.

The alternative is to implement a little “SMA Relay” using a JeeNode with a Bluetooth module attached to it, which drives the whole protocol and then broadcasts results periodically over RF12. That way I can figure out and control it.

I tried to use the SoftwareSerial library built into the newer Arduino IDE releases, but ran into problems with lost bytes – even with the software UART speed down to 19200 baud.

So I ended up first debugging the code on an Arduino Mega, which has multiple hardware UARTs and allows good ol’ debugging-with-print-statements, sending out that debug info over USB, while a separate hardware UART deals with all communication to and from the Bluetooth module.

Once that worked, all debugging statements were removed and the serial Bluetooth was switched to the main (and only) UART of the JeeNode. The extra 10 kΩ R’s in the RX and TX lines allow hooking up a USB BUB for re-flashing. The BUB will simply overrule, but needs to be removed to try things out:

DSC 4236

(the Bluetooth module I used in this setup is Sparkfun’s BlueSMiRF Silver)

Next step was to add a little driver to JeeMon again, the aging-but-still-working Tcl-based home monitoring setup at JeeLabs. Fairly straightforward, since it merely needs to extract a couple of 16-bit unsigned ints from incoming packets:

Screen Shot 2012 11 06 at 10 39 10

And sure enough, data is coming in (time in UTC):

Screen Shot 2012 11 06 at 10 32 35

… and properly decoded:

Screen Shot 2012 11 06 at 10 33 08

The ATmega code has been added as example to JeeLib on GitHub, see smaRelay.ino.

I’m still debugging some details, as the Arduino sketch often stops working after a few minutes. I suspect that some sort of timeout and retry is needed, in case Bluetooth comms get lost occasionally. Bluetooth range is only a few meters, especially with the reinforced concrete floors and walls here at JeeLabs.

Anyhow, it’s a start. Suggestions and improvements are always welcome!

Accessing the SMA inverter via Bluetooth

In Software on Nov 6, 2012 at 00:01

As pointed out in recent comments, the SMA solar PV inverter can be accessed over Bluetooth. This offers various goodies, such as reading out the daily yield and the voltage / power generation per MPP tracker. Since the SB5000TL has two of them, and my panels are split between 12 east and 10 west, I am definitely interested in seeing how they perform.

Besides, it’s fun and fairly easy to do. How hard could reading out a Bluetooth stream be?

Well, before embarking on the JeeNode/Arduino readout, I decided to first try the built-in Bluetooth of my Mac laptop, which is used by the keyboard and mouse anyway.

I looked at a number of examples out there, but didn’t really like any of ’em – they looked far too complex and elaborate for the task at hand. This looked like a wheel yearning to be re-invented… heh ;)

The trouble is that the protocol is fully packetized, checksummed, etc. The way it was set up, this seems to also allow managing multiple inverters in a solar farm. Nothing I care about, but I can see the value and applicability of such an approach.

So what it comes down to is to send a bunch of hex bytes in just the right order and with just the right checksums, and then pulling out a few values from what comes back by only decoding what is relevant. Fortunately, the Nanode SMA PV Monitor project on GitHub by Stuart Pittaway already did much of this (and a lot more).

I used some templating techniques (in good old C) which are probably worth a separate post, to generate the proper data packets to connect, initialise, login, and ask for specific results. And here’s what I got – after a lot of head-scratching and peering at hex dumps:

    $ make
    cc -o bluesman main.cpp
    logged in
    daily yield: 2886 Wh @ Sun Jun 29 15:38:03     394803
    total generated power: 75516 W
    AC power: 432 W
    451f: DC voltage = 181.00 V
    451f: DC voltage = 142.62 V
    DC voltage
    251e: DC power = 252 W
    251e: DC power = 177 W
    DC power

The clock was junk at the time, but as you can see there are some nice bits of info in there.

One major inconvenience was that my 11″ MacBook Air tended to crash every once in a while. And in the worst possible way: hard kernel panic -> total reboot needed -> all unsaved data lost. Yikes! Hey Apple, get your stuff solid, will ya – this is awful!

The workaround appears to be to disable wireless and not exit the app while data is coming in. Sounds awfully similar to the kernel panics I can generate by disconnecting an FTDI USB cable or BUB, BTW. Needless to say, these disruptions are extremely irritating while trying to debug new code.

Next step… getting this stuff to work on an ATmega – stay tuned!

Verifying synchronisation over time

In AVR, Software on Nov 5, 2012 at 00:01

(Perhaps this post should be called “Debugging with a scope, revisited” …)

The syncRecv.ino sketch developed over the last few days is shaping up nicely. I’ve been testing it with the homePower transmitter, which periodically sends out electricity measurements over wireless.

Packets are sent out every 3 seconds, except when there have been no new pulses from any of the three 2000 pulse/kWh counters I’m using. So normally, a packet is expected once every 3 seconds, but at night when power consumption drops to around 100 Watt, only every third or fourth measurement will actually lead to a transmission.

The logic I’m using was specifically chosen to deal with this particular case, and the result is a pretty simple sketch (under 200 LOC) which seems to work out surprisingly well.

How well? Time to fire up that oscilloscope again:


This is a current measurement, collected over about half an hour, i.e. over 500 reception attempts. The screen was set to 10s trace persistence mode (with “false colors” and “background” enabled) to highlight the most recent traces and keep showing each one, so all the triggers are superimposed on one another.

These samples were taken with about 300 W consumption (i.e. 600 pulses per hour, one per 6s on average), so the transmitter was indeed skipping packets fairly regularly.

Here’s a typical single trigger, giving a bit more detail for one reception:


Lots of things one can deduce from these images:

  • the mid-level current consumption is ≈ 8 mA, that’s the ATmega running
  • the high-level current increases by another 11 mA for the RFM12B radio
  • almost all receptions are within 8..12 ms
  • most missing packets cause the receiver to stay on for up to some 18 ms
  • on a few occasions, the reception window is doubled
  • when that happens, the receiver can be on, but still no more than 40 ms
  • the 5 ms after proper reception are used to send out info over serial
  • the ATmega is on for less than 20 ms most of the time (and never over 50 ms)
  • it looks like the longer receptions happened no more than 5 times

If you ignore the outliers, you can see that the receiver stays on well under 15 ms on average, and the ATmega well under 20 ms.

This translates to a 0.5% duty cycle with 3s transmissions, or a 200-fold reduction in power over leaving the ATmega and RFM12B on all the time. To put that in perspective: on average, this setup will draw about 0.1 mA (instead of 20 mA), while still receiving those packets coming in every 3 seconds or so. Not bad, eh?

There’s always room for improvement: the ATmega could be put to sleep while the radio is receiving (it’s going to be idling most of that time anyway). And of course the serial port debugging output should be turned off for real use. Such optimisations might halve the remaining power consumption – diminishing returns, clearly!

But hey, enough is enough. I’m going to integrate this mechanism into the homeGraph.ino sketch – and expect to achieve at least 3 months of run time on 3x AA (i.e. an average current consumption of under 1 mA total, including the GLCD).

Plenty for me – better than both my wireless keyboard and mouse, in fact.

Predicting the next packet time

In AVR, Software on Nov 4, 2012 at 00:01

The problem described in the last post was caused by the receiver losing track of what the transmitter is doing when packets are missing.

Here are the key parts of an improved syncRecv.ino, with several refinements:

  • on powerup, an estimate is chosen as before, using a few iterations
  • we save the best estimate in EEPROM and reuse it after a restart
  • the loop then switches to predicting the next arrivals each time
  • failure to capture within 20 packets restarts a full estimation run
  • every 4, 8, 12, etc lost packets, the window size is doubled (up to a maximum)
  • on each successful capture, the window size is halved (down to a minimum)
  • the cycle time estimate is averaged with a 6-fold smoothing filter

The estimation logic is roughly the same as before:

Screen Shot 2012 10 31 at 15 15 10

And here’s the main loop, with the rest of the logic described above:

Screen Shot 2012 10 31 at 15 15 25

The code will lock onto packets even when some of them are missing. It does so by simply ignoring the last miss, and predicting another packet again exactly one cycle later.

Here are some first results, illustrating synchronisation loss:

Screen Shot 2012 10 31 at 15 20 55

Still some rough spots, but this algorithm should receive properly with a < 1 % duty cycle.

Losing sync is bad

In AVR, Software on Nov 3, 2012 at 00:01

Yesterday’s code has a nasty side-effect of getting into a corner out of which it almost can’t recover at times. Here’s a somewhat longer run, edited for brevity (note the indented lines):

Screen Shot 2012 10 30 at 23 52 35

On one occasion, it took 238 retries (12 minutes) to get back into sync!

Note how the code is sailing slightly too close to the edge, by always being at 0 or 1 ms from the expected time, just because this watchdog timer “happens” to run at a particular rate.

One fix is to gradually widen the window again when too many receptions in a row fail:

Screen Shot 2012 10 30 at 23 56 44

The other step is to always leave a bit more time when calculating a new estimate – it’s better to err on the side of receiving too early than to miss the start of the packet:

Screen Shot 2012 10 30 at 23 55 53

Whoops, looks like this doesn’t get it right – but it illustrates the widening:

Screen Shot 2012 10 31 at 00 02 29

This change works a lot better (i.e. plus 2, instead of minus 2):

Screen Shot 2012 10 31 at 00 07 05

Here’s the sample output:

Screen Shot 2012 10 31 at 00 05 35

The syncRecv code on GitHub has been updated with this tweak. More to come!

Synchronised reception

In AVR, Software on Nov 2, 2012 at 00:01

That homeGraph setup brought out the need to somehow synchronise a receiver to the transmitter, as illustrated in a recent post. See also this forum discussion, which made me dive in a little deeper.

Let’s take a step back and summarise what this is all about…

The basic idea is that if the transmitter is transmitting in a fixed cycle, then the receiver need not be active more than a small time window before and after the expected transmission. This might make it possible to reduce power consumption by two orders of magnitude, or more.

Here’s the initial syncRecv.ino sketch I came up with, now also added to JeeLib:

Screen Shot 2012 10 30 at 22 55 17

The idea is to turn the radio off for T – W milliseconds after a packet comes in, where T is the current cycle time estimate, and W the current +/- window size, and then wait for a packet to come in, but no later than T + W.

Each time we succeed, we get a better estimate, and can reduce the window size. Down to the minimum 16 ms, which is the granularity of the watchdog timer. For that same reason, time T is truncated to multiples of 16 ms as well. We simply cannot deal with time more accurately if all we have is the watchdog.

Here are some results from three different JeeNodes:

Screen Shot 2012 10 30 at 22 48 04

Screen Shot 2012 10 30 at 22 42 44

Screen Shot 2012 10 30 at 22 39 44

Screen Shot 2012 10 30 at 22 37 04

(whoops – the offset sign is the wrong way around, because I messed up the subtraction)

Note that for timing purposes, this sketch first waits indefinitely for a packet to come in. Since the transmitter doesn’t always send out a packet, some of these measurement attempts fail – as indicated by multiple R’s.

That last example is a bit surprising: it was modified to run without powering down the radio in between reception attempts. What it seems to indicate is that reception works better when the radio is not put to sleep – not sure why. Or maybe the change in current consumption affects things?

As you can see, those watchdog timers vary quite a lot across different ATmega chips. These tests were done in about 15 minutes, with the sending sketch (and home power consumption levels) about the same during the entire period.

Still, these results look promising. Seems like we could get the estimates down to a few milliseconds and run the watchdog at its full 16 ms resolution. With the radio on less than 10 ms per 3000 ms, we’d get a 300-fold reduction in power consumption. Battery powered reception may be feasible after all!

Meet the wireless JeeBoot

In Software on Oct 31, 2012 at 00:01

This has been a long time coming, and the recent Elektro:camp meet-up has finally pushed me to figure out the remaining details and get it all working (on foam board!):

DSC 4220

Bottom middle is a JeeLink, which acts as a “boot server” for the other two nodes. The JeeNode USB on the left and the JeeNode SMD on the right (with AA Power Board) both now have a new boot loader installed, called JeeBoot, which supports over-the-air uploading via the RFM12B wireless module.

The check for new firmware happens when pressing reset on a remote node (not on power-up!). This mechanism is already quite secure, since you need physical access to the node to re-flash it. Real authentication could be added later.

The whole JeeBoot loader is currently a mere 1.5 KB, including a custom version of the RF12 driver code. Like every ATmega boot loader, it is stored in the upper part of flash memory and cannot be damaged by sketches running amok. The code does not yet include proper retry logic and better low-power modes while waiting for incoming data, but that should fit in the remaining 0.5 KB. The boot loader could be expanded to 4 KB if need be, but right now this thing is small enough to fit even in an ATmega168, with plenty of room left for a decent sketch.

The boot algorithm is a bit unconventional. The mechanism is driven entirely from the remote nodes, with the central server merely listening and responding to incoming requests in a state-less fashion. This approach should offer better support for low-power scenarios. If no new code is available, or if the server does not respond quickly, the remote node continues by launching the current sketch. If the current sketch does not match its stored size and CRC (perhaps because of an incomplete or failed previous upload attempt), then the node retries until it has a valid sketch. That last part hasn’t been fully implemented yet.

The boot server node can be a JeeLink, which has enough memory on board to store different sketches for different remote nodes (not all of them need to be running the same code). But it could also be another RFM12B-based setup, such as a small Linux box or PC.

This first test server has just two fixed tiny sketches built in: fast blink and slow blink. It alternately sends either one or the other, which is enough to verify that the process works. Each time any node is reset, it’ll be updated with one of these two sketches. A far more elaborate server sketch will be needed for a full-fledged over-the-air updatable WSN.

But hey, it’s a start, and the hardest part is now done!

Debugging with a scope

In Hardware, Software on Oct 30, 2012 at 00:01

Good, but not perfect… I was curious about the actual current consumption of this latest version of the homeGraph.ino sketch. So here’s another timing snapshot:

Annotated capture

This has trace persistence turned on, collecting multiple samples in one screen shot.

As you can see, the radio is turned on for ≈ 75 ms in this case (different JeeNode, slightly different watchdog timer period). Then after 30..35 ms the sketch goes to sleep again.

The second case is when nothing has been received after 150 ms: the “radioTimer” fires, and in this case the sketch turns off the radio as well, assuming there will be no packet coming in anymore.

But the interesting bit starts on the next packet coming in after one has been omitted. The logic in the code is such that the radio will be left on indefinitely in this case. As you can see, every late arrival then comes 100 ms later than expected: same radio-off + power down signature on the right-hand side of the screen capture, just much later.

Here is the code again:

Screen Shot 2012 10 21 at 22 41 49

And the main loop which calls this code:

Screen Shot 2012 10 21 at 22 06 18

That “if (!timingWasGood) …” code is probably wrong.

But it’s not so easy to fix, I’m afraid. Because the timing gets even more tricky if two or more packets are omitted, not just occasionally a single one. Sometimes, the radio doesn’t get switched off quickly – as you can see in this 30-minute capture at the end of the day, when power levels are starting to drop:


Maybe I need to try out a software PLL implementation after all. Another reason is that there seems to be a fair amount of variation in watchdog clock timing between different ATmega’s. For one of them I had to use 2800 as the value for recvOffTime, while with another 3000 worked a lot better. A self-locking algorithm would solve this, and would let us predict the next packet time with even more accuracy.

But this is still good enough for now. Normally the radio will only be on for about 100 ms every 3s, so that’s a 30-fold power reduction, or about 0.6 mA on average for the radio.

This still ought to run for a month or two on a 3x AA battery pack, at least for daytime use. Which brings up another idea: turn the whole thing off when it’s dark – the display is not readable without light anyway.

Oh well. Perfection is the enemy of done.

Final HomeGraph setup

In Hardware, Software on Oct 29, 2012 at 00:01

This concludes my little gadget to track home energy use and solar energy production:

DSC 4219

The graph shows production vs total consumption in 15-minute intervals for the last 5 hours. A summary of this information is shown at the bottom: “+” is total solar production in the last 5 hours, “-” is total energy consumption in that same period.

The actual consumption values are not yet correct because the home energy pulse counter is wired incorrectly, but they will be once that is fixed. The total home consumption works out to 1327 – 1221 + 7 = 113 W, since the home counter is currently running in reverse.

The graph is auto-scaling and I’m storing these values in EEPROM whenever it scrolls, so that a power-down or reset from say a battery change will only lose the information accumulated in the last 15 minutes.

Power consumption is “fairly low”, because the backlight has been switched off and the radio is turned off between predicted reception times. The mechanism works quite well when there is a packet every 3 or 6 seconds, but with longer intervals (i.e. at night), the sketch still keeps the receiver on for too long.

A further refinement could be to reduce the scan cycle when there are almost no new pulses coming in – and then picking up again when the rate increases. Trouble is that it’s impossible to predict accurately when packets will be skipped, so the risk is that the sketch quickly goes completely out of sync when packet rates do drop. The PLL approach would be a better option, no doubt.

But all in all, I’m quite happy with the result. The display is reasonably easy to read in daylight, even without the backlight. I’ll do a battery-lifetime test with a fresh new battery once the pulse counter wiring has been fixed.

The code has become a bit long to be included in this post – it’s available as homeGraph on GitHub, as part of the GLCDlib project. I’m still amazed by how much a little 200-line program can do in just 11 KB of flash memory, and how it all ends up as a neat custom gadget. Uniquely tailored for JeeLabs, but it’s all open source and easy to adapt by anyone.

Feel free to add a light sensor, a PIR motion detector, an RTC, whatever…


It’s all about timing – part 2

In Software on Oct 28, 2012 at 00:01

Yesterday’s optimisation was able to achieve an estimated 60-fold reduction in power consumption, by turning on the radio only when a packet is predicted to arrive, and going into a deep sleep mode for the rest of the time.

That does work, but the problem is that the sending node isn’t sending out a packet every 3 seconds. Sometimes no packet is sent because there is nothing to report, and sometimes a packet might simply be lost.

Yesterday’s code doesn’t deal with these cases. It merely waits after proper reception, because in that case the time to the next packet is known and predictable (to a certain degree of accuracy). If a packet is not coming in at the expected time, the sketch will simply continue to wait indefinitely with the radio turned on.

I tried to somewhat improve on this, by limiting the time to wait when a packet does not arrive, turning the radio off for a bit before trying again. But that turns out to be rather complex – how do you deal with the growing uncertainty as the last successful reception recedes further into the past? We can still assume that the sender is staying in its 3s cycle, but: 1) the clocks are not very accurate on either end (especially the watchdog timer), and 2) we need a way to resync when all proper synchronisation is lost.

Here’s my modified loop() code:

Screen Shot 2012 10 21 at 22 06 18

And here’s the slightly extended snoozeJustEnough() logic:

Screen Shot 2012 10 21 at 22 06 04

Yes, it does work, some of the time – as you can see in this slow scope capture:


Four of those blips are working properly, i.e. immediate packet reception, drop in current draw, and then display update. Two of them are missed packets (probably none were sent), which are again handled properly by turning the receiver off after 150 ms, instead of waiting indefinitely.

But once that first packet reception fails, the next try will be slightly too late, and then the receiver just stays on for the full 3 seconds.

And that’s actually just a “good” example I picked from several runs – a lot of the time, the receiver just stays on for over a dozen seconds. This algorithm clearly isn’t sophisticated enough to deal with periodic packets when some of ’em are omitted from the data stream.

I can think of a couple of solutions. One would be to always send out a packet, even just a brief empty one, to keep the receiver happy (except when there is packet loss). But that seems a bit wasteful of RF bandwidth.

The other approach would be to implement a Phase Locked Loop in software, so that the receiver tracks the exact packet frequency a lot more accurately over time, and then gradually widens the receive window when packets are not coming in. This would also deal with gradual clock variations on either end, and if dimensioned properly would end up with a large window to re-gain “phase lock” when needed.

But that’s a bit beyond the scope of this whole experiment. The point of this demonstration was to show that with extra smarts in the receiving node, it is indeed possible to achieve low-power periodic data reception without having to keep the receiver active all the time.

Update – With the following timing tweak, everything works out a lot better already:

Screen Shot 2012 10 21 at 22 41 49

The receiver is now staying in sync surprisingly well – it’s time to declare victory!

It’s all about timing

In Software on Oct 27, 2012 at 00:01

The previous post showed that most of the power consumption of the homeGraph.ino sketch was due to the RFM12B receiver being on all the time. This is a nasty issue which comes back all the time with Wireless Sensor Networks: for ultra-low power scenarios, it’s simply impossible to keep the radio on at all times.

So how can we pick up readings from the new homePower.ino sketch, used to report energy consumption and production of the new solar panels?

The trick is timing: the homePower.ino sketch was written in such a way that it only sends out a packet every 3 seconds. Not always, only when there is something to report, but always exactly on the 3 second mark.

That makes it possible to predict when the next packet can be expected, because once we do receive a packet, we know that we don’t have to expect one for another 3 seconds. Aha, so we could turn the receiver off for a while!

It’s very easy to try this out. First, the code which goes to sleep:

Screen Shot 2012 10 21 at 21 26 40

The usual stuff really – all the hard work is done by the JeeLib library code.

And here’s the new loop() code:

Screen Shot 2012 10 21 at 20 24 23

In other words: whenever a packet has been received, process it, then go to sleep for 2.8 seconds, then wake up and start the receiver again.

Here’s the resulting power consumption timing, as voltage drop over a series resistor:


I can’t say I fully understand what’s going on, but I think it’s waiting for the next packet for about 35 ms with the receiver enabled (drawing ≈ 18 mA), and then another 35 ms is spent generating the graph and sending the new image to the Graphics Board over software SPI (drawing 7 mA, i.e. just the ATmega).

Then the µC goes to sleep, leaving just the display showing the data.

So we’re drawing about 18 mA for say 50 ms every 3000 ms – this translates to a 60-fold reduction in average current consumption, or about 0.3 mA (plus the baseline current consumption). Not bad!

Unfortunately, real-world use isn’t working out quite as planned… to be continued.

Reducing power consumption

In Software on Oct 26, 2012 at 00:01

The problem with yesterday’s setup is power consumption:

DSC 4213

There’s no way the sketch will run for a decent amount of time on just a single AA battery.

Let’s take some measurements:

  • total power consumption on a 5V supply (via the USB BUB) is 20 mA
  • total power consumption on 3.3V is almost the same: 19 mA
  • the battery drain on the Eneloop at 1.28V is a whopping 95 mA

(the regulator on the AA board was selected for its low idle current, not max efficiency)

That last value translates to a run time of 20 hours on a fully charged battery. Fine for a demo, but definitely not very practical if this means we must replace batteries all the time.

Let’s try to reduce the power consumption of this thing – my favourite pastime.

The first obvious step is to turn off the Graphics Board backlight – a trivial change, since it can be done under software control. With the dark-on-light version of the GLCD this is certainly feasible, since that display is still quite readable in ambient light. But the net effect with a 5V supply is that we now draw 16.5 mA … not that much better!

The real power consumer is the RFM12B module, which is permanently on. Turning it off drops the power consumption to 6.0 mA (with the Graphics Board still displaying the last values). And putting the ATmega to sleep reduces this even further, down to 0.5 mA. Now we’re cookin’ – this would last some 6 months on 3 AA batteries.

Except that this variant of the “homeGraph” sketch is next to useless: it powers up, waits for a packet, updates the display, and then goes to sleep: forever… whoops!

But maybe there’s a way out – stay tuned…

Who needs a server?

In Hardware, Software on Oct 25, 2012 at 00:01

With all those energy pulses flying through the air, we don’t even need a central setup – the information can be shown on a little self-contained Graphics Board display, like so:

DSC 4214

The “homeGraph.ino” sketch which drives it has just been added to GitHub:

Screen Shot 2012 10 21 at 15 20 54

There’s room for a little graph, but it’ll have to wait until the home usage counter has been rewired to measure the proper value (the value shown above is in fact the surplus).

Decoding the pulses

In Software on Oct 23, 2012 at 00:01

Receiving the packets sent out yesterday is easy – in fact, since they are being sent out on the same netgroup as everything else here at JeeLabs, I don’t have to do anything. Part of this simplicity comes from the fact that the node is broadcasting its data to whoever wants to hear it. There is no need to define a destination in the homePower.ino sketch. Very similar to UDP on Ethernet, or the CAN bus, for that matter.

But incoming data like this is not very meaningful, really:

  L 22:09:25.352 usb-A40117UK OK 9 2 0 69 235 0 0 0 0 103 0 97 18

What I have in mind is to completely redo the current system running here (currently still based on JeeMon) and switch to a design using ZeroMQ. But that’s still in the planning stages, so for now JeeMon is all I have.

To decode the above data, I wrote this little “homePower.tcl” driver:

Screen Shot 2012 10 21 at 01 52 07

It takes those incoming 12-byte packets, and converts them to three sets of results – each with the total pulse count so far (2000 pulses/KWh), and the last calculated instantaneous power consumption. Note also the “decompression” of the millisecond time differences, as applied on the sending side.

Calculation of the actual Watts being consumed (or produced) is trivial: there are 2000 pulses per KWh, so one pulse per half hour represents an average consumption (or production) of exactly one Watt.

To activate this driver I also had to add this line to “main.tcl”:

  Driver register RF12-868.5.9 homePower

And sure enough, out come results like this:

Screen Shot 2012 10 21 at 01 52 32

This is just after a reset, at night with no solar power being generated. That’s 7 Watt consumed by the cooker (which is off, but still drawing some residual power for its display and control circuits), and 105 Watt consumed by the rest of the house.

Actually, you’re looking at the baseline power consumption here at JeeLabs. I did these measurements late at night with all the lights and everything else turned off (this was done by staring at these figures from a laptop on wireless, running off batteries). A total of 112 Watt, including about 24 Watt for the Wireless router plus the Mac Mini running the various JeeLabs web servers, both always on. Some additional power (10W perhaps?) is also drawn by the internet modem downstairs, so that leaves only some 80 Watt of undetermined “vampire power” drawn around the house. Not bad!

One of my goals for the next few months will be to better understand where that remaining power is going, and then try to reduce it even further – if possible. That 80 W baseline is 700 KWh per year after all, i.e. over 20% of the total annual consumption here.

Here are some more readings, taken the following morning with heavy overcast clouds:

Screen Shot 2012 10 21 at 10 24 37

This also illustrates why the wiring error is causing problems: the “pow3” value is now a surplus (counting down), but there’s no way to see that in the measurement data.

I’ve dropped the packet sending rate to at most once every 3 seconds, and am very happy with these results which give me a lot more detail and far more frequent insight into our power usage around here. Just need to wait for the electrician to come back and reroute counter 3 so it doesn’t include solar power production.

Sending out pulses

In Software on Oct 22, 2012 at 00:01

With pulses being detected, the last step in this power consumption sketch is to properly count the pulses, measure the time between ’em, and send off the results over wireless.

There are a few details to take care of, such as not sending off too many packets, and sending out the information in such a way that occasional packet loss is harmless.

The way to do this is to track a few extra values in the sketch. Here are the variables used:

Screen Shot 2012 10 21 at 00 14 43

Some of these were already used yesterday. The new parts are the pulse counters, last-pulse millisecond values, and the payload buffer. For each of the three pulse counter sources, I’m going to send the current count and the time in milliseconds since the last pulse. This latter value is an excellent indication of instantaneous power consumption.

But I also want to keep the packet size down, so these values are sent as 16-bit unsigned integers. For the counts this is not a problem: it is virtually impossible to miss over 65000 packets in a row, so we can always resync – even with occasional packet loss.

For the pulse time differences, having millisecond resolution is great, but that limits the total to no more than about a minute between pulses. Not good enough in the case of solar power, for example, which might stop on very dark days.

The solution is to “compress” the data a bit: values up to one minute are sent as is, values up to about 80 minutes are sent in 1-second resolution, and everything above is sent as being “out of range”.

Here are the main parts of the sketch, the full “homePower.ino” sketch is now on GitHub:

Screen Shot 2012 10 21 at 00 36 00

Sample output, as logged by a process which always runs here at JeeLabs:

  L 22:09:25.352 usb-A40117UK OK 9 2 0 69 235 0 0 0 0 103 0 97 18

Where “2 0 69 235” is the cooker, “0 0 0 0” is solar, and “103 0 97 18” is the rest.

Note that results are sent off no more than once a second, and note the careful distinction between having data pending to send and actually sending it out only once that 1000 ms send timer expires.

The scanning and blinking code hasn’t changed. The off-by-one bug was in calling setblinks() with a value of 0 to 2, instead of 1 to 3, respectively.

That’s it. The recently installed three new pulse counters are now part of the JeeLabs home monitoring system. Well… as far as remote sensing and reporting goes. Processing this data will require some more work on the receiving end.

Detecting pulses

In Software on Oct 21, 2012 at 00:01

With yesterday’s solar setup operational, it’s now time to start collecting the data.

The pulse counter provides a phototransistor output which is specified as requiring 5..27V and drawing a current up to 27 mA, so my hunch is that it’s a phototransistor in series with an internal 1 kΩ resistor. To interface to the 3.3V I/O pins of a JeeNode, I used this circuit:

Pulse power

That way, if the circuit has that internal 1 kΩ resistor, the pin will go from 0 to 2.5V and act as a logic “1”. In case there is no internal resistor, the swing will be from 0 to 5V, but with the 10 kΩ resistor in series, this will still not harm the ATmega’s I/O pin (excess current will leak away through the internal ESD protection diode).

No need to measure anything, the above will work either way!

I considered building this project into a nice enclosure, but in the end I don’t really care – as long as it works reliably. As only 3 input pins are used, there’s a spare to also drive an extra LED. So here’s the result, built on a JeePlug board – using a 6-pin RJ12 socket:

DSC 4204

The RJ12 socket pins are not on a 0.1″ grid, but they can be pushed in by slanting the socket slightly. The setup has been repeated three times, as you can see:

DSC 4205

To avoid having to chase up and down the stairs while debugging too many things at once, I started off with this little sketch to test the basic pulse counter connections:

Screen Shot 2012 10 20 at 22 08 26

All the code does, is detect when a pulse starts and blink a LED one, two, or three times, depending on which pulse was detected. Note how the two MilliTimer objects make it easier to perform several periodic tasks independently in a single loop.

Tomorrow: logic to track pulse rates and counts, and sending results off into the air.

PS. Off-by-one bug – will be fixed tomorrow.

Push-scanning with coroutines

In Software on Oct 18, 2012 at 00:01

The “benstream.lua” decoder I mentioned yesterday is based on coroutines. Since probably not everyone is familiar with them (they do not exist in languages such as C and C++), it seems worthwhile to go over this topic briefly.

Let’s start with this completely self-contained example, written in Lua:

Screen Shot 2012 10 15 at 01 23 15

Here’s the output generated from the test code at the end:

  (string) abcde
  (string) 123
  (number) 12345
  (boolean) false
  (number) 987
  (boolean) false
  (number) 654
  (number) 321
  (number) 99999999
  (boolean) true
  (string) one
  (number) 11
  (string) two
  (number) 22
  (string) bye!

There is some weird stuff in here, notably how the list start, dict start, and list/dict end are returned as “false”, “true”, and the empty list ({} in Lua), respectively. The only reason for this is that these three values are distinguishable from all the other possible return values, i.e. strings and numbers.

But the main point of this demo is to show what coroutines are and what they do. If you’re not familiar with them, they do take getting used to… but as you’ll see, it’s no big deal.

First, think about the task: we’re feeding characters into a function, and expect it to track where it is and return complete “tokens” once enough data has been fed through it. If it’s not ready, the “cwrap” function will return nil.

The trouble is that such code cannot be in control: it can’t “pull” more characters from the input when it needs them. Instead, it has to wait until it gets called again, somehow figure out where it was the last time, and then deal with that next character. For an input sequence such as “5:abcde”, we need to track being in the leading count, then skip the colon, then track the actual data bytes as they come in. In the C code I added to the EmBencode library recently, I had to implement a Finite State Machine to keep track of things. It’s not that hard, but it feels backwards. It’s like constantly being called away during a meeting, and having to pick up the discussion again each time you come back :)

Now look at the above code. It performs the same task as the C code, but it’s written as if it was in control! – i.e. we’re calling nextChar() to get the next character, looping and waiting as needed, and yet the coroutine is not actually pulling any data. As you can see, the test code at the end is feeding characters one by one – the normal way to “push” data, not “pull” it from the parsing loop. How is this possible?

The magic happens in nextChar(), which calls coroutine.yield() with an argument.

Yield does two things:

  • it saves the complete state of execution (i.e. call stack)
  • it causes the coroutine to return the argument given to it

It’s like popping the call stack, with one essential difference: we’re not throwing the stack contents away, we’re just moving it aside.

Calling a coroutine does almost exactly the opposite:

  • it restores the saved state of execution
  • it causes coroutine.yield() to return with the argument of the call

These are not full-blown continuations, as in Lisp, but sort of halfway there. There is an asymmetry in that there is a point in the code which creates and starts a coroutine, but from then on, these almost act like they are calling each other!

And that’s exactly what turns a “pull” scanner into a “push” scanner: the code thinks it is calling nextChar() to immediately obtain the next input character, but the system sneakily puts the code to sleep, and resumes the original caller. When the original caller is ready to push another character, the system again sneakily changes the perspective, resuming the scanner with the new data, as if it had never stopped.

This is in fact a pretty efficient way to perform multiple tasks. It’s not exactly multi-tasking, because the caller and the coroutine need to alternate calls to each other in lock-step for all this trickery to work, but the effect is almost the same.

The only confusing bit perhaps, is that argument to nextChar() – what does it do?

Well, this is the way to communicate results back to the caller. Every time the scanner calls nextChar(), it supplies it with the last token it found (or nil). The conversation goes like this: “I’m done, I’ve got this result for you, now please give me the next character”. If you think of “coroutine.yield” as sort of a synonym for “return”, then it probably looks a lot more familiar already – the difference being that we’re not returning from the caller, but from the entire coroutine context.

The beauty of coroutines is that you can hide the magic so that most of the code never knows it is running as part of a coroutine. It just does its thing, and “asks” for more by calling a function such as nextChar(), expecting to get a result as soon as that returns.

Which it does, but only after having been put under narcosis for a while. Coroutines are a very neat trick that can simplify the code in languages such as Lua, Python, and Tcl.

Collecting serial packets

In Software, Linux on Oct 17, 2012 at 00:01

As promised after yesterday’s “rf12cmd” sketch, here’s a Lua script for Linux which picks up characters from the serial port and turns them into Lua data structures:

Screen Shot 2012 10 14 at 17 31 37

Ah, if only things were that simple! On Mac OSX, the serial port hangs when opened normally, so we need to play non-blocking tricks and use the lower-level nixio package:

Screen Shot 2012 10 14 at 17 28 40

(note also the subtle “-f” vs “-F” flag for stty… welcome to the world of portability!)

Here’s some sample output:

  $ ./rf12show.lua 
    [1] = (string) hi
    [2] = (string) rf12cmd
    [3] = (number) 1
    [4] = (number) 100
    [1] = (string) rx
    [2] = (number) 868
    [3] = (number) 5
    [4] = (number) 19
    [5] = (12 bytes) BAC80801E702200000DA5121
    [1] = (string) rx
    [2] = (number) 868
    [3] = (number) 5
    [4] = (number) 3
    [5] = (4 bytes) 21DE0F00
    [1] = (string) rx
    [2] = (number) 868
    [3] = (number) 5
    [4] = (number) 19
    [5] = (8 bytes) 7331D61AE7C43901

That’s the startup greeting plus three incoming packets.

I’m using a couple of Lua utility scripts – haven’t published them yet, but at least you’ll get an idea of how the decoding process can be implemented:

  • dbg.lua – this is the vardump script, extended to show binary data in hex format
  • benstream.lua – a little script I wrote which does “push-parsing” of Bencoded data

Note that this code is far too simplistic for real-world use. The most glaring limitation is that it is blocking, i.e. we wait for each next character from the serial port, while being completely unresponsive to anything else.

Taking things further will require going into processes, threads, events, asynchronous I/O, polling, or some mix thereof – which will have to wait for now. To be honest, I’ve become a bit lazy because the Tcl language solves all that out of the box, but hey… ya’ can’t have everything!

Reporting serial packets

In Software on Oct 16, 2012 at 00:01

The RF12demo sketch was originally intended to be just that: a demo, pre-flashed on all JeeNodes to provide an easy way to try out wireless communication. That’s how it all started out over 3 years ago.

But that’s not where things ended. I’ve been using RF12demo as main sketch for all “central receive nodes” I’ve been working with here. It has a simple command-line parser to configure the RF12 driver, there’s a way to send out packets, and it reports all incoming packets – so basically it does everything needed:

  [RF12demo.8] A i1* g5 @ 868 MHz 
  DF I 577 10

  Available commands:
    <nn> i     - set node ID (standard node ids are 1..26)
                 (or enter an uppercase 'A'..'Z' to set id)
    <n> b      - set MHz band (4 = 433, 8 = 868, 9 = 915)
    <nnn> g    - set network group (RFM12 only allows 212, 0 = any)
    <n> c      - set collect mode (advanced, normally 0)
    t          - broadcast max-size test packet, with ack
    ...,<nn> a - send data packet to node <nn>, with ack
    ...,<nn> s - send data packet to node <nn>, no ack
    <n> l      - turn activity LED on PB1 on or off
    <n> q      - set quiet mode (1 = don't report bad packets)
  Remote control commands:
    <hchi>,<hclo>,<addr>,<cmd> f     - FS20 command (868 MHz)
    <addr>,<dev>,<on> k              - KAKU command (433 MHz)
  Flash storage (JeeLink only):
    d                                - dump all log markers
    <sh>,<sl>,<t3>,<t2>,<t1>,<t0> r  - replay from specified marker
    123,<bhi>,<blo> e                - erase 4K block
    12,34 w                          - wipe entire flash memory
  Current configuration:
   A i1* g5 @ 868 MHz

This works fine, but now I’d like to explore a real “over-the-wire” protocol, using the new EmBencode library. The idea is to send “messages” over the serial line in both directions, with named “commands” and “events” going to and from the attached JeeNode or JeeLink. It won’t be convenient for manual use, but should simplify things when the host side is a computer running some software “driver” for this setup.

Here’s the first version of a new rf12cmd sketch, which reports all incoming packets:

Screen Shot 2012 10 14 at 15 31 51

Couple of observations about this sketch:

  • we can no longer send a plain text “[rf12cmd]” greeting, that too is now sent as packet
  • the greeting includes the sketch name and version, but also the decoder’s packet buffer size, so that the other side knows the maximum packet size it may use
  • invalid packets are discarded, we’re using a fixed frequency band and group for now
  • command/event names are short – let’s not waste bandwidth or string memory here
  • I’ve bumped the serial line speed to 115200 baud to speed up data transfers a bit
  • there’s no provision (yet) for detecting serial buffer overruns or other serial link errors
  • incoming data is sent as a binary string, no more tedious hex/byte conversions
  • each packet includes the frequency band and netgroup, so the exchange is state-less

The real change is that all communication is now intended for computers instead of us biological life-forms – and as a consequence, some of the design choices have changed to better match this context.

Tomorrow, I’ll describe a little Lua script to pick up these “serial packets”.

What’s the deal with Bencode?

In Software on Oct 5, 2012 at 00:01

These past few days, I’ve explored some of the ways we could use Bencode as data format “over the wire” in both very limited embedded µC contexts and in scripting languages.

Calling it “over the wire” is a bit more down-to-earth than calling Bencode a data serialisation format. Same thing really: a way to transform a possibly complex nested (but acyclic) data structure into a sequence of bytes which can be sent (or stored!) and then decoded at a later date. The world is full of these things, because where there is communication, there is a need to get stuff across, and well… getting stuff across in the world of computers tends to happen byte-by-byte (or octet-by-octet, if you prefer).

When you transfer things as bytes, you have to delimit the different pieces somehow. The receiver needs to know where one “serialised” piece of information ends and the next starts.

There are three ways to send multi-byte chunks and keep track of those boundaries:

  • send a count, send the data, rinse and repeat
  • send the bytes, then add a special byte marker at the end
  • send the bytes and use some “out-of-band” mechanism to signal the other side

Each of them has major implications and trade-offs for how a transmission works. With counts, if there is any sort of error we’re hosed – we lose sync and have no guaranteed way to ever recover from it.

With the second approach, we need to reserve some character code as end marker. That means it can’t appear inside the data. So then the world came up with escape sequences to work around this limitation. That’s why to enter a quote inside a string in C, you have to use a backslash: "this is a quoted \" inside a string" – and then you lose the backslash. It’s all solvable, of course… but messy.

The third approach uses a different trick: we send whatever we like, and then we use a separate means of communication to signal the end or some other state change. We could use two separate communication lines for example, sending data over one and control information over the other. Or close the socket when done, as with TCP/IP.

If you don’t get this stuff right, you can get into a lot of trouble. Like when in the 60’s, telephone companies used “in-band” tones on a telephone line to pass along routing or even billing information. Some clever guys got pretty famous for that – simply inserting a couple of tones into the conversation!

Closer to home, as noted recently – even the Arduino bootloader got bitten by this.

So how about Bencode, eh?

Well, I think it hits the sweet spot in tradeoffs. It’s more or less based on the second mechanism, using a few delimiters and special characters to signal the start and end of various types of data, while switching to a byte-counted prefix for the things that matter: strings with arbitrary content (hence including any bit pattern). And it sure helps that we often tend to know the sizes of our strings up front.

With Bencode, you don’t have to first build up the entire message in memory (or generate it twice) to find out how many bytes will be sent – as required if we had to use a size prefix. Yet the receiver also can prepare for all the bigger memory requirements, because strings are still prefixed with the number of bytes to come.

Also, having an 8-bit clean data path really offers a lot of convenience. Because any set of bytes can be pushed through without any processing. Like 32-bit or 64-bit floats, binaries, ZIP’s, MP3’s, video files – anything.

Another pretty clever little design choice is that neither string lengths nor signed integers are limited in size or magnitude in this protocol. They both use the natural decimal notation we all use every day. A bigger number is simply a matter of sending more digits. And if you want to send data in multiple pieces: send them as a list.

Lastly, this format has the property that if all you send is numerical and plain ASCII data, then the encoded string will also only consist of plain text. No binary codes or delimiters in sight, not even for the string sizes. That can be a big help when trying to debug things.
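All of those design choices together still fit in about a dozen lines of Python. This is an illustrative toy encoder of my own, just to show how little machinery Bencode needs (not one of the libraries used elsewhere in this series):

```python
def bencode(x):
    # strings: <length>:<bytes> -- 8-bit clean, thanks to the byte count
    if isinstance(x, str):
        return '%d:%s' % (len(x), x)
    # integers: i<decimal digits>e -- unlimited magnitude
    if isinstance(x, int):
        return 'i%de' % x
    # lists: l<items>e
    if isinstance(x, list):
        return 'l%se' % ''.join(bencode(v) for v in x)
    # dicts: d<key><value>...e, keys sorted to make the encoding canonical
    if isinstance(x, dict):
        return 'd%se' % ''.join(bencode(k) + bencode(v) for k in sorted(x)
                                for v in [x[k]])
    raise TypeError('cannot bencode %r' % x)
```

Plain data in, plain text out: bencode([1, 'abc']) yields the printable string li1e3:abce.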

Yep – an elegant set of compromises and design choices indeed, this “Bencode” thing!

Bencode in Lua, Python, and Tcl

In Software, Linux on Oct 4, 2012 at 00:01

Ok, let’s get on with this Bencode thing. Here is how it can be used on Linux, assuming you have Lua, Python, or Tcl installed on your system.

Let me show you around in the wonderful land of rocks, eggs, and packages (which, if I had my way, would have been called rigs … but, hey, whatever).


Lua’s Bencode implementation by Moritz Wilhelmy is on GitHub. To install, do:

    luarocks install bencode

(actually, I had to use “sudo”, this appears to be solved in newer versions of LuaRocks)

LuaRocks – a clever play on words if you think about it – is Lua’s way of installing well-known packages. It can be installed on Debian using “sudo apt-get install luarocks”.

The package ends up as a single file: /usr/local/share/lua/5.1/bencode.lua

Encoding example, using Lua interactively:

    > require 'bencode'
    > s = {1,2,'abc',{234},{a=1,b=2},321}
    > print(bencode.encode(s))

To try it out, we need a little extra utility code to show us the decoded data structure. I simply copied table_print() and to_string() from this page into the interpreter, and did:

    > t = 'li1ei2e3:abcli234eed1:ai1e1:bi2eei321ee'
    > print(to_string(bencode.decode(t)))
      a = "1"
      b = "2"

(hmmm… ints are shown as strings, I’ll assume that’s a flaw in to_string)


Python’s Bencode implementation by Thomas Rampelberg is on PyPi. Install it as follows:

    easy_install $PKGS/b/bencode/bencode-1.0-py2.7.egg

It ends up as “/usr/local/lib/python2.7/dist-packages/bencode-1.0-py2.7.egg” – this is a ZIP archive, since eggs can be used without unpacking.

This depends on easy_install. I installed it first, using this magic incantation:


And it ended up in /usr/local/bin/ – the standard spot for user-installed code on Linux.

Now let’s encode the same thing in Python, interactively:

    >>> import bencode
    >>> bencode.bencode([1,2,'abc',[234],{'a':1,'b':2},321])                        

And now let’s decode that string back into a data structure:

    >>> bencode.bdecode('li1ei2e3:abcli234eed1:ai1e1:bi2eei321ee')                  
    [1, 2, 'abc', [234], {'a': 1, 'b': 2}, 321]

Note: for another particularly easy to read Python decoder, see Mathieu Weber‘s version.
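In that same easy-to-read spirit, a complete recursive-descent decoder is only a screenful of Python. This is a sketch of the idea, not Mathieu Weber’s actual code:

```python
def bdecode(s, i=0):
    """Decode one Bencoded item from s starting at offset i.
    Returns (value, offset just past the item)."""
    c = s[i]
    if c == 'i':                            # integer: i<digits>e
        e = s.index('e', i)
        return int(s[i+1:e]), e + 1
    if c.isdigit():                         # string: <length>:<bytes>
        colon = s.index(':', i)
        n = int(s[i:colon])
        return s[colon+1:colon+1+n], colon + 1 + n
    if c == 'l':                            # list: l<items>e
        i, items = i + 1, []
        while s[i] != 'e':
            v, i = bdecode(s, i)
            items.append(v)
        return items, i + 1
    if c == 'd':                            # dict: d<key><value>...e
        i, d = i + 1, {}
        while s[i] != 'e':
            k, i = bdecode(s, i)
            d[k], i = bdecode(s, i)
        return d, i + 1
    raise ValueError('malformed Bencode at offset %d' % i)
```

Feeding it the test string from above reproduces the original nested structure.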


Tcl’s Bencode implementation by Andreas Kupries is called Bee and is part of Tcllib.

Tcllib is a Debian package, so it can be installed using “sudo apt-get install tcllib”.

Ok, so installation is trivial, but here we run into an important difference: Tcl’s data structures are not “intrinsically typed”. The type (and performance) depends on how you use the data, following Tcl’s “everything is a string” mantra.

Let’s start with decoding instead, because that’s very similar to the previous examples:

    % package require bee    
    % bee::decode li1ei2e3:abcli234eed1:ai1e1:bi2eei321ee
    1 2 abc 234 {a 1 b 2} 321

Decoding works fine, but as you can see, type information vanishes in the Tcl context. We’ll need to explicitly construct the data structure with the types needed for Bencoding.

I’m going to use some Tcl trickery by first defining shorthand commands S, N, L, and D to abbreviate the calls, and then construct the data step by step as a nested set of calls:

    % foreach {x y} {S String N Number L ListArgs D DictArgs} {
        interp alias {} $x {} bee::encode$y
      }
    % L [N 1] [N 2] [S abc] [L [N 234]] [D a [N 1] b [N 2]] [N 321]

So we got what we wanted, but as you can see: not all the roads to Rome are the same!

Decoding Bencode data

In AVR, Software on Oct 3, 2012 at 00:01

After the encoding and collection of Bencoded data, all the pieces are finally in place to actually process that incoming data again.

I’ve extended the EmBencode library and the serialReceive demo sketch to illustrate the process:

Screen Shot 2012 09 30 at 15 15 53

Let’s try it with the same test data, i.e. the following text (all on one line):


Here is the output:

Screen Shot 2012 09 30 at 15 18 03

You can see the pieces being separated, and all the different parts of the message being decoded properly. One convenient feature is that the asString() and asNumber() calls can be mixed at will. So strings that come in but are actually numeric can be decoded as numbers, and numbers can be extracted as strings. For binary data, you can get at the exact string length, even when there are 0-bytes in the data.

The library takes care of all string length manipulations, zero-byte termination (which is not in the encoded data!), and buffer management. The incoming data is in fact not copied literally to the buffer, but stored in a convenient way for subsequent use. This is done in such a way that the amount of buffer space needed never exceeds the size of the incoming data.

The general usage guideline for this decoder is:

  • while data comes in, pass it to the decoder
  • when the decoder says it’s ready, switch to getting the data out:
    • call nextToken() to find out the type of the next data item
    • if it’s a string or a number, pull out that value as needed
  • the last token in the buffer will always be T_END
  • reset the decoder to prepare for the next incoming data

Note that dict and list nesting is decoded, but there is no processing of these lists – you merely get told where a dict or list starts, and where it ends, and this can happen in a nested fashion.

Is it primitive? Yes, quite… but it works!

But the RAM overhead is just the buffer and a few extra bytes (8 right now). And the code is also no more than 2..3 KB. So this should still fit fairly comfortably in an 8-bit ATmega (or perhaps even ATtiny).

Note that the maximum buffer size (hence packet size) is about 250 bytes. But that’s just this implementation – Bencoded data has no limitations on string length, or even numeric accuracy. That’s right: none, you could use Bencode for terabytes if you wanted to.

PS – It’s worth pointing out that this decoder is driven as a push process, but that actual extraction of the different items is a pull mechanism: once a packet has been received, you can call nextToken() as needed without blocking. Hence the for loop.

Collecting Bencode data

In AVR, Software on Oct 1, 2012 at 00:01

After having constructed some Bencode-formatted data, let’s try and read this stuff back.

This is considerably more involved. We’re not just collecting incoming bytes, we’re trying to make sense of them: figuring out where the different strings, ints, lists, and dicts begin and end, and – not obvious either perhaps – we’re trying to manage the memory involved in a maximally frugal way. RAM is a scarce resource on a little embedded µC, so pointers and malloc’s have to be avoided where possible.

Let’s start with the essential parsing task of figuring out when an incoming byte stream contains a complete Bencoded data structure. IOW, I want to keep on reading bytes until I have a complete “packet”.

The following serialReceive sketch does just that:

Screen Shot 2012 09 30 at 11 46 31

All of the heavy-lifting is done in the EmBencode library. What we’re doing here is giving incoming bytes to the decoder, until it tells us that we have a complete packet. Here’s the output, using the test data created yesterday:

Screen Shot 2012 09 30 at 11 46 07

Looks obvious, but this thing has fully “understood” the incoming bytes to the point that it knows when the end of each chunk has been reached. Note that in the case of a nested list, all the nesting is included as part of one chunk.

There’s more to this than meets the eye.

First of all, this is a “push” scanner (aka “lexer”). Think about how you’d decode such a data stream. By far the simplest would be something like this (in pseudo code):

  • look at the next character
  • if it’s a digit:
    • get the length, the “:”, and then grab the actual string data
  • if it’s an “i”:
    • convert the following chars to a signed long and drop the final “e”
  • if it’s a “d” or an “l”:
    • recursively grab entries, until we reach an “e” at the end of this dict / list

But that assumes you’re in control – you’re asking for more data, IOW you’re waiting until that data comes in.

What we want though, is to treat this processing as a background task. We don’t know when data comes in and maybe we’d like to do other stuff or go into low-power mode in the meantime. So instead of the scanner “pulling” in data, we need to turn its processing loop inside out, giving it data when there is some, and having it tell us when it’s done.
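Here is what that inside-out transformation can look like, sketched in Python rather than the C++ of the actual library. The state names and the class itself are my own invention, but the idea is the same: save the state between calls, and hand back control after every single character:

```python
class BencodeScanner:
    """Push-style scanner: feed it one character at a time; process()
    returns True when a complete top-level Bencoded item has been seen."""

    def __init__(self):
        self.state = 'IDLE'   # IDLE, INT, LEN or STR
        self.count = 0        # bytes still expected in the current string
        self.depth = 0        # number of currently open lists/dicts

    def _done(self):
        return self.depth == 0

    def process(self, ch):
        if self.state == 'STR':            # inside string data: just count down
            self.count -= 1
            if self.count == 0:
                self.state = 'IDLE'
                return self._done()
            return False
        if self.state == 'INT':            # inside i...e
            if ch == 'e':
                self.state = 'IDLE'
                return self._done()
            return False
        if self.state == 'LEN':            # collecting a string length prefix
            if ch == ':':
                if self.count == 0:        # empty string, i.e. "0:"
                    self.state = 'IDLE'
                    return self._done()
                self.state = 'STR'
            else:
                self.count = self.count * 10 + int(ch)
            return False
        # state == IDLE: this character starts a new token
        if ch == 'i':
            self.state = 'INT'
        elif ch in 'ld':
            self.depth += 1
        elif ch == 'e':                    # closes the innermost list/dict
            self.depth -= 1
            return self._done()
        else:                              # a digit: start of a string length
            self.state = 'LEN'
            self.count = int(ch)
        return False
```

Note how nothing here ever waits: each call consumes exactly one character and returns immediately, which is what makes this usable as a background task.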

The code for this is in EmBencode.cpp:

Screen Shot 2012 09 30 at 12 01 52

It’s written in (dense) C++ and implemented as a finite state machine (FSM). This means that we switch between a bunch of “states” as scanning progresses. That state is saved between calls, so that we’ll know what to do with the next character when it comes in.

There’s a fair amount of logic in the above code, but it’s a useful technique to have in your programming toolset, so I’d recommend going through this if FSM’s are new to you. It’s mostly C really, if you keep in mind that all the variables not declared locally in this code are part of a C++ object and will be saved from call to call. The “EMB_*” names are just arbitrary (and unique) constants. See if you can follow how this code flows as each character comes in.

The above code needs 7 bytes of RAM, plus the buffer used to store the incoming bytes.

There are tools such as re2c which can generate similar code for you, given a “token grammar”, but in simple cases such as this one, being able to wrap your mind around such logic is useful. Especially for embedded software on limited 8-bit microcontrollers, where we often don’t have the luxury of an RTOS with multiple “tasks” operating in parallel.

Halfway there: we’ve identified the pieces, but we’re not yet doing anything with ’em!

PS – The new JeeLabs Café website is now alive and kicking.

Sending Bencode data

In AVR, Software on Sep 30, 2012 at 00:01

As mentioned a while back, I’m adopting ZeroMQ and Bencode on Win/Mac/Linux for future software development. The idea is to focus on moving structured data around, as foundation for what’s going on at JeeLabs.

So let’s start on the JeeNode side, with the simplest aspect of it all: generating Bencoded data. I’ve set up a new library on GitHub and am mirroring it on Redmine. It’s called “EmBencode” (embedded, ok?). You can “git clone” a copy directly into your Arduino’s “libraries” folder if you want to try it out, or grab a ZIP archive.

This serialSend sketch sends some data from a JeeNode/Arduino/etc. via its serial port:

Screen Shot 2012 09 29 at 20 20 32

Note that we have to define the connection between this library and the Arduino’s “Serial” class by defining the “PushChar” function declared in the EmBencode library.

One thing to point out, is that this code uses the C++ “function overloading” mechanism: depending on the type of data given as argument to push(), the appropriate member function for that type of value gets called. The C++ compiler automagically does the right thing when pushing strings and numbers.

Apart from that, this is simply an example which sends out a bit of data – i.e. some raw values as well as some structured data, each one right after the other.

Here’s what you’ll see on the IDE’s serial console monitor:


(the output has been broken up to make it fit on this page)

It looks like gobbledygook, but when read slowly from left to right, you can see each of the calls and what they generate. I’ve indented the code to match the structures being sent.

You can check out the EmBencode.h header file to see how the encoder is implemented. It’s all fairly straightforward. More interesting, perhaps, is that this code requires no RAM (other than the run-time stack). There is no state we need to track for encoding arbitrarily complex data structures.

(Try figuring out how to decode this stuff – it’s quite tricky to do in an elegant way!)

Tomorrow, I’ll process this odd-looking Bencoded data on the other side of the serial line.

ARM to ARM upload

In Hardware, Software, ARM on Sep 27, 2012 at 00:01

After yesterday’s basic connection of an LPCXpresso 1769 board to the Raspberry Pi, it’s time to get all the software bits in place.

With NXP ARM chips such as the LPC1769, you need a tool like lpc21isp to upload over a serial connection (just like avrdude for AVR chips). It handles all the protocol details to load .bin (or .hex) files into flash memory.

There’s a small glitch in that the build for this utility gets confused when compiled on an ARM chip (including the Raspberry Pi), because it then thinks it should be built in some sort of special embedded mode. Luckily, Peter Brier figured this out recently, and published a fixed version on GitHub (long live open source collaboration!).

So now it’s just a matter of a few commands on the RPi to create a suitable uploader:

    git clone
    cd lpc21isp
    make
    cp -a lpc21isp ~/bin/

Next step is to get that board into serial boot mode. As it turns out, we’re going to have to do similar tricks as with the JeeNode a few days ago. And sure enough, I’m running into the same timing problems as reported here.

But in this case, the boot load process is willing to wait a bit longer, so now it can all easily be solved with a little shell script I’ve called “upload2lpc”:

# export the two GPIO pins to the shell
echo "18" >/sys/class/gpio/export 2>/dev/null
echo "23" >/sys/class/gpio/export 2>/dev/null

# use them as output pins
echo "out" >/sys/class/gpio/gpio18/direction
echo "out" >/sys/class/gpio/gpio23/direction

# pull ISP low and prepare for reset
echo "0" >/sys/class/gpio/gpio23/value
echo "1" >/sys/class/gpio/gpio18/value

(
  # it is essential to delay the reset pulse for lpc21isp
  sleep 0.3
  # pulse reset low
  echo "0" >/sys/class/gpio/gpio18/value
  sleep 0.1
  echo "1" >/sys/class/gpio/gpio18/value
  # set ISP line high again
  echo "1" >/sys/class/gpio/gpio23/value
) &

echo "Uploading... "
lpc21isp -debug0 -bin $1 /dev/ttyAMA0 115200 12000 \
  && echo "Done." || exit 1

The result? We can now reset the LPCXpresso from the Raspberry Pi and upload some blink code to it:

    $ sudo ./upload2lpc blink.bin

Yippie: it resets, it uploads, it blinks – and it does so consistently! Onwards! :)

Update – For best results, also add a “sleep 0.1” between those two last echo 1’s.

Energia for MSP430

In Software on Sep 25, 2012 at 00:01

There’s a project called Energia, which describes itself as:

Energia is a fork of Arduino for the Texas Instruments MSP430 Micro Controller.

It’s not the first fork, and won’t be the last. And if you’re used to the Arduino IDE, it’ll be delightfully familiar:

Screen Shot 2012 09 18 at 22 57 48

One really interesting aspect of this project is that the price of experimenting with embedded controllers drops to about $5 for a TI “LaunchPad” – so any throwaway project and kid-with-little-money can now use this:

DSC 4150

As I found out, these through-hole experimenter’s boards fit quite nicely on top:

DSC 4151

Although if you look closely, it’s actually one row less than the processor board:

DSC 4152

But hey, you end up with a darn cheap setup, and numerous little experiments to try out.

Lots of options, all usable with open source software toolchains: ARM, AVR, MSP!

Linux real-time hiccups

In Hardware, Software on Sep 24, 2012 at 00:01

To give you an idea why a simple little Linux board (the Carambola on OpenWrt in this case) is not a convenient platform for real-time use, let me show you some snapshots.

I’ve been trying to toggle two I/O pins in a burst to get some data out. The current experiment uses a little C program which does ioctl(fd, GPIO_SET, ...) and ioctl(fd, GPIO_CLEAR, ...) calls to toggle the I/O pins. After the usual mixups, stupid mistakes, messed up wires, etc, I get to see this:


A clean burst on two I/O pins, right? Not so fast… here’s what happens some of the time:


For 10 ms, during that activity, Linux decides that it wants to do something else and suspends the task! (and the system is idling 85% of the time)

If you compare the hex string at the bottom of these two screenshots, you can see that the toggling is working out just fine: both are sending out 96 bytes of data, and the first few bytes are identical.

The situation can be improved by running the time-critical processes with a higher priority (nice --20 cmd...), but there can still be occasional glitches.

Without kernel support, Linux cannot do real-time control of a couple of I/O pins!

Resetting a JeeNode from a RPi

In Hardware, Software on Sep 21, 2012 at 00:01

Now that the JeeNode talks to the Raspberry Pi, it’d be interesting to be able to reset the JeeNode as well, because then we could upload new sketches to it.

It’s not hard. As it so happens, the next pin on the RPi connector we’re using is pin 12, called “GPIO 18”. Let’s use that as a reset pin – and because this setup is going to become a bit more permanent here, I’ve made a little converter board:

DSC 4149

This way, a standard JeeLabs Extension Cable can be used. All we need is a board with two 6-pin female headers, connected almost 1-to-1, except for GND and +5V, which need to be crossed over (the other wire runs on the back of the PCB to avoid shorts).

This takes care of the hardware side of resets. Now the software. This is an example of just how much things are exposed in Linux (once you know where to find all the info).

There’s a direct interface into the kernel drivers called /sys, and this is also where all the GPIO pins can be exposed for direct access via shell commands or programs you write. Let’s have a look at what’s available by default:

$ sudo ls /sys/class/gpio/
export  gpiochip0  unexport

The “export” entry lets us expose individual GPIO pins, so the first thing is to make pin 18 available in the /sys area:

echo 18 | sudo tee /sys/class/gpio/export

That will create a virtual directory called /sys/class/gpio/gpio18/, which has these entries:

$ sudo ls /sys/class/gpio/gpio18/
active_low  direction  edge  power  subsystem  uevent  value

All we’ll need is the direction and value of that pin. Let’s first make it an output pin:

echo out | sudo tee /sys/class/gpio/gpio18/direction

Now, we’re ready to reset the JeeNode. I found out that the pin needs a negative transition (from 1 to 0) to do this, so the following three commands will do the trick:

echo 1 | sudo tee /sys/class/gpio/gpio18/value
echo 0 | sudo tee /sys/class/gpio/gpio18/value
echo 1 | sudo tee /sys/class/gpio/gpio18/value

And sure enough, the JeeNode resets, and the terminal I have running in another window shows the RF12demo startup message again:

Screen Shot 2012 09 18 at 14 32 09

We’re in control – shell over sketch!

Laser cutter electronics

In Hardware, Software on Sep 19, 2012 at 00:01

The last part of this mini series about a low-cost semi-DIY laser cutter describes the electronics and firmware:

DSC 3506

In the middle is an MBED microcontroller based on an ARM Cortex M3 chip (NXP’s LPC1768). It runs at 96 MHz, has 512 KB flash, 32 KB RAM, USB, and an Ethernet controller on board (but not the magjack).

The whole thing sits on the custom LaOS board, which includes an SD card socket, an I2C bus, some opto-isolators, power-supply interface (+5V and +24V), and two Pololu stepper drivers (or nearly compatible StepSticks).

There’s room for 1 or 2 more stepper drivers, a CAN bus interface, and more connectors for extra opto-isolated inputs and outputs. The on-board steppers can be omitted with the step/direction pins brought out if you need to drive more powerful motors.

The MBED was briefly mentioned in a previous post, and can also be replaced by a suitably modded LPCXpresso 1769 board, at less than half the price.

The software is open source and has just been moved from the MBED site to GitHub. Note that at this point in time, the software is still based on the “mbed” runtime support library, which is unfortunately not open source. Work is in progress to move to a fully open source “stack”, but that’s not the case today – the Ethernet library in particular hasn’t been successfully detached from its MBED context yet.

Ethernet connectivity is based on the lwIP code, which in turn is the big brother of uIP. Big in the sense that this supports a far more capable set of TCP/IP features than would be possible with an 8-bit microcontroller and its limited memory. Having 512 KB flash and 32 KB RAM in this context, means that you don’t have to fight the constraints of the platform and can simply focus on getting all the functionality you want in there.

Right now, the LaOS firmware drives the steppers, using a Gcode interpreter adapted from grbl, and includes support for Ethernet (TFTP, I believe) and the SD card. It also requires the I2C control panel. As a result, you can send a “job” to this thing via VisiCut, and it’ll save it on the SD card. The I2C-based control panel then lets you pick a job and start it. Quite a good workflow already, although there are still a fair number of rough edges.

(Note that if you listen very carefully, you can hear all the pioneers on this project scream for more volunteers and more help, on all sorts of levels and domains of expertise :)

What excites me about all this, apart from those über-cool laser cutting capabilities of course, is the potential to take this further since all the essential bits are now open source – and it’s not even really tied to a specific brand of laser.

So there you have it. It cuts, it burns, it smells, it’s yearning to be taken to the next level :)

One million packets

In AVR, Hardware, Software on Sep 15, 2012 at 00:01

Minutes after this weblog post goes live, one of the JeeNodes here will reach a new milestone.

This thing has been running for over two years, sending out nearly one million test packets:

Here are the entries from the data logged to file on my central server, minutes ago:

1:40   RF12-868.5.3 radioBlip age       740
1:40   RF12-868.5.3 radioBlip ping      999970

The counter is about to reach 1,000,000 – just after midnight today, in fact.

These log entries come from a JeeNode with a radioBlip sketch which just sends out a counter, roughly every 64 seconds, and goes into maximum low-power mode in between each transmission. That’s the whole trick to achieving very long battery lifetimes: do what ya’ gotta do as quickly as possible, then go back to ultra-low power as deeply as possible.

The battery is a 1300 mAh LiPo battery, made for some Fujitsu camera. I picked it because of the nice match with the JeeNode footprint.

But the big news is that this battery has not been recharged since August 21st, 2010!

Which goes to show that:

  • lithium batteries can hold a lot of charge, for a long time
  • JeeNodes can survive a long time, when programmed in the right way
  • sending out a quick packet does not really take much energy – on average!
  • all of this can easily be replicated, the design and the code are fully open source

And it also clearly shows that this sort of lifetime testing is really not very practical – you have to wait over two years before being sure that the design is as ultra-low power as it was intended to be!

If we (somewhat arbitrarily) assume that the battery is nearly empty now, then running for 740 days (i.e. 17,760 hours) means that the average current draw is about 73 µA, including the battery’s self-discharge. Which – come to think of it – isn’t even that low. I suspect that with today’s knowledge, I could probably set up a node which runs at least 5 years on this kind of battery. Oh well, there’s not much point trying to actually prove it in real time…
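That back-of-the-envelope calculation, spelled out (the 5-year target figure rests on the same assumption that the full 1300 mAh gets consumed):

```python
capacity_mah = 1300.0              # LiPo battery capacity
hours = 740 * 24                   # 740 days, i.e. 17,760 hours
avg_ua = capacity_mah / hours * 1000.0
print('average draw: %.0f uA' % avg_ua)   # -> average draw: 73 uA

# for a 5-year run on the same battery, the budget shrinks to about 30 uA
target_ua = capacity_mah / (5 * 365 * 24) * 1000.0
```

So a 5-year node would need to draw well under half of what this one does, on average and including self-discharge.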

One of the omissions in the original radioBlip code was that only a counter is being sent out, but no indication at all of the remaining battery charge. So right now I have no idea how much longer this setup will last.

As you may recall, I implemented a more elaborate radioBlip2 sketch a while ago. It reports two additional values: the voltage just before and after sending out a packet over wireless. This gives an indication of the remaining charge and also gives some idea how much effect all those brief transmission power blips have on the battery voltage. This matters, because in the end a node is very likely to die during a packet transmission, while the radio module drains the battery to such a point that the circuit ceases to work properly.

Next time, as a more practical way of testing, I’ll probably increase the packet rate to once every second – reducing the test period by a factor of 60 over real-world use.

Waiting a couple of years for the outcome is a bit silly, after all…

The ARM ecosystem

In Hardware, Software on Sep 13, 2012 at 00:01

Yesterday’s post presented an example of a simple yet quite powerful platform for “The Internet Of Things” (let’s just call it simple and practical interfacing, ok?). Lots of uses for that in and around the house, especially in the low-cost end of ATmega’s, basic Ethernet, and basic wireless communication.

What I wanted to point out with yesterday’s example, is that there is quite a bit of missed potential when we stay in the 8-bit AVR / Arduino world. There are ARM chips which are at least as powerful, at least as energy-efficient, and at least as low-cost as the ATmega328. Which is not surprising when you consider that ARM is a design, licensed to numerous vendors, who all differentiate their products in numerous interesting ways.

In theory, the beauty of this is that they all speak the same machine language, and that code is therefore extremely portable between different chips and vendors (apart from the inevitable hardware/driver differences). You only need one compiler to generate code for any of these ARM processor families:

arm2 arm250 arm3 arm6 arm60 arm600 arm610 arm620 arm7 arm7m arm7d arm7dm arm7di arm7dmi arm70 arm700 arm700i arm710 arm710c arm7100 arm720 arm7500 arm7500fe arm7tdmi arm7tdmi-s arm710t arm720t arm740t strongarm strongarm110 strongarm1100 strongarm1110 arm8 arm810 arm9 arm9e arm920 arm920t arm922t arm946e-s arm966e-s arm968e-s arm926ej-s arm940t arm9tdmi arm10tdmi arm1020t arm1026ej-s arm10e arm1020e arm1022e arm1136j-s arm1136jf-s mpcore mpcorenovfp arm1156t2-s arm1156t2f-s arm1176jz-s arm1176jzf-s cortex-a5 cortex-a7 cortex-a8 cortex-a9 cortex-a15 cortex-r4 cortex-r4f cortex-r5 cortex-m4 cortex-m3 cortex-m1 cortex-m0 cortex-m0plus xscale iwmmxt iwmmxt2 ep9312 fa526 fa626 fa606te fa626te fmp626 fa726te

In practice, things are a bit trickier, if we insist on a compiler “toolchain” which is open source, with stable releases for Windows, Mac, and Linux. Note that a toolchain is a lot more than a C/C++ compiler + linker. It’s also a calling convention, a run-time library choice, a mechanism to upload code, and a mechanism to debug that code (even if that means merely seeing printf output).

In the MBED world, the toolchain is in the cloud. It’s not open source, and neither is the run-time library. Practical, yes – introspectable, not all the way. Got a problem with the compiler (or more likely the runtime)? You’re hosed. But even if it works perfectly – ya can’t peek under the hood and learn everything, which in my view is at least as important in a tinkering / hacking / repurposing world.

Outside the MBED world, I have found my brief exploration a grim one: commercial compiler toolchains with “limited free” options, and proprietary run-time libraries everywhere. Not my cup of tea – and besides, in my view gcc/g++ is really the only game in town nowadays. It’s mature, it’s well supported, it’s progressing, and it runs everywhere. Want a cross compiler which runs on platform A to generate code for platform B? Can do, for just about any A and B – though building such a beast is not necessarily easy!

As an experiment, I wanted to try out a much lower-cost yet pin-compatible alternative for the MBED, called the LPCXpresso (who comes up with names like that?):


Same cost as an Arduino, but… 512 KB flash, 64 KB RAM, USB, Ethernet, and tons of digital + analog I/O features.

Except: half of that board is dedicated to acting as an upload/debug interface, and it’s all proprietary. You have to use their IDE, with “lock-in” written on every page. Amazing, considering that the ARM chip can do serial uploading via built-in ROM! (i.e. it doesn’t even have to be pre-flashed with a boot loader)

As an experiment, I decided to break free from that straitjacket:

DSC 3832

Yes, that’s right: you can basically throw away half the board, and then add a few wires and buttons to create a standard FTDI interface, ready to use with a BUB or other 3.3V serial interface.

(there’s also a small regulator mod, because the on-board 3.3V regulator seems to have died on me)

The result is a board which is pin-compatible with the MBED, and will run more or less the same code (it has only 1 user-controllable LED instead of 4, but that’s about it, I think). Oh, and serial upload, not USB anymore.

Does this make sense? Not really, if that means having to manually patch such boards each time you need one. But again, keep in mind that the boards cost the same as an Arduino Uno, yet offer far more features and performance than even the Arduino Mega.

The other thing about this is that you’re completely on your own w.r.t. compiling and debugging code. Well, not quite: there’s a gcc4mbed by Adam Green, with pre-built x86 binaries for Windows, Mac, and Linux. But out of the box, I haven’t found anything like the Arduino IDE, with GUI buttons to push, lots of code examples, a reference website, and a community to connect with.

For me, personally, that’s not a show stopper (“real programmers prefer the command line”, as they say). But getting a LED to blink from scratch was quite a steep entry point into this ARM world. Did I miss something?

Two more notes:

  • Yes, I know there’s the Maple IDE by LeafLabs, but I couldn’t get it to upload on my MacBook Air notebook, nor get a response to questions about this on the forum.

  • No, I’m not “abandoning” the Atmel ATmega/ATtiny world. For many projects, simple ways to get wireless and battery-operated nodes going, I still vastly prefer the JeeNode over any other option out there (in fact, I’m currently re-working the JeeNode Micro, to add a bit more functionality to it).

But it’s good to stray outside the familiar path once in a while, so I’ll continue to sniff around in that big ARM Cortex world out there. Even if the software exploration side is acting surprisingly hostile to me right now.

New HYT131 sensor

In Hardware, Software on Jun 30, 2012 at 00:01

The SHT11 sensor used on the room board is a bit pricey, and the bulk source I used in the past no longer offers them. The good news is that there’s a HYT131 sensor with the same accuracy and range. And the same pinout:

DSC 3353

This sensor will be included in all Room Board kits from now on.

It requires a different bit of software to read out, but the good news is that this is now standard I2C and that the code no longer needs to do all sorts of floating point calculations. Here’s a test sketch I wrote to access it:

Screen Shot 2012 06 30 at 00 23 28

And here’s some sample output (measurements increase when I touch the sensor):

Screen Shot 2012 06 28 at 15 39 55

As you can see, this reports both temperature and humidity with 0.1 °C / 0.1 % resolution. The output gets converted to a float for display purposes, but apart from that no floating point code is needed to use this sensor. This means that the HYT131 sensor is also much better suited for use with the memory-limited JeeNode Micro.
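Because both readings are linear 14-bit values, the scaling can be done entirely in integer math. Here’s a host-side sketch of that conversion, assuming the HYT131’s documented ranges (0..16383 maps to 0..100 %RH, resp. −40..+125 °C) – it’s not the test sketch shown above, just the arithmetic:

```c
#include <stdint.h>

/* Integer-only conversion of the HYT131's raw 14-bit readings into tenths
   of a unit. Sketch only -- assumes the sensor's standard scaling:
   0..16383 -> 0..100 %RH and -40..+125 degrees C. */

int16_t hytHumidity10(uint16_t raw) {   /* returns %RH x 10 */
    return (int16_t) ((uint32_t) raw * 1000 / 16383);
}

int16_t hytTemp10(uint16_t raw) {       /* returns degrees C x 10 */
    return (int16_t) ((uint32_t) raw * 1650 / 16383) - 400;
}
```

No floats anywhere, so this also fits an ATtiny without dragging in the floating point library.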

It’ll take a little extra work to integrate this into the roomNode sketch, but as far as I’m concerned that’ll have to wait until after the summer break. Which – as far as this weblog goes – will start tomorrow.

One more post tomorrow, and then we “Northern Hemispherians” all get to play outside for a change :)

Update – I’ve uploaded the datasheet here.

Edge interrupts

In AVR, Hardware, Software on Jun 27, 2012 at 00:01

To continue yesterday’s discussion about level interrupts, the other variant is “edge-triggered” interrupts:

JC s Grid page 22

In this case, each change of the input signal will trigger an interrupt.

One problem with this is that it’s possible to miss input signal changes. Imagine applying a 1 MHz signal, for example: there is no way for an ATmega to keep up with such a rate of interrupts – each one will take several µs.

The underlying problem is a different one: with level interrupts, there is sort of a handshake taking place: the external device generates an interrupt and “latches” that state. The ISR then “somehow” informs the device that it has seen the interrupt, at which point the device releases the interrupt signal. The effect of this is that things always happen in lock step, even if it takes a lot longer before the ISR gets called (as with interrupts disabled), or if the ISR itself takes some time to process the information.

With edge-triggered interrupts, there’s no handshake. All we know is that at least one edge triggered an interrupt.

With “pin-change interrupts” in microcontrollers such as the ATmega and the ATtiny, things get even more complicated, because several pins can all generate the same interrupt (any of the PD0..PD7 pins, for example). And we also don’t get told whether the pin changed from 0 -> 1 or from 1 -> 0.
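The usual workaround for not knowing the direction is to keep a copy of the previous pin state and diff it inside the ISR. A host-side sketch (the PIND register is mocked here; on the real chip it comes from <avr/io.h>):

```c
#include <stdint.h>

/* Pin-change interrupts don't tell you which pin changed, or in which
   direction. Workaround: remember the pin state seen at the previous
   interrupt and compare. PIND is mocked so this can run on a host. */
uint8_t PIND;       /* mock of the port D input register */
uint8_t lastPins;   /* pin state at the previous interrupt */

/* returns a bitmask of the pins that just went 1 -> 0 (falling edges) */
uint8_t fallingEdges(void) {
    uint8_t now = PIND;
    uint8_t fell = lastPins & ~now;   /* was 1, is now 0 */
    lastPins = now;
    return fell;
}
```

Note that this only sees the net result: if a pin toggles down and back up before the ISR runs, the edge is lost – exactly the ambiguity discussed above.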

The ATmega328 datasheet has this to say about interrupt handling (page 14):

Screen Shot 2012 06 26 at 23 08 46

(note that the “Global Interrupt Enable” is what’s being controlled by the cli() and sei() instructions)

Here are some details about pin-change interrupts (from page 74 of the datasheet):

Screen Shot 2012 06 26 at 23 13 01

The way I read all the above, is that a pin change interrupt gets cleared the moment its associated ISR is called.

What is not clear, however, is what happens when another pin change occurs before the ISR returns. Does this get latched and generate a second interrupt later on, or is the whole thing lost? (that would seem to be a design flaw)

For the RF12 driver, to be able to use pin-change interrupts instead of the standard “INT0” interrupt (used as level interrupt), the following is needed:

  • every 1 -> 0 pin change needs to generate an interrupt so the RFM12B can be serviced
  • every 0 -> 1 pin change can be ignored

The current code in the RF12 library is as follows:

Screen Shot 2012 06 26 at 23 19 59

I made that change from an “if” to a “while” recently, but I’m not convinced it is correct (or that it even matters here). The reasoning is that servicing the RFM12B will clear the interrupt, and hence immediately cause the pin to go back high. This happens even before rf12_interrupt() returns, so the while loop will not run a second time.

The above code is definitely flawed in the general case when more I/O pins could generate the same pin change interrupt, but for now I’ve ruled that out (I think), by initializing the pin change interrupts as follows:

Screen Shot 2012 06 26 at 23 23 28

In prose:

  • make the RFM12B interrupt pin an input
  • enable the pull-up resistor
  • allow only that pin to trigger pin-change interrupts
  • as last step, enable that pin change interrupt
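In code, those four steps look roughly as follows – shown here with mocked registers so it can run on a host. On the real chip the definitions come from <avr/io.h>, and the pin assignment (PD2 / PCINT18, i.e. the INT0 pin normally used for the RFM12B) should be treated as an assumption:

```c
#include <stdint.h>

/* Host-side mock of the ATmega registers involved -- on the real chip
   these come from <avr/io.h>. Pin assignment (PD2 / PCINT18) is the
   usual RFM12B interrupt pin on a JeeNode, but verify for your board. */
uint8_t DDRD, PORTD, PCMSK2, PCICR;
#define BV(b) ((uint8_t) (1u << (b)))
enum { PD2 = 2, PCINT18 = 2, PCIE2 = 2 };

void setupPinChange(void) {
    DDRD   &= (uint8_t) ~BV(PD2);  /* 1. make the RFM12B interrupt pin an input */
    PORTD  |= BV(PD2);             /* 2. enable the pull-up resistor */
    PCMSK2  = BV(PCINT18);         /* 3. only this pin may trigger a pin change */
    PCICR  |= BV(PCIE2);           /* 4. last step: enable the pin-change interrupt */
}
```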

Anyway – I haven’t yet figured out why the RF12 driver doesn’t work reliably with pin-change interrupts. It’s a bit odd, because things do seem to work most of the time, at least in the setup I tried here. But that’s the whole thing with interrupts: they may well work exactly as intended 99.999% of the time. Until the interrupt happens in some particular spot where the code cannot safely be interrupted and things get messed up … very tricky stuff!

Level interrupts

In Hardware, Software on Jun 26, 2012 at 00:01

The ATmega’s pin-change interrupt has been nagging at me for some time. It’s a tricky beast, and I’d like to understand it well to try and figure out an issue I’m having with it in the RF12 library.

Interrupts are the interface between real world events and software. The idea is simple: instead of constantly having to poll whether an input signal changes, or some other real-world event occurs (such as a hardware count-down timer reaching zero), we want the processor to “somehow” detect that event and run some code for us.

Such code is called an Interrupt Service Routine (ISR).

The mechanism is very useful, because this is an effective way to reduce power consumption: go to sleep, and let an interrupt wake up the processor again. And because we don’t have to keep checking for the event all the time.

It’s also extremely hard to do these things right, because – again – the ISR can be triggered any time. Sometimes, we really don’t want interrupts to get in our way – think of timing loops, based on the execution of a carefully chosen number of instructions. Or when we’re messing with data which is also used by the ISR – for example: if the ISR adds an element to a software queue, and we want to remove that element later on.

The solution is to “disable” interrupts, briefly. This is what “cli()” and “sei()” do: clear the “interrupt enable” and set it again – note the double negation: cli() prevents interrupts from being serviced, i.e. an ISR from being run.

But this is where it starts to get hairy. Usually we just want to prevent an interrupt from happening right now – but we still want it to happen eventually. And this is where level-interrupts and edge-interrupts differ.

A level-interrupt triggers as long as an I/O signal has a certain level (0 or 1) and works as follows:

JC s Grid page 22

Here’s what happens at each of those 4 points in time:

  1. an external event triggers the interrupt by changing a signal (it’s usually pulled low, by convention)
  2. the processor detects this and starts the ISR, as soon as its last instruction finishes
  3. the ISR must clear the source of the interrupt in some way, which causes the signal to go high again
  4. finally, the ISR returns, after which the processor resumes what it had been doing before

The delay from (1) to (2) is called the interrupt latency. This value can be extremely important, because the worst case determines how quickly our system responds to external interrupts. In the case of the RFM12B wireless module, for example, and the way it is normally set up by the RF12 code, we need to make sure that the latency remains under 160 µs. The ISR must be called within 160 µs – always! – else we lose data being sent or received.

The beauty of level interrupts is that they can deal with occasional cli() .. sei() interrupt disabling intervals. If interrupts are disabled when (1) happens, then (2) will not be started. Instead, (2) will be started the moment we call sei() to enable interrupts again. It’s quite normal to see interrupts being serviced right after they are enabled!

The thing about these external events is that they can happen at the most awkward time. In fact, take it from me that such events will happen at the worst possible time – occasionally. It’s essential to think all the cases through.

For example: what happens if an interrupt were to occur while an ISR is currently running?

There are many tricky details. For one, an ISR tends to require quite a bit of stack space, because that’s where it saves the state of the running system when it starts, and then restores that state when it returns. If we supported nested interrupts, then stack space would at least double and could easily grow beyond the limited amount available in a microcontroller with limited RAM, such as an ATmega or ATtiny.

This is one reason why the processor logic which starts an ISR also disables further interrupts. And re-enables interrupts after returning. So normally, during an ISR no other ISRs can run: no nested interrupt handling.

Tomorrow I’ll describe how multiple triggers can mess things up for the other type of hardware interrupt, called an edge interrupt – this is the type used by the ATmega’s (and ATtiny’s) “pin-change interrupt” mechanism.

Structured data

In Software, Musings on Jun 22, 2012 at 00:01

As hinted at yesterday, I intend to use the ZeroMQ library as foundation for building stuff on. ZeroMQ bills itself as “The Intelligent Transport Layer”, and frankly, I’m inclined to agree. Platform and vendor agnostic. Small. Fast.

So now we’ve got ourselves a pipe. What do we push through it? Water? Gas? Electrons?

Heh – none of the above. I’m going to push data / messages through it, structured data that is.

The next can of worms: how does a sender encode structured data, and how does a receiver interpret those bytes? Have a look at this Comparison of data serialization formats for a comprehensive overview (thanks, Wikipedia!).

Yikes, too many options! This is almost the dreaded language debate all over again…

Ok, I’ve travelled the world, I’ve looked around, I’ve pondered on all the options, and I’ve weighed the ins and outs of ’em all. In the name of choosing a practical and durable solution, and to create an infrastructure I can build upon. In the end, I’ve picked a serialization format which most people may have never heard of: Bencode.

Not XML, not JSON, not ASN.1, not, well… not anything “common”, “standard”, or “popular” – sorry.

Let me explain, by describing the process I went through:

  • While the JeeBus project ran, two years ago, everything was based on Tcl, which has implicit and automatic serialization built-in. So evidently, this was selected as mechanism at the time (using Tequila).

  • But that more or less constrains all inter-operability to Tcl (similar to using pickling in Python, or even – to some extent – JSON in JavaScript). All other languages would be second-rate citizens. Not good enough.

  • XML and ASN.1 were rejected outright. Way too much complexity, serving no clear purpose in this context.

  • Also on the horizon: JSON, a simple serialization format which happens to be just about the native source code format for data structures in JavaScript. It is rapidly displacing XML in various scenarios.

  • But JSON is too complex for really low-end use, and requires relatively much effort and memory to parse. It’s based on reserved characters and an escape character mechanism. And it doesn’t support binary data.

  • Next in the line-up: Bernstein’s netstrings. Very elegant in its simplicity, and requiring no escape convention to get arbitrary binary data across. It supports pre-allocation of memory in the receiver, so datasets of truly arbitrary size can safely be transferred.

  • But netstrings are a bit too limited: only strings, no structure. Zed Shaw extended the concept and came up with tagged netstrings, with sufficient richness to represent a few basic datatypes, as well as lists (arrays) and dictionaries (associative arrays). Still very clean, and now also with exactly the necessary functionality.

  • (Tagged) netstrings are delightfully simple to construct and to parse. Even an ATmega could do it.

  • But netstrings suffer from memory buffering problems when used with nested data structures. Everything sent needs to be prefixed with a byte count. That means you have to either buffer or generate the resulting byte sequence twice when transmitting data. And when parsed on the receiver end, nested data structures require either a lot of temporary buffer space or a lot of cleverness in the reconstruction algorithm.

  • Which brings me to Bencode, as used in the – gasp! – Bittorrent protocol. It does not suffer from netstring’s nested size-prefix problems or nested decoding memory use. It has the interesting property that any structured data has exactly one representation in Bencode. And it’s trivially easy to generate and parse.

Bencode can easily be used with any programming language (there are lots of implementations of it, new ones are easy to add), and with any storage or communication mechanism. As for the Bittorent tie-in… who cares?
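To illustrate just how little is needed, here’s a small host-side C sketch that emits Bencode – a rough illustration, not a full implementation (no lists, not binary-safe, and the function names are mine):

```c
#include <stdio.h>
#include <string.h>

/* Minimal Bencode emitters: strings are "<len>:<bytes>", integers are
   "i<num>e", dictionaries are "d...e" with keys in sorted order. */
char *bencStr(char *p, const char *s) {
    return p + sprintf(p, "%u:%s", (unsigned) strlen(s), s);
}
char *bencInt(char *p, long v) {
    return p + sprintf(p, "i%lde", v);
}

/* Encode the dictionary { "id": 17, "val": "abc" } into buf. */
void bencExample(char *buf) {
    char *p = buf;
    *p++ = 'd';
    p = bencStr(p, "id");  p = bencInt(p, 17);
    p = bencStr(p, "val"); p = bencStr(p, "abc");
    *p++ = 'e';
    *p = '\0';
}
```

Note that nothing here needs to know the total size up front – unlike netstrings, there’s no outer byte-count prefix to compute before emitting nested data.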

So there you have it. I haven’t written a single line of code yet (first time ever, but it’s the truth!), and already some major choices have been set in stone. This is what I meant when I said that programming language choice needs to be put in perspective: the language is not the essence, the data is. Data is the center of our information universe – programming languages still come and go. I’ve had it with stifling programming language choices.

Does that mean everybody will have to deal with ZeroMQ and Bencode? Luckily: no. We – you, me, anyone – can create bridges and interfaces to the rest of the world in any way we like. I think HouseAgent is an interesting development (hi Maarten, hi Marco :) – and it now uses ZeroMQ, so that might be easy to tie into. Others will be using Homeseer, or XTension, or Domotiga, or MisterHouse, or even… JeeMon? But the point is, I’m not going to make a decision that way – the center of my universe will be structured data. With ZeroMQ and Bencode as glue.

And from there, anything is possible. Including all of the above. Or anything else. Freedom of choice!

Update – if the Bencode format were relaxed to allow whitespace between all elements, then it could actually be pretty-printed in an indented fashion and become very readable. Might be a useful option for debugging.

TK – Programming

In Software on Jun 21, 2012 at 00:01

Welcome to the Thursday Toolkit series, about tools for building Physical Computing projects.

Time for a different subject. All the tools discussed so far have been about electronics and computing hardware.

But what about the flip side of that computing coin – software?

As mentioned recently, software has no “fixed point”. There’s no single center of its universe. Everything is split across the dimension of programming language choice. We’re operating in a computing world divided by language barriers – just like in the real world.

Here’s the language divide, as seen on GitHub (graph extracted from this site):

Screen Shot 2012 06 20 at 19 51 36

Here’s another one, from the TIOBE Programming Community Index:

Screen Shot 2012 06 20 at 22 28 27

(note the complete lack of correlation between these two examples)

It’s easy to get carried away by this. Is “my” language up? Or is it down? How does language X compare to Y?


Programming language choice (as in real life, with natural languages) has huge implications, because to get to know a language really well, you have to spend 10,000 hours working with it. Maybe 9,863 if you try really hard.

As we learn something, we get better at it. As we get better at something, we become more productive with it.

So… everyone picks one (or perhaps a few) of the above languages and goes through the same process. We learn, we evolve, and we gain new competences. And then we find out that it’s a rabbit hole: languages do not inter-operate at a very low level. One of the best ways to inter-operate with other software these days is probably something called ZeroMQ: a carefully designed low-fat interface at the network-communication level.

The analogy with real-world spoken languages is intriguing: we all eat bread, no matter what our nationality is or which language we speak (bear with me, I’m simplifying a bit). We can walk into a shop in another country, and we’ll figure out a way to obtain some bread, because the goods and the monetary exchange structure are both bound to be very similar. Language will be a stumbling block, but not a show stopper. We won’t starve.

In the same way, you can think of information exchanges as bread. If we define appropriate data structures and clear mappings to bits and bytes, then we can get them from one system to the other via libraries such as ZeroMQ.

Which brings me to the point I’m trying to make here: programming language choice is no longer a key issue!

What matters, are the high-level data structures we come up with and the protocols (in a loosely defined manner) we use for the interactions. The bread is what it’s about (data). Money is needed to make things happen (e.g. ZeroMQ), and programming languages are going to differ and change over time anyway – so who cares.

We should stop trying to convince each other that everything needs to be written in one programming language. Humanity has had plenty of time to deal with similar natural language hurdles, and look where we stand today…

I feel reasonably qualified to make statements about languages. I speak four natural languages more or less fluently, and I’ve programmed in at least half a dozen programming languages for over two years each (some for over a decade, and with three I think I may have passed that 10,000 hour mark). In both contexts, I tend to favor the less widespread languages. It’s a personal choice and it works really well for me. I get stuff done.

Then again, this weblog is written in English, and I spend quite a bit of my time and energy writing in C. That more or less says it all, really: English is the lingua franca of the (Western) internet, and C is the universal language used to implement just about everything else on top of it. That’s what de facto standards are about!

So what will I pick to program in for Physical Computing, embedded micros, laptops, and the web? The jury is still out on that, but chances are that it will not be any of the first 12 languages in either of those two lists above.

But no worries. We’ll still be able to talk to each other and both have fun, and the software I write will be usable regardless of your mother’s tongue – or your father’s programming language :)

Let’s focus on our software design structures, and our data exchange formats. The rest is too ephemeral.

Goodbye JeeMon

In Software, Musings on Jun 6, 2012 at 00:01

As long-time readers will know, I’ve been working on and off on a project called JeeMon, which bills itself as:

JeeMon is a portable runtime for Physical Computing and Home Automation.

This also includes a couple of related projects, called JeeRev and JeeBus.

JeeMon packs a lot of functionality: first of all a programming language (Tcl) with built-in networking, event framework, internationalization, unlimited precision arithmetic, thread support, regular expressions, state triggers, introspection, coroutines, and more. But also a full GUI (Tk) and database (Metakit). It’s cross-platform, and it requires no installation, due to the fact that it’s based on a mechanism called Starkits.

Screen Shot 2012 05 23 at 19 26 10

I’ve built several versions of this thing over the years, also for small ARM Linux boards, and due to its size, this thing really can go where most other scripting languages simply don’t fit – well under 1 Mb if you leave out Tk.

One of (many) things which never escaped into the wild, a complete Mac application which runs out of the box:


JeeMon was designed to be the substrate of a fairly generic event-based / networked “switchboard”. Middleware that sits between, well… everything really. With the platform-independent JeeRev being the collection of code to make the platform-dependent JeeMon core fly.

Many man-years have gone into this project, which included a group of students working together to create a first iteration of what is now called JeeBus 2010.

And now, I’m pulling the plug – development of JeeMon, JeeRev, and JeeBus has ended.

There are two reasons, both related to the Tcl programming language on which these projects were based:

  • Tcl is not keeping up with what’s happening in the software world
  • the general perception of what Tcl is about simply doesn’t match reality

The first issue is shared with a language such as Lisp, e.g. SBCL: brilliant concepts, implemented incredibly well, but so far ahead of the curve at the time that somehow, somewhere along the line, its curators stopped looking out the window to see the sweeping changes taking place out there. Things started off really well, at the cutting edge of what software was about – and then the center of the universe moved. To mobile and embedded systems, for one.

The second issue is that to this day, many people with programming experience have essentially no clue what Tcl is about. Some say it has no datatypes, has no standard OO system, is inefficient, is hard to read, and is not being used anymore. All of it is refutable, but it’s clearly a lost battle when the debate is about lack of drawbacks instead of advantages and trade-offs. The mix of functional programming with side-effects, automatic copy-on-write data sharing, cycle-free reference counting, implicit dual internal data representations, integrated event handling and async I/O, threads without race conditions, the Lisp’ish code-is-data equivalence… it all works together to hide a huge amount of detail from the programmer, yet I doubt that many people have ever heard about any of this. See also Paul Graham’s essay, in particular about what he calls the “Blub paradox”.

I don’t want to elaborate much further on all this, because it would frustrate me even more than it already does after my agonizing decision to move away from JeeMon. And I’d probably just step on other people’s toes anyway.

Because of all this, JeeMon never did get much traction, let alone evolve much via contributions from others.

Note that this isn’t about popularity but about momentum and relevance. And JeeMon now has neither.

If I had the time, I’d again try to design a new programming environment from scratch and have yet another go at databases. I’d really love to spend another decade on that – these topics are fascinating, and so far from “done”. Rattling the cage, combining existing ideas and adding new ones into the mix is such an addictive game to play.

But I don’t. You can’t build a Physical Computing house if you keep redesigning the hammer (or the nails!).

So, adieu JeeMon – you’ve been a fantastic learning experience (which I get to keep). I’ll fondly remember you.

Gravity + Pressure + GLCD

In Software on Jun 2, 2012 at 00:01

For those interested in the above combination, I’ve added a little demo combining the Gravity Plug, Pressure Plug, and Graphics Board – oh and a JeeNode of course (it’s actually an experimental JeeNode USB v4 …):

DSC 3310

The display shows the current X, Y, and Z accelerometer values as raw readings, plus temperature and pressure:

DSC 3313

Both plugs use I2C, and could have been combined on one port. Note that the backlight is off in this example.

The sketch is called – somewhat presumptuously – glcd_imu.ino. And it’s now on GitHub, as part of GLCDlib:

Screen Shot 2012 05 29 at 13 33 02

The GraphicsBoard class extends the GLCD_ST7565 class with print(…) calls and could be useful in general.

Get the GitHub version to see the details – omitted here for brevity.

It’s about survival

In AVR, Hardware, Software on May 29, 2012 at 00:01

When running off solar power, ya’ gotta deal with lack of (sun-) light…

As shown in a recent post, a 0.47F supercap appears to have enough energy storage to get through the night, at least when assuming that the day before was sunny and that the nights are not too long.

There are still a couple of issues to solve. One which I won’t go into yet, is that the current approach won’t start up properly when there is only a marginal power budget to begin with. That’s a hard problem – some other time!

But another tactic to alleviate this problem is to try and sail through a low-power situation by reducing power consumption until (hopefully) more energy becomes available again, later on.

Here’s my first cut at implementing a “survival strategy”, using the radioBlip2 sketch:

Screen Shot 2012 05 15 at 14 00 12

It’s all in the comments, really: when power is low, try to avoid sending packets, since turning on the transmitter is by far the biggest power “hog”. And when power is really low, don’t even measure VCC – just slow down even more in maximally efficient sleep mode – I’ve set the times to 5 and 60 minutes. The 1-hour sleep being a last effort to get through a really rough night…

But I’ve also added some kamikaze logic: when running low, you don’t just want the sketch to go into sleep mode more and more and finally give up without having given any sign of life. So instead, when the sketch is about to decide whether it should send a packet again, it checks whether the voltage is really way too low after what was supposedly a long deep-sleep period. If so, and before power runs out completely, it will try to send out a packet anyway, in the knowledge that this might well be its last good deed. That way, the central node might have a chance to hear this final swan song…

The thresholds are just a first guess. Maybe there are better values, and maybe there is a better way to deal with the final just-about-to-die situation. But for now, I’ll just try this and see how it goes.
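Boiled down to pure functions, the strategy looks something like this – the millivolt thresholds here are placeholders of my own for illustration, not the sketch’s actual values:

```c
#include <stdint.h>

/* Survival strategy as pure decision logic. The threshold values are
   made up for this sketch -- tune them against real measurements. */
enum { VCC_LOW = 2800, VCC_DYING = 2600 };   /* millivolts, hypothetical */

uint16_t sleepMinutes(uint16_t vccMilliVolt) {
    if (vccMilliVolt < VCC_DYING) return 60;  /* really low: 1-hour deep sleep */
    if (vccMilliVolt < VCC_LOW)   return 5;   /* low: slow down, skip sends */
    return 1;                                 /* normal: report every minute */
}

int shouldSend(uint16_t vccMilliVolt) {
    /* transmitting is the biggest power hog, so only do it when healthy */
    return vccMilliVolt >= VCC_LOW;
}
```

The kamikaze logic would sit on top of this: if the voltage is still below VCC_DYING after a full 1-hour sleep, send one last packet anyway.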

One last point worth mentioning: all the nodes running this sketch can use the same group and node ID, because they are transmit-only. There is never a need to address packets sent towards them. So the BLIP_ID inside the payload is a way to still disambiguate incoming packets and understand which exact node each one came from.

Re-using the same node ID is useful in larger setups, since the total number of IDs in a group is limited to 30.

I’ll do some tests with the above logic. Let’s hope this will keep nodes alive in long and dark winter nights.

Watching it go down

In AVR, Hardware, Software on May 13, 2012 at 00:01

Now that there’s low-power vccRead() code to measure the current power supply voltage, we can finally get a bit more valuable info from the radioBlip sketch, which sends out one packet per minute.

So here’s a new radioBlip2 sketch which combines both functions. To support more test nodes, I’m adding a byte to the payload for a unique node ID, as well as a byte with the measured voltage level:

Screen Shot 2012 05 09 at 18 28 02

As a quick test I used a JeeNode without regulator, running off an electrolytic 1000 µF cap, charged to 5V via a USB BUB, and then disconnected (this is running the RFM12B module beyond its 3.8V max specs, BTW):

DSC 3127

Here’s a dump of the received packets:

    L 16:56:32.032 usb-A600dVPp OK 17 1 0 0 0 1 209
    L 16:57:35.859 usb-A600dVPp OK 17 2 0 0 0 1 181
    L 16:58:39.543 usb-A600dVPp OK 17 3 0 0 0 1 155
    L 16:59:43.029 usb-A600dVPp OK 17 4 0 0 0 1 134
    L 17:00:46.323 usb-A600dVPp OK 17 5 0 0 0 1 115
    L 17:01:49.431 usb-A600dVPp OK 17 6 0 0 0 1 98
    L 17:02:52.314 usb-A600dVPp OK 17 7 0 0 0 1 82
    L 17:03:55.016 usb-A600dVPp OK 17 8 0 0 0 1 66
    L 17:04:57.526 usb-A600dVPp OK 17 9 0 0 0 1 50

Or, more graphically, as voltage – showing 8 minutes before the sketch runs out of juice:

Screen Shot 2012 05 09 at 19 06 19

This consumes only marginally more power than without the VCC measurements: the original radioBlip sketch lasted 1 minute longer under similar conditions, i.e. one extra packet transmission.
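For reference, here’s how the receiving end can pull those packets apart. The layout – a little-endian 32-bit packet counter, a node ID byte, and the encoded voltage byte – is inferred from the dump above, and the struct and field names are mine:

```c
#include <stdint.h>

/* Payload layout inferred from the dump: "OK 17 9 0 0 0 1 50" is the
   header byte 17, counter bytes 9 0 0 0 (little-endian), node ID 1,
   and voltage byte 50. Names are illustrative, not the sketch's own. */
struct BlipPayload { uint32_t ping; uint8_t id; uint8_t vcc; };

struct BlipPayload decodeBlip(const uint8_t *buf) {
    struct BlipPayload p;
    p.ping = (uint32_t) buf[0]
           | ((uint32_t) buf[1] << 8)
           | ((uint32_t) buf[2] << 16)
           | ((uint32_t) buf[3] << 24);
    p.id  = buf[4];
    p.vcc = buf[5];
    return p;
}
```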

Improved VCC measurement

In AVR, Software on May 12, 2012 at 00:01

As shown in this post, it is possible to read out the approximate level of VCC by comparing the internal 1.1 V bandgap with the current VCC level.

But since this is about tracking battery voltage on an ultra-low power node, I wanted to tinker with it a bit further, to use as little energy as possible when making that actual supply voltage measurement. Here’s an improved bandgap sketch which adds a couple of low-power techniques:

Screen Shot 2012 05 09 at 15 42 39

First thing to note is that the ADC is now run in noise-reducing mode, i.e. a special sleep mode which turns off part of the chip to let the ADC work more accurately. With the nice side-effect that it also saves power.

The other change was to drop the 250 µs busy waiting, and use 4 ADC measurements to get a stable result.

The main delay was replaced by a call to loseSomeTime() of course – the JeeLib way of powering down.

Lastly, I changed the sketch to send out the measurement results over wireless, to get rid of the serial port activity which would skew the power consumption measurements.

Speaking of which, here is the power consumption during the call to vccRead() – my favorite graph :)


As usual, the red line is the integral of the current, i.e. the cumulative energy consumption (about 2300 nC).

And as you can see, it takes about 550 µs @ 3.5 mA current draw to perform this battery level measurement. The first ADC measurement takes a bit longer (25 cycles instead of 13), just like the ATmega datasheet says.

The total of 2300 nC corresponds to a mere 2.3 µA average current draw when performed once a second, so it looks like calling vccRead() could easily be done once a minute without draining the power source very much.

The final result is pretty accurate: 201 for 5V and 147 for a 4V battery. I’ve tried a few units, and they all are within a few counts of the expected value – the 4-fold ADC readout w/ noise reduction appears to be effective!

Update – The latest version of the bandgap sketch adds support for an adjustable number of ADC readouts.

Measuring VCC via the bandgap

In AVR, Hardware, Software on May 4, 2012 at 00:01

ATmegas (and ATtinys, for that matter) all have a 10-bit ADC which can be used to measure analog voltages. These ADC’s are ratiometric, meaning they measure relative to the analog reference voltage (usually VCC).

On a 5V Arduino, that means you can measure 0..5V as 0..1023, or roughly 5 mV per step.

On a 3.3V JeeNode, the measurements are from 0 to 3.3V, or roughly 3.3 mV per step.

There’s no point connecting VCC to an analog input and trying to measure it that way, because no matter what you do, the ADC readout will be 1023.

So can we figure out what voltage we’re running at? This would be very useful when running off batteries.

Well, there is also a “bandgap reference” in each ATmega/ATtiny, which is essentially a 1.1V voltage reference. If we read out that value relative to our VCC, then a little math will do the trick:

  • suppose we read out an ADC value “x” which represents 1.1V
  • with 5V as VCC, that value would be around 1100 / 5000 * 1023 = 225
  • whereas with 3.3V as VCC, we’d expect a reading of 1100 / 3300 * 1023 = 341
  • more generally, 1100 / VCC * 1023 = x
  • solving for VCC, we get VCC = 1100 / x * 1023

So all we have to do is measure that 1.1V bandgap reference voltage and we can deduce what VCC was!
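That deduction fits in a single integer expression. A host-side sketch (millivolts out, raw ADC count in):

```c
#include <stdint.h>

/* VCC in millivolts, deduced from a 10-bit ADC readout of the 1.1 V
   bandgap reference, per the derivation above:
     x = 1100 / VCC * 1023   =>   VCC = 1100 * 1023 / x
   Integer math only -- no floating point needed on an ATmega/ATtiny. */
uint16_t vccFromBandgap(uint16_t x) {
    return (uint16_t) (1100UL * 1023 / x);
}
```

Plugging in the example readings: 225 gives roughly 5000 mV, 341 gives 3300 mV.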

Unfortunately, the Arduino’s analogRead() doesn’t support this, so I’ve set up this bandgap demo sketch:

Screen Shot 2012 04 22 at 21 44 43

Sample output, when run on a JeeNode SMD in this case:

Screen Shot 2012 04 22 at 21 47 20

There’s a delay in the vccRead() code, which helps stabilize the measurement. Here’s what happens with vccRead(10) – i.e. 10 µs delay instead of the default 250 µs:

Screen Shot 2012 04 22 at 21 51 15

Quite different values as you can see…

And here’s the result on an RBBB with older ATmega168 chip, running at 5V:

Screen Shot 2012 04 22 at 21 53 44

I don’t know whether the 168’s bandgap accuracy is lower, but as you can see these figures are about 10% off (the supply voltage was measured to be 5.12 V on my VC170 multimeter). IOW, the bandgap accuracy is not great – as stated in the datasheet, which specifies 1.0 .. 1.2V @ 25°C when VCC is 2.7V. Note also that the bandgap reference needs 70 µs to start up, so it may not immediately be usable when coming out of a power-down state.

Still, this could be an excellent way to predict low-battery conditions before an ATmega or ATtiny starts to run out of steam.

Learning to program

In News, Software on Apr 15, 2012 at 00:01

As it so happens, someone very recently brought to my attention a site which announces itself as simply and clearly as can be:

Free online university classes for everyone.

It’s a phenomenally exciting initiative, second only to the Khan Academy, if you ask me:

Screen Shot 2012 04 14 at 21 23 47

The idea: great video lectures plus exercises to let anyone with (good) internet access learn some major topic really well. You have to be fluent in English, evidently, but apart from that the courses seem to be designed to give the broadest possible group of people access to this new form of – literally! – world-class education.

These guys are serious, with a pool of well-known researchers and teachers, and set up to scale massively (the class on Artificial Intelligence which led to all this had over 160,000 people signed up).

For some background info about this project, see these Hack Education and Reuters articles.

The format is slightly different from the Khan Academy in that the courses start on a fixed date and have a fixed duration. So you really have to “sign up” for class if you want to benefit from what they have to offer.

As it so happens, these classes start tomorrow, Monday, April 16th and they will last for 7 weeks.

It looks like there will be a bunch of videos each week, plus some homework assignments, which you can then follow whenever you have time that week. You can enroll in multiple courses, but I’m sure they will be repeated at a later date, so it’s probably best to just pick what feels like a good match right now.

What can I say? IMO, this is a unique chance to learn about modern software programming on many levels. Whether you’ve never built any software or whether you are curious about how some really sophisticated problems can be solved, these six courses cover a breathtaking range of topics.

I don’t know how these courses will turn out, but I do know about some of the names involved, and frankly, I’d have loved to have this sort of access when starting out in programming.

FWIW, out of curiosity, I’ve signed up for CS101. What a nice birthday present.

There has never been a better time to learn than now. This world will never be the same again.

Reading out DHTxx sensors

In Software on Apr 14, 2012 at 00:01

The DHT11 and DHT22 sensors measure temperature and humidity, and are easy to interface because they only require a single I/O pin. They differ in their measurement accuracy, but are in fact fully interchangeable.

There is code for these sensors floating around on the web, but it all seems more complicated than necessary, and I really didn’t want to have to use floating point. So I added a new “DHTxx” class to JeeLib which reads them out and reports temperature in tenths of a degree and humidity in tenths of a percent.

Along with a new demo sketch called dht_demo:

Screen Shot 2012 04 02 at 03 40 38

That’s about as simple as it gets, and it compiles to less than 4 KB.

Sample output, in this case a DHT11 which only reports whole degrees and percentages:

Screen Shot 2012 04 02 at 03 26 05

Humidity is gradually shooting up as I breathe next to it (there’s a slight lag).


Analog Plug readout

In Software on Apr 13, 2012 at 00:01

The analog plug contains an MCP3424 4-channel ADC which has up to 18 bits of resolution and a programmable gain up to 8x. This can measure microvolts, and it works in the range of ± 2.048 V (or ± 0.256 V for 8x gain).

However, the analog_demo example sketch was a bit limited, reading out just a single fixed channel, so I’ve added a new AnalogPlug class to JeeLib to simplify using the Analog Plug hardware. An example:

Screen Shot 2012 04 01 at 21 10 25

This interfaces to an Analog Plug on port 1, and uses 0x69 as default I2C device address. There are a number of ways to use this, but if you want to read out multiple channels, you have to select the proper channel and then wait for at least one conversion to complete. Since conversions take time, especially at 18-bit resolution, a delay() is needed to get proper results.

Sample output:

Screen Shot 2012 04 01 at 21 18 18

I tied a 1.5V battery to channel 1 and left the rest of the pins unconnected. Touching both battery pins lowers the voltage briefly, as you can see.

These results are in microvolts, due to this expression in the code:

    long uvolts = ((adc.reading() >> 8) * 1000) / 64;

Here’s the reasoning behind this formula:

  • the reading() call returns 32 bits of I2C data, but we only need the first 24 bits
  • of these 24 bits, the first 6 will simply be a sign-extended copy of bit 18
  • the top of the measurement range is 2.047 Volts, i.e. 2047 millivolts
  • but we want to report in terms of microvolts, so we multiply by 1000
  • only 11 bits are needed to represent 0 .. 2047 mV, the remaining 6 bits are fractional
  • so we shift right by 6 bits (i.e. divide by 64) to get the actual result

It’s a bit convoluted, but as you can see, the measured value comes out as about 1.477 Volts, with a few more digits of resolution. If you do the math, you’ll see that the actual “step” size of these measurements is 1000 / 64 = 15.625 µV – and it drops to under 2 µV when used with 8x gain!

With this sort of precision, electrical noise can easily creep in. But it’s pretty neat: 5 digits of precision for 4 channels, with nothing more than one teeny little I2C chip.

EtherCard improvements

In Software on Apr 11, 2012 at 00:01

This has been an often-requested feature, so I’ve added a way to get an Ethernet reply back after you call tcpSend() in the EtherCard library:

Screen Shot 2012 04 01 at 15 08 44

The one thing to watch out for, is that – over time – packets going out and coming back are going to interleave in unforeseen ways, so it is important to keep track of which incoming reply is associated to which outgoing request. Fortunately, the EtherCard library already has some crude support for this:

  • Each new tcpSend() call increments an internal session ID, which consists of a small integer in the range 0..7 (it wraps after 8 calls).
  • You have to store the last ID to be able to look for its associated reply later, hence the “session” variable, which should be global (or at least static).
  • There’s a new tcpReply() call which takes that session ID as argument, and returns a pointer to the received data if there is any, or a null pointer otherwise. Each new reply is only returned once.

A simple version of this had been hacked in there in a Nanode-derived version of EtherCard, so I thought I might as well bring this into the EtherCard library in a more official way.

This code – the whole EtherCard library in fact – is fairly crude and not robust enough to handle all the edge cases. One reason for this is that everything is going through a single packet buffer, since RAM space is so tight. So that buffer gets constantly re-used, for both outgoing and incoming data.

Every time I go through the EtherCard code, my fingers start itching to re-factor it. I already did quite a few sweeps of the code a while back as a matter of fact, but some of the cruft still remains (such as callback functions setting up nested callbacks). It has to be said though, that the code does work pretty well, with all its warts and limitations, and it’s non-trivial, so I’d rather stick to hopping from one working state to the next, instead of starting from scratch, working out all the cases, and tracking down all the new bugs that would introduce.

The biggest recent change was the addition of a “Stash” mechanism, which is a way to temporarily use the RAM inside the ENC28J60 Ethernet controller as scratchpad for all sorts of data. It’s already useful in its current state because it lets you “print” data to it in the Arduino way to construct a request or a reply for an Ethernet session.

There are a few more steps planned, with the goal of avoiding the need to have a full packet buffer in the ATmega’s RAM. Once that goal is reached, it should also become possible to track more than one session at the same time, so that more frequent requests (in and out) should be possible. There is no reason, IMO, why an ENC28J60-based Ethernet board should be much less capable than a Wiznet-based one (apart from needing a bit more flash memory for the library code, and not supporting multi-packet TCP sessions).

The remaining steps to get away from the current high demands on RAM space are:

  • generate the final outgoing packet directly from one or more stashes, without going through our RAM-based buffer
  • collect the incoming request into a stash as well, again to avoid the RAM buffer, and to quickly release the receiver buffer again
  • reduce the RAM buffer to only store all the headers and the first few bytes of data; this should not affect too much of the current code
  • add logic to easily “read” incoming data from a stash as an Arduino stream (just as “writing” to a stash is already implemented)

Not there yet, but thinking this through in detail is really the first step…

PortI2C – C++ syntax

In Software on Apr 4, 2012 at 00:01

To finish last week’s discussion about C++ classes for I2C buses and devices – here’s the nasty bit… syntax!

The PortI2C class is defined as follows in Ports.h (I’ve omitted some of the internal details):

    class PortI2C : public Port {
        uint8_t uswait;   // internal details omitted, see Ports.h
    public:
        enum { KHZMAX = 1, KHZ400 = 2, KHZ100 = 9 };

        PortI2C (uint8_t num, uint8_t rate =KHZMAX);

        uint8_t start(uint8_t addr) const;
        void stop() const;
        uint8_t write(uint8_t data) const;
        uint8_t read(uint8_t last) const;
    };

Couple of things to note:

  • PortI2C is a subclass of Port (defined here), which handles raw I/O for one port (1..4 or 0)
  • there’s an “enum” which defines some constants, specifically for PortI2C use
  • there’s a “constructor” which takes two arguments (the second one is optional)
  • there are four member functions available to any instance of class PortI2C

But that’s not all. Since PortI2C is a public subclass of Port, all the members of the Port class are also available to PortI2C instances. So even when using a PortI2C instance as I2C bus, you could still control the IRQ pin on it (mode3(), digiWrite3(), etc). An I2C port is a regular port with I2C extensions.

Note this line:

    PortI2C (uint8_t num, uint8_t rate =KHZMAX);

This is the constructor of the PortI2C class, since it has the same name as the class. You never call it directly, it gets called automatically whenever a new instance of PortI2C is declared.

This constructor takes one or two arguments: the last argument can be omitted, in which case it will get a default value of KHZMAX, which is the constant 1, as defined in the preceding enum.

Note that the first argument is required. The following instance declaration will generate a compile error:

    PortI2C myPort; // compile-time error!

There’s no way to create an instance of a port without specifying its port number (an int from 1 to 4, or 0). Instead, you have to use either one of the following lines:

    PortI2C myPort (3);                  // this defaults to KHZMAX ...
    PortI2C myPort (3, PortI2C::KHZ400); // ... or you can specify the rate

And this is where things get nasty: PortI2C is a subclass of Port, which also has a constructor requiring a port number. So the PortI2C constructor somehow has to pass this information to the Port constructor. To see how this is done, look at the PortI2C constructor function, defined in Ports.cpp:

    PortI2C::PortI2C (uint8_t num, uint8_t rate)
        : Port (num), uswait (rate)

This is the PortI2C constructor, and welcome to the murkier side of C++:

  • “PortI2C::PortI2C” is the way to refer to PortI2C’s constructor function
  • “blah::blah (…) : Port (…) { … }” is how the Port subclass constructor gets called
  • “blah::blah (…) : uswait (…) { … }” is used to initialize the “uswait” member variable
  • note that the “=KHZMAX” optional value for the 2nd arg is not repeated in this code

To summarize: the above code is called when you create an instance (such as “PortI2C myPort (3);”), and what it’ll do (in that order) is:

  • initialize the included Port instance it is based on, passing it the 1st arg
  • set the uswait member variable (which is part of the PortI2C instance) to the 2nd arg
  • call sdaOut(), mode2(), and sclHi(), all defined in the Port and PortI2C classes

It gets worse. Let’s have a look at the definition of class DeviceI2C:

    class DeviceI2C {
        const PortI2C& port;
        uint8_t addr;
    public:
        DeviceI2C(const PortI2C& p, uint8_t me)
          : port (p), addr (me << 1) {}

        bool isPresent() const;
        uint8_t send() const { ... }
        uint8_t receive() const { ... }
        void stop() const { ... }
        uint8_t write(uint8_t data) const { ... }
        uint8_t read(uint8_t last) const { ... }
        uint8_t setAddress(uint8_t me) { ... }
    };

DeviceI2C is not a subclass, but it does need to refer to the PortI2C instance specified as 1st argument and remember the bus address. The way this is done is through member variables “port” and “addr”. These are defined at the top of the class, and initialized in the DeviceI2C constructor.

The reason we can’t use subclassing here, is that a device is not a port, it’s merely associated with a port and I2C bus, since multiple devices can coexist on the bus. The “&” notation is a bit like the “*” pointer notation in C, I’ll refer you to C++ language documentation for an explanation of the difference. It’s not essential here.

Not being a subclass of PortI2C, means we can’t simply send I2C packets via send(), write(), etc. Instead, we have to go through the “port” variable. Here’s the above write() member function in more detail:

        uint8_t write(uint8_t data) const { return port.write(data); }

In other words, instead of simply calling “write()”, we have to call “port.write()”. No big deal.

So much for the intricacies of C++ – I hope this’ll allow you to better read the source code inside JeeLib.

Watchdog tweaking

In AVR, Software on Apr 1, 2012 at 00:01

The other day, I mentioned a way to keep approximate track of time via the watchdog timer, by running it as often as possible, i.e. in a 16 ms cycle.

This brought out some loose ends which I’d like to clean up here.

First of all, the loseSomeTime() code now runs 60 times per second, so optimizing it might be useful. I ended up moving some logic out of the loop, to keep the time between power-down states as short as possible:

Screen Shot 2012 03 27 at 11 00 51

The result is this power consumption profile, every ≈ 16 ms:


That’s still about 22 µs @ 6.5 mA. But is it, really?

The above current measurement was done between the battery and the power supply of a JeeNode SMD. Let’s redo this without regulator, i.e. using a “modded” JeeNode with the regulator replaced by a jumper:


Couple of observations:

  • different ATmega, different watchdog accuracy: 17.2 vs 16.3 ms
  • the rise and fall times of the pulse are sharper, i.e. not dampened by a 10 µF buffer cap
  • new behavior: there’s now a 0.4 mA current during 80 µs (probably the clock starting up?)
  • that startup phase adds another 75 nC to the total charge consumed
  • note that there is a negative current flow, causing the charge integral to decrease

The worrying bit is that these two ways of measuring the current pulses differ so much – I can’t quite explain it. One thing is clear though: an adjusted fuse setting with faster clock startup should also make a substantial difference, since this now needs to happen 60 times per second.

A second improvement is to assume that when a watchdog cycle gets interrupted, half the time has passed – on average that’s the best guess we can make, assuming the interrupt source is independent:

Screen Shot 2012 03 27 at 13 17 55

The last issue I wanted to bring up here, is that small code optimizations can sometimes make a noticeable difference. When running the test sketch (same as in this post) with a 8192 millisecond argument to loseSomeTime(), the above code produces the following profile:


The reason the pulse is twice as wide is that the “while” in there now loops a few times, making the run time almost 50 µs between power-down phases. As Jörg Becker pointed out in a comment, the ATmega has no “barrel shifter” hardware, meaning that “shift-by-N” is not a hardware instruction which can run in one machine cycle. Instead, the C runtime library needs to emulate this with repeated shift-by-1 steps.

By changing the while loop from this:

Screen Shot 2012 03 27 at 12 18 20

… to this:

Screen Shot 2012 03 27 at 13 06 49

… we get this new power consumption profile (the horizontal scale is now 20 µs/div):


IOW, this takes 20 µs less time. Does it matter? Well, that might depend on who you ask:

  • Marketing guy: “50 µs is 67% more than 30 µs – wow, that means your sketch might last 5 years i.s.o. 3 years on the same battery!”

  • Realist: “50 µs i.s.o. 30 µs every 8 seconds – nah, those extra 120 nC or so will merely add 15 nA to the average current consumption.”

The moral of this story: 1) careful how you measure things, and 2) optimize where it matters.

Anyway, I’ve added all three changes to JeeLib. It won’t hurt, and besides: it leads to smaller code.

PortI2C – C++ classes

In Software on Mar 28, 2012 at 00:01

Last week, I described how the PortI2C + DeviceI2C definitions in JeeLib work together to support daisy-chaining multiple “JeePlugs” on a “JeePort”. To describe how, I need to go into some C++ concepts.

PortI2C and DeviceI2C (and MemoryPlug) are each defined as a C++ class – think of this as a little “software module” if you like. But a class by itself doesn’t do much – just like the C type “int” doesn’t do much – until you create a variable of that type. JeeLib is full of classes, but to make any one of them come alive you have to create an instance – which is what the following line does:

    PortI2C myPort (3);

This could also have been written as follows, but for classes the parentheses are preferable:

    PortI2C myPort = 3;

The class is “PortI2C”, the new variable is “myPort”, and its initial value is based on the integer 3.

In C++, instances are “constructed”. If a class defines a constructor function, then that code will be run while the instance is being set up. In this case, the PortI2C constructor defined inside JeeLib takes the port number and remembers it inside the new instance for later use. It also sets up the I/O pins to properly initialize the I2C bus signals. You can find the code here, if you’re curious.

So now we have a “myPort” instance. We could use it to send and receive data on the I2C bus it just created, but keeping track of all the plugs (i.e. devices) on the bus would be a bit tedious.

The next convenience JeeLib provides, is support per plug. This is what the DeviceI2C class does: you tell it what port to use, and the address of the plug:

    DeviceI2C plugOne (myPort, 0x20);

Same structure: the class is “DeviceI2C”, the new variable is “plugOne”, and the initial value depends on two things: a port instance and the integer 0x20. The port instance is that I2C port we set up before.

The separation between PortI2C and DeviceI2C is what lets us model the real world: each port can act as one I2C bus, and each bus can handle multiple plugs, i.e. I2C devices. We simply create multiple instances of DeviceI2C, giving each of them a different variable name and a unique bus address.

The Memory Plug example last week takes this all even further. There’s a “MemoryPlug” class, i.e. essentially a specialized DeviceI2C which knows a little more about the EEPROM chips on the Memory Plug.

In C++, this sort of specialization is based on a concept called subclassing: we can define a new class in terms of an existing one, and extend it to behave in slightly different ways (lots of flexibility here).

In the code, you can see this in the first line of the class definition:

    class MemoryPlug : public DeviceI2C {

IOW, a MemoryPlug is a specialized DeviceI2C. Simply remember: English “is a” <=> C++ “subclass”.

Next week, I’ll elaborate on the funky C++ notation for constructors and subclasses – stay tuned!

Tracking time via the watchdog

In AVR, Software on Mar 27, 2012 at 00:01

The JeeLib library has a convenient loseSomeTime() function which puts the ATmega in low power mode for 16 to 60,000 ms. This is only 10% accurate, because it uses the hardware watchdog which is based on an internally RC-generated 128 kHz frequency.

But the worst bit is that when you use this in combination with interrupts to wake up the ATmega, then you can’t tell how much time has elapsed, because the clock is not running. All you know when waking up, is that no more than the watchdog timeout has passed. The best you can assume is that half of it has passed – but with loseSomeTime() accepting values up to 1 minute that’s horribly imprecise.

Can we do better? Yes we can…

Internally, loseSomeTime() works by cutting up the request time into smaller slices which the watchdog can handle. So for a 10000 ms request, for example, loseSomeTime() would wait 8192 + 1024 + 512 + 256 + 16 ms to reach the requested delay, approximately. Convenient, except for those long waits.

Here’s a test sketch which simply keeps waiting 8192 ms at a time:

Screen Shot 2012 03 21 at 13 14 16

The corresponding current consumption, measured via oscilloscope, shows this:


First of all, note how 8192 ms ends up being 8255 ms, due to the watchdog timer inaccuracy.

But the main result is that to perform this sketch, the ATmega will draw 5 mA during about 50 µs. The rest of the time it’ll be a few µA, i.e. powered down. These wake-ups draw virtually no current, when averaged.

The downside is that under these conditions, interrupts can cause us to lose track of time, up to 8192 ms.

So let’s try something else. Let’s instead run the watchdog as briefly as possible:

Screen Shot 2012 03 21 at 13 04 43

Current consumption now changes to this:


Because of a loop in the loseSomeTime() code which runs faster, the running time drops by half in this case (and hence the total nanocoulomb charge halves too). But note that we’re now waking up about 60 times per second.

This means that interrupts can now only mess with our sense of time by at most 16 ms. Without interruptions (i.e. most of the time), the watchdog just completes and loseSomeTime() adds 16 ms to the millis() clock.

Let’s try and estimate the power consumption added by these very frequent wake-ups:

  • each wake-up pulse draws 5.5 mA for about 25 µs
  • the charge consumed by each pulse is 122 nC
  • there are (roughly) 60 wake-up pulses per second
  • so per second, these pulses consume 60 x 122 nC ≈ 7.3 µC in total
  • that comes down to an average current consumption of 7.3 µA

That’s not bad at all! By waking up 60 times per second (and going back to sleep as quickly as possible), we add only 7.3 µA current consumption to the total. The reason this works, is that wake-ups only take 25 µs, which – even at 60 times per second – hardly adds up to anything.

So this technique might be a very nice way to keep approximate track of time while mostly in sleep mode, with the ability to wake up whenever some significant event (i.e. interrupt) happens!

PS. In case you’re wondering about the shape of these signals, keep in mind that I’m measuring current draw before the regulator and 10 µF capacitor on the JeeNode.

PortI2C – The Big Picture

In Software on Mar 21, 2012 at 00:01

The JeeLib library includes two C++ classes called “PortI2C” and “DeviceI2C”, on which a lot of other code in JeeLib is based – derived classes for various plugs as well as several examples.

This stuff is well inside C++ territory, so if you’re not familiar with “OO design” it’s easy to get lost…

Let’s first look at what we’re trying to model:

Screen Shot 2012 03 17 at 12 13 55

In short: one port (any of the 4 on a JeeNode) is driven as an I2C bus using software bit-banging, and one or more I2C-type JeePlugs are attached to it. Each plug may or may not be of the same type.

What we want is a set of software modules which we can use in our sketch. Say we have three plugs, responding to I2C addresses 0x20, 0x21, and 0x22. Then the code might be something like:

    const byte portNumber = 3;
    PortI2C myPort (portNumber);
    DeviceI2C plugOne (myPort, 0x20);
    DeviceI2C plugTwo (myPort, 0x21);
    DeviceI2C plugThree (myPort, 0x22);

This would set up three C++ objects, where each knows how to reach and control its own plug.

But that’s not all. Suppose plug #3 is a Memory Plug, i.e. an EEPROM memory of 128..512 kB. JeeLib contains extra support code to easily read and write data to such a plug, in the form of a C++ class called “MemoryPlug”. It’s an I2C device, but it always has a fixed bus address of 0x50, which for convenience is already built into the JeeLib code. To use this, all we have to do is replace that last plugThree definition above by this line:

    MemoryPlug plugMem (myPort);

Once this works, we get a lot of functionality for free. Here’s how to send an I2C packet to plug #1:


Or you can save a 3-byte string to the Memory Plug, on page 12, at offset 21:

    plugMem.save(12, "abc", 21, 3);

There’s a lot going on behind the scenes, but the result leads to a fairly clean coding style with all the details nicely tucked away. The question remains how this “tucking away” with C++ classes and objects is done.

Stay tuned, this will be described next week…

MilliTimer example

In Software on Mar 16, 2012 at 00:01

The MilliTimer class in JeeLib is a convenient way to perform time-related activities while doing other things at the same time. See also these weblog posts about wasting time and other ways of scheduling multiple tasks. And if you’re willing to learn a new programming language: check out this post.

With just the basic Arduino library support, i.e. if you have to do everything with delay() calls, it’s a lot harder to do things like making two LEDs blink at an independent rate – as in this blink_timers example:

Screen Shot 2012 03 09 at 11 59 12

This illustrates a simple way of using the millisecond timers: calling one with “poll(ms)” will return true once every “ms” milliseconds. The key advantage is that you can do other things in between these calls. As in the above example, where I used two independent timers to track the blinking rate of two LEDs.

This example uses the millitimers in automatic “retrigger” mode, i.e. poll() will return true every once in a while, because whenever poll() returns true, it also re-arms the timer to start over again.

There may be cases where you need a one-shot type of trigger. This can also be handled by millitimers:

    MilliTimer t;
    t.set(123);
    while (...) {
      if (t.poll())
        ... // fires once, 123 ms after the set() call
    }
When used in this way, the timer will go off only once, 123 milliseconds after having been set.

There are a few other things you can do with millitimers, like finding out whether a particular timer is running, or how many milliseconds remain before it will go off.

These timers only support timeouts in the range of 1..60,000 milliseconds. For longer timeouts, you’ll have to do a bit more work yourself, i.e. to wait one hour you could do:

    MilliTimer timer;
    int seconds = 3600;
    while (...) {
      if (timer.poll(1000) && --seconds <= 0)
        Serial.println("ding dong");
    }

Note that the MilliTimer class implements software timers, and that you can have as many as you like. No relationship to the hardware timers in the ATmega, other than that this is based on the Arduino runtime’s “millis()” function, which normally uses hardware TIMER0 internally.

For a more advanced mechanism, see the Scheduler.

Serial Port on JeeNode Micro

In AVR, Software on Mar 9, 2012 at 00:01

The JeeNode Micro is based on an ATtiny84, which has quite a bit less hardware functionality built-in than an ATmega. There is some rudimentary byte-shifting hardware for sending or receiving a serial bit stream, but that’s already assigned to the SPI-style RFM12B interface.

So how about a serial port, for debugging? Even just serial out would be a big help, after all.

Luckily, the developers of the Arduino-Tiny library have thought of this, and have implemented a software solution. Better still, it’s done in an almost completely compatible way.

Here’s an example test sketch:

Screen Shot 2012 03 08 at 07 52 04

Look familiar? Of course it does: it’s exactly the same code as for a standard ATmega sketch!

The one thing you have to keep in mind, is that only a few baud rates are supported:

  • 9600, 38400, and 115200 baud

The latter is unlikely to work when running on the internal 8 MHz clock, though. It’s all done in software. For an ATtiny84 running at 8 MHz, the serial output appears on PB0. This is pin 2 on the chip and pin 10 on the 10-pin header of the JNµ, marked “IOX”.

The code for this software-based serial port is fairly tricky, using embedded assembly code and C++ templates to create just the right timing loops and toggle just the right pin.

Note also that since this is all done in software, interrupts cannot occur while sending out each byte, and sending eats up all the available time – the code resumes after the print() and println() calls only once all data has been sent.

But apart from these details, you get an excellent debugging facility – even on an ATtiny!

Which boot loader do I have?

In AVR, Software on Mar 7, 2012 at 00:01

Denisj asked on the forum recently whether it’s easy to find out which boot loader is stored in the ATmega. That would indeed be very useful, now that we have a few different versions floating around.

Here’s a new “bootCheck.ino” sketch which tries to identify the boot loader and reports it on the serial port:

Screen Shot 2012 03 06 at 12 23 06

Here’s some sample output when uploaded to the current JeeNodes and RBBBs:

      CRC 2048b @ 0x7800 = CD70
      CRC 512b @ 0x7E00 = FD70
    Boot loader: OptiBoot 4.4

The current version only knows about a few boot loaders so far, but it’s table-driven and can quickly be extended.

So now that it exists, let’s use the power of crowdsourcing and make it really useful, eh?

If you’ve got an ATmega328-based board with a bootloader on it, upload this sketch to find out what it reports. If it says “UNKNOWN” and you happen to know exactly what boot loader is present, please let me know what output you get (i.e. the two CRC values) and the name/type of the boot loader, and I’ll add it to the table.

I’ll update the code for each new boot loader. The sketch is maintained as gist on GitHub, so please be sure to get the latest version before you try this. Your boot loader might already be in there!

Note that this is not limited to JeeNodes or RBBBs. Anything with an ATmega328 that will run this Arduino sketch can be included.

Putting the JNµ to sleep

In AVR, Hardware, Software on Jan 18, 2012 at 00:01

The Sleepy::loseSomeTime() code in the Ports library was never tested on a JeeNode Micro. Turns out that only a minor library difference kept it from working (the arduino-tiny library has its own version of millis(), etc):

Screen Shot 2012 01 14 at 23 55 34

So now, the JNµ can be put to sleep as well. Here’s the jouleTest sketch I used, tweaked to run on the JNµ as well:

Screen Shot 2012 01 14 at 23 55 09

And sure enough, about once every 10 seconds it shows that familiar packet-transmit current consumption:


The blue line is the AC-coupled supply voltage, a 3x AA Eneloop battery pack in this case. It supplies about 3.8V, but when running off a 1000 µF cap, it looks like this continues to work down to 1.8V (well below the RFM12B’s minimum 2.2V specs) – although with only about half the transmit power by then.

This current use fingerprint is almost identical to the ATmega328 running this same code. Not surprising really, since it’s the RFM12B which determines most of the power consumption, not the ATmega vs ATtiny difference.


Browsing the Arduino run-time code

In AVR, Software on Jan 10, 2012 at 00:01

The Arduino IDE is a thin wrapper around the avr-gcc compiler and the avr-libc run-time library. It also includes a fairly basic development environment, i.e. a text editor and conventions for managing projects, in the form of “sketches” and libraries.

I prefer to use my own programmer’s editor, because it supports multiple programming languages and has a lot more features for software development (such as Git integration, code folding, and save-on-lose-focus). The Arduino IDE supports external editors by letting you disable its built-in one – an option in the preferences:

External edit

Now I can simply use my own editor and switch to the Arduino IDE for compiling and uploading.

One advantage of using an external editor is that you can look at source code other than just your own sketches. In the rest of this post, I’m going to describe how to look at one of the most interesting parts of the Arduino IDE: its run-time library, i.e. the Wiring code which adds support for everything that makes an Arduino different from the ATmega it is based on.

Note: what follows is specific for Mac OSX, but apart from the location of these files and the editor used, you should be able to transpose all of this to your own beloved computer environment.

The first task is to figure out where the Arduino IDE’s run-time code is located. In Mac OSX, this code is located inside the Arduino application. To view this area, you can right-click on the Arduino app:

Screen Shot 2012 01 09 at 19 29 48

This leads to the following directory structure:

Screen Shot 2012 01 09 at 19 28 16

The interesting bits are inside the “hardware” folder:

Screen Shot 2012 01 09 at 19 48 09

Here are the first few lines of “Arduino.h”, for example (this used to be “WProgram.h”):

Screen Shot 2012 01 09 at 19 55 09

These source files are where everything specific to the Arduino IDE’s runtime happens. The “Serial” object, and the “millis()” code, for example. If you want to really understand what the Arduino is about, then it’s well worth going through some of these files. As you’ll find out, there’s no magic once you start looking in the right places.

Try it!

Low-power blink test

In Hardware, Software on Jan 8, 2012 at 00:01

Here is a new snoozeBlink sketch which can run off the new experimental 12 mW Low-power Supply:

Screen Shot 2011 12 28 at 14 48 03

It does all the right things to start off in low-power mode and puts the ATmega to sleep, even as the LED blinks!

The LED is a normal red LED with a forward voltage of about 1.6V and with a 470 Ω series resistor. The result:


(lots of noise because I’m using the default 1:10 probe and the scope at its most sensitive 1 mV/div setting)

Voltage over the 100 µF reservoir cap in blue, current consumption in yellow. You can see the startup dip when the cap reaches about 6V, then the 2s wait, and then the LED blink at about 2 Hz with a 10% duty cycle. There’s not much energy to spare – the reservoir cap doesn’t even get a chance to fully re-charge between blinks.

After about 4 seconds, I turned off the power to find out what would happen. Yet there’s still enough energy left to get two more full blinks, and then an aborted blink as the reservoir cap voltage drops below 3V.

Note how the power consumption is just 3 mA while the LED is turned on. The ATmega itself is hardly ever running (the very brief 10 mA peaks got filtered out in this scope capture).

It can even be made to flash with a 26 mA LED current (omitting the resistor) for 16 ms @ 2 Hz. In this case the reservoir cap voltage varied from 9.4 to 4.4 V, again leaving very little energy to spare. Maybe one day this can be refined to drive a TRIAC, which needs very frequent but brief pulses (a BT137, BTA312, or L6004L3?).

But there’s actually something extraordinary going on with that power-up sequence – let’s investigate:


The BIG surprise? This is running on a standard JeeNode with standard bootstrap – no power-up troubles at all!

Let me try and interpret everything happening in that last image:

  • the initial very high blip (over 25 mA) is the JeeNode’s on-board 10 µF capacitor charging up
  • the 65 ms @ 3.2 mA is the clock startup delay, as defined by the default fuse setting
  • up to this point, the reservoir cap has lost some 2V of its charge
  • the next blip is the boot loader passing control immediately to the sketch (!)
  • then there’s the 32 ms loseSomeTime() call in setup(), with the ATmega finally powered down
  • the last blip at the right-end side of the screen puts the RFM12B into low-power sleep mode

So what’s going on, and above all why is the boot loader problem gone, after all that trouble it gave me before?

The only explanation I can think of lies in the one change I made since then: I’m now using OptiBoot v4.4 … and it probably does the right thing, in that it skips the boot loader on power-up and only goes through a boot-loader sequence on reset. This is the well known ladyada fix. I guess my previous boot loader setup wasn’t using that.

This is really good news. It means you just need a recently-flashed ATmega and you can continue to use the normal FTDI upload mechanism while fooling around with this ultra low-power stuff. Even the 10 µF cap and regulator on the JeeNode can be left in when powering it from the new Low-power supply.

Getting ready for OptiBoot 4.4

In AVR, Hardware, Software on Jan 7, 2012 at 00:01

Ok, it’s official – starting this week, all new ATmega’s here will be flashed with the OptiBoot 4.4 boot loader.

It’s going to take a while for all current inventory units to “flush through the system” so to speak (both DIPs and SMDs), but at some point this month all ATmega’s will be running the same boot loader as the Arduino Uno. Faster, smaller, and – hopefully – no more troubles with Windows being unable to upload sketches through FTDI.

One of the things I’ve done is to turn one of the new JeeNode Blocks into a dedicated Portable ISP Programmer for DIP-28’s. It’s the same isp_repair2 sketch as before (modified for the Block’s minor pin allocation diff’s):

DSC 2847

Note the 16 MHz resonator behind the ZIF socket. Here’s the wiring:

DSC 2848

There’s no 10 kΩ pull-up resistor for RESET, because ATmega’s have a (weak) built-in pull-up.

To avoid the delay-on-hardware-reset, I’ve added a push-button which briefly shorts +3V and GND together through a 10 µF electrolytic cap – enough to force the JeeNode Block into a hardware reset. There’s a 10 kΩ resistor across the cap to discharge it afterwards. This is really quick because the reset occurs on button press instead of on release.

The savings are minimal – just 1..2 seconds more for the standard bootstrap – but for production, it matters. Total flash time, including boot loader, RF12demo sketch, and setting all the fuses is now just a few seconds:

  • insert DIP chip
  • close ZIF socket
  • press button
  • wait for two LED blinks
  • open ZIF socket
  • remove programmed chip
  • rinse and repeat…

When a serial port is connected via FTDI, you can see the progress of it all:

Screen Shot 2012 01 04 at 20 01 42

Now let’s just hope that this version of OptiBoot doesn’t lead to the same mess as the previous one.

If you have older JeeNodes or other ATmega328 boards running previous bootstrap loaders, I suggest looking at the recent ISP programmer post and the older summary. You might consider bringing all units up to date, because with a mix of boot loaders you end up constantly using the wrong one in the IDE and having to adjust board types each time.

Just be careful when messing with boot loaders. If the process goes wrong and you pick the wrong fuse settings, then you can end up with a “bricked” unit (only a high-voltage programmer can recover from such a state).

But apart from that: enjoy the doubled 115.2 Kbaud upload speed and the 1.5 Kb extra space for sketches!

Pin-change interrupts for RF12

In Software on Jan 6, 2012 at 00:01

The recently released (limited-edition) JeeNode Block has a couple of changes w.r.t. pin allocation, most notably the RFM12B’s “nIRQ” interrupt pin. This was moved from PD2 to PB1 (Arduino digital pin 2 to pin 9). The main reason for this change was to free the entire set of pins on PORTD, i.e. PD0..PD7 – in Arduino-speak: digital pin 0 through 7.

That means interrupts are no longer coming in over the INT0 line, and can no longer use the attachInterrupt() and detachInterrupt() functions in the Arduino run-time library. Instead, the RF12 driver needs to switch to pin-change interrupts, which I’ve now added to the code.

To use the RF12 driver with the JeeNode Block, you need to make the following changes:

  1. Get the latest version of JeeLib from GitHub

  2. Enable pin-change interrupts by removing the comment so it defines PINCHG_IRQ:

        #define PINCHG_IRQ 1
  3. Change the RFM_IRQ define (make sure you pick the ATmega328 case):

        // ATmega168, ATmega328, etc.
        #define RFM_IRQ 1
  4. If you plan to use RF12demo.ino – don’t forget to change the LED pin to match the JeeNode Block:

        #define LED_PIN 8
  5. Compile and upload, and things should work as on any other JeeNode.

Note that pin-change interrupts are a bit tricky. First of all, changes are not the same as responding to a level interrupt, but the new code should take care of that. More importantly, the RF12 driver assumes it’s the only one using pin-change interrupts on the corresponding “port” (in Atmel datasheet terminology). If you use more pins on that port for interrupts, you may have to adjust the RF12.cpp file.

These changes are not on by default, the latest RF12 driver comes configured for the original JeeNodes.

Update – 2014-11-23: RFM_IRQ changed from 9 to 1

Bit-banged I2C timing

In Hardware, Software on Jan 3, 2012 at 00:01

The JeeNode Ports library has supported software-based I2C right from the start. There are some limitations to this approach, but it’s been very useful to allow connecting lots of I2C-based plugs on any of the 4 JeeNode “ports”.

I’ve always suspected that the timing of this software “bit-banging” was off. Well… it’s waaaaay too slow in fact:


That’s a JeeNode talking to an RTC Plug over I2C, set to 400 KHz. Except that it’s talking at a mere 40 KHz!

The code works, of course. It’s just far from optimal and wasting time. The main reason for this is that the Arduino’s digitalWrite(), digitalRead(), and pinMode() calls take a huge amount of time – due to their generality. A second problem is that the delayMicroseconds() call actually has a granularity of 4 µs.

As a quick test, I hard-coded the calls in the Ports.h header to use much more efficient code:

Screen Shot 2011 12 24 at 16 25 56

The result is that I2C can now run at over 300 KHz, and it still works as expected:


It can even run at over 600 KHz, which is beyond the official I2C specs, but it works fine with many I2C chips:


Except that this now requires a pull-up resistor on SDA (i.e. the AIO pin). The rise time of the data pulses is now too long to work reliably with just the internal ATmega pull-up resistors. I used a 1 kΩ resistor, as pull-up to 3.3V:


Note the glitch at the end – it’s probably from emulating open-collector logic with the ATmega’s three-state pins.

Pull-ups are also a very good idea with longer bus lines, because they lower the impedance of the wire, reducing noise. These tests were all done with the RTC Plug stuck directly into the Port 1 header, BTW.

Here’s the SDA signal on a 5 µs/div scale – via 4 Extension Cables back to back, i.e. 65 cm – with 1 kΩ pull-up:


And without – showing abysmal rise times and lots of crosstalk, making I2C on this hookup totally useless:


I’ll need to figure out how to properly implement these software optimizations one day, since that means we can’t just use the generic I/O pin calls anymore. There are several cases: different speeds as well as different ports (including “port 0” which uses A4 and A5, the official I2C pins – but still in bit-banged mode).

All in all, this software-based I2C works fine, with the advantage of supporting it on any two I/O pins – but it could be optimized further. The other thing to keep in mind is that the SCL clock line is not open-collector, but driven by a normal totem pole output pin, so this doesn’t support I2C clock stretching for slow (or busy) slaves.

Scheduler power optimization

In Software on Jan 2, 2012 at 00:01

Last year’s post showed how a packet transmission w/ ACK reception works out in terms of power consumption. It also uncovered a fairly large “power consumption bug”, with the scheduler idling only 0.1s at a time, causing the ATmega to exit power-down mode through its watchdog far more often than necessary.

Here’s the relevant code in the general-purpose Scheduler class in Ports.cpp:

Screen Shot 2011 12 21 at 19 00 38

And here’s what I’m going to change it to, to optimize and stay powered-down much longer:

Screen Shot 2011 12 21 at 19 00 10

This changes the wake-up period from 30 times per second to roughly once every 8 seconds, with blips like this:


My interpretation of this picture is that the ATmega on this JeeNode draws a whopping 10 mA for 50 µs once every eight seconds to keep going. That 1 ms “lead-in” at < 1 mA is probably clock startup, or something.

This current draw is the same as before (this capture was with the PIR installed). But instead of 1800 wake-ups per minute, there will now be only 10 or so. This will reduce the charge consumption from 2,000 µC to roughly 10 µC per minute!

Does that mean the Room Node will now last 200 times longer on a single battery? Unfortunately, no. With these low-power games, the weakest link determines total battery life. In this case, there’s a PIR sensor which has to be constantly on, drawing about 50 µA. That’s 3,000 microcoulombs per minute.

But still, this little optimization should add quite a bit to the lifetime of a battery:

  • old: 3000 (PIR) + 130 (radio) + 600 (ATmega + regulator, estimated) + 2000 (scheduler) = 5730 µC/min
  • new situation, again per minute: 3000 + 130 + 600 + 10 = 3740 µC/min

If you’re wondering what “microcoulombs/minute” means: that’s just current, so rescaling it to µC/s (i.e. µA’s), we get: old ≈ 96 µA, new = 63 µA. With a 2000 mAh 3x AA battery pack, that’s 2.5 vs 3.6 years of run time.

Note that these values are fairly optimistic. With motion in the room, there will be more than one packet transmission per minute. I have yet to see the real-life battery life of these room nodes, although right now it’s looking quite good: the nodes I have around here have been working for many months – 2012 will tell!

But either way, that change of just a few lines of C code should add about 50% to the battery life. Crazy, eh?

Tempus fugit …

In AVR, Hardware, Software on Dec 31, 2011 at 00:01

Another year is about to end, and the next one is already anxiously waiting to carry us along into the future…

A fitting moment to get that Dutchtronix clock working (a lot easier than this geek version):

DSC 2832

I bought this little kit long ago, not realizing that a low-end USB scope front-end can’t deal with it. Besides, it turns out that it didn’t work back then because of some bad solder joints (ground planes and 15W soldering irons don’t go well together – even my current soldering iron has trouble heating up some of those pads!).

Anyway, this is as far as the Hameg HMO2024 will go:


Recognizable, but a far cry from what analog scopes can do. That’s what you get when digital sampling, waveform refresh rates, and vector drawing clash (bumping up the sampling rate causes a fuller, but flickering, image).

This design was from 2007, I think – which goes to show that fun stuff (especially clocks) can be time-less!

I wish you a healthy, safe, and happy 2012 – with lots of opportunities to tinker, learn, and create.

Update – for another example of how such X-Y displays differ between analog and high-end vs low-end DSO’s, see this video on Dave Jones’ EEVblog.

Out with the old – in with the new!

In AVR, Software on Dec 29, 2011 at 00:01

Le roi est mort …

It’s time to move on. I’m dropping all support and maintenance of the Ports and RF12 libs, as well as some others – as far as Subversion is concerned, that is. All files are kept read-only at – for reference.

… vive le roi!

But the good news is that all this software has now been moved to GitHub and updated … and is fully supported.

All libraries on GitHub have been adjusted to work with Arduino IDE 1.0 (they may or may not still work on 0022 or 0023, but that will be supported only to the point of accepting patches to increase backward compatibility).

There is one change which I’ve been meaning to get to for a long time: the Ports and RF12 libraries have been combined into a new library called JeeLib. This means that you no longer have to include “Ports.h” and “RF12.h” at the start of your sketches (although those still continue to work). Instead, insert this one line:

    #include <JeeLib.h>

All the examples from both libraries have been retained, but make sure that the old Ports and RF12 are gone.

There are three other libraries for the JeeNodes which you may be using: EtherCard, GLCDlib, and RTClib. These too are now on GitHub and have all been adapted for use with Arduino IDE 1.0.

So how does one start using any of these libraries? Ah, glad you asked :)

  • go to GitHub, and pick the library you’re interested in to get to its home page
  • at the bottom of each home page is a README section with the latest info – always worth a check
  • one of the links at the top is marked “ZIP” – you could click it to download everything as a ZIP archive
  • … but you shouldn’t !

The reason for this is Git. If you download a ZIP archive, unpack it, and install it in the right place (which is in the “libraries” folder where all your sketches are), then you’ll get a copy of the code you’re after, but no way to stay up to date. That might sound like a minor detail, but with open source, there are usually many changes over time. Bug fixes as well as new features. If you grabbed a ZIP file, then you’ll be forced to re-install the next version over the previous one every time there is an update. This quickly gets very boring, and can lead to all sorts of awful mistakes, such as overwriting a file you had fixed in some way (after a lengthy debug session, long ago).

I can’t stress it enough: don’t grab open source software and install it on your disk by just downloading and unpacking it. Not just the stuff from JeeLabs – any open source software distributed in source code form. It’s a nuisance, a ticking time bomb, a mine field, a disaster waiting to happen – get the message?

There is only one sane / long-term approach to working with source code, and that’s through a Version Control System. In this case Git – in combination with GitHub, which is the web site where all source code is stored.

So instead of downloading a ZIP or TAR archive, you need to create a “clone” of the original code on GitHub. The difference with the ZIP archive is that a clone knows where it came from, and gives you the ability to find out what changed, to instantly update your clone, or to revert to any older version, since GitHub keeps track of all versions of the code – past, present, and future.

The trick is to get started with Git and GitHub without drowning in the capabilities they offer. How to do so depends on the environment you’re in:

  • On Windows, you can install TortoiseGit – then you need to get that clone onto your machine. For JeeLib, you’ll probably need to enter this path to it somewhere: git://

  • On Mac OSX, you need to install the Xcode developer tools, which includes Git (it’s a free install from the App Store). If you have an account at GitHub (or don’t mind getting one, it’s free), then I highly recommend getting the GitHub for Mac application. If not, you can always use the command line, like so:

        cd ~/Documents/Arduino/libraries
        git clone git://
  • On Linux, you need to install git through whatever package manager you’re using. On Debian / Ubuntu, this command line will probably do it:

        sudo apt-get install git-core

    After that, it’s the same as the Mac OSX command-line version, i.e. “cd …” and “git clone …” (see above).

So far, this is only a little more involved than grabbing that ZIP file. The gains start when you want to update your copy to the latest version. In git-speak, this is called “pulling” the latest changes (to your local disk). I encourage you to search around on the web, it’s usually as simple as doing “git pull” from inside the cloned area (from the command-line, some GUI, whatever). The difference with re-installing is absolute security – “git pull” will never overwrite or lose any changes you made, if they conflict with the original.
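
Here’s a self-contained way to try that clone-then-pull cycle, using a local repository as a stand-in for GitHub, so it runs anywhere (all the paths below are made up for the demo):

```shell
set -e
work=$(mktemp -d)                       # throw-away playground

git init -q "$work/upstream"            # stand-in for the repo on GitHub
cd "$work/upstream"
git config user.email demo@example.com
git config user.name demo
echo 'version 1' > JeeLib.h
git add . && git commit -qm 'initial'

git clone -q "$work/upstream" "$work/libraries-JeeLib"   # instead of a ZIP

echo 'version 2' > "$work/upstream/JeeLib.h"             # upstream updates
git -C "$work/upstream" commit -qam 'update'

git -C "$work/libraries-JeeLib" pull -q                  # one command syncs
grep 'version 2' "$work/libraries-JeeLib/JeeLib.h"
```

The same two commands – clone once, pull whenever you like – are all it takes with the real repositories on GitHub.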

If all you do is track changes from time to time, then this is all you need to benefit from Git and GitHub.

If you actually make changes, then you’ll need to dig a little deeper when a “conflict” occurs, meaning you’ve made a change which interferes with changes made in the original on GitHub. But on the plus side, you’ll also be able to easily submit your changes (GitHub calls this “making a pull request”, i.e. pulling your changes back into the original). Why bother? Because contributions, no matter how small, are what make open source software fly!

Welcome to the world of open source software collaboration – where code lives, evolves, and grows.

The convenience of “Git”

In Software on Dec 27, 2011 at 00:01

I’ve written before about source code repositories and Subversion / Git – about a year ago, in fact.

Surprisingly, lots of people working on software are still not using any “Version Control System” (VCS)!

So let me try a new angle today, and show you how you need only use 5% of those systems to get a huge benefit out of them when dealing with the Arduino sketches and libraries from JeeLabs. Since all the code from JeeLabs is now located at GitHub, the rest of this post will be about Git (and GitHub, a truly phenomenal free web resource).

Why bother with a VCS? Well… I once heard of a little anecdote about a sign at the dentist, which said:

“You don’t have to floss all your teeth, just the ones you want to keep.”

It’s a bit like that with source code. If you don’t write code, or don’t ever change it, then you don’t need to manage it. But if you do, then you know how much effort goes into a working sketch, library, script, program, etc. Things change – either your own code, libraries from others, tools like the Arduino IDE (1.0 broke lots of stuff), even core computer software or hardware. Things always change, and if you want to be able to deal with it, you need a VCS.

Not everyone aspires to be a full-time software developer (what an odd world that would be!), but even in the simplest situations you can benefit from version control. Here’s a possible scenario, close to home at JeeLabs:

  • Suppose you want to use the EtherCard library, but not necessarily with the JeeLabs Ether Card hardware.
  • The GitHub page includes a README at the bottom, with a link to the ZIP archive with all the code.
  • So you download the ZIP, unpack it, and figure out where to put it so the Arduino IDE can find it.
  • Say you want to try out the “backSoon” example, as demo of a tiny web server. You compile and upload.
  • It doesn’t work – now what? After a lot of digging and searching, you figure out that the code was written for the SPI select signal tied to Arduino’s digital pin 8 (PB0) whereas your hardware uses pin 10 (PB2).
  • After that it’s easy: change “#define SELECT_BIT 0” to “#define SELECT_BIT 2” in enc28j60.cpp.
  • Fantastic, you’ve cut through the problem and made the EtherCard library work for you. Congratulations!

This is a typical case: code was written for a specific purpose, but a small change makes it more widely usable.

I picked this example, because it is likely that we’ll update the EtherCard library soon to support different pins at runtime. That’ll allow you to leave the EtherCard library alone and set things up in your sketch, i.e. at run time. Note that it’s always a trade-off – some things usually become more general, but only if there’s a need. You can’t write code to cover all the cases. Well, you could, but then you end up with a nightmare (I won’t give examples).

So sometime in the near future, there will be an update to the EtherCard library. That’s when trouble starts…

Because there are now three different versions of the EtherCard library, and you’re about to create a fourth:

  • the original EtherCard library on GitHub
  • your copy of that library, on your disk, with a tiny change made to it
  • the updated! improved! whiter-than-white! new EtherCard library on GitHub
  • your copy of the new library, which you’re now forced to update and adjust again

There’s more going on: in this particular case, that update made your small change superfluous. But now the API of the EtherCard library has changed, so you need to change each of your sketches using the EtherCard library.

It’s not hard to see how these little details can turn into a nightmare. You build something, and you end up having to keep track of all sorts of dependencies and changes on your disk. Worse: everyone is wasting their time on this!

Version control systems such as Git can do a lot more, but the key benefit they offer can be summarized as:

You can easily manage THREE different versions of all the source files: the OLD, the NEW, and YOURS.

So if all you want is to use open source code and stay up to date, then Git lets you do that – even if you make changes yourself in any of those files. And it gets even better: if your changes have nothing to do with the changes made to the original code, then all you need to do is to issue the “git pull” command, and the remote changes will be folded into your local copy of the source file – chances are that you can simply re-compile, upload, and keep on running with the latest updates!

If your local changes do interfere or overlap with the changes made to the original code, then “git pull” will detect this and stop rather than alter your files (your changes always take precedence). This is called a “conflict” – and Git offers several (maybe too many) ways to deal with it.

For JeeLabs, all the code is now maintained on a free and public web site called GitHub. You don’t have to have an account there to browse or download any source code. You can not only see all the latest code, you can also look at any previous changes, including the complete history from the moment the first version was placed on GitHub.

Here’s an example:

You’ll see a comment about the change, the list of files affected, and for each file an exact summary of the changes and some nearby lines for context. This change has been recorded forever. If logged in, you could even discuss it.

Even if you never intend to share your own changes and code, you can see how this resource lets you examine the code I’m providing, the changes I’m making to it over time, and the affected files. You can view a source file as it was at any point in time (wouldn’t it be great if this also included the future?).

Here’s one more gem: say you are looking at the enc28j60.cpp file, and wondering when what change was made. There’s a way to see where each change came from (and to look at that specific change in a larger context, for example). This annotated source file is generated by “git blame” – follow this link as an example.

So what VCS, Git, and GitHub offer is really no less than a Time Machine. But even if you don’t ever use that, Git gives you the ability to keep your code in sync with the changes happening in the original source code repository. And if something gets messed up, you can revert to an older version of the code – without having copies on your disk, which need to be given meaningful names so you can find them back. The “git diff” feature alone is worth it.

Wanna get started? Terrific. Cast your doubts aside, and keep in mind that you need a little time to get used to it.

How to start using Git (and GitHub) depends a bit on your operating system: on Windows, the best option might be to install TortoiseGit (see also this intro, which goes a bit deeper). On the Mac, it’s included with the Xcode developer tools, but there’s actually a great app for it called GitHub for Mac, and on Linux it’s probably one of the best supported packages out there, since it came from the same guy who made Linux possible in the first place…

With Git, you can stay up to date, you’ll never lose changes, and you never have to guess who changed what, when, where, why, or how. Is there a new update? Try it! Doesn’t work as expected? Back out! It’s that simple.

Developing a low-power sketch

In AVR, Software on Dec 13, 2011 at 00:01

As you’ll know if you’ve been reading this weblog for more than a few nanoseconds, I put a lot of time and effort into making the ATmega-based JeeNode use as little power as possible – microwatts, usually.

In the world of ultra-low power, the weakest link in the chain will determine whether your sketch runs for days, weeks, months, or years… low power can be a surprisingly elusive goal. The last microcoulombs are the hardest!

But it’s actually quite easy to get some real savings with only a little effort. Here are some things to avoid:

  • Don’t optimize in the wrong place – it’s tempting to start coding in a way which seems like a good idea in terms of power consumption, although more often than not the actual gains will be disappointing.

  • Don’t leave the lights on – the ATmega is amazingly easy to power down, which instantly reduces its consumption by several orders of magnitude. Make sure you do the same for every major power consumer.

  • Don’t just sit there, waiting – the worst thing you can do in terms of power consumption is to busy-wait with the CPU running at full speed. Unfortunately, that’s precisely what the Arduino runtime’s delay() and delayMicroseconds() calls do.

Ok, with this out of the way, I’ll describe a very simple way to get power consumption down – and hence battery lifetimes up (waaay up in fact, usually).

The trick is to use a convenience function from JeeLib (a.k.a. the Ports library). It’s in the “Sleepy” class, and it’s called loseSomeTime(). So if you have something like this in your code:

    delay(1000);

… then you should replace it with this:

    Sleepy::loseSomeTime(1000);
You also need to include the following code at the top of your sketch to avoid compilation and run-time errors:

    #include <JeeLib.h>

    ISR(WDT_vect) { Sleepy::watchdogEvent(); }

As the name indicates, the timing is not exact. That’s because the ATmega is put into a power down mode, and then later gets woken up by the watchdog timer (this is hardware, part of the ATmega). This timer can be several percent off, and although the milliseconds timer will automatically be adjusted by loseSomeTime(), it won’t be as accurate as when updated by the crystal or the ceramic resonator often used as system clock.

The second issue with the watchdog is that it can only delay in multiples of ≈ 16 ms. Any call to loseSomeTime() with an argument less than 16 will cause it to return immediately.

Furthermore, loseSomeTime() can only work with argument values up to 60,000 (60 seconds). If you need longer delays, you can simply create a loop – e.g. to wait 120 minutes in ultra-low power mode, use this:

    for (byte i = 0; i < 120; ++i)
        Sleepy::loseSomeTime(60000);

One last aspect of loseSomeTime() to be aware of, is that it will abort if an interrupt occurs. This doesn’t normally happen, since the ATmega is shut down, and with it most interrupt sources. But not all – so if loseSomeTime() returns prematurely, it will return 0. Normally, it returns 1.

The trade-off of loseSomeTime() is power consumption (system clock and timers are shut down) versus accuracy.

But the gains can be huge. Even this simple LED blink demo will use about 10 mA less (the ATmega’s power consumption while running at 16 MHz) than a version based on delay() calls:

    void loop () {
      digitalWrite(4, 1);
      Sleepy::loseSomeTime(500);   // LED on for half a second
      digitalWrite(4, 0);
      Sleepy::loseSomeTime(500);   // then off for half a second
    }

To reduce this even further, you could shorten the blink ON time as follows:

    void loop () {
      digitalWrite(4, 1);
      Sleepy::loseSomeTime(50);    // much shorter ON time
      digitalWrite(4, 0);
      Sleepy::loseSomeTime(950);
    }

The LED may be somewhat dimmer, but the battery will last 10x longer vs. the original delay() version.

Battery-powered operation isn’t hard, you just have to think a bit more about where the energy is going!

Inside the RF12 driver – part 3

In Software on Dec 12, 2011 at 00:01

After part 1 and part 2, I’d like to conclude with a description of how everything fits together.

The main functions in the public API are:

  • uint8_t rf12_initialize (uint8_t id, uint8_t band, uint8_t g)
    Sets up the driver and hardware, with the RFM12B in idle mode and rxstate set to TXIDLE.

  • uint8_t rf12_recvDone ()
    If in receive mode, check whether a complete packet has been received and is intended for us. If so, set rxstate to TXIDLE, and return 1 to indicate there is a fresh new packet. Otherwise, if we were in TXIDLE mode, enable the RFM12B receiver and set rxstate to TXRECV.

  • uint8_t rf12_canSend ()
    This only returns true if we are in TXRECV mode, and no data has been received yet, and the RFM12B indicates that there is no signal in the air right now. If those are all true, set rxstate to TXIDLE and return 1.

  • void rf12_sendStart (uint8_t hdr, ...)
    This function may only be called right after rf12_recvDone() or rf12_canSend() returned true, indicating that we are allowed to start transmitting. It turns the RFM12B transmitter on and sets rxstate to TXPRE1.

  • void rf12_sendWait (uint8_t mode)
    Can be called after rf12_sendStart() to wait for the completion of the transmission. The mode arg can be used to do this waiting in various low-power modes, to minimize the ATmega power consumption while the RFM12B module is drawing a relatively large current (ca. 25 mA at full power). This call is optional – either way, the RF12 driver will return to TXRECV mode after the transmission.

Note that this design places all time-critical code inside the RF12 driver. You don’t have to call rf12_recvDone() very often if you’re busy doing other things. Once a packet has been received, the RF12 driver will stop all further activity and prevent the current contents of the packet buffer from being overwritten.

Likewise, since you have to ask permission to send, the logic is such that the packet buffer will only be filled with outgoing data at a time when it is not being used for reception. Thus, a single packet buffer can be used for both.

The main thing to keep in mind is that rf12_recvDone() needs to be called as the main polling mechanism, even if you don’t care about incoming packets. You can’t simply call rf12_canSend() forever, hoping that it will return 1 at some point. Instead, use this logic:

    while (!rf12_canSend())
        rf12_recvDone();

This code ignores all incoming packets (i.e. the cases where rf12_recvDone() returns true), but the call is still needed to keep the finite state machine in the driver going.

So much for the general structure and flow through the RF12 driver. To find out more about the protocol and how the different header fields are used, see this post and this post. And lastly there’s the RF12 library‘s home page.

Inside the RF12 driver – part 2

In Software on Dec 11, 2011 at 00:01

Yesterday, I described the big picture of the RF12 driver: a public polling-type API with a private interrupt-driven finite state machine (FSM). The FSM turns the RF12 driver into a background activity for the ATmega.

But first, let’s take a step back. This is how you would write code to send out a packet without an FSM:

  • set up the RFM12B for transmission
  • feed it the first byte to send (i.e. 0xAA, the start of the preamble)
  • wait for the RFM12B to request more data
  • feed it more bytes, as long as there is more to send
  • then terminate transmit mode and reset the RFM12B mode to idle

Here’s an old sketch which does exactly this, called rf12xmit.pde.

The problem with this code is that it keeps the ATmega occupied – you can’t do anything else while this is running. For sending, that wouldn’t even be such a big deal, but for reception it would be extremely limiting, because you’d have to poll the RFM12B all the time to avoid missing packets. Even a simple delay() in your code would be enough to miss that first incoming byte – keep in mind that although packets don’t come in all the time, when they do there is only about 160 µs to deal with each incoming byte of data.

The solution is to use interrupts. It’s for this same reason that the millis() clock and the Serial handler in the Arduino runtime work with interrupts. You don’t want to constantly check them in your code. With interrupts, you can organize your own code however you want, and check whether a new RFM12B packet was received when you are ready for it. The simplest way is to add a call to rf12_recvDone() in the loop() body, and then simply make sure that the loop will be traversed “reasonably often”.

With interrupts, the RF12 driver is no longer in control of when things happen. It can’t wait for the next event. Instead, it gets activated when there is something to do – and the way it knows what the next step needs to be, is to track the previous step in a private state variable called rxstate.

As mentioned yesterday, the RF12 driver can be doing one of several different things – very roughly summarized as: waiting for reception of data, or waiting to send the next data byte. Let’s examine both in turn:

Waiting for reception: rxstate = TXRECV

The most common state for the RF12 driver is to wait for new data bytes, with the RFM12B in receive mode. Each interrupt at this point will be treated as a new data byte. Recall the packet format:

[diagram: RF12 packet format]

The good news is that the RFM12B will internally take care of the preamble and SYN bytes. There won’t be any interrupts until these two have been correctly received and decoded by the RFM12B itself. So all we have to do is store incoming bytes, figure out whether we have received a full packet (by looking at the length byte in the packet), and verify the checksum at the end.

All of this happens as long as rxstate is set to TXRECV.

The driver doesn’t leave this mode when done; it merely shuts down the RFM12B receiver. Meanwhile, each call to rf12_recvDone() checks whether the packet is complete. If so, the state is changed to TXIDLE and we return 1 to indicate that a complete packet has been received.

TXIDLE state is a subtle one: the driver could start sending data, but hasn’t yet done so. If the next API call is a call to rf12_recvDone(), then rxstate will be reset to TXRECV and we start waiting for incoming data again.

Sending bytes: rxstate = TXPRE1 .. TXDONE

I’ll skip the logic of how we enter the state TXPRE1 for now, and will just describe what happens next.

In these states, interrupts are used to keep things moving. Whenever an interrupt comes in, the driver decides what to do next, and adjusts the state. There’s a switch statement in rf12_interrupt() which looks like this:

[screenshot: the switch statement in rf12_interrupt()]

It’s fairly compact, but the key is the “rxstate++” on the first line: the normal behavior is to do something and move to the next state. So the basic effect of this code is to proceed through each state in sequence:

    TXPRE1 → TXPRE2 → TXPRE3 → TXSYN1 → TXSYN2 → (negative payload states) → TXCRC1 → TXCRC2 → TXTAIL → TXDONE

If you look closely, you’ll see that it corresponds to the packet layout above. With one exception: after TXSYN2, the rxstate variable is set to a negative value. This is a special state where payload data is sent out from the buffer. In this context, the two header bytes are treated as payload, and indeed they “happen” to be placed in the data buffer right in front of the rest of the data, so the driver will send header bytes and payload data in the same way:

[screenshot: the payload-sending code in rf12_interrupt()]

Ignore the details, just note that again there is the “rxstate++” in there to keep advancing the state.

At some point, incrementing these negative states will lead to state 0, which was cunningly defined to be the state TXCRC1. Now the switch statement takes over again, and appends the CRC etc to the outgoing data.

Finally, once state TXDONE has been reached, the interrupt code turns off the RFM12B’s transmit mode, which also means there will be no more interrupts coming in, and bumps the state one last time to TXIDLE.

This concludes today’s story. What you’re looking at is a fairly conventional way to write an interrupt-driven device driver. It may be full of trickery, but the logic is nevertheless quite clean: a public API to start things going and to pick up important state changes (such as having received a complete packet), while interrupts push through the states and “drive” the steps taken at each point.

Each interrupt does a little dance: where was I? oh yes, now do this and call me again when there is new work.

In a previous life I wrote a few Unix (not Linux) kernel drivers – this was for a PDP11 at the time. It’s amazing how the techniques and even the programming language are still the same (actually it’s more powerful now with C++). The difference is the price of the hardware – 10,000 x cheaper and affordable by anyone!

Tomorrow, I’ll conclude with some more details about how everything fits together.

Inside the RF12 driver

In Software on Dec 10, 2011 at 00:01

This is the first of 3 posts about the RF12 library which drives the RFM12B wireless modules on the JeeNode, etc.

The RF12 driver is a small but reasonably complex bit of software. The reason for this is that it has some stringent time constraints, which really require it to be driven through interrupts.

This is due to the fact that the packet data rate is set fairly high to keep the transmitter and receiver occupied as briefly as possible. Data rate, bandwidth, and wireless range are inter-related and based on trade-offs. Based on some experimentation long ago, I decided to use 49.2 kbaud as data rate and a 134 kHz bandwidth setting. This means that the receiver will get one data byte per 162 µs, and that the transmitter must be fed a new byte at that same rate.

With an ATmega running at 16 MHz, interrupt processing takes about 35 µs, i.e. roughly 20% of the time. It works down to 4 MHz in fact, with processor utilization nearing 100%.

Even with the ATtiny, which has limited SPI hardware support, it looks like 4 MHz is about the lower limit.

So how does one go about creating an interrupt-driven driver?

The first thing to note is that interrupts are tricky. They can lead to hard-to-find bugs, which are not easy to reproduce and which happen only when you’re not looking – because interrupts won’t happen exactly the same way each time. And what’s worse: they interact with the way compiled code works, requiring the use of the “volatile” qualifier to prevent compiler optimizations from caching too much. Just as with threads – a similar minefield – you need to prepare for all problems in advance and deal with weird things called “race conditions”.

The way the RF12 driver works, is that it creates a barrier between the high-level interface (the user callable API), and the lower-level interrupt code. The public calls can be used without having to think about the RFM12B’s interrupts. This means that as far as the public API is concerned, interrupt handling can be completely ignored:

[diagram: RF12 driver structure]

Calling RF12 driver functions from inside other interrupt code is not a good idea. In fact, performing callbacks from inside interrupt code is not a good idea in general (for several reasons) – not just in the RF12 driver.

So the way the RF12 driver is used, is that you ask it to do things, and check to find out its current state. All the time-critical work will happen inside interrupt code, but once a packet has been fully received, for example, you can take as much time as you want before picking up that result and acting on it.

The central RF12 driver check is the call:

    if (rf12_recvDone()) ...

From the caller’s perspective, its task is to find out whether a new packet has been received since the last call. From the RF12’s perspective, however, its task is to keep going and track which state the driver is currently in.

The RF12 driver can be in one of several states at any point in time – these are, very roughly: idling, busy receiving a packet, packet pending in buffer, or busy transmitting. None of these can happen at the same time.

These states are implemented as a Finite State Machine (FSM). What this means, is that there is a (private) variable called “rxstate“, which stores the current state as an integer code. The possible states are defined as a (private) enum in rf12.cpp (but it can also have a negative value, this will be described later).

Note that rxstate is defined as a “volatile” 8-bit int. This is essential for all data which can be changed inside interrupt code. It prevents the compiler from applying certain optimizations. Without it, strange things happen!

So the “big picture” view of the RF12 driver is as follows:

  • the public API does not know about interrupts and is not time-critical
  • the interrupt code is only used inside the driver, for time-critical activities
  • the RF12 driver is always in one of several well-defined “states”, as stored in rxstate
  • the rf12_recvDone() call keeps the driver going w.r.t. non time-critical tasks
  • hardware interrupts keep the driver going for everything that is time-critical
  • “keeping things going” is another way of saying: adjusting the rxstate variable

In a way, the RF12 driver can be considered as a custom single-purpose background task. It’ll use interrupts to steal some time from the running sketch, whenever there is a need to do things quickly. This is similar to the milliseconds timer in the Arduino runtime library, which uses a hardware timer interrupt to keep track of elapsed time, regardless of what the sketch may be doing. Another example is the serial port driver.

Interrupts add some overhead (entering and exiting interrupt code is fairly tedious on a RISC architecture such as the ATmega), but they also make it possible to “hide” all sorts of urgent work in the background.

In the next post, I’ll describe the RF12 driver states in full detail.

PS. This is weblog post number 900 ! (with 3000 comments, wow)

RF12 power optimization

In AVR, Hardware, Software on Dec 1, 2011 at 00:01

Here’s a small evolutionary change to the RF12 driver, to squeeze one more drop of power consumption out of it.

This is the code for which I’m trying to optimize power consumption:

[screenshot: the sketch’s transmit loop]

The Sleepy::loseSomeTime() call is the big power saver. It puts the ATmega into a total power down mode, except for the watchdog timer, which is used to get it back out of this comatose state. So we’re sleeping for 10 seconds.

Around it are the rf12_sleep() calls needed to put the RFM12B into low-power mode as well.

And lastly, the rf12_sendWait(3) call does something pretty nifty: it puts the ATmega into full power down mode between each byte sent to the RFM12B while transmitting. This requires a non-standard fuse setting in the ATmega, so that it wakes up out of power down within a few clock cycles – it only works with a ceramic resonator or the internal clock oscillator, not with a crystal.

The most efficient mode turns out to be with the ATmega running at 8 MHz off the internal RC oscillator (which starts up really fast). With default Arduino’ish settings, you have to use mode 2, i.e. a less extreme power down mode so it can wake up fast enough.

Here’s one complete transmission of a 8-byte payload (scope details to follow tomorrow):


Each vertical division is 5 mA current draw (the voltage drop across a 10 Ω series resistor). You can see the ATmega turn on, drawing 5 .. 9 mA, and the RFM12B in transmit mode consuming about 23 mA.

The red line is pretty advanced stuff: it integrates the current over time – which is equivalent to the amount of charge consumed. At the end of the trace, this leads to a result of 7.22 microcoulombs per packet sent. One way to interpret this (thanks, Jörg – see comments), is that you could send one packet per second on an average current of less than 8 µA.

The “blips” are when the ATmega wakes up and feeds another byte to the RFM12B. In detail (edited image):


These blips take about 32 µs @ 5 mA, which is what it takes to communicate with the RFM12B on the SPI bus at 2 MHz. The reason is that the RFM12B supports a 2.5 MHz maximum SPI rate (it turns out that this limitation only applies to data read back from the RFM12B module).

The blips repeat 18 times, every 162 µs. Why 18? Well, an RF12 data transmission looks as follows:

[diagram: RF12 packet format]

That’s 9 bytes of overhead, plus the 8 payload bytes, plus a trailing interrupt to shut down the RFM12B once the transmission is over.

For completeness – in case you think I can’t count blips: there’s one more activity peak at the start. That’s the call to rf12_canSend(), to check that the RFM12B is ready to start a transmission (which then takes 250 µs to start).

This is probably the limit of what you can push out of power savings with an ATmega (or ATtiny) and an RFM12B. Well, apart from: 1) sending less data, 2) increasing the transmit data rate, or 3) decreasing transmitter power.

When re-using the same code with “rf12_sendWait(2)” and running with normal fuse settings at 16 MHz using a ceramic resonator, it uses only slightly more charge – 7.70 µC, i.e. 7% more. So while this was a nice exercise, it’s not really a major improvement.

All in the name of nanowatt yak power s(h)aving…

Maximum speed wireless transfers

In Software on Nov 26, 2011 at 00:01

Prompted by a question on the forum, I wanted to go a bit into the way you can collect data from multiple JeeNodes as quickly as possible.

Warning: I’m completely disregarding the “1% rule” on 868 MHz, which says that a device should not be sending more than 1% of the time, so that other devices have a good chance of getting through as well (even if they are used for completely unrelated tasks). This rule is what keeps the 868 MHz band relatively clean – no one is allowed to “flood”. Which is exactly what I’m going to do in this test…

Ok, first of all note that all devices on the 868 MHz wireless ISM band have to share that frequency. It only works if at most one device is transmitting at a time. Many simple OOK transmitters, such as weather sensor nodes, don’t do that: they just send out their packet when they feel like it. Fortunately, most of them do so relatively infrequently, once every few minutes or so. And due to the 1% rule, most transmissions will be ok – since the 868 MHz band is available most of the time.

This changes when you start to try and push as much information across as you can. With one sender, it’s easy: just ignore the rule and send as much as you can. With the default RF12 settings, you should be able to get a few hundred small packets per second across. With occasional loss due to a collision with another sender.

But how do you get the maximum amount of data across from say three different nodes?

It won’t work to let them all send at will. It’s also a bit complicated to make them work in perfect sync, with each of them keeping accurate track of time and taking turns in the right order.

Here’s a simpler idea to “arbitrate media access”, as this is called: let the central node poll each of the remote nodes, and let each remote node then send out an ACK with the “requested” data only when asked.

I decided to give it a go with two simple sketches. One is the poller, which sits at the center and tries to obtain as many packets as it can:

[screenshot: the poller sketch]

It cycles over each of the remote node ID’s, sends them a packet, and waits briefly for a reply to come in. Note that the packet sent out is empty – it just needs to trigger the remote node to send an ACK with the actual payload.

The remote nodes each run a copy of the pollee sketch, which is even simpler:

[screenshot: the pollee sketch]

They just wait for an empty incoming packet addressed to them, and reply with the data they want to get across. I just send the node ID and the current time in milliseconds.

Here is the result, with one poller and three pollee’s:

    1: 36974
    2: 269401
    3: 10128
    1: 36992
    2: 269417
    3: 10145
    1: 37009
    2: 269434
    3: 10163

As you can see, each node gets one packet across about once every 17 ms (this will slow down if more data needs to be sent). So that’s 6 short packets flying through the air every 17 ms, i.e. ≈ 350 packets per second.

There are ways to take this further, at the cost of extra complexity. One idea (called TDMA), is to send out one poll packet to line up the remote’s clocks, and then have them send their payload a specific amount of time later. IOW, each node gets its own “time slot”. This reduces the 6 packets to 4, in the case of 3 remote nodes.

No more collisions, but again: this will block every other transmission attempted on the 868 MHz band!

JeeMon for early birds

In Software on Nov 25, 2011 at 00:01

Time to dive in. Let’s create a development setup for JeeMon on Mac OSX, in a new folder called “jee”:

    cd ~/Desktop
    git clone git:// jee
    cd jee

That gives me the following files:

[screenshot: directory listing of the new “jee” folder]

There is more here than strictly needed for production use – just ignore most of this for now. The main bits are “jeemon” and the “kit/” sub-folder with all the JeeRev code in it.

On Linux, the commands will be almost the same, but you’ll need to get a different JeeMon zip file.

Since I don’t use Windows myself, I’ll have to rely on help / support from others (yes, you!) to get the details right. Thanks to @tankslappa, here’s a first report of an install on XP SP3:

cd %homepath%\desktop
“C:\Program Files\Git\bin\git” clone git:// jee
Cloning into jee…
remote: Counting objects: 1810, done.
remote: Compressing objects: 100% (670/670), done.
remote: Total 1810 (delta 1187), reused 1742 (delta 1119)
Receiving objects: 100% (1810/1810), 1.45 MiB | 87 KiB/s, done.
Resolving deltas: 100% (1187/1187), done.
cd jee

Then unzip using your favorite zip manager, and you should be good to go (I’ll optimize further, one day).

Note: on Mac OSX and Linux, if “.” is not in your path, you’ll need to add it or type “./jeemon” instead of “jeemon” everywhere it is mentioned below.

At this point, JeeMon is ready for use. There are a few built-in commands – here’s a quick sanity check:

    jeemon env general

The output provides some general details about the current runtime environment:

           JeeMon = v1.5
          Library = /Users/jcw/Desktop/jee/kit
         Encoding = utf-8
        Directory = /Users/jcw/Desktop/jee
       Executable = /Users/jcw/Desktop/jee/jeemon
      Tcl version = 8.6b1.1

If you got this far, then everything is working as intended. If not: you’ve hit a bug – please get in touch.

But the normal procedure is to simply launch it:

    jeemon

If this is the first time, you’ll get something like this:

    No application startup code found.
    21:22:29.157      app is now running in first-time configuration mode
    21:22:29.189      web server starting on

Where “first-time configuration mode” means that JeeMon didn’t find a “main.tcl” rig, which is normally used to start up. To get past this step, you need to point your web browser to the indicated URL, which’ll show this page:

[screenshot: the first-time configuration page]

There’s not much point in selecting anything other than “YES” at this stage. This creates a 3-line “main.tcl” file:

    # default JeeMon startup file
    Log main.tcl {in [pwd]}
    Jm needs HomeApp

From now on, JeeMon will start up using the built-in “HomeApp” rig when there are no command-line args.

The next steps depend on what you’re after – you can either dive into Tcl programming and explore how JeeRev (i.e. the kit/ area) is structured, or you can try out some examples and get a more top-down impression of it all.

To explore Tcl, the thing to keep in mind is that JeeMon will act as a standard no-frills Tcl 8.6 programming environment when launched with a source file as argument (just like tclsh and tclkit). Here’s how to make a hello-world demo – create a file called “hello.tcl” with your favorite text editor and put the following code in it:

    puts "hello, world"

Then run that code using the command “jeemon hello.tcl“. You can probably guess what it does…

If you want to use a convenient interactive shell with scrollback, history, and debugging support, download the TkCon script and launch it using “jeemon tkcon.tcl” – this uses Tk to create a mini IDE.

For Tcl manuals, books, and demos, see To really dive in, check out this tutorial or this.

But chances are that you just want to get an idea of what JeeMon does in the context of Physical Computing, House Monitoring, or Home Automation. I’ll describe just two simple examples here, and will point to the JeeMon homepage for the rest. Note that everything described below is really part of JeeRev, i.e. the standard library used by JeeMon – or to put it differently, by the source files located inside the “kit/” folder.

First, let’s launch a trivial web server (you can see the code at examples/hello-web/main.tcl):

    jeemon examples/hello-web/

You’ll see a log output line, similar to this:

    18:23:46.744      web server starting on

JeeMon is now running in async event mode, and keeps running until you force it to stop (with ^C, kill, whatever). Leave it running and go to the indicated URL in your browser. You’ll be greeted with a web page generated by JeeMon. Hit refresh to convince yourself that it really works.

Now quit JeeMon and check out the files in the “examples/hello-rf12/” folder. This example connects to a JeeNode/JeeLink running the RF12demo sketch, and displays incoming packets via its web server. But before launching JeeMon, you have to tell it what serial interface (COM or tty port) to use: copy “config-sample.txt” to “config.txt” and edit it to match your setup. Make sure the JeeNode/JeeLink is plugged in, then start JeeMon:

    jeemon examples/hello-rf12/

The output will look something like this:

    18:37:57.166      web server starting on
    18:37:58.154    rf12? [RF12demo.8] A i1* g5 @ 868 MHz 

And two dozen more lines which you can ignore. Now go to that same URL again with your web browser. If your config file defines the proper net group and you have some JeeNodes sending out packets in that net group, you’ll see them in your browser after a while. This is what I got within a few minutes, here at JeeLabs:

    #1 18:42:21 - OK 19 186 200 8 1 226 2 32 0 0 213 81 33
    #2 18:42:31 - OK 19 186 200 8 1 226 2 32 0 0 213 81 33
    #3 18:42:44 - OK 3 132 43 9 0
    #4 18:42:51 - OK 19 186 200 8 1 226 2 32 0 0 213 81 33
    #5 18:43:11 - OK 19 186 200 8 1 226 2 32 0 0 213 81 33
    #6 18:43:21 - OK 19 186 200 8 1 226 2 32 0 0 213 81 33
    #7 18:43:29 - OK 6 1 95 212 0

Now might be a good time to look at the code in examples/hello-rf12/main.tcl – to see how it all works.

Wanna experiment? Easy: quit JeeMon, create a new directory “blah“, and copy the files from the example to it. Edit them as you like, then start JeeMon using “jeemon blah“. Go ahead, try it! – the worst that can happen is that you get error messages. Usually, error tracebacks will refer to JeeRev files in the “kit/” folder.

This concludes my intro for those who want to get their feet wet with JeeMon. We’ve only scratched the surface!

I’ll work out more examples on the JeeRev page as I start building up my own setup here at JeeLabs. If you want to try things and run into trouble (there will be rough edges!) – here are two places to post problems and bugs:

Feel free to ask and post anything else on the forum. If it gets out of hand, I’ll set up a separate area for JeeMon.

A quick note about the Shop: I’m running into some shortages across the board (such as wire jumpers), which also affect the JX and WSP packs. All the missing bits are on order, but some of this might not get in before early December. My apologies for the delay and inconvenience! -jcw

JeeRev sits under the hood

In Software on Nov 24, 2011 at 00:01

Here’s another post about the JeeMonBusRev trilogy, i.e. the JeeMon project I started a few years ago.

First, let me go into why there are still two pieces and two names left in this project:

  • JeeMon is two things at once: the name of the entire system, and the name of the executable file to start it up. This exe file contains a general-purpose set of tools – it has no clue about Physical Computing or Home Automation. It could as well be used to build an accounting system or a CAD system… if you wanted to.

  • JeeRev is the name of the software which extends the JeeMon executable for domain specific use, i.e. its run time library. This is where the code lives which does know very specifically about Physical Computing and Home Automation. This is also where all the (software development) action is.

Do these names stand for anything? Yeah, probably – but I’ve stopped agonizing about cool acronyms.

JeeRev is used by JeeMon in one of two ways:

  • Development mode – as a “kit/” folder full of source files, which JeeMon will automatically detect and use. This is what you get when you grab the code from GitHub. Easy to browse, tweak, and keep up-to-date.

  • Production mode – as a single file called “jeemon-rev“. Normally, JeeMon will automatically download the latest official version of that file if it doesn’t exist yet (assuming there’s no “kit/” folder putting it in development mode). End users (i.e. normal people!) don’t care about JeeRev. They just download and launch JeeMon.

The precise startup process is fully configurable – it’s documented in JeeMon’s README file on GitHub.

I suggest forgetting about production mode for now. It’s not going to matter for quite some time, even though that aspect has had a major effect on the “structural design” of JeeMon. For now, everything will be happening in development mode, with JeeRev fully exposed – like a car kept in the garage with the hood open, so to speak.

To expose what’s in JeeRev, I have to describe one more mechanism – which is the way the different pieces of an application work together in JeeMon. Since I don’t know how familiar you are with recent implementations of Tcl, I’ll start from scratch and go through the basics – it’ll be easy to follow if you know some programming language:

Commands – everything in Tcl is based on commands. The line < puts "hello, world" > (without the <>’s) is treated as a call to the command puts with one argument. The difference with older versions of Tcl, is that commands can also be namespaces (it’s actually the other way around, and they’re called “ensembles”, but let’s not be picky for now). That means that a command such as the following can mean different things:

    Interfaces serial connect $device 57600

The simplest case would be that there is a command called “Interfaces” which gets called with 4 arguments. So you might expect to see a command definition somewhere in the code, which looks like this:

    proc Interfaces {type action device baudrate} {

But there is another interpretation in Tcl 8.6 (and 8.5): it can also be used to call a “serial” command defined in the “Interfaces” namespace. And in this particular case, it’s in fact nested: a command called “connect” defined in the “serial” namespace, which in turn is defined in the “Interfaces” namespace. Perhaps like this:

    namespace eval Interfaces {
      namespace eval serial {
        proc connect {device baudrate} {

Or, more succinctly:

    proc Interfaces::serial::connect {device baudrate} {

This is like writing “Interfaces.serial.connect(…)” in Python, Ruby, or Lua. But not quite – it’s slightly more general, in that callers don’t need to know how code is structured to be able to perform certain operations. It also allows you to alter the internal organization of the code later, without having to change all the calls themselves. But it can also be a bit more tricky, because you can’t tell from the appearance of a call where the actual code resides – you have to follow (or know) the chain in the code.

This is a fairly important concept in Tcl. The command “a b c d e” can lead to very different code paths (even in the same app, if things are added or re-defined at run-time), depending on whether and how “a”, “b”, etc are defined. At some point, the command name lookup “stops” and the arguments “begin”.

Apologies for the lengthy exposé. I really had to do this to introduce the next concept, which is…

Rigs – this is a mechanism I’ve been refining for the past few years now, which unifies the above nesting mechanism with the way code is arranged in source files. To continue the Interfaces example: there’s a file in JeeRev called serial.tcl, and inside is – as you would expect – a “proc connect …” definition.

The point is: that “serial.tcl” source file is located in a folder called “Interfaces”. So the nesting of command detail is reflected in the way files and folders are organized inside the “kit/” area, i.e. JeeRev.

If you’re familiar with Python, you’ll recognize the module mechanism. Yep – “rigs” are similar (not identical).

Some rig conventions: 1) definitions starting with lowercase are public (i.e. “serial” and “connect”), whereas everything else is considered private to that rig, 2) “child” rigs can access the definitions in their “parent” rigs, and 3) if you don’t like the way a built-in rig works, you can supply your own and completely override it.

If you look at this in object-oriented terms: rigs are essentially singletons, like Python / Ruby / Lua modules. Speaking of objects: the result of the “Interfaces serial connect …” call is a connection object – Tcl 8.6 finally includes a single OO framework (there used to be tons!), so things are becoming a lot more uniform in Tcl-land.

Thanks for reading this far, even though I haven’t said anything about what’s inside JeeRev yet.

But with this out of the way, that’s easy: JeeRev consists of a collection of rigs, implementing all sorts of features, such as serial communication (serial), a web server (Webserver), logging (Log), config files (Config), and the very important “state manager” (State) which provides an event-based publish / subscribe mechanism for representing the state of sensors and actuators around the house. There is a rig which manages automatic rig loading for JeeMon (Jm), and one which has some utility code I didn’t know where else to put (Ju).

There’s a rig called “app”, which orchestrates startup, general flow of events, and application-wide event “hooks”. This hook mechanism is similar to the Drupal CMS (written in PHP) – because good ideas deserve to prosper.

And lastly, if you supply a rig called “main”, then this will become the first point where your code can take control.

I can’t go into much more detail in this post, but I hope it gives an idea of the basic design of JeeMon + JeeRev.

Tomorrow, I’ll close this series with details about getting started with JeeMon – for those who care and dare!

What’s in the yellow box?

In Software on Nov 23, 2011 at 00:01

So much for the big picture – but what’s this JeeMon?

JeeMon yellow

JeeMon ties stuff together – it can talk to serial and network interfaces using event-driven asynchronous I/O. This means that it can talk to lots of interfaces at the same time (like what the Twisted framework does in Python, but more deeply integrated into the system).

JeeMon collects data – it includes a column-oriented embedded and transactional database. This means that it can store lots of measurement data in a very compact format, using variable int sizes from 1 to 64 bits (usually an order of magnitude smaller than SQL databases, which add extra overhead due to their generality and are less suitable for repetitive time-series data).

JeeMon talks to browsers – it has an efficient embedded co-routine based web-server. This means that it can be used with web browsers, delivering rich user interfaces based on JavaScript, HTML 5, CSS 3, Ajax, and SSE. By pushing the logic to the browser, even a low-end JeeMon server can create a snappy real-time UI.

JeeMon likes to network – with the built-in event-based background network support, it’s easy to connect to (or provide) a range of network services. This lets you do remote debugging and create distributed systems based on JeeMon, as well as extract or feed information to other network-based systems.

JeeMon has a pretty face – on desktop machines running Windows, Mac OSX, or Linux, the “Tk” toolkit is included to quickly create a local user interface which does not require a browser. This means that you can create distinctive front panels showing real-time data or perhaps extensive debugging information.

JeeMon is dynamic – it is based on the 100% open source Tcl scripted programming language, which is used by several Fortune 100 companies (if that matters to you). This means you can work in a mature and actively evolving programming environment, which gets away from the usual edit-compile-run cycle. Making changes in a running installation means you don’t have to always stop / restart the system during development.

JeeMon is general-purpose – the structure of JeeMon is such that it can be used for a wide range of tasks, from quick command-line one-offs to elaborate long-running systems running in the background. This is possible because all functionality is provided as modules (“rigs”) which are loaded and activated on-demand.

Technically, JeeMon consists of the following parts:

  • the Tcl programming language implementation
  • (on desktop versions) the Tk graphical user interface
  • support for serial interfaces, sockets, and threads
  • the Metakit embedded database (by yours truly)
  • co-routine based wibble web server, by Andy Goth
  • upgrade mechanism using Starkits (by yours truly)
  • domain-specific code, collectively called JeeRev

That last item is the only part in this whole equation which is in flux. It creates a context aimed specifically at environmental monitoring and home automation. This is the part where I keep saying: Wait, it’s not ready yet!

Tomorrow, I’ll describe what’s inside this “JeeRev” …

JeeMon? JeeBus? JeeRev?

In Software on Nov 22, 2011 at 00:01

For several years now, I’ve been working on a software project on the side, called JeeMon. Then something called JeeBus was created, and then JeeRev came along.

That’s a lotta names for a pile of unfinished code!

I’d like to explain what it’s all about, and where I want to take this (looking back is not so interesting or useful).

The Big Picture ™

With monitoring and control in the home and <cough> Home Automation </cough>, the goal is quite simple, really: to figure out what’s going on in the house and to make things happen based on events and rules.

In a nutshell: I want energy consumption graphs, and curtains which close automatically when it gets dark.

There are no doubt roughly one million different ways to do this, and ten million implementations out there already. JeeMon adds one more to the mix. Because I’ve looked at them all, and none of the existing ones suit me. Call it “Not Invented At JeeLabs” if you like, but I think I can help design / build a better system (please remind me to define “better” in a future weblog post, some day).

I’ve pondered for some time on how to present the big picture. Threw out all my initial attempts. This says it all:

Jeemon context

Let’s call the yellow box… JeeMon ! Forget about JeeBus. Forget JeeRev (it’s still there, under the hood).

JeeMon is the always-on software “switchboard” which talks to all this hardware. It can run on a dedicated little server box or on the router. The devices can be connected via serial, USB, Ethernet, WiFi, FSK, OOK, whatever.

Functionality & Features

The primary tasks of JeeMon are to …

  1. track the state of sensors around the house
  2. display this real-time state via a built-in web server
  3. interface to remotely operated devices around the house
  4. present one or more web pages to control these devices
  5. act as a framework which is easy to install, configure, use, and extend

Everything is highly modular: JeeMon is based on drivers and other extensions – collectively called “features”.

Some other tasks which are a natural fit for JeeMon:

  • configuration of each sensor, device, and interface involved
  • collecting and saving the full history of all sensor data and control actions
  • presenting a drill-down interface to all that historical data
  • managing simple rules to automatically make things happen
  • access control, e.g. to prevent “junior” from wreaking havoc

That’s about it. It’s that simple, and it needs to be built. JeeMon has been “work in progress” for far too long.

Wait… make that 470 µF

In Hardware, Software on Nov 16, 2011 at 00:01

Yesterday’s post used a 6,800 µF capacitor as charge reservoir for stored energy. But wait, we can do better…

Now that I understand how discharge works out with the JeeNode in transmit mode, it all becomes a lot easier to handle. With a relatively small 470 µF 6.3V cap (still charged at 10 mA for now), I see this behavior:

DSC 2752

It’s running on 1000 x less charge than the 0.47 F supercap I started out with! Here’s the actual sketch:

Screen Shot 2011 11 03 at 01 09 37

First of all, the payload sent is now 8 bytes. Probably a reasonable amount for many uses (such as AC current sensing). Also, I’m now sending a single “a” character at the same time as the rest is starting up, so there’s no need to wait for it – sending will overlap everything else that’s going on. Debugging comes for free, in this case.

What the scope shows, I think, is that the RFM12B needs about 1.6 ms to start a transmission, and that the transmission takes about 3.4 ms. The rf12_canSend() call probably returns true right away, since the RFM12B has just been woken up (this also means there’s probably no “listen before send” carrier detect in this test!).

Let’s zoom in a bit further…

DSC 2753

Ah yes, that actually makes a lot of sense (the channel 1 scale is actually 10 mV/div, not 100, and AC coupled):

  • ≈ 1 ms before “time is up”, the loseSomeTime() code switches to idle mode and draws a bit more
  • the start bit of the “a” is sent out the moment the loseSomeTime() code returns
  • brief high power consumption as the transmission is also set up and started
  • for roughly 2 ms, the RFM12B is initializing, not drawing much current
  • meanwhile, the ATmega is in rf12_sendWait(), in a relatively low-power mode
  • once transmission actually starts, a lot more current flows and the cap discharges
  • note the little bumps “up” – there’s probably a bit of inductance in the circuit!

All in all, the voltage drop is about 0.2 V, which is ok – especially in this setup, i.e. a JeeNode plus regulator.

Now, all that’s left to do is get the charging current as low as possible. I tried a 22 kΩ resistor, i.e. a 1 mA charge current, but that’s too weak right now: the voltage drops off and the node stops functioning. Looks like the JeeNode (yes, JN, not JNµ yet) is not quite in the ultra low-power mode I thought it was.

Oh, well. More work needed, but progress nevertheless!

Running off a 6800 µF cap

In Hardware, Software on Nov 15, 2011 at 00:01

The running on charge post described how to charge a 0.47 Farad supercap with a very small current, which drew only about 0.26 W. A more recent post improved this to 0.13 W by replacing the voltage-dropping resistor by a special “X2” high voltage capacitor.

Nice, but there was one pretty awkward side-effect: it took ages to charge the supercap after being plugged in, so you had to wait an hour until the sensing node would start to send out wireless packets!

As it turns out, the supercap is really overkill if the node is sleeping 99% of the time in ultra low-power mode.

Here’s a test I did, using a lab power supply feeding the following circuit:

JC s Doodles page 21

The resistor is dimensioned in such a way that it’ll charge the capacitor with 10 mA. This was a mistake – I wanted to use 1 mA, i.e. approximately the same as 220 kΩ would with AC mains, but it turns out that the ATtiny code isn’t low-power enough yet. So for this experiment I’m just sticking to 10 mA.

For the capacitor, I used a 6,800 µF 6.3V type. Here’s how it charges up under no load:

DSC 2745

A very simple RC charger, with zener cut-off. So this thing is supplying 3.64 V to the circuit within mere seconds. That’s with 10 mA coming in.

Then I took the radioBlip sketch, and modified it to send out one packet every second (with low-power sleeping):

DSC 2746

The blue line is the serial output, showing two blips caused by this debug code around the sleep phase:

Screen Shot 2011 11 02 at 17 30 23

This not only makes good markers, it’s also a great way to trigger the scope. Keep in mind that the first blip is the ‘b’ when the node comes out of sleep, and the second one is the ‘a’ when it’s about to go sleeping again.

So that’s roughly 10 ms in the delay, then about 5 ms to send the packet, then another 10 ms delay, and then the node enters sleep mode. The cycle repeats once a second, and hence also the scope display refresh.

The yellow line shows the voltage level of the power supply going into the JeeNode (the scale is 50 mV per division, but the 0V baseline is way down, off the display area). As you can see, the power drops about 40 mV while the node does its thing and sends out a packet.

First conclusion: a 6,800 µF capacitor has plenty of energy to drive the JeeNode as part of a sensor network. It only uses a small amount of its charge as the JeeNode wakes up and starts transmitting.

But now the fun part: seeing how little the voltage drops, I wanted to see how long the capacitor would be able to power the node without being “topped up” with new charge.

Take a look at this scope snapshot:

DSC 2747

I turned on “persistence” so that old traces remain on the screen, and then turned off the lab power supply. What you’re seeing is several rounds of sending a packet, each time with the capacitor discharged a little further.

The rest of the time, the JeeNode is in ultra low-power mode. This is where the supply capacitor gets re-charged in normal use. In that last experiment it doesn’t happen, so the scope trace runs off the right edge and comes back at the same level on the left, after the next trigger, i.e. 1 second later.

Neat huh?

The discharge is slightly higher than before, because I changed the sketch to send out 40-byte packets instead of 4. In fact, if you look closely, you can see three discharge slopes in that last image:

JC s Doodles page 21

A = the first delay(10) i.e. ATmega running
B = packet send, i.e. RFM12B transmitting, ATmega low power
C = the second delay(10), only ATmega running again

Here I’ve turned up the scale and am averaging over successive samples to bring this out more clearly:

DSC 2750

You can even “see” the transmitter startup and the re-charge once all is over, despite the low resolution.

So the conclusion is that even a 6,800 µF capacitor is overkill, assuming the sketch has been properly designed to use ultra low-power mode. And maybe the 0.13 W power supply could be made even smaller?

Amazing things, them ATmega’s. And them scopes!

Pins, damned pins, and JeeNodes

In AVR, Hardware, Software on Nov 10, 2011 at 00:01

(to rephrase Mark Twain …)

There is confusion in the ATmega / Arduino / JeeNode world, as brought to my attention again in the forum.

It all depends on your perspective, really:

  • if you want to connect to pins on the ATmega chip, then you’re after “pin 6”, “pin 17”, and “pin 26”
  • if you want to understand the ATmega data sheet, then you’re after names like “PD4”, “MOSI”, and “PC3”
  • if you want to follow Arduino sketches, then you want “digital 4”, “digital 11”, and “analog 3”
  • if you want to plug into JeeNode ports, then you’d prefer “DIO1”, “MOSI on SPI/ISP”, and “AIO4”

They are all the same thing.

Well, sort of. If you’re looking at an SMD chip, the pin numbers change, and if you’re using an Arduino Mega board, some names differ as well. They might also be different if you were to look at older versions of the JeeNode. In short: it’s messy because people come from different perspectives, and it’s unavoidable. There is no relation between “pin 4”, “PD4”, “digital 4”, and “DIO4” – other than that they all mention the digit 4 …

For that same reason, diagrams are not always obvious. For example, this one is nicely informative:

Arduino To Atmega8 Pins

But it doesn’t help one bit when hooking up a JeeNode.

Likewise, this one doesn’t really help if you’re after the pins on the ATmega328 chip or Arduino pin #’s:

Screen Shot 2011 11 08 at 13 02 04

Then there’s this overview, but it’s incomplete – only the DIO and AIO pins are mapped to the rest, in a table:

JN colors

This one from TankSlappa uses the DIP chip as starting point (and creates new abbreviations for DIO1, etc, alas):

Affectation GPIO JeeNode

Each of these diagrams conveys useful information. But none of them seem to be sufficient on their own.

Can we do better? I’ll be happy to take suggestions and make a new – c o n v e n i e n t – one-pager.

Fixing the Arduino’s PWM #2

In AVR, Software on Nov 9, 2011 at 00:01

Yesterday’s post identified a problem in the way different PWM pulses are generated in the ATmega.

It’s all a matter of configuration, since evidently the timers used to generate the PWM signals all run off the same clock signal, and can therefore be made to cycle at exactly the same rate (they don’t have to roll over at the same time for our purposes, but they do have to count equally fast).

The problem is caused by the fact that PWM pins D.5 and D.6 are based on timer 0, which is also used as millisecond timer, whereas PWM pin D.9 is based on timer 1. It’s not a good idea to mess with timer 0, because that would affect the delay() and millis() code and affect all parts of a sketch which are timing-dependent.

Timer 0 is set in “Fast PWM” mode, which is not perfect. So the Wiring / Arduino team decided to use “Phase Correct PWM” mode for timers 1 and 2.

In Fast PWM mode, the timer just counts from 0 to 255, and the PWM output turns high when the counter is in a specific range. So far so good.

In Phase Correct PWM mode, the timer counts from 0 to 255 and then back to 0. Again, the PWM output turns high when the counter is in a specific range.

The phase correct mode counts up and down, and takes twice as long, so that explains why the green LED output runs at half the speed of the others.

Except that… it’s not exactly twice!

Timer 0 wraps every 256 counts, and that should be kept as is to keep the millisecond code happy.

But timers 1 and 2 wrap every 0 → 255 → 0 cycle, which takes 510 counts (0 and 255 are each visited only once per cycle). Hence the out-of-sync effect which is causing so much trouble for the LED Node.

See also Ken Shirriff’s informative page titled Secrets of Arduino PWM, for the ins and outs of PWM.

The solution turns out to be ridiculously simple:

    bitSet(TCCR1B, WGM12);

Just adding that single-bit change to setup() will force timer 1 into Fast PWM mode as well. The result:

DSC 2730

This picture was taken with the scope running. The signals are in lock-step and rock solid. More importantly, the result is a perfectly smooth LED light – yippie!

A color-shifting LED Node

In AVR, Software on Nov 5, 2011 at 00:01

Yesterday’s post described the logic which should be implemented in the LED node. It is fairly elaborate because I want to support subtle gradual color changes to simulate the natural sunrise and sunset colors.

Here’s the ledNode sketch – with a few parts omitted for brevity:

Screen Shot 2011 10 27 at 04 56 36

It’s pretty long, but I wanted to include it anyway to show how yesterday’s specifications can be implemented with not too much trouble.

The main trick is the use of fractional integers in the useRamps() routine. The point is that to get from brightness level X to Y in N steps, you can’t just use integers: say red needs to change from 100 to 200 in 100 steps, then that would simply be a delta of 1 for each step, but if blue needs to change from 10 to 20 in exactly the same amount of time, then the blue value really should be incremented only once every 10 steps, i.e. an increment of 0.1 if it were implemented with floating point.

And although the gcc compiler fully supports floating point on an ATmega, this really has a major impact on code size and execution performance, because the entire floating point system is done in software, i.e. emulated. An ATmega does not have hardware floating point.

It’s not hard to avoid using floating point. In this case, I used 32-bit (long) ints, with a fixed decimal point of 23 bits. IOW, the integer value 99 is represented as “99 << 23” in this context. It’s a bit like saying, let’s drop the decimal point and write down all our currency amounts in cents – i.e. 2 implied decimals (€10 = 1000, €1 = 100, €0.50 = 50, etc).

To make this work for this sketch, all we need to do is shift right or left by 23 bits in the right places. It turns out this only affects the setLeds() and useRamp() code. As a result, RGB values will change in whatever small increments are needed to reach their final value in the desired number of 0.01s steps.

The sketch compiles to just slightly over 5 Kb by the way, so all this fractional trickery could have been avoided. But for me, writing mean and lean code has become second nature, and sort of an automatic challenge and puzzle I like to solve. Feel free to change as you see fit, of course – it’s all open source as usual.

So there you go: a color-shifting LED strip driver. Now I gotta figure out nice ramp presets for sunrise & sunset!

A sketch for the LED Node

In AVR, Software on Nov 4, 2011 at 00:01

The LED Node presented yesterday needs some software to make it do things, of course. Before writing that sketch, I first wrote a small test sketch to verify that all RGB colors worked:

Screen Shot 2011 10 26 at 20 59 17

That’s the entire sketch – it’ll cycle through all the 7 main colors (the 3 primaries and 4 combinations) as well as black. There’s some bit trickery going on here, so let me describe how it works:

  • a counter gets incremented each time through the loop
  • the loop runs 200 times per second, due to the “delay(5)” at the end
  • the counter is a 16-bit int, but only the lower 11 bits are used
  • bits 0..7 are used as brightness level, by passing that to the analogWrite() calls
  • when bit 8 is set, the blue LED’s brightness is adjusted (pin 5)
  • when bit 9 is set, the red LED’s brightness is adjusted (pin 6)
  • when bit 10 is set, the green LED’s brightness is adjusted (pin 9)

Another way to look at this is as a 3-bit counter (bits 8..10) cycling through all the RGB combinations, and for each combination, a level gets incremented from 0..255 – so there are 11 bits in use, i.e. 2048 combinations, and with 200 steps per second, the entire color pattern repeats about once every 10 seconds.

Anyway, the next step is to write a real LED Node driver. It should support the following tasks:

  • listen to incoming packets to change the lights
  • allow gradually changing the current RGB color setting to a new RGB mix
  • support adjustable durations for color changes, in steps of 1 second

So the idea is: at any point in time, the RGB LEDs are lit with a certain PWM-controlled intensity (0..255 for each, i.e. a 24-bit color setting). The 0,0,0 value is fully off, while 255,255,255 is fully on (which is a fairly ugly blueish tint). From there, the unit must figure out how to gradually change the RGB values towards another target RGB value, and deal with all the work and timing to get there.

I don’t really want to have to think about these RGB values all the time though, so the LED Node sketch must also support “presets”. After pondering a bit about it, I came up with the following model:

  • setting presets are called “ramps”, and there can be up to 100 of them
  • each ramp has a target RGB value it wants to reach, and the time it should take to get there
  • ramps can be chained, i.e. when a ramp has reached its final value, it can automatically start another ramp
  • ramps can be sent to the unit via wireless (of course!), and stored in any of the presets 1..99
  • preset 0 is special, it is always the “immediate all off” setting and can’t be changed
  • to start a ramp, just send a 1-byte packet with the 0..99 ramp number
  • to save a ramp as a preset, send a 6-byte packet (preset#, R, G, B, duration, and chain)
  • preset 0 is special: when “saving” to preset #0 it gets started immediately (instead of being saved)
  • presets 0..9 contain standard fixed ramps on power-up (presets 1..9 can be changed afterwards)
  • the maximum duration of a single ramp is 255 seconds, i.e. over 4 minutes

Quite an elaborate design after all, but this way I can figure out a nice set of color transitions and store them in each unit once and for all. After that, sending a “command” in the form of a 1-byte packet is all that’s needed to start a ramp (or a series of ramps) which will vary the lighting according to the stored presets.

Hm, this “ledNode” sketch has become a bit longer than expected – I’ll present that tomorrow.

Meet the LED Node

In AVR, Software on Nov 3, 2011 at 00:01

More than a year has passed, and I still haven’t finished the RGB LED project. The goal was to have lots of RGB LED strips around the house, high up near the ceiling to provide indirect lighting.

The reason is perhaps a bit unusual, but that has never stopped me: I want to simulate sunrise & sunset (both brightness and colors) – not at the actual time of the real sunset and sunrise however, but when waking up and when it’s time to go to bed. Also in the bedroom, as a gentle signal before the alarm goes off.

Now that winter time is approaching and mornings are getting darker, this project really needs to be completed. The problem with the previous attempt was that it’s pretty difficult to achieve a really even control of brightness and colors with software interrupts. The main reason is that there are more interrupt sources (the clock and the RFM12B wireless module), which affect the timing in subtle, but visibly irregular, ways.

So I created a dedicated solution, called the LED Node:

Screen Shot 2011 10 26 at 11 48 54

It’s basically the combination of a JeeNode, one-and-a-half MOSFET Plugs, and a Room Board, with the difference that all MOSFETs are tied to I/O pins which support hardware PWM. The Room Board option was added, because if I’m going to put 12V power all over the house anyway for these LEDs, and if I want to monitor temperature, humidity, light, and motion in most of the rooms, then it makes good sense to combine it all in one.

Here is my first build (note that all the components are through-hole), connected to a small test strip:

DSC 2706

The pinouts are pre-arranged to connect to a standard common anode RGB strip, and the SHT11 temp / humidity sensor is positioned as far away from the LEDs as possible, since every source of heat will affect its readings. For the same reason, the LDR is placed at the end so it can be aimed away from the light-producing LED strip. I haven’t thought much about the PIR mounting so far, but at least the 3-pin header is there.

The LED Node is 18 by 132 mm, so that it fits inside a U-shaped PVC profile I intend to use for these strips. There can be some issues with color fringing which require care in orienting the strips to avoid problems.

Apart from some I/O pin allocations required to access the hardware PWM, the LED Node is fully compatible with a JeeNode. It’s also fully compatible with Arduino boards of course, since it has the same ATmega328. There’s an FTDI header to attach to a USB BUB for uploading sketches and debugging.

The MOSFETs easily support 5 m of LED strips with 30 RGB LEDs per meter without getting warm. Probably much more – I haven’t tried it with heavier loads yet.

Here’s what I used as basic prototype of the whole thing, as presented last year:

DSC 2710

Tomorrow, I’ll describe a sketch for this unit which supports gradual color changes.

Running LED ticker #2

In Software on Nov 2, 2011 at 00:01

After the hardware to feed yesterday’s scrolling LED display, comes the software:

Screen Shot 2011 10 25 at 18 31 38

The code has also been added to GitHub as tickerLed.ino sketch.

Fairly basic stuff. I decided to simply pass incoming messages as is, but to do the checksum calculation and start/end marker wrapping locally in the sketch. The main trick is to use a large circular 1500-character FIFO buffer, so that we don’t bump into overrun issues. The other thing this sketch does is throttling, to give the unit time to collect a message and process it.

Here’s what the display shows after power-up, now that the JeeNode is feeding it directly:

DSC 2705

(that’s the last part of the “Hello from JeeLabs” startup message built into the sketch)

So now, sending a 50-byte packet to node 17 group 4 over wireless with this text:

    <L1><PA><FE><MA><WC><FE>Yet another beautiful day!

… will update line 1 on page A and make the text appear. The unit rotates all the messages, and can do all sorts of funky stuff with transitions, pages, and time schedules – that’s what all the brackets and codes are about.

Time to put the end caps on and close the unit!

The AC current sensor node lives!

In Hardware, Software on Oct 28, 2011 at 00:01

At last, everything is falling into place and working more or less as intended:

    OK 17 172 31 173 31   <- 25 W incandescent
    OK 17 169 31 177 31
    OK 17 40 0 41 0       <- open
    OK 17 245 95 40 0     <- 75 W incandescent
    OK 17 140 95 245 95
    OK 17 43 0 140 95     <- open
    OK 17 39 0 43 0
    OK 17 118 2 42 0      <- 2W LED-light
    OK 17 211 2 97 2
    OK 17 219 2 102 2
    OK 17 107 2 219 2
    OK 17 89 2 107 2
    OK 17 40 0 82 2       <- open
    OK 17 39 0 40 0
    OK 17 38 0 39 0
    OK 17 219 53 38 0     <- 40 W incandescent
    OK 17 234 53 219 53
    OK 17 43 0 234 53     <- open
    OK 17 149 75 43 0     <- 60 W incandescent
    OK 17 23 0 149 75     <- open
    OK 17 42 0 23 0

That’s the log of packets coming in from the AC current node, as I inserted and removed various light bulbs. Still through the isolation transformer for now.

As you can see, every change is very clearly detected, down to a 2W LED light. These are all the bulbs with an E27 fitting I happened to have lying around, since the test setup was fitted with one of those. Some other time I’ll try out light inductive loads, chargers, and eventually also a 700 W heater as load.

I’m not that interested in the actual readings, although there is a fairly direct relationship between these wattages and the 2-byte little-endian int readouts. The fluctuations across readings with an unchanged load are small, and the no-load readout is lower than in my previous tests – perhaps the shorter, more direct wiring of this setup helps avoid a bit of noise.

One problem I see is that some packets are lost. Maybe the way the wire antenna is laid out (quite close to the PCB’s ground plane) prevents it from working optimally.

The other problem seems to be that this node stops transmitting after a while. I suspect that the current draw is still too large, either on the ADC side (500+ samples, back to back) or due to the RFM12B packet sends (unlikely, IMO, after the previous test results). At some point the voltage over the supercap was 3.1V – I’m not sure what’s going on, since after a new power-up the unit started transmitting again.

Hm… or perhaps it’s that plus an antenna problem: when I rearranged it, things also started working again (I’m always cutting power before messing with the test circuit, of course).

But all in all I’m delighted, because this unit has a really low component count!

Running on charge

In AVR, Hardware, Software on Oct 26, 2011 at 00:01

Now that the supercap charger works, and now that I’ve switched to the ATtiny84 as processor, everything is ready to create a self-contained AC current sensing node.

The one missing piece is the software for it all. It’s going to take a while to get it tweaked, tuned, and optimized, but a basic check is actually quite easy to do.

Here is the main loop of my first test, reusing most of the code already in the tiny50hz sketch:

[image: Screen Shot 2011-10-17 at 20.03.56]

My main goal was to quickly get power consumption down, so that the ATtiny would use less than what’s available through the low-power supply, i.e. roughly 1 mA. Without extra steps, it’ll draw over 4 mA @ 8 MHz.

What I did was to reduce the clock rate to 1 MHz (except while processing RF12 packets), and to turn off subsystems while not needed (timer 1 is never used, and the ADC and USI hardware are now enabled only while in use).

These two lines at the top of loop() are special:


They will reduce power consumption by halting the processor until the next interrupt. Since time is being tracked via the millis() call, and since that operates through a timer interrupt, there is no reason to check for a new millisecond transition until the next interrupt. This is a quick way to cut power consumption in half.

But keep in mind that the processor is still running most of the time, and running at 1 MHz. That explains why the current consumption remains at a relatively high 440 µA in the above code (with a brief “power blip” every 16 s). For comparison: in power-down mode, current draw could be cut to 4 µA (with the watchdog timer running).

Still, this should be enough for a first test. Sure enough, it works fine – powered by the ISP programmer:

    OK 17 84 2 0 0
    OK 17 67 2 84 2
    OK 17 103 2 67 2

Every ≈ 16 seconds, a 4-byte packet comes in with the latest reading and the previous reading (as two 2-byte ints).

The interesting bit is what happens when the ISP programmer gets disconnected. While connected, it charged the supercap to about 4.9V – so with a bit of luck, this node should keep running for a while, right?

Guess what… it does. My test node just sent out 197 packets before running out of steam!

This includes the ≈ 500 ADC samples and the 16-second wait in each cycle. IOW, a 0.47 F capacitor charged to 4.9 V clearly has more than enough energy for this application. In fact, it makes me wonder whether even a normal electrolytic capacitor might be sufficient. The benefit of a smaller capacitor would be that the node can be up and running much faster than the hour or so needed to charge up a supercap.

Here’s my very very sketchy estimate:

  • let’s assume the unit operates down to 2.5 V
  • with 4.9 V charge, it can run for about 200 cycles
  • so with 3.7 V charge (on AC mains), it ought to run for 100 cycles
  • using a capacitor 1/100th the size, we ought to have enough charge for one cycle
  • with more power-saving logic, energy use can no doubt be lowered further

Given that the node is permanently powered, a 4,700 µF cap ought to be sufficient. I’ve ordered a 6,800 µF @ 6.3V electrolytic cap – with a bit of luck, that should also work. And if it does, startup times will go down drastically, to hopefully just a few seconds.


Digital filter design

In Software on Oct 15, 2011 at 00:01

In the moving averages post a few days ago, I just picked whatever seemed reasonable for filtering – i.e. running a moving average of order 31 over 531 samples, sampling at roughly 7,000 samples/second (which is what the free-running ADC does with my clock choice). And indeed, it looks like it will work quite well.

The nice thing about moving averages done this way is that the calculations are trivial: just add the new value in and drop the oldest one out. All it takes is an N-stage sample memory, i.e. a 31-int array in this case.

But diving a bit deeper into FIR filters, which include the moving average as their simplest case, it’s clear that I was sampling more than needed. I really don’t need 531 samples to measure the peak-to-peak level of the underlying 50 Hz signal – I just need to measure for the duration of a few 50 Hz cycles (about 3 in the above case).

As it turns out, once you dive in there are many ways to improve things, and lots of online guides and tools.

There’s a lot more to FIR filter design, including an amazing design technique by Parks and McClellan which lets you basically specify what cutoff frequency you want, and what attenuation you want above a second (higher) frequency. Then I found this site with an easy-to-use online tool for visualizing it and doing all the calculations:

[image: Screen Shot 2011-10-10 at 0.52.23]

That’s attenuation on the Y axis vs frequency on the X axis. The gray lines are what I specified as requirement:

[image: Screen Shot 2011-10-10 at 1.08.23]

And this is the C code it generated for me:

    FIR filter designed with

    sampling frequency: 1000 Hz
    fixed point precision: 16 bits

    * 0 Hz - 50 Hz
      gain = 1
      desired ripple = 5 dB
      actual ripple = n/a
    * 100 Hz - 500 Hz
      gain = 0
      desired attenuation = -50 dB
      actual attenuation = n/a

    #define FILTER_LENGTH 31

    int filter[FILTER_LENGTH] = {

There’s a drawback in that this is no longer a moving average. Now each output of the filter is defined by applying these integer coefficients to each of the past 31 samples. So there’s more computation involved – a quick test tells me that each sample would take 100..200 µs extra (on an ATmega @ 16 MHz).

But the nice part of this is that it might support a lower-power implementation. Instead of running the ADC 531 times @ 7 kHz (with an unknown filter response), I could run the ADC 100 times, clocked on the 1 ms timer interrupt (and sleeping in between), and apply this modified FIR filter to extract a similar peak-to-peak result.

Why low power? Well, running this on batteries is probably not practical, but I might consider running this setup off a very low-power transformer-less supply. After all, the goal of these experiments is to create a simple low-cost sensor, which can then be used for all the major energy consumers in the house. I’m trying to reduce power consumption, not add yet more to it from all these extra AC current sensors!

Note that maybe none of this will be needed – a simple RC filter before the ADC pin, plus an order 7..31 moving average may well be more than sufficient after all.

But it’s good to know that this option is available if needed. And it’s pretty amazing to see how easily such DSP tasks can be solved, even for someone who has never done any digital signal processing before!

CC-RT: Initial requirements

In AVR, Hardware, Software on Oct 14, 2011 at 00:01

Let’s get going with the CC-RT series and try to define the Reflow Timer in a bit more detail. In fact, let me collect a wish list of things I’d like to see in there:

The Reflow Timer should…

  • support a wide range of ovens, grills, toasters, and skillets
  • be self-contained and safe to build and operate
  • include some buttons and some sort of indicator or display
  • be created with through-hole parts as much as possible
  • (re-) use the same technologies as other JeeLabs products
  • be built on a custom-designed printed circuit board
  • use a convenient and robust mechanical construction
  • be very low-cost and simple to build

To start with that last point: the aim is to stay under € 100 as end-user price, including a simple toaster and whatever else is needed to control it. That’s a fairly limiting goal, BTW.

I’m sticking to “the same technologies” to make my life easy, both in terms of design and to simplify inventory issues later, once the Reflow Timer is in the shop. That translates to: an Arduino-like design with an ATmega328, and (for reasons to be explained next) an RFM12B wireless module.

Safety is a major concern, since controlling a heater tied to 220 V definitely has its risks. My solution to controlling an oven of up to 2000 W is the same as what I’ve been doing so far: use a commercially available and tested power switch, controlled via an RF signal. KAKU or FS20 come to mind, since there is already code to send out the proper signals through an RFM12B module. Range will not be an issue, since presumably everything will be within a meter or so from each other.

With wireless control, we avoid all contact with the mains power line. I’ll take it one step further and make the unit battery-operated as well. There are two reasons for this: if we’re going to use a thermocouple, then leakage currents and transients can play nasty games with sensors. These issues are gone if there is no galvanic connection to anything else. The second reason is that having the AC mains cable of a power supply running near a very hot object is not a great idea. Besides, I don’t like clutter.

Having said this, I do not want to rule out a couple of alternatives, just in case someone prefers those: controlling the heater via a relay (mechanical or solid-state), and powering the unit from a DC wall wart. So these should be included as options if it’s not too much trouble.

To guard against heat & fire problems, a standard heater will be used with a built-in thermostat. The idea being that you set the built-in thermostat to its maximum value, and then switch the entire unit on and off via the remote switch. Even in the worst scenario where the switch fails to turn off, the thermostat will prevent the heater from exceeding its tested and guaranteed power & heat levels. One consequence of this is that the entire reflow process needs to unfold quickly enough, so that the thermostat doesn’t kick in during normal use. But this is an issue anyway, since reflow profiles need to be quick to avoid damaging sensitive components on the target board.

On the software side, we’ll need some sort of configuration setup, to adjust temperature profiles to leaded / unleaded solder for example, but also to calibrate the unit for a specific heater, since there are big differences.

I don’t think a few LEDs will be enough to handle all these cases, so some sort of display will be required. Since we’ve got the RFM12B on board anyway, one option would be to use a remote setup, but that violates the self-contained requirement (besides, it’d be a lot less convenient). So what remains is a small LCD unit, either character-based or graphics-based. A graphic LCD would be nice because it could display a temperature graph – but I’m not sure it’ll fit in the budget, and to be honest, I think the novelty of it will wear off quickly.

On the input side, 2 or 3 push buttons are probably enough to adjust everything. In day-to-day operation, all you really need is start/stop.

So this is the basic idea for the Reflow Timer so far:

[image: JC’s Doodles, page 18]

Ok, what else. Ah, yes, an enclosure – the eternal Achilles’ heel of every electronics project. I don’t want anything fancy, just something that is robust, making it easy to pick up and operate the unit. I’ve also got a somewhat unusual requirement, which applies to everything in the JeeLabs shop: it has to fit inside a padded envelope.

Enclosures are not something you get to slap on at the end of a project. Well, you could, but then you lose the opportunity of fitting its PCB nicely and getting all the mounting holes in the best position. So let’s try and get that resolved as quickly as possible, right?

Unfortunately, it’s not that easy. We can’t decide on mechanical factors before figuring out exactly what has to be in the box. Every decision is inter-dependent with everything else.

Welcome to the world of agonizing trade-offs, eh, I mean… product design!

AC measurement status

In AVR, Software on Oct 12, 2011 at 00:01

Before messing further with this AC current measurement stuff, let me summarize what my current setup is:

[image: JC’s Doodles, page 17]

Oh, and a debug LED and 3x AA battery pack, which provides 3.3 .. 3.9 V with rechargeable EneLoop batteries.

I don’t expect this to be the definitive circuit, but at least it’s now documented. The code I used on the ATtiny85 is now included as tiny50hz example sketch in the Ports library, eh, I mean JeeLib. Here are the main pieces:

[image: Screen Shot 2011-10-07 at 00.32.58]

Nothing fancy, though it took a fair bit of datasheet reading to get all the ADC details set up. This sketch compiles to 3158 bytes of code – lots of room left.

This project isn’t anywhere near finished:

  • I need to add a simple RC low-pass filter for the analog signal
  • readout on an LCD is nice, but a wireless link would be much more useful
  • haven’t thought about how to power this unit (nor added any power-saving code)
  • the ever-recurring question: what (safe!) enclosure to use for such a setup
  • and most important of all: do I really want a direct connection to AC mains?

To follow up on that last note: I think the exact same setup could be used with a current transformer w/ burden resistor. I ought to try that, to compare signal levels and to see how well it handles low-power sensing. The ATtiny’s differential inputs, the 20x programmable gain, and the different AREF options clearly add a lot of flexibility.


AC current detection works!

In AVR, Software on Oct 10, 2011 at 00:01

As promised, the results for the ATtiny85 as AC current detector, i.e. measuring current over a 0.1 Ω shunt.

Yesterday’s setup showed the following values in the display:

    30 71-101 68.

From right to left, that translates to:

  • it took 68 ms to capture 500 samples, i.e. about 7 kHz
  • the measured values were in the range 71 .. 101
  • the spread was therefore 30

With a 75 W lamp connected, drawing about 100 mA, I get this:

[image: DSC 2677]

With the 25 W lamp, the readout becomes:

[image: DSC 2673]

And finally, with a 1 kΩ resistor drawing 20 mA, this is the result:

[image: DSC 2679]

As soon as the load is removed, readings drop back to the first values listed above.

Now it seems to me that these readings will be fine for detection. Even a low 20 mA (i.e. 4.4 W @ 220V) load produces a reading which is 30 times higher than zero-load (with about 10% variation over time).

I’m measuring with 2.56V as reference voltage to remain independent of VCC (which is 3.8V on batteries). So each step is 2.5 mV with the built-in 10-bit ADC. With the 20x amplification, that becomes 0.125 mV per ADC step. Now let’s see… a 20 mA DC current across a 0.1 Ω shunt would generate a 2 mV differential. I’m using an order 31 moving average, but I didn’t divide the final result by 31, so that 1099 result is actually 35 on the ADC. Given that one ADC step is 0.125 mV, that’s about 4.4 mV peak-to-peak. Hey, that looks more or less right!

There is still a problem, though. Because half of the time I get this:

[image: DSC 2675]

Total breakdown, completely ridiculous values. The other thing that doesn’t look right, is that none of the readings are negative. With a differential amplifier fed with AC, one expects the values and signs to constantly alternate. Maybe I damaged the ATtiny – I’ll get another one for comparison. And maybe I didn’t get the sign-extensions right for the ADC’s “bipolar differential” mode. There’s a lot of bit-fiddling to set all the right register bits.

But still… this looks promising!

Update – Problem solved: it was a signed / unsigned issue. The values are now completely stable with under 1 % variation between measurements. I’m not even filtering out high frequencies, although I know I should.

Moving averages

In Software on Oct 8, 2011 at 00:01

In the comments, Paul H pointed me to a great resource on digital filtering – i.e. this online book: The Scientist & Engineer’s Guide to Digital Signal Processing (1999), by Steven W. Smith. I’ve been reading in it for hours on end, it’s a fantastic source of information for an analog-signal newbie like me.

From chapter 21, it looks like the simplest filtering of all will work just fine: a moving average, i.e. take the average of the N previous data points, and repeat for each point in time.

Time for some simple programming and plotting. Here is the raw data I obtained before in gray, with a moving average of order 349 superimposed as black line (and time-shifted for proper comparison):


Note that the data points I’m using were sampled roughly 50,000 times per second, i.e. 1,000 samples for each 50 Hz sine wave. So averaging over 349 data points is bound to dampen the 50 Hz signal a bit as well.

As you can see, all the noise is gone. Perfect!

Why order 349? Because a 349-point average is like a 7-point average every 50 samples. Let me explain… I wanted to see what sort of results I would get when measuring only once every millisecond, i.e. 20 times per sine wave. That’s 50x less than the dataset I’m trying this with, so the 1 ms sampling can easily be simulated by only using every 50th datapoint. To get decent results, I found that an order 7 moving average still sort of works:


There’s some aliasing going on here, because the dataset samples aren’t synchronized to the 50 Hz mains frequency. So this is the signal I’d get when measuring every 1 ms, and averaging over the past 7 samples.

For comparison, here’s an order 31 filter sampled at 10 kHz, i.e. one ADC measurement every 100 µs:


(order 31 is nice, because 31 x 1023 fits in a 16-bit int without overflow – 1023 being the max ADC readout)

The amplitude is a bit more stable now, i.e. there are fewer aliasing effects.

Using this last setting as starting point, one idea is to take 500 samples (one every 100 µs), which captures at least two highs and two lows, and to use the difference between the maximum and minimum value as indication of the amount of current flowing. Such a process would take about 50 ms, to be repeated every 5 seconds or so.

A completely different path is to use a digital Chebyshev filter. There’s a nice online calculator for this over here (thx Matthieu W). I specified a 4th order, -3 dB ripple, 50 Hz low-pass setup with 1 KHz sampling, and got this:


Very smooth, though I didn’t get the startup right. Very similar amplitude variations, but it needs floating point.

Let me reiterate that my goal is to detect whether an appliance is on or off, not to measure its actual power draw. If this can be implemented, then all that remains to be done is to decide on a proper threshold value for signaling.

My conclusion at this point is: a simple moving average should be fine to extract the 50 Hz “sine-ish” wave.

Update – if you’re serious about diving into DSP techniques, then I suggest first reading The Scientist & Engineer’s Guide to Digital Signal Processing (1999), by Steven W. Smith, and then Understanding Digital Signal Processing (2010) by Richard G Lyons. This combination worked extremely well for me – the first puts everything in context, while the second provides a solid foundation to back it all up.

Captured samples

In Software on Oct 6, 2011 at 00:01

With the USB scope hookup from yesterday’s post it’s also quite easy to capture data for further experiments. It helps to have a fixed data set while comparing algorithms, so I used the DSO-2090 software again to capture over a million 8-bit samples.

The dump is binary data, but decoding the format is trivial. Some header bytes and then each value as 2-byte int. Just to show that JeeMon is also up to tasks like this, here’s the “rec2text.tcl” script I created for it:

[image: Screen Shot 2011-10-04 at 17.13.47]

And this is how it was used to generate a text file for the plots below:

[image: Screen Shot 2011-10-04 at 16.43.09]

(silly me – I’m still impressed when I see a script process over a million items in the blink of an eye…)

That’s a bit too much data for quick tests, but here are the first 10,000 values from the 75W-bulb-on-20-VAC:

[image: Screen Shot 2011-10-04 at 15.28.12]

There’s quite a bit of noise in there if you look more closely:

[image: Screen Shot 2011-10-04 at 15.33.20]

The measurements are all over the map, with values over almost the entire 0..255 range of the scope’s 8-bit ADC. The actual spikes are probably even larger, occasionally.

I hope that this is a decent dataset to test out filtering and DSP techniques. It’ll be much quicker to try things out on my Mac than on a lowly 8-bit ATmega MCU, and since both can run the same C & C++ code, it should be easy to bring things back over to the ATmega once it does the right thing.

Hmmm… now where do I find suitable DSP filtering algorithms and coefficients?

Update – Found a great resource, thx Paul & Andreas. Stay tuned…

Capturing (no go) – part 2

In Software on Oct 4, 2011 at 00:01

On to the next step in capturing samples from the direct 220V connection.

First, let me clarify why I called the first sketch “fatally flawed” – because it is, literally! The sketch required a button press to get started, a big no-no while hooked up to AC mains (isolated or not, I won’t touch it!).

The other problem I didn’t like after all, is that the sampling was taking place in bursts, saving to the Memory Plug in between – which takes several milliseconds each time. That doesn’t produce a nice stream of equally-spaced ADC measurements.

So instead, I decided to redo the whole sketch and split it into two separate sketches in fact. One fills the EEPROM on the Memory plug, the other dumps it to the serial port. No more buttons or LEDs. Just a JeeNode with a Memory Plug. To make things work, a fairly peculiar series of steps has to be taken:

  • upload the “sampleToMem” sketch and plug in the Memory Plug
  • disconnect, and hook up to the 0.1 Ω shunt etc (with power disconnected)
  • insert AA battery to power the whole thing, it starts collecting
  • turn on AC power
  • let it run for a few seconds
  • turn off AC power
  • disconnect, remove the battery and Memory Plug, and reattach to USB
  • upload the “sampleFromMem” sketch
  • open serial monitor and capture the dump to file (with copy & paste)

The key is to remove the Memory Plug as soon as it has been filled, so that the next power-up doesn’t start filling it all over again. There’s logic in the sketches to do nothing without Memory Plug.

Note that the internal ATmega EEPROM is also used to record how far the save has progressed (in units of 128 samples, i.e. 256 bytes).

It turns out that the writes to EEPROM via I2C take quite a bit of time. I settled on a 1 kHz sampling rate to avoid running into any timing issues. That’s 20 samples per 50 Hz cycle, which should be plenty to reliably identify that frequency even if it’s not a pure sine wave. The samples will tell.

Ok, first test run, nothing powered up. The result is mostly 511’s and a few 510’s, which is as expected – halfway up the 0..1023 range:

    $ grep -n 510 unplugged.txt |wc
         115     115    1062
    $ grep -n 511 unplugged.txt |wc
       14605   14605  135044

Next run is with the 60 W light bulb turned on a few seconds after sampling starts.

Whoops, not so good – I’m only getting 510, 511, and 512 readings, almost nothing else!

    $ grep -n 509 powered.txt |wc
           0       0       0
    $ grep -n 510 powered.txt |wc
         110     110    1012
    $ grep -n 511 powered.txt |wc
       13750   13750  126668
    $ grep -n 512 powered.txt |wc
         220     220    2026
    $ grep -n 513 powered.txt |wc
           0       0       0
    $ wc powered.txt 
       14084   14086   56366 powered.txt

My conclusion so far: these small AC signals can’t be detected without additional amplification – the signal is simply too weak.

It’s not really a setback, since I wasn’t planning on creating a directly-connected JeeNode setup as official solution, but it would have been nice to get a basic detection working with just a few resistors.

Maybe there’s an error in these sketches, but I’ve verified that the input senses ground as 0 and VCC as 1023, so that part at least is working as expected. I’ve placed the two sketches as a “gist” on GitHub, for reference (sampleToMem and sampleFromMem).

Back to the drawing board!

Dabbling in HTML5 + CSS3

In Software on Oct 3, 2011 at 00:01

(Small change of plans, the continuation of yesterday’s post has been postponed to tomorrow)

I’ve been dipping my toes a bit in the waters of HTML5 + CSS3. Nothing spectacular, but here’s what came out:

[image: Screen Shot 2011-10-02 at 22.59.24]

Small “widgets” to display home readings on a web page (it’s in Dutch… but you can probably guess most of it).

There are no colors in here yet, but hey… just visit a site such as the Color Scheme Designer to fix that.

Here are the CSS styles I used:

[image: Screen Shot 2011-10-02 at 22.20.27]

And here is the funky code I used to generate the HTML for this in JeeMon:

[image: Screen Shot 2011-10-02 at 22.20.59]

It’s still a bit awkward to generate elements which need to be combined inline, i.e. <span> elements.

To see the actual generated HTML, use your browser to check the page source of the new Widgets page. To examine the entire file needed to generate this from JeeMon, click on this link.

For actual use, all I’ll need to change is: 1) generate the HTML with $blah Tcl variables, or 2) use JavaScript on the browser to fill in the values (driven by Server Sent Events, for example). The latter avoids page refreshes.

The separation of content, structure, and style has come a long way since the early days of HTML. It just feels right, the way this works out. Applications can be written with nearly complete independence between, eh… “looks” and “guts”. Even complete page make-overs which are well beyond the positioning capabilities of CSS can be taken care of with basic templating on the server side. It’s interesting that the elaborate and efficient templating “engines” which used to drive dynamic websites and which are still used in many frameworks are becoming irrelevant. With a bit of JavaScript smarts on the client side, the server now only needs to serve static files plus JSON data for REST-style Ajax calls. The good news for embedded home automation systems, is that very low-end servers may well turn out to be sufficient, even for sophisticated web presentations.

As I said, nothing spectacular, but I’m delighted to see how simple and cleanly-structured all this has become.

Capturing some test data

In Software on Oct 2, 2011 at 00:01

(Whoops, looks like I messed up the correct scheduling of this post!)

Coming soon: a bit of filtering to get better AC current readouts.

There are many ways to do this, but I wanted to capture some test measurements from the AC shunt first, to follow up on the 220V scope test the other day. That way I don’t have to constantly get involved with AC mains, and I’ll have a repeatable dataset to test different algorithms on. Trouble is, I want to sample faster and more data than I can get out over wireless. And a direct connection to AC mains is out of the question, as before.

Time to put some JeePlugs from my large pile to use:


That’s a 128 Kbyte Memory Plug and a Blink Plug. The idea is to start sampling at high speed and store it in the EEPROM of the Memory Plug, then power off the whole thing, connect it to a BUB and press a button to dump the saved data over the serial USB link.

Here’s the sketch I have in mind:

[image: Screen Shot 2011-10-02 at 01.31.52]

Note that saving to I2C EEPROM takes time as well, so there will be gaps in the measurement cycle with this setup. Which is why I’m sampling in bursts of 512. If that doesn’t give me good enough readings, I’ll switch to an interrupt driven mechanism to do the sampling.

Hm… there’s a fatal flaw in there. I’ll fix that and report the results tomorrow.

Relational data

In Software on Sep 30, 2011 at 00:01

This post strays solidly into the realms of software and database architecture…

I’m having a lot of fun with JeeMon, but it’s also starting to look like a bone which is slightly too large to chew on…

As in any application, there is the issue of how to manage, represent, and store data structures. This is an area I’ve spent quite a few years on in a previous life. The nice bit is that for JeeMon I get to pick anything I like – there are no external requirements. Time will tell whether that freedom won’t become a curse.

So now I’m exploring the world of pure relational data structures again:

[image: Screen Shot 2011-09-29 at 14.28.25]

What you’re looking at is the definition of 4 drivers (I’m leaving off a bunch more to keep this example limited). This was rendered as nested HTML tables, btw – very convenient.

Each driver has a description of the type of values it can produce, and each set of values is grouped (as in the case of ookRelay2, which can generate data for three different types of packets it is relaying). Drivers can support more than one interface (the roomNode driver supports both remote packets and direct serial connection, for example). And drivers can expose a number of functions (in this case they all only expose a “decode” function).

In the “flat” SQL world, you’d need 5 tables to represent this sort of information. But since I’m supporting relations-as-values, it can all be represented as a single relation in JeeMon (I call ’em “views”, btw).

Note that this isn’t a hierarchical- or network-database. It’s still purely relational. There’s an “ungroup” operator to flatten this structure as many levels as needed, along with the usual “select”, “project”, “join”, etc. operators.

There’s also something else going on. Some of the information shown above is virtual, in the sense that the information is never actually stored into a table but gets extracted on-demand from the running application. The “values” relation, for example, is produced by scanning some simple lists stored (as dicts) in a Tcl variable:

[image: Screen Shot 2011-09-29 at 15.12.46]

And the “functions” relation is created on-the-fly by introspecting the code to find out which public functions have been exposed in each driver.

It’s a bit unusual to think of application data in relational terms. This lets you “join” a list with another one, and do strange things such as perform selection and projection on the list of all public functions defined in the application. But it’s also extremely powerful: one single conceptual mechanism to deal with information, whether present in application variables, i.e. run-time state, from a traditional database, or as the result of a remote request of some kind. Here’s another example: perform a join between a local (scalar) variable and a dict / map / hash (or whatever you want to call associative arrays), and you end up doing key lookup.

This approach turns (at least part of) an application on its head: you don’t need to register every fact that’s relevant into a database, you launch the app and it sets up “views” which will extract what they need when they need it, from the running app itself. Don’t call us, we’ll call you!

It might seem like a complicated way to do things (indeed, it is!), but there’s a reason for this madness. I’m looking for a design which can adapt to changes – even in the code – while running. If a driver is updated to add support for Ethernet, say, then it should reflect in these data structures without further explicit effort from the programmer. I’m not there yet by a long stretch, but this sort of experimentation is needed to get there. IMO, the magic lies in not copying facts but in seeking them out when needed, combined with smart caching.

My research project which pioneered this approach is called “Vlerq”, and I’m considering resuscitating it for use in JeeMon. Trouble is, it’s quite a bit too ambitious to implement fully (I’ve hit that wall before) – so my current experimentation is aimed at finding a workable path for the near future. Perfection is the enemy of done – and right now I’m seeing my “done” horizon racing further and further away. It’s like trying to exceed the speed of light – oh, wait, didn’t someone do that the other day?

Noise spectrum

In Software on Sep 29, 2011 at 00:01

Not only is there a glcdScope sketch which is useful for displaying voltage across the 0.1 Ω shunt in my 220V experiments, there’s also a little Spectrum Analyzer sketch to try out. Yummie!

Spectrum analysis tells you what frequencies are present in a given sample of a periodic signal. Lots of math there – just Google around for “Fast Fourier Transform” (FFT) and you’ll get it all on your plate if you’re curious.

But this stuff is a bit harder to interpret than a plain time-domain signal. Here’s what I’m seeing, with just a short wire connected, picking up some 50 Hz presumably (as with the scope test) – again with the “digital phosphor” persistence:


In principle, this graph is ok: lots of signal at lower frequencies and progressively less at higher frequencies. After all, with a perfect 50 Hz sine wave and no noise, we’d expect a single peak at the start of the scale.

The repeated peaks every few pixels also look promising. With a bit of luck they are in fact the harmonics, i.e. the 100 Hz, 150 Hz, … multiples – which is what you get when a signal is repetitive but not exactly a sine wave. Harmonics are what make music special – the way to distinguish a note played on a violin from one played on a piano.
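
That harmonics claim is easy to check numerically with a naive DFT (a back-of-the-envelope sketch, not the fix_fft code used in the sketch below): clip a sine wave symmetrically and odd harmonics appear, while even ones stay absent.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

const double PI = 3.14159265358979323846;

// Magnitude of one DFT bin, computed straight from the definition (O(n) per
// bin - fine for a quick check, which is why real code uses an FFT instead).
double dftMagnitude(const std::vector<double>& samples, int bin) {
    double re = 0, im = 0;
    int n = samples.size();
    for (int i = 0; i < n; ++i) {
        double phi = 2 * PI * bin * i / n;
        re += samples[i] * std::cos(phi);
        im -= samples[i] * std::sin(phi);
    }
    return std::sqrt(re * re + im * im);
}

// A symmetrically clipped "50 Hz" wave: 50 cycles in n samples, so the
// fundamental lands in bin 50 and its odd harmonics in bins 150, 250, ...
std::vector<double> clippedSine(int n) {
    std::vector<double> v(n);
    for (int i = 0; i < n; ++i) {
        double s = std::sin(2 * PI * 50 * i / n);
        v[i] = std::max(-0.5, std::min(0.5, s));   // clipping adds harmonics
    }
    return v;
}
```

With 1000 samples, bin 50 (the fundamental) dominates, bin 150 (the 3rd harmonic) is clearly present, and bin 100 (the 2nd harmonic) is essentially zero – the same comb of peaks as in the plot above.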

But I was hoping for something else: a few more peaks at the right-hand side of the graph. These would indicate that there are high frequencies in the signal – from the computer’s switching power supply right next to this setup, for example.

And worse: I’m seeing a completely flat line when hooking this up to the 220V current shunt. Looks like this signal is too weak to play FFT games with (should the data be auto-scaled?).

Anyway, here’s the glcdSpectrum50 sketch:

Screen Shot 2011-09-27 at 19.04.17.png

It relies on the fix_fft.cpp file by Tom Roberts which does the FFT heavy-lifting (original is here).

Also, note that I’m using a slightly different algorithm this time to determine the average signal value: the average of the last 256 samples is used as the center value subtracted from the next 256 samples. The outcome should be similar, as long as the signal is indeed symmetric around this value.
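
In outline, that block-averaging variant looks like this (hypothetical helper names, not the exact code from the sketch):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Mean of one block of samples - used as the center for the *next* block.
double blockMean(const std::vector<int>& block) {
    long sum = 0;
    for (int v : block)
        sum += v;
    return double(sum) / block.size();
}

// Mean absolute deviation of a block around a center taken from the previous
// block - equivalent to the running-average approach, as long as the signal
// really is symmetric around that center.
double meanAbsDeviation(const std::vector<int>& block, double center) {
    double total = 0;
    for (int v : block)
        total += std::fabs(v - center);
    return total / block.size();
}
```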

All in all a nice try, but it didn’t really provide much new insight (yet?).

GLCD scope on 220V

In Hardware, Software on Sep 27, 2011 at 00:01

I’m not willing to hook my Mac up to 220V through USB, whether through the DSO-2090 USB scope I have, or any other way – even if it’s tied to a galvanically isolated setup such as the recent current measuring setups. One mistake and it’d go up in smoke – I’d rather focus all my attention on keeping myself safe while fooling around with this AC mains stuff.

But there’s this little 100 KHz digital sampling scope sketch, based on the Graphics Board. Looks like it could be a great setup for this context, since it can run detached off a single AA battery.

It took some hacking on the original sketch to get a system which more or less syncs to the power-line frequency (well, just the internal clock, but that turns out to be good enough). Here’s what it shows with the input floating:


Definitely usable. The three different super-imposed waves are most likely an artifact of the scanning, which takes place every 60 ms.

One huge improvement for this type of repetitive readout is “digital phosphor”, like professional DSO’s have: leaving multiple traces on the screen to intensify the readout. Single traces on a GLCD like this one end up with very little contrast, due to the lag of liquid crystals. What I do now, is leave pixels on for 10 scans before clearing the display. It’s not quite digital phosphor (where each scan fades independently), but it’s pretty effective, as you can see. And this setup is a tad cheaper than that Agilent 3000X MSO I dream of…
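
The persistence trick itself is tiny: pixels are only ever OR-ed into the frame buffer, and the whole thing is wiped every tenth scan. A sketch of just that logic, independent of the actual GLCD library calls:

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// 128x64 1-bit frame buffer, one byte per 8 vertical pixels (GLCD-style).
struct PersistentDisplay {
    std::array<uint8_t, 128 * 64 / 8> fb {};
    int scans = 0;

    void plot(int x, int y) {        // turn a pixel on - never off mid-scan
        fb[x + (y / 8) * 128] |= uint8_t(1 << (y % 8));
    }
    void endScan() {                 // wipe the buffer only every 10th scan
        if (++scans >= 10) {
            fb.fill(0);
            scans = 0;
        }
    }
    bool isSet(int x, int y) const {
        return fb[x + (y / 8) * 128] & (1 << (y % 8));
    }
};
```

Real digital phosphor would fade each scan independently; this keeps the full frame buffer lit and clears it in one go, which is much cheaper on an ATmega.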

Here’s a readout with this setup tied to the 0.1 Ω shunt in the AC mains line, with the power off:


That’s three pixels of noise, roughly, i.e. some 10 mV.

With a 60 W light bulb turned on, we get this:


Not bad at all! It looks like with a bit of smoothing and averaging, one could turn this into an ON / OFF signal.

Alas, the sensitivity does leave something to be desired. With a 25 W light bulb:


That’s barely above the noise threshold. It might be difficult to obtain a reliable detection from this, let alone at lower power levels. The 1W power brick showed almost no signal, for example.

Note that with North America’s 110V, the readout would be twice as sensitive, since it’s measuring current – the same power draws twice the current at half the voltage.

Still, these results look promising. Here is the <cough> DSO with digital phosphor </cough> sketch I used:

Screen Shot 2011-09-26 at 15.52.38.png

This code can be found as “glcdScope50” example in GLCDlib on GitHub.

Fun stuff. Just look how simple it is to gain new insight: a few chips, a few lines of code – that’s all it takes!

Package management done right

In Software on Sep 26, 2011 at 00:01

Yeah, ok, this is probably a bit off topic…

I’m talking about installing Unix’y software on the Mac (which has a big fat GUI on top of a Posix-compliant OS).

There have been several projects in the past which have addressed the need to use a Mac as a Unix command-line system with all the packages coming out of the Linux, FreeBSD, etc. universes. The places where “tar”, “patch”, “make”, “autoconf”, “libtool”, and so on rule.

First there was Fink – implemented in Perl:

The Fink project wants to bring the full world of Unix Open Source software to Darwin and Mac OS X. We modify Unix software so that it compiles and runs on Mac OS X (“port” it) and make it available for download as a coherent distribution. Fink uses Debian tools like dpkg and apt-get to provide powerful binary package management. You can choose whether you want to download precompiled binary packages or build everything from source.

Then came MacPorts – implemented in Tcl:

The MacPorts Project is an open-source community initiative to design an easy-to-use system for compiling, installing, and upgrading either command-line, X11 or Aqua based open-source software on the Mac OS X operating system. To that end we provide the command-line driven MacPorts software package under a BSD License, and through it easy access to thousands of ports that greatly simplify the task of compiling and installing open-source software on your Mac.

And now there’s Homebrew – implemented in Ruby:

Homebrew is the easiest and most flexible way to install the UNIX tools Apple didn’t include with OS X. Packages are installed into their own isolated prefixes and then symlinked into /usr/local.

It’s interesting to note that all three systems were written in a scripting language. Which makes perfect sense, given that they offer a huge jump in programmer productivity and no downsides to speak of in this context.

As for my pick: Homebrew gets it right.

Homebrew (i.e. the “brew” command) is installed with a one-liner:

    /usr/bin/ruby -e "$(curl -fsSL"

This is a big deal, because the last thing you want is to use a package manager to overcome installation hassles, only to end up with… an extra step which introduces its own installation hassles!

The actual design of that one-liner is quite clever: it points to a script in a secure area, with full history of all changes made to that script, so you can make sure it won’t do weird (or bad) things with your system.

Homebrew doesn’t introduce new conventions (like /sw/ or /opt/, which need to be added to your exe search path). It’ll simply install the packages you ask for in /usr/local/bin etc (which means you don’t need to sudo all the time). And it does it intelligently, because it actually installs in /usr/local/Cellar and then puts symlinks in /usr/local/*. Which means there’s a way out of an installation, and there’s even a (drastic) way to get rid of all Homebrew-installed packages again: delete Cellar, and remove the dangling symlinks from /usr/local/*. Not that you’d ever need to – Homebrew supports uninstalls.

It’s succinct (and colorized):

Screen Shot 2011 09 24 at 0 22 56

No endless lists of compiler commands on my screen, telling me absolutely nothing, other than “CPU == busy!”.

It does the right thing, and works out all the dependencies:

    fairie:~ jcw$ brew search rrd
    fairie:~ jcw$ brew install rrdtool
    ==> Installing rrdtool dependency: gettext
    ==> Downloading
    ######################################################################## 100.0%
    ==> Downloading patches
    ==> Summary
    /usr/local/Cellar/gettext/ 368 files, 13M, built in 5.4 minutes
    ==> Installing rrdtool dependency: libiconv
    ==> Downloading
    ==> Summary
    /usr/local/Cellar/libiconv/1.14: 24 files, 1.4M, built in 70 seconds
    ==> Installing rrdtool dependency: glib
    ... etc, etc, etc ...
    ==> Installing rrdtool
    ==> Downloading
    ######################################################################## 100.0%
    ==> Patching
    patching file configure
    Hunk #1 succeeded at 31757 (offset 94 lines).
    Warning: Using system Ruby. RRD module will be installed to /Library/Ruby/...
    Warning: Using system Perl. RRD module will be installed to /Library/Perl/...
    ==> ./configure --prefix=/usr/local/Cellar/rrdtool/1.4.5 --mandir=/usr/local/Cel
    ==> make install
    /usr/local/Cellar/rrdtool/1.4.5: 148 files, 3.2M, built in 68 seconds
    fairie:~ jcw$ 

It does its work with “formulas”, i.e. Ruby scripts which describe how to fetch, build, and install a certain package. The formulas (formulae?) are managed on GitHub, which means that there is a massive level of collaboration going on: new packages, issue tracking, fixes, and discussion. Here’s what I got when self-updating it again after having done so one or two days ago:

    $ brew update
    remote: Counting objects: 294, done.
    remote: Compressing objects: 100% (103/103), done.
    remote: Total 250 (delta 174), reused 212 (delta 143)
    Receiving objects: 100% (250/250), 39.46 KiB, done.
    Resolving deltas: 100% (174/174), completed with 32 local objects.
       36f4400..99bc0b7  master     -> origin/master
    Updated Homebrew from 36f4400e to 99bc0b79.
    ==> New formulae
    apktool             hexedit             p11-kit             solid
    denyhosts           jbigkit             qi                  tinyfugue
    gearman-php         kbtin               shen
    graylog2-server     opencc              sisc-scheme
    ==> Updated formulae
    android-ndk         ffmpeg              libquicktime        python
    aqbanking           frink               libraw              python3
    audiofile           fuse4x              libvirt             sleepwatcher
    cassandra           fuse4x-kext         nasm                tomcat
    class-dump          gflags              nginx               transcode
    csshx               google-sparsehash   nss                 tsung
    dash                gpsbabel            pdfjam              xml-security-c
    dvtm                gwenhywfar          pixman*
    elasticsearch       libiconv*           pos

But best of all IMO, the formulas use an almost declarative style. Here’s the one for re2c I just installed:

Screen Shot 2011 09 24 at 0 33 33

It’s not hard to guess what this script does. No wonder that so many people submit and tweak such formulas.

Note that there’s neither a description, nor explicit version handling in there (other than what can be gleaned from the download URL). Brew is an installer, not a catalog. Wanna know more? Visit the home pages. Brilliant.

It’s great to see the Ruby community adopt a concise, declarative, and Domain Specific Language style. Nothing new or revolutionary – has been done many times before – but nevertheless underappreciated, IMNSHO.

Hacking around in software

In Software on Sep 24, 2011 at 00:01

Here’s a web page I’ve been designing recently – one of many planned for the JeeMon system I’m working on:

Screen Shot 2011 09 20 at 13 00 26

A couple of notes:

  • there are three “master” tables on this page: Devices, Interfaces, and Drivers
  • devices are a 1:N mapping between drivers and interfaces
  • … except that some devices are remote, and not tied to a local interface
  • clicking on any row of these three tables displays details in the rightmost column
  • drivers are listed but not managed on this page, that’s a software admin task

I’m describing this here for a number of reasons (other than showing off what I’ve been spending my days on).

First of all, you can see that this more or less covers what needs to be set up in a home monitoring and/or home automation system. I like to see everything at a glance, i.e. on a single page and with as little navigation as possible. With this setup, I hope to keep it all essentially on one page. Whereby the right-side column may vary widely, depending on the device / interface / driver.

But it doesn’t really need to be about home automation at all. A very similar setup could be used to hook up to devices which I’m experimenting with in the lab, or devices which act like measuring instruments, or control some aspect of an experiment. Anything related to Physical Computing, really.

The other thing that interests me is the “degree of variety” needed to cover all the cases. Many devices will simply collect or send out one or more values, but not all devices are like that.

The RF12demo sketch is essentially a gateway from and to an entire group of RFM12B-based nodes. Each node can have its own driver. The ookRelay2 sketch is similar: it collects data from quite different devices, each of them sending out their specific packets with their specific types of measurements. It differs from RF12demo, in that it contains decoders for all the devices it knows about. There are no separate KS300 drivers, etc (although perhaps there should be, since some code is currently duplicated in the CUL driver).

The autoSketch driver is yet another beast. It listens on a serial interface for a line of the form “[…]” and when it sees one, tries to load a driver with the name in brackets and hand over control to it. This is the reason why just about all my sketches start off with the following boilerplate code:

    void setup() {
        Serial.begin(57600);            // typical rate used throughout these sketches
        Serial.println("\n[mySketch]"); // announce the sketch name in brackets
        ...
    }

When associated with the autoSketch driver, this will automatically load a “mySketch” driver, if present. Which in turn means that all you have to do is plug in the JeeNode (JeeLink, Arduino, etc) and JeeMon will automatically detect it and start communicating with it. IOW, plug-and-play – with a simple implementation.

This is why there’s a “Use autoSketch for new interfaces” checkbox. I could have called it “Enable plug & play”.

But although this web page is functional in my test setup, it’s far from finished. The hardest part, which I want to get right, is to make the page completely dynamic. Right now, a click on a row will update the details via Ajax, but nothing will happen when there is no user interaction. What I want is for the page to adjust automatically when a JeeLink is plugged in (on the web server side). What I really want is to generalize that mechanism to everything shown on any web page. The challenge is to do this without tedious periodic polling or complete table refreshes – just “pushing” changes in an event-driven manner. Events on the server side, that is.

The visual layout, styling, and behavior of a page like this has become very simple with today’s tools:

  • jQuery is the core JavaScript library
  • jQuery UI is used as basic style engine (the tabbed group on the right, for example)
  • jQuery TableSorter adds support for (surprise!) table sorting via their headers
  • Knockout is used to manage a lot of the user-facing behavior in the browser
  • 960 grid regulates the page structure flow in CSS
  • there is a small amount of custom CSS and custom Javascript (about 20 lines each)
  • the entire page is implemented as a single Tcl source file (175 lines so far)

There are a few trivial utility functions in there, which are bound to migrate to a more general module once I better understand the trade-offs and benefits of re-use. And the rest is gobbledygook which looks like this:

Screen Shot 2011 09 23 at 22 19 41

This is the notation I came up with and described in my Playing with indentation post a short while back. This particular code describes the HTML text returned for drivers in the “details” column, via Ajax.

This notation gets expanded to HTML, but the real value of it is that describing the HTML on a page is fun, because it’s now trivial to insert grouping levels (i.e. mostly <div>’s) and rearrange stuff until it’s just right.

One piece of the puzzle

In Software on Sep 22, 2011 at 00:01

The measurement anomalies of the recent experiments are intriguing. I don’t want to just tweak things (well, up to a point) – I also want to explain what’s happening! Several insights came through the comments (keep ’em coming!).

Let me summarize the measurement algorithm: I measure the < 1 VAC peak-to-peak voltage ≈ 5000 times per second, and keep track of a slow-moving average (by smoothing with a factor of 1/500-th). Then I calculate the arithmetic mean of the absolute difference between each measured value and that average. Results are reported exactly once every second.

Some notes:

  • noise should cancel out for both the average and the mean values, when the signal is large
  • noise will not cancel out if it is larger than the fluctuation, due to the absolute function
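
The second note is worth a tiny simulation: with no signal at all, symmetric noise averages out in the mean, but not in the mean absolute deviation – which is exactly why there’s a noise floor below which nothing registers. A sketch with synthetic alternating noise, not real ADC data:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// The quantity the sketch reports: mean absolute deviation around a center.
double meanAbsDev(const std::vector<double>& v, double center) {
    double total = 0;
    for (double x : v)
        total += std::fabs(x - center);
    return total / v.size();
}

// Alternating +/-n "noise" with no underlying signal: the plain mean is 0,
// but |noise| averages to n - a floor below which no signal can be seen.
std::vector<double> pureNoise(int count, double n) {
    std::vector<double> v(count);
    for (int i = 0; i < count; ++i)
        v[i] = (i % 2) ? n : -n;
    return v;
}
```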

There’s also another problem:

JC s Doodles page 15

I’m not synchronizing the averaging period to the power-line frequency, nor taking zero crossings into account.

This matters, and would explain up to some 2% variation in the measurement. Here’s why:

  • each 1-second sampling period measures about 50 cycles
  • let’s assume it’s 49-cycles-and-a-bit
  • the extra “bit” will be residuals at both ends
  • the 49 cycles will be fine, averaged from zero-crossing to zero-crossing
  • but the ends may “average” to anything between 0 and 100% of the true average for entire cycles
  • so 1 of the 50 cycles may be completely off, i.e. up to 2% error (always too low)
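
The size of that end effect can be checked numerically: averaged over whole cycles, |sin| comes out at 2/π ≈ 0.6366, while a window with a small residual starting at a zero crossing reads low. A quick simulation (clean sine assumed, which real mains certainly isn’t):

```cpp
#include <cassert>
#include <cmath>

const double PI = 3.14159265358979323846;

// Mean of |sin| over a window of `cycles` cycles, sampled finely. Over whole
// cycles this converges to 2/pi; a fractional tail starting at a zero
// crossing drags the result down, mimicking the unsynchronized 1 s window.
double meanAbsSine(double cycles, int samplesPerCycle = 1000) {
    int n = int(cycles * samplesPerCycle);
    double total = 0;
    for (int i = 0; i < n; ++i)
        total += std::fabs(std::sin(2 * PI * i / samplesPerCycle));
    return total / n;
}
```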

So it looks like there are two measurement issues to deal with: noise and the lack of zero-crossing syncing.

It doesn’t quite explain the extreme values I’ve been seeing so far, but for now I’ll assume that an excessive amount of noise is being picked up in my crude breadboarded circuit and all the little dangling wires.

Shielding, digital noise filtering, and syncing to zero-crossings… quite a few things to try out!

RFM12B Command Calculator

In Software on Sep 18, 2011 at 00:01

I wanted to learn a bit more about how to implement non-trivial forms in HTML + JavaScript. Without writing too much HTML, preferably…

There are a few versions of a tool floating around the web, which help calculate the constants needed to configure the RFM12B wireless module – such as this recent one by Steve (tankslappa).

Putting one and one together – it seemed like an excellent exercise to try and turn this into a web page:

Screen Shot 2011-09-17 at 15.47.21.png

If only I had known what I had gotten myself into… it took more than a full day to get all the pieces working!

In fairness, I was also using this as a way to explore idioms for writing this sort of stuff. There is a lot of repetitiveness in there, mixed with a frustrating level of variation in the weirdest places.

Due to some powerful JavaScript libraries, the result is extremely dynamic. There is no “refresh”. There is not even a “submit”. This is the sort of behavior I’m after for all of JeeMon, BTW.

The calculator has been placed on a new website. The page is dynamically generated, but you can simply save a copy of it on your own computer, because all the work happens in the browser. The code has been (lightly) tested with WebKit-based browsers (i.e. Safari and Chrome) and with Firefox. I’d be interested to hear how it does on others.

If you want to look inside, just view the HTML code, of course.

As I said, it’s all generated on-the-fly. From a page which is a pretty crazy mix of HTML, CSS, JavaScript, and Tcl. With a grid system and a view model thrown in.

You can view the source code here. I’ve explored a couple of ways of doing this, but for now the single-source-file-with-everything-in-it seems to work best. This must be the craziest software setup I’ve ever used, but so far it’s been pretty effective for me.

Why use a single programming language? Let’s mash ’em all together!

Power Measurement (ACS) – code

In Software on Sep 15, 2011 at 00:01

For yesterday’s setup, I had to write a bit of code. Here’s the main part:

Screen Shot 2011 09 14 at 12 38 44

(this “powerACS” sketch can be found here)

The idea is to keep reading the analog pin as often as possible (several thousand times per second), and then keep track of its moving average, using this little smoothing trick:

    avg = (499L * avg + value) / 500;

Each new measurement adds 1/500th weight to the average. I didn’t want to use floating point, so it takes some care to avoid long int overflow. Here’s how I scale the input (0 .. 1023) to microvolts (0 .. 3,300,000):

    uint32_t value = measure.anaRead() * (3300000L / 1023);

Note that 500 x 3300000 is under the max of a 32-bit int, so the above “avg” calculation is ok.

It’s easy to calculate the offset, once we know the average voltage. Keep in mind that we’re measuring 50 Hz AC, and that the result we’re after is the fluctuation – i.e. what you get when putting a multimeter in VAC measurement mode. The way to do that is to calculate the “average absolute difference from the average”:

JC s Doodles page 13

IOW, what I want is the average height of those red bars. I flipped the lower half up because we’re interested in the absolute difference (otherwise the bars would just cancel out).

Here’s the calculation:

    if (value > avg)
      total += value - avg;
    else if (value < avg)
      total += avg - value;

When the time comes to report a “range” reading, we calculate it as follows and reset the other variables:

    uint32_t range = total / count;
    total = count = 0;

Here’s some sample debug output on the serial port:

    2560650 2569286 5916 31973363 5404
    2576775 2569256 5911 32175178 5443
    2563875 2569266 5917 32888792 5558
    2567100 2570205 5911 33200506 5616
    2573550 2569935 5917 32798060 5543
    2580000 2565675 5910 183251691 31007  <- 100 W light bulb
    2573550 2565108 5916 356840654 60317
    2570325 2565479 5916 355916529 60161
    2512275 2565467 5910 355676919 60182
    2525175 2565688 5916 356664359 60288
    2486475 2566547 5909 354854663 60053
    2480025 2566631 5916 355567509 60102
    2480025 2568658 5910 355608745 60170
    2583225 2569370 5916 178542547 30179  <- off
    2576775 2569122 5915 33193018 5611
    2570325 2569274 5911 33236267 5622
    2560650 2568988 5917 32873935 5555
    2576775 2569290 5911 32733630 5537
    2567100 2568564 5917 33390738 5643
    2560650 2569179 5917 32917499 5563
    2576775 2568640 5911 32354114 5473
    2570325 2569121 5916 32820006 5547
    2576775 2568319 5917 33896284 5728
    2570325 2569996 5911 33120608 5603
    2567100 2568514 5917 37697489 6371    <- 1.1 W power brick
    2550975 2568582 5911 36764137 6219
    2567100 2568767 5916 37434853 6327
    2567100 2568813 5911 36272763 6136
    2563875 2568659 5917 37738504 6377
    2563875 2568355 5911 37911395 6413
    2567100 2568586 5917 37444158 6328
    2563875 2568672 5916 37607002 6356
    2580000 2568879 5911 37293083 6309    <- off
    2570325 2568330 5917 33082874 5591
    2563875 2567713 5911 33007567 5584
    2567100 2568380 5917 32770050 5538
    2580000 2568846 5910 33002639 5584
    2570325 2568844 5917 33201661 5611

You can see the 60 mV signal when the power consumption is 100 W. Which is a bit low: I would have expected 100 W / 230 V * 0.185 V/A = 80 mV. But it turns out that the lamp really is an 87 W lamp, so that makes it 70 mV. I’ll attribute the remaining difference to my crude setup, for lack of a better explanation.

What’s worse though, is that there is always a 5.5 mV “hum” when no current is flowing (even when the whole circuit isn’t anywhere near a power line). One possible explanation could be ADC measurement noise: the 10-bit ADC resolution is 3.3 mV. Then again, note how multiple measurements do lead to a much finer resolution: the 60 mV readout fluctuates by only a fraction of one millivolt. Noise actually improves resolution, when averaged over multiple readings.
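
That dithering effect is easy to demonstrate: quantize a voltage that falls between two ADC steps, with and without added noise, and average. A sketch with deterministic dither (a real ADC sees random noise, but the mechanism is the same):

```cpp
#include <cassert>
#include <cmath>

// One step of an ideal 10-bit ADC with a 3.3 V reference: ~3.22 mV.
const double STEP_MV = 3300.0 / 1024;

// Round a voltage (in mV) to the nearest ADC step.
double quantize(double mv) {
    return std::floor(mv / STEP_MV + 0.5) * STEP_MV;
}

// Average many quantized readings with a dither sweeping one full step.
// Without noise, every reading would be identical and the quantization
// error would never average out; with ~1 LSB of noise it nearly vanishes.
double ditheredReading(double mv, int count = 1000) {
    double total = 0;
    for (int i = 0; i < count; ++i) {
        double dither = (double(i) / count - 0.5) * STEP_MV;
        total += quantize(mv + dither);
    }
    return total / count;
}
```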

Still, the 5.5 mV baseline value makes it hard to reliably measure really low power levels. The 1.1 W test was from an unloaded power brick, i.e. an inductive load. It causes the output to jump from 5.5 mV to only 6.3 mV, which is not a very distinct result. Another 0.3 W test load didn’t even register.

Conclusion: this detects current fairly well, was very easy to set up, and offers galvanic isolation protection. But it does require a hookup to AC mains power and it’s not super effective at really low power levels.

Sending RF12 packets over UDP

In Software on Sep 13, 2011 at 00:01

As I mentioned in a recent post, the collectd approach fits right in with how wireless RF12 broadcast packets work.

Sounds like a good use for an EtherCard + JeeNode pair (or any other ENC28J60 + RFM12B combo out there):


The idea is to pass all incoming RF12 packets to Ethernet using UDP broadcasts. By using the collectd protocol, many different tools could be used to further process this information.

What I did was take the etherNode sketch in the new EtherCard library, and add a call to forwardToUDP() whenever a valid RF12 packet comes in. The main trick is to convert this into a UDP packet format which matches collectd’s binary protocol:

Screen Shot 2011-09-12 at 17.50.37.png

The sendUdp() code has been extended to recognize “multicast” destinations, which is the whole point of this: with multicast (a controlled form of broadcasting), you can send out packets without knowing the destination.

The remaining code can be found in the new JeeUdp.ino sketch (note that I’m now using Arduino IDE 1.0beta2 and GitHub for all this).

It took some work to get the protocol right. So before messing up my current collectd setup on the LAN here at JeeLabs, I used port 25827 instead of 25826 for testing. With JeeMon, it’s easy to create a small UDP listener:

And sure enough, eventually it all started to work, with RF12 packets getting re-routed to this listener:

Screen Shot 2011-09-12 at 20.04.08.png

Now the big test. Let’s switch the sketch to port 25826 and see what WireShark thinks of these packets:

Screen Shot 2011-09-12 at 17.15.16.png

Yeay – it’s workin’ !

The tricky part is the actual data. Since these are raw RF12 packets, with a varying number of bytes, there’s no interpretation of the data at this level. What I ended up doing, is sending the data bytes in as many collectd 64-bit unsigned int “counter values” as needed. In the above example, two such values were needed to represent the data bytes. It will be up to the receiver to get those values and convert them to meaningful readings. This decoding will depend on what the nodes are sending, and can be different for each sending JeeNode.
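
The packing step itself can be sketched as follows – this only shows the byte-to-counter conversion, not collectd’s part headers, and the big-endian layout is my assumption about how the receiver unpacks it:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Pack an arbitrary run of payload bytes into as many 64-bit unsigned
// integers as needed, zero-padding the last one. Each uint64 then travels
// as one collectd "counter" value; the receiver reverses this to recover
// the raw RF12 payload bytes.
std::vector<uint64_t> packBytes(const uint8_t* data, size_t len) {
    std::vector<uint64_t> out((len + 7) / 8, 0);
    for (size_t i = 0; i < len; ++i)
        out[i / 8] |= uint64_t(data[i]) << (56 - 8 * (i % 8));  // big-endian
    return out;
}
```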

I’ve left the original web browser in as well. Here is the “JeeUdp” box, as seen through a web browser:

Screen Shot 2011-09-12 at 18.02.28.png

(please ignore the old name in the title bar, the name is now “JeeUdp”)

It’s not as automatic out of the box as I’d really like. For one, you have to figure out which IP address this unit gets from DHCP – one way to do so is to connect it to USB and open the serial console. The other bit is that you need to configure the unit to set its name, the UDP port, and the RF12 settings to use. There’s a “Configure” link on the web page to do this – at some point, I’d like to make JeeMon aware of this, so it can do the setup itself (via http). And the last missing piece of the puzzle is to hook this into the different drivers and decoders, to interpret the data from these UDP packets in the same way as with a JeeLink on USB.

Ultimately, I’d like to make this work without any remote setup:

  • attach Ethernet and power to this box (any number of them)
  • each box starts reporting its status via UDP (including its IP address)
  • a central JeeMon automatically recognizes these units
  • you can now give a name to each box and fill in its RF12 configuration
  • packets start coming in, so now you can specify the type of each node
  • decoders kick in to pick up the raw data and generate meaningful “readings”
  • that’s it – easy deployment of JeeNode-based networks is on the horizon!

Not there yet, but all the essential parts are working!

A site for the home – part 2

In Software on Sep 12, 2011 at 00:01

I’ve been making good progress since the recent post.

Some things work, some things don’t when you use “off-the-shelf Open Source”. For example, I started using Twitter’s Bootstrap as the CSS grid framework for page layout. It looks great, but there was too much interference with other components, such as jQuery DataTables. Well, either that or I simply don’t get it. So I ended up removing it all, in favor of the 960 Grid System (note how all JavaScript development seems to be happening on GitHub).

So now I’ve got a first rough cut at the “Niner” layout, i.e. 3×3 pages accessed via tabs on the bottom/right. Here’s a screen shot from an iPad:


The tabs don’t look so great (I’m so inept at GUI design that I don’t even try), but the general design is shaping up nicely. The area at the bottom left shows the last two lines of the log on the server, and it’s updating in real time. So is the table itself, using Server-Sent Events.

With this sort of dynamics, all sorts of details start to jump out. My first approach was to have one page per URL, i.e. “” for page 1, etc. But that causes a full page refresh, which is too disruptive, especially with that log area constantly updating.

Which is where JavaScript and Ajax come in, of course. So now navigation only updates the main area – not the bottom & right-side borders. I’m only testing with Safari/Chrome, and occasionally FireFox, BTW.

The indentation idea I described earlier is also working out nicely. Here’s the main page description which produces the HTML sent to the browser:

Screen Shot 2011 09 11 at 22 39 33

(as you can see, there are limits to syntax coloring when mixing several languages in a single source file!)

And here’s the description of the page with that table shown above:

Screen Shot 2011 09 11 at 22 40 15

Let me just say that I’m “over” HTML. Bye bye, <div> et al. Good riddance.

PS. Oh yes, and here’s the fun part – this defines the structure of the site:

Screen Shot 2011 09 12 at 0 02 40

Tabs and titles are automatically generated unless overridden in this layout.

A site for the home

In Software on Sep 10, 2011 at 00:01

I’m building a “home monitoring and automation” system. Not just the hardware “nodes”, but also the software side of things, i.e. a server running permanently. Web-based, so that it can be accessed from anywhere if needed.

I’ve got a lot of code lying around here, ready to be put to use for just that. It’s not that hard to create a website (although truly dynamic displays are perhaps a little harder).

One thing I’ve been struggling with for a long time, though, is the overall design. Not so much the look & feel, but the structure and behavior of it all.

I’m not too keen on systems with elaborate menu structures and tons of dialogs to jump through – even scrolling messes with our innate sense of location. Been there, done that. Modes suck – they just create labyrinths.

What I’ve come up with is probably not for everyone (heh, nothing new here). An entire system, organized as nine pages. No more, no less. Instant navigation with a keyboard (digits, function keys, whatever), and trivial navigation with a mouse or touch screen:

Web layout

Even a laptop screen with 3+3 buttons (or a joystick) could navigate to any page with just two clicks.

Here is a first exploration of what these pages could be for (vaporware, yeay!):

Web pages

Each page is nearly the full screen, with only the bottom and right-side borders used. And each page can have any content, of course: tabs, sidebars, widget collections, it’s still all up in the air. There’s plenty of room to extend this system infinitely, using HTML, CSS, and JavaScript.

One of the things I’d like to do with this, is make the system automatically return to the main Status Overview page when idle for a minute or so. With the option to “pin” one of the other pages to prevent this from happening.
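That idle-timeout behavior boils down to a tiny bit of state. A JavaScript sketch (all names invented for illustration, not actual JeeMon code):

```javascript
// Return to the Status Overview page after a minute of inactivity,
// unless the current page has been "pinned".
const IDLE_LIMIT_MS = 60 * 1000;   // "idle for a minute or so"

function shouldReturnHome(state, now) {
  if (state.pinned) return false;            // a pinned page stays put
  if (state.page === 'status') return false; // already on the overview
  return now - state.lastActivity >= IDLE_LIMIT_MS;
}
```

A periodic timer would then call this with the current time and navigate home when it returns true, and any click or keypress would refresh lastActivity.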

This might all seem very basic, inconsequential, and even arbitrary, but for me it’s a major step forward. I’ve been running around in circles for ages, because there was no structure in place to start assembling the different parts of the site. It’s all nice and well to develop stuff and tinker endlessly behind the scenes, but in the end this still needs to lead to a practical setup.

So now there’s a plan!

Playing with indentation

In Software on Sep 9, 2011 at 00:01

Bear with me – this isn’t directly related to Physical Computing, but I’ve been having fun with some code.

If you’ve ever written any web apps or designed a web site, chances are that you’ve also written some HTML, or at least HTML templates. Stuff like this, perhaps:

    <!DOCTYPE html>
    <html>
      <head>
        <meta charset='utf-8' />
        <title>Flot graph demo</title>
        <script type='text/javascript' src='.../jquery.flot.js'></script>
        <script type='text/javascript'>
          $(function(){ ... });
        </script>
        <style type='text/css'>
          #placeholder { width: 600px; height: 300px; }
        </style>
      </head>
      <body>
        <div id='placeholder'></div>
      </body>
    </html>

Well, I’m getting a bit tired of all this nesting. The essence of it is buried far too deeply inside what I consider line noise. There are currently a number of developments related to HTML, CSS, and JavaScript which might change this sort of development considerably. Much of it seems to be driven by the Ruby and Rails community, BTW.

Some examples: for HTML, there is Haml. For CSS, there is Sass. And for JavaScript there is CoffeeScript.

All of these have in common (along with Python and YAML) that they treat indentation as a significant part of the notation. Before you raise an eyebrow: I know that the opinions on this are all over the map. So be it.

I wanted to try this with JeeMon, so after a bit of coding, I ended up with a “Significant Indentation Formatter” and then applied that to HTML. Here’s the above, re-written in a HAML-like notation:

        title: Flot graph demo
          $(function(){ ... });
          #placeholder { width: 600px; height: 300px; }

It generates the same HTML as above, but there’s less to write. Even less in fact, because I’ve also integrated it with JeeMon’s templating system:

        title: Flot graph demo
        [JScript includes flot]
        [JScript wrap $js]
        [JScript style { #placeholder { width: 600px; height: 300px; } }]

Here’s a slightly more elaborate KAKU send example which illustrates basic template looping:

        title: KAKU send
        [JScript includes ui]
        [JScript wrap $js]
        % foreach x {1 2 3 4}
            button#on$x: On $x
            button#off$x: Off $x
          label: Group:
          % foreach x {I II III IV}
            label/for=g$x: $x
          label: House Code:
          % foreach x {A B C D E F G H I J K L M N O P}
            option/value=$x: $x

So far, this system doesn’t use standard conventions, is undocumented, hasn’t been used in anger or proven itself to handle more complex layouts, has almost no error handling, and has only been used by yours truly for a few hours. Still, I’ve adjusted all the examples in JeeRev to gain a bit more experience with this approach.

I like it. The above is pretty close to how I want to design the structures and sections of a web page.
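To give a feel for what such a formatter does, here’s a toy version of the core trick in JavaScript – JeeMon’s real formatter is written in Tcl and handles far more (attributes, templating), so treat this purely as an illustration of indentation-driven nesting:

```javascript
// Toy indentation-to-HTML converter. Each line is "tag" or "tag: inline
// text"; nesting is two spaces per level. Dedenting closes open tags.
function indentToHtml(src) {
  const out = [];
  const stack = [];                       // open tags, one per indent level
  for (const raw of src.split('\n')) {
    if (!raw.trim()) continue;
    const depth = (raw.length - raw.trimStart().length) / 2;
    while (stack.length > depth) out.push('</' + stack.pop() + '>');
    const line = raw.trim();
    const m = line.match(/^(\w+):\s*(.*)$/);
    if (m) {
      out.push('<' + m[1] + '>' + m[2] + '</' + m[1] + '>');
    } else {
      out.push('<' + line + '>');
      stack.push(line);
    }
  }
  while (stack.length) out.push('</' + stack.pop() + '>');
  return out.join('');
}
```

Feeding it a nested outline such as `html` / `head` / `title: Demo` yields properly closed HTML – all the angle brackets come back out, you just never have to type them.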

Low-end Linux

In Hardware, Software on Sep 8, 2011 at 00:01

It’d be neat to use a very low-end embedded Linux system for running (part of) my home monitoring and automation system.

I’ve mentioned the BifferBoard before, twice in fact. With a BogoMips rating of 56, it’s as low end as it gets.

Unlike most low-end boards, which are ARM-based, the BifferBoard runs on an “S3282/CodeTek” which is 486SX-compatible, so many 32-bit Linux binaries will work as is. I’m running a fairly standard Debian Lenny on mine, very convenient. It needs 650 MB, including a full gcc build chain.

IMO, there’s great value in such a low-power system. For one, with just 30 MB RAM and no swap support on the standard setup, you’re really forced to look at memory consumption. Running out of memory will cause a program to be terminated (it can easily be restarted with a watchdog shell script).

The other feature this board does not have is hardware floating point. It’s all emulated. Here’s what that means:

  $ ssh biffie jeemon tcl 'time {/ 1 2} 1000'
  36.735 microseconds per iteration
  $ ssh biffie jeemon tcl 'time {/ 1.0 2.0} 1000'
  336.717 microseconds per iteration

Versus my lowly 1.4 GHz laptop:

  $ jeemon tcl 'time {/ 1 2} 1000'
  0.569055 microseconds per iteration
  $ jeemon tcl 'time {/ 1.0 2.0} 1000'
  0.546295 microseconds per iteration

As you can see, it’s a whole different ballgame :)

Anyway, let’s try and get a bit more substantial code running on this setup. I’ll use JeeMon, since I’ve recently been doing some work to combine round-robin data storage with graphing in the browser:

Screen Shot 2011 09 07 at 17 10 29

The graphs on the right are 1-day / 1-week / and 2-month previews, but let’s ignore the actual graphs since there is not enough test data for them to be meaningful.

What I wanted to see, is whether this would run on a BifferBoard. Here’s my first attempt:

Screen Shot 2011 09 07 at 17 03 14

Whoops… out of memory. Not only that, it’s taking ages to generate the JSON data for each Ajax request coming from the browser. Here’s my laptop for comparison:

Screen Shot 2011 09 07 at 16 59 31

The laptop is generating data for 5 graphs (one is off-screen) in a fraction of a second, while the BifferBoard needs several minutes (before running out of memory).

Good. Because with such a slow process, it’s much easier to find the bottlenecks. It turned out to be in the round-robin storage module I recently coded (using a USB stick as a mass storage device doesn’t help either). Two hours later, things started working a lot better:

Screen Shot 2011 09 07 at 19 02 40

Progress! Still too slow for practical use… at least it’s no longer running out of memory. But although the BifferBoard is no longer maxed out, it is struggling – here’s a “vmstat 3” display, with one line per 3s:

Screen Shot 2011 09 07 at 20 13 16

I now realize that with my current settings, all data streams are aggregated to a larger “bucket” every 5 minutes. This probably explains these periods of 100% CPU activity.

Memory use is reasonable (with all interactive SSH shells logged out):

  $ ssh biffie free
               total       used       free     shared    buffers     cached
  Mem:         30000      28248       1752          0        444       8040
  -/+ buffers/cache:      19764      10236
  Swap:            0          0          0

There’s still a lot of floating point activity going on, because I store all data points as 64-bit doubles. This needs to be improved at some point. My hunch is that it will take care of the performance problem. But it’s nice to see that JeeMon can cope (sort of), even in such a very limited environment. A BifferBoard with JeeMon will definitely be sufficient as “slave” controller in a larger configuration.
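One common fix, for what it’s worth, is to give each data stream a fixed scale factor and store scaled integers, so the hot path stays integer-only and the float conversion happens just once on ingest and once for display. A sketch (in JavaScript, and an assumption on my part – not what JeeMon does today):

```javascript
// Store readings as scaled 32-bit integers instead of 64-bit doubles, so a
// CPU without hardware floating point only does integer arithmetic on the
// hot path. The per-series scale factor is an invented example.
function packReading(value, scale) {
  return Math.round(value * scale) | 0;  // e.g. 21.37 with scale 100 -> 2137
}

function unpackReading(stored, scale) {
  return stored / scale;                 // back to a float only for display
}

// Aggregating samples into a larger bucket then stays integer-friendly:
function bucketMean(samples) {           // samples are scaled integers
  const sum = samples.reduce((a, b) => a + b, 0);
  return Math.round(sum / samples.length);
}
```

On a 486SX-class board with emulated floating point, keeping aggregation in integers like this is exactly the kind of change that a 10x slowdown on float division would pay back.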

Power consumption is pretty neat: 2.85 W under full load and 2.20 W when idle. Measured at the AC outlet, i.e. including 5V power supply losses.

There are numerous low-end Linux systems out there. If the above can be made to work reasonably well, then it should work with most of them… ARM- / PPC- / x86-based, whatever.

For now, that BifferBoard is a good way to keep me honest, and on edge :)

Versions à la GitHub

In Software on Sep 7, 2011 at 00:01

As mentioned a few days ago, I’m in the process of moving all my software development to GitHub.

In a nutshell: Git is a Version Control system which lets you keep track of changes to groups of files, and do it in such a way that many people can make changes and nothing will ever get lost. You might feel lost if you really make a mess of it, but every change is tracked and can be recovered and compared against any other point in time. Based on git, GitHub takes it all to the next level, by fostering collaboration and combining tons of useful features – for free, via their website. GitHub is to git as JavaScript is to assembler (the analogy is a bit rough).

If you’re writing code, there is no excuse for not using a VCS. Even as a single developer, even if no-one else ever gets involved, and even if you’re the most organized person on the planet. An organized developer? Hah! :)

Life without a VCS looks like this:

  • let’s write some code – great, it works!
  • let’s try out something more difficult now
  • drat, didn’t work, I hope my editor undo buffer lets me back out
  • let’s make a copy of what worked before
  • now let’s write some more code – great, it works!
  • … time passes …
  • time to write code again – hm, I messed up
  • where was that copy I made earlier, again?
  • drat, it doesn’t have my latest changes
  • do I remember which part worked, and what I added?
  • … time passes …
  • I want to continue on that first attempt again
  • hmm, guess I’ll start from scratch
  • no wait, let’s make a copy before I mess up
  • … time passes, the disk fills with copies, none of them work …

Now let’s revisit that with a VCS:

  • write some code, check it in – add a brief comment
  • try out something more difficult now
  • check it in, let’s add a note that it’s still experimental
  • hm, can’t seem to get it to work
  • oh, well, I’ll go back to the previous version – easy!
  • writing more code, having fun, we’re on a roll
  • let’s check it in right away
  • … disaster strikes, my laptop breaks down
  • get it fixed or get a new one, check out from GitHub
  • where was I? oh yes, got one good version and two bad ones
  • let’s switch to that failed attempt again
  • wait, let’s first merge the latest good changes in!
  • cool, now I’ve got an up-to-date set of files to work on
  • oh, darn, still can’t make it work, let’s save it with a note
  • … time passes …
  • ok, time to review the different attempts and take it from there

Note that there are no more copies of files. Only a repository (on your laptop and on GitHub – but you still need to back it up, like everything else). The repository contains the entire history of all the changes, including notes, backing out steps, merges, everything.

A lot of this is possible with any modern VCS, including the “subversion” setup I’ve been using until now. But GitHub makes it effortless, and throws in a lot of nice extras. You still need a “git” application on your laptop (there are several versions), and if you’re using a Mac, you can use GitHub’s own app to communicate with GitHub without having to go through a browser.

Here’s what a “diff” looks like, i.e. a comparison of the source code, showing only the differences:

Screen Shot 2011 09 02 at 21 53 17

One line was replaced, two more lines were added at the end. It’s totally obvious, right?

Here’s a history of the “commits” made, i.e. the times when a new version was sent to git and GitHub:

Screen Shot 2011 09 02 at 21 51 01

Almost everything is a link – you can compare versions, find out what else was changed at the same time, see what else any of the developers is working on, and of course download a full copy of any version you want.

The other mechanism I’d like to describe is branching and merging. That change shown above was not made by me but by “Vliegendehuiskat” (hello, Jasper :) – and the way it works is quite neat, if you’ve never seen collaboration via VCS in action.

What Jasper did, was create a “fork” of JeeLib, i.e. a personal copy which knows exactly where it came from, when, and from which version of which files. And then he made a small change (without even telling me!), and at some point decided that it would be a good idea to get this change back into the official JeeLib code. So he submitted a “pull request”, i.e. a request for me to pull his changed version back in:

Screen Shot 2011 09 06 at 22 25 56

This generated a new entry in GitHub’s issue tracker and an email to me. I liked his change and with one click, merged it back in. Only thing left to do was to “pull” those changes to my laptop copy as well.

GitHub has a graphical “network” view to display all commits, forks, branches, and merges:

Screen Shot 2011 09 06 at 22 29 10

Each dot is a commit. The black line is my code, the blue diversion shows Jasper’s fork, his commit, and my merge. Hovering over any of these dots will show exactly what happened.

This illustrates the simplest possible case of GitHub for collaboration. If you want to see how an entire team puts git and GitHub at the center of a large-scale collaborative software development effort, read this article by Scott Chacon (who works with and at GitHub, BTW).

I’ve been moving lots of stuff to GitHub these past few days, not all of it related to JeeLabs. Here are the three main projects, as far as physical computing goes:

Screen Shot 2011 09 02 at 21 48 41

Note another nice feature: you can instantly see how active a project is. JeeLib is the brand new combination of the Ports and RF12 libraries, while EtherCard and GLCDlib have been around a bit longer (and the full change history has been imported). It’s been over half a year since Steve and I worked on GLCDlib, as you can see – with a small blip before the summer, when I added font support.

There are a few features which I haven’t enabled at GitHub. One of them is their wiki – because having yet another spot on the web with pages related to JeeLabs would probably do more harm than good. The Café is, as before, the place to look for documentation. There are ways to merge this stuff, so maybe I’ll go there one day.

A warning: the “sketches” on GitHub have all been adjusted for the new upcoming Arduino 1.0 IDE, and do not work with Arduino 0022, which is still the main version today. So I’m slightly ahead of the game with this stuff.

I’m really looking forward to how the software will evolve – and you’re cordially invited to be part of it :)

PS. I’m updating to GitHub fairly often because I work on two different machines – and this is the easiest and safest way to keep stuff in sync.

System status

In Software on Sep 6, 2011 at 00:01

Yesterday, the WordPress server dropped out. Looks like some joker decided to grab the whole site, no throttling, nothing. Oh well, I guess that’s what fat pipes lead to.

In principle, WordPress should be able to handle such things, but I think it caused the underlying MySQL to run out of memory and abort.

A restart of the WordPress VM fixed it, once I found out. But this gives me an excuse to describe some things I’ve been starting to set up here.

One of the things I did this summer, was set up a collectd process on most of the machines running here – collectd is a bit Linux-centric, but one of the things it can do is grab various system statistics and broadcast them on the LAN over UDP. There are loads of plugins, and you can interface to it with scripts to feed it anything you like.

The nice thing about UDP broadcasts, is that senders don’t need to know anything about receivers. As with the RF12 driver, you just kick stuff into the network and then it’s up to others to decide what to do with it, if anything.

Collectd is also extremely low overhead, both in terms of CPU & memory load and in terms of network load. I’ve enabled it permanently on five systems, sending out 50..100 system parameters each, once every 30 seconds.

Here’s a partial view of the information being collected:

Screen Shot 2011 09 05 at 12 57 52

This was generated in the browser using JeeMon and jstree, as an exercise (I’m looking for a more convenient JavaScript widget to adjust a live tree using JSON).

Broadcasting is very convenient, as the collectd tool and the RF12 driver have shown, because there’s no setup. In theory, packets can get lost without acknowledgements and re-transmissions. In practice, some do indeed get lost, but very rarely. No problem at all for periodic / repetitive data collection.

I created a small script to display system status on my development Mac, using a tool called GeekTool. It can put information right on the desktop, i.e. as unobtrusively as you want, really. Here’s what I see when there are no windows obscuring the desktop background:

Screen Shot 2011 09 05 at 12 42 27

On the left is a periodic “ps”, the output on the right is from “ssh myserver jeemon overview.tcl”. This is a remote call to the server using a small script to pull info out of a text file and format it. The output shows that there are 5 systems online (running collectd, that is), and that the script last ran at 12:41 in 39 ms.

Perhaps more interesting is that the server also collects this data from the LAN. To do this, I just launch JeeMon next to this “main.tcl” script:

    History group * 1w/5m 1y/6h
    collectd listen sysinfo

The first line is optional – it defines the parameters for historical round-robin data storage. So what I end up with, is the last 7 days of information, with details every 5 minutes, as well as up to one year of information aggregated per 6 hours.

The second line starts listening for collectd packets on the standard UDP port, tagging the results with the “sysinfo” prefix. As side effect, JeeMon will update a “stored/” text file once a minute with the current values of all variables plus some other details. This is what was used to generate the above GeekTool on-screen status. There’s a built-in web server to query and access the saved historical data.
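The round-robin idea behind those History parameters can be sketched as a fixed-size array indexed by time modulo the window – old slots simply get overwritten, so storage never grows. A JavaScript illustration (JeeMon’s actual storage layout differs):

```javascript
// Fixed-size round-robin history, in the spirit of "1w/5m": one slot per
// 5-minute step, 2016 slots covering a week, old data silently overwritten.
function makeHistory(stepMs, slots) {
  return { stepMs, slots, data: new Array(slots).fill(null) };
}

function record(hist, timeMs, value) {
  const slot = Math.floor(timeMs / hist.stepMs) % hist.slots;
  hist.data[slot] = value;               // disk usage stays fixed forever
}

function lookup(hist, timeMs) {
  return hist.data[Math.floor(timeMs / hist.stepMs) % hist.slots];
}
```

The “1y/6h” aggregate is the same trick with a coarser step, filled from the fine-grained buffer whenever a bucket completes.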

There’s no further setup. It’s up to the clients to decide what to collect and how often to send it. Data storage on the server is fixed, since this is stored in round-robin fashion (like rrdtool). It’s now collecting nearly 800 variables and using about 125 MB of disk space. Data collection adds 2% load on a 2.66 GHz CPU.

This could also be used to collect home sensor data.

The bits have moved

In News, Software on Sep 3, 2011 at 00:01

There are some changes planned in how things are going to be done around here. I want to streamline things a bit more, and make it easier for people to get involved.

One of the major changes is to move all JeeLabs open source software to GitHub:

Screen Shot 2011 09 02 at 22 01 27

The main reason for doing this, is that it makes it much easier for anyone to make changes to the code, regardless of whether these are changes for personal use or changes which you’d like to see applied to the JeeLabs codebase.

For the upcoming Arduino IDE 1.0 release (which appears to break lots of 0022 projects), I’ve moved and converted a couple of JeeLabs libraries so far:

  • Ports and RF12 have been merged into a single new library called JeeLib
  • the EtherCard and GLCDlib libraries have been moved without name change
  • all *.pde files have been renamed to *.ino, the new 1.0 IDE filename extension
  • references to WProgram.h have been changed to Arduino.h
  • the return type of the write() virtual function has been adjusted
  • some (char) casts were needed for byte values, to fix unintended hex conversion

If you run into any other issues while using this code with the new Arduino IDE 1.0beta2, let me know.

So what does all this mean for you, working with the Arduino IDE and these JeeLabs libraries?

Well, first of all: if you’re using IDE 0022, there’s no need to change anything. The new code on GitHub is only for the next IDE release. The subversion repositories and ZIP archives on the libraries page in the Café will remain as is until at least the end of this year.

New development by yours truly will take place on GitHub, however. This applies to all embedded software as well as the JeeMon/JeeRev stuff.

The new JeeLib should make it easier to use Ports and RF12 stuff – just use #include <JeeLib.h>.

Note that you don’t need to sign up with GitHub to view or download any of the JeeLabs software. The code stored there is public, and can be used by anyone. Just follow the zip or tar links in the README section at the bottom of the project pages, or use git to …