Computing stuff tied to the physical world

Archive for September 2013

Flashback – Batteries came later

In AVR, Hardware, Software on Sep 30, 2013 at 00:01

During all this early experimentation in 2008 and 2009, I quickly zoomed in on the little ATmega + RFM12B combo as a way to collect data around the house. But I completely ignored the power issue…

The necessity to run on battery power was something I had completely missed in the beginning. Everyone was running Arduinos off either a 5V USB adapter or – occasionally – off a battery pack, and rarely for more than a few days. Being “untethered” in many projects at that time meant being able to do something for a few hours or a day, and swapping or recharging batteries at night was easy, right?

It took me a full year to realise that a wireless “node” tied to a wire for power made no sense if it was meant to run for an extended period of time. Untethered operation also implies being self-powered:

DSC_0496

Evidently, having lots of nodes around the house would not work if batteries had to be swapped every few weeks. So far, I had just worked off the premise that these nodes needed to be plugged into a power adapter – but there are plenty of cases where that is extremely cumbersome. Not only do you need a power outlet nearby, you need fat power adapters, and you have to claim all those power outlets for permanent use. It really didn’t add up in terms of cost – especially since the data was already being exchanged wirelessly!

Thus started the long and fascinating journey of trying to run a JeeNode on as little power as possible – something most people probably know this weblog best for. Over the years, it led to some new (for me) insights, such as: transmission draws a “huge” 25 mA, but it’s still negligible because the duration is only a few milliseconds. By far the most important parameter to optimise for is sleep-mode power consumption of the entire circuit.
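To put some rough numbers on that insight (illustrative values, not measurements of a specific node): a node which transmits for 5 ms at 25 mA once a minute, and sleeps at 10 µA the rest of the time, averages out to:

    transmit:  25 mA × 5 ms / 60 s             ≈  2 µA average
    sleep:     10 µA, nearly 100% of the time  ≈ 10 µA average

Even a “huge” transmit current ends up contributing less than the sleep current does.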

In September 2010, i.e. one year after starting on this low-power journey, the Sleepy class was added to JeeLib, as a way to make it easy to enter low-power mode:

class Sleepy {
public:
    /// start the watchdog timer (or disable it if mode < 0)
    /// @param mode Enable watchdog trigger after "16 << mode" milliseconds 
    ///             (mode 0..9), or disable it (mode < 0).
    static void watchdogInterrupts (char mode);
    
    /// enter low-power mode, wake up with watchdog, INT0/1, or pin-change
    static void powerDown ();
    
    /// Spend some time in low-power mode, the timing is only approximate.
    /// @param msecs Number of milliseconds to sleep, in range 0..65535.
    /// @returns 1 if all went normally, or 0 if some other interrupt occurred
    static byte loseSomeTime (word msecs);

    /// This must be called from your watchdog interrupt code.
    static void watchdogEvent();
};

The main call was named loseSomeTime() to reflect the fact that the watchdog timer is not very accurate. Calling Sleepy::loseSomeTime(60000) gives you approximately one minute of ultra low-power sleep time, but it could be a few seconds more or less. To wait longer, you can call this code a few times, since 65,535 ms is the maximum value supported by the Sleepy class.

As a result of this little class, you can do things like put the RFM12B into sleep mode (and any other power-hungry peripherals you might have connected), go to sleep for a bit, and restore all the peripherals to their normal state. The effects can be quite dramatic, with a few orders of magnitude less power consumption. This extends a node’s battery lifetime from a few days to a few years – although you have to get all the details right to get there.
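Here’s what that pattern looks like in a sketch – a minimal sketch, assuming the usual JeeLib calls (rf12_sleep with RF12_SLEEP / RF12_WAKEUP), with a hypothetical doMeasurementAndSend() standing in for real application code:

#include <JeeLib.h>

ISR(WDT_vect) { Sleepy::watchdogEvent(); } // hook the watchdog up to Sleepy

void loop () {
    doMeasurementAndSend();          // hypothetical application code
    rf12_sleep(RF12_SLEEP);          // put the RFM12B radio to sleep
    for (byte i = 0; i < 5; ++i)     // chain calls to sleep for ~5 minutes,
        Sleepy::loseSomeTime(60000); // since each call maxes out at 65,535 ms
    rf12_sleep(RF12_WAKEUP);         // restore the radio to its normal state
}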

One important design decision in the JeeNode was to use a voltage regulator with a very low idle current (the MCP1700 draws 2 µA idle). As a result, when a JeeNode goes to sleep, it can be made to draw well under 10 µA.

Most nodes here at JeeLabs now keep on running for well over a year on a single battery charge. Everything has become more-or-less install and forget – very convenient!

Flashback – RFM12B wireless

In AVR, Hardware, Software on Sep 29, 2013 at 00:01

After the ATmega µC, the second fascinating discovery in 2008 was the availability of very low-cost wireless modules, powerful enough to get some information across the house:

wireless-modules

It would take another few months until I settled on the RFM12B wireless module by HopeRF, but the uses were quickly falling into place – I had always wanted to track the energy consumption in the house, to try and identify the main energy consumers. That knowledge might then help reduce our yearly energy consumption – either by making changes to the house, or – as it turned out – by simply adjusting our behaviour a bit.

Here is the mouse trap which collected energy metering data at JeeLabs for several years:

mousetrap

This is also when I found Modern Device’s Really Bare Bones Board (RBBB) Arduino clone by Paul Badger – all the good stuff of an Arduino, without the per-board FTDI interface, and with a much smaller form factor.

Yet another month would pass before I got a decent interrupt-driven driver working, plus some more tweaks to make transmission interrupt-based as well. The advantage of such a design is that you get the benefits of a multi-tasking setup without all the overhead: the RF12 driver does all its time-critical work in the background, while the main loop() can continue to use delay() calls and blocking I/O (including serial port transmission).

In February 2009, I started installing the RF12demo code on each ATmega, as a quick way to test and try out wireless. As it turned out, that sketch has become quite useful as a central receiving node, even “in production” – and I still use it as the interface for HouseMon.

In April 2009, a small but important change was made to the packet format, allowing more robust use of multiple netgroups. Without this change, a bit error in the netgroup byte will alter the packet in a way which is not caught by the CRC checksum, making it a valid packet in another netgroup. This is no big deal if you only use a single netgroup, but will make a difference when multiple netgroups are in use in the same area.

Apart from this change, the RF12 driver and the RFM12B modules have been remarkably reliable, with many nodes here communicating for years on end without a single hiccup.

I still find it pretty amazing that simple low-power wireless networking is available at such a low cost, with very limited software involvement, and suitable for so many low-speed data collection and signalling uses in and around the house. To me, wireless continues to feel like magic after all these years: things happening instantly and invisibly across a distance, using physical properties which we cannot sense or detect…

Flashback – Discovering the Arduino

In AVR, Hardware on Sep 28, 2013 at 00:01

It’s now just about 5 years since I started JeeLabs, so I thought it might be a good idea to bring back some notes from the past. Get ready for a couple of flashback posts…

One of the first posts on this weblog was about the Arduino, or rather Atmel’s AVR ATmega chip I had just discovered (it was the ATmega168 back then):

atmega328p

It was the Arduino IDE which made it trivial to play with this chip, an open source multi-platform software package combining an editor, the avr-gcc compiler, and the avrdude uploader. And despite the use of very Arduino-ish names such as “sketches” (firmware) and “shields” (add-on hardware), it was all nearly-standard C and C++, with a couple of convenient libraries to easily access I/O pins, the ADC, the serial port, timers, and more.

Five years ago, a fascinating brand new world opened up for me. I knew all about C and C++ as well as digital I/O and boot loaders, but a lot of really interesting and powerful technologies were new to me: I2C, SPI, embedded timer hardware, and above all: sleep mode. These chips were not only able to run at an amazing 16 MHz clock rate, they could actually go to sleep and use a watchdog timer to wake up again later, saving three orders of magnitude on energy consumption.

A small universe, controlled by standard software and able to interface to the real world.

Physical Computing. Low cost. Accessible to anyone.

Wow.

LevelDB, MQTT, and Redis

In Software on Sep 27, 2013 at 00:01

To follow up on yesterday’s post, here are some more interesting properties of LevelDB, in the context of Node.js and HouseMon:

  • The levelup wrapper for Node supports streaming, i.e. traversing a set of entries for reading, as well as sending an entire set of PUTs and/or DELs as a stream of requests. This is a very nice match for Node.js and for interfaces which need to be sent across the network (including to/from browsers).

  • Events are generated for each change to the database. These can easily be translated into a “live stream” of changes. This is really nothing other than publish / subscribe, but again in a form which matches Node’s asynchronous design really well.

  • Standard encoding conventions can be defined for keys and/or values. In the case of HouseMon, I’m using UTF-8 as the default key encoding and JSON as the value encoding. The latter means that the default “stuff” stored in the database is simply a JavaScript object, and this is also what you get back on access and traversal. Individual cases can still override these defaults.
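As a concrete illustration, here is a minimal sketch of these conventions, assuming the levelup API – the database location and keys are made up for this example:

levelup = require 'levelup'

db = levelup './store.db',
  keyEncoding: 'utf8'     # the HouseMon defaults mentioned above
  valueEncoding: 'json'   # values go in and come out as JavaScript objects

db.put 'status~roomnode~temp', { value: 143, time: Date.now() }, (err) ->
  # stream all entries whose keys start with "status~", in sorted order
  db.createReadStream(start: 'status~', end: 'status~\xff')
    .on 'data', (entry) -> console.log entry.key, entry.value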

Together, all these features and conventions end up playing together very nicely. One particularly interesting aspect is the “persistent pub-sub” model you can implement with this. This is an area where MQTT (Message Queue Telemetry Transport) has been gaining a lot of attention, for things like home automation and the Internet of Things (IoT).

MQTT is a message-based system. There are channels, named in a hierarchical manner, e.g. “/location/device/sensor”, to which one can publish measurement values (“publish /kitchen/roomnode/temp 17”). Other devices or software can subscribe to such channels, and get notified each time a value is published.

One interesting aspect of MQTT is its “RETAIN” functionality: for each channel, the last value can be retained, meaning that anyone subscribing to a channel will immediately be sent the last retained value, as if it just happened. This turns an ephemeral message-based setup into a state-based mechanism with change propagation, because you can subscribe to a channel after devices have published new values on it, yet still see what the best known value for that channel is, currently. With RETAIN, the MQTT system becomes a little database, with one value per channel. This can be very convenient after power-up as well.
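For comparison, here is roughly what that looks like from Node – a sketch assuming the MQTT.js client package, which is not what HouseMon itself uses:

mqtt = require 'mqtt'
client = mqtt.connect 'mqtt://localhost'

# publish with the retain flag, so the broker keeps the last value around
client.publish '/kitchen/roomnode/temp', '17', retain: true

# a subscriber connecting *later* still receives the retained value
client.subscribe '/kitchen/roomnode/temp'
client.on 'message', (topic, payload) ->
  console.log topic, payload.toString()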

LevelDB comes from the other side, i.e. a database with change tracking, but the effect is exactly the same: you can subscribe to a set of keys, get all their latest values, and be notified from that point on whenever any key in this range changes. This makes LevelDB and MQTT functionally equivalent, although LevelDB can handle much larger key spaces (note also that MQTT has features which LevelDB lacks, such as QoS).

It’s also interesting to compare this with the Redis database: Redis is another key-value store, but with more data structure options. The main difference with LevelDB is that Redis is memory-based, with periodic snapshots to disk, whereas LevelDB is disk-based, with caching in memory (and thus able to handle datasets which far exceed available RAM). Redis, LevelDB, and MQTT all support pub-sub in some form.

One reason for me to select LevelDB for HouseMon, is that it’s an in-process embedded database. This means that there is no need for a separate process to run alongside Node.js – and hence no need to install anything else, or keep a background daemon running.

By switching to LevelDB, the HouseMon installation procedure has been simplified to:

  • download a recent version of HouseMon from GitHub
  • install Node (which includes the “npm” package manager)
  • run “npm install” to get everything needed by HouseMon

And that’s it. At this point, you just start up Node.js and everything will work.

But it’s not just convenience. LevelDB is also so compact, performant, and scalable, that HouseMon can now store all raw data as well as a complete set of aggregated per-hour values without running into resource limits, even on a Raspberry Pi. According to my rough estimates, it should all end up using less than 1 GB per year, so everything will easily fit on an 8 or 16 GB SD card.

The LevelDB database

In Software on Sep 26, 2013 at 00:01

In recent posts, I’ve mentioned LevelDB a few times – a database with some quite interesting properties.

Conceptually, LevelDB is ridiculously simple: a “key-value” store which stores all entries in sorted key order. Keys and values can be anything (including empty). LevelDB is not a toy database: it can handle hundreds of millions of keys, and does so very efficiently.

It’s completely up to the application how to use this storage scheme, and how to define key structure and sub-keys. I’m using a simple prefix-based mechanism: some name, a “~” character, and the rest of the key, in which further fields are separated by more “~” characters. The implicit sorting then makes it easy to get all the keys for any prefix.

LevelDB is good at only a few things, but these work together quite effectively:

  • fast traversal in sorted and reverse-sorted order, over any range of keys
  • fast writing, regardless of which keys are being written, for add / replace / delete
  • compact key storage, by compressing common prefix bytes
  • compact data storage, by a periodic compaction using a fast compression algorithm
  • modifications can be batched, with a guaranteed all-or-nothing success on completion
  • traversal is done in an isolated manner, i.e. freezing the state w.r.t. further changes

Note that there are essentially just a few operations: GET, PUT, DEL, BATCH (of PUTs and DELs), and forward/backward traversal.
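With levelup, a batch looks like this – a sketch with made-up keys, applied atomically as a whole:

db.batch [
  { type: 'put', key: 'status~testnode~batt', value: { value: 3961 } }
  { type: 'put', key: 'reading~testnode~' + Date.now(), value: { batt: 3961 } }
  { type: 'del', key: 'status~oldnode~batt' }
], (err) ->
  console.error err if err  # either all three operations succeeded, or none did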

Perhaps the most limiting property of LevelDB is that it’s an in-process embedded library, and that therefore only a single process can have the database open. This is not a multi-user or multi-process solution, although one could be created on top by adding a server interface to it. The same holds for Redis, where all data is owned by a single (separate) server process.

To give you a rough idea of how the underlying log-structured merge-tree algorithm works, assume you have a datafile with a sorted set of key-value pairs, written sequentially to file. In the case of LevelDB, these files are normally capped at 2 MB each, so there may be a number of files which together constitute the entire dataset.

Here’s a trivial example with three key-value pairs:

one-level

It’s easy to see that one can do fast read traversals either way in this data, by locating the start and end position (e.g. via binary search).

The writing part is where things become interesting. Instead of modifying the files, we collect changes in memory for a bit, and then write a second-level file with changes – again sorted, but now not only containing the PUTs but also the DELs, i.e. indicators to ignore the previous entries with the same keys.

Here’s the same example, but with two new keys, one replacement, and one deletion:

two-levels

A full traversal would return the entries with keys A, AB, B, and D in this last example.

Together, these two levels of data constitute the new state of the database. Traversal now has to keep track of two cursors, moving through each level at the appropriate rate.

For more changes, we can add yet another level, but evidently things will degrade quickly if we keep doing this. Old data would never get dropped, and the number of levels would rapidly grow. The cleverness comes from the occasional merge steps inserted by LevelDB: once in a while, it takes two levels, runs through all entries in both of them, and produces a new merged level to replace those two levels. This cleans up all the obsolete and duplicate entries, while still keeping everything in sorted order.
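To make that merge step concrete, here is a toy version in CoffeeScript – just an illustration of the idea, not LevelDB’s actual code (a null value stands for a DEL marker):

merge = (older, newer) ->
  combined = {}
  combined[k] = v for k, v of older
  combined[k] = v for k, v of newer      # newer PUTs and DELs win
  result = {}
  for k in Object.keys(combined).sort()  # keep everything in sorted key order
    result[k] = combined[k] unless combined[k] is null
  result

# the example above: level 2 adds AB and D, replaces B, and deletes C
console.log merge { A: 1, B: 2, C: 3 }, { AB: 4, B: 5, C: null, D: 6 }
# → { A: 1, AB: 4, B: 5, D: 6 }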

The consequence of this process, is that most of the data will end up getting copied, a few times even. It’s very much like a generational garbage collector with from- and to-spaces. So there’s a trade-off in the amount of writing this leads to. But there is also a compression mechanism involved, which reduces the file sizes, especially when there is a lot of repetition across neighbouring key-value pairs.

So why would this be a good match for HouseMon and time series data in general?

Well, first of all, in terms of I/O this is extremely efficient for time-series data collection: a bunch of values with different keys and a timestamp, getting inserted all over the place. With traditional round-robin storage and slots, you end up touching each file in some spot all the time – this leads to a few bytes written in a block, and requires a full block read-modify-write cycle for every single change. With the LevelDB approach, all the changes will be written sequentially, with the merge process delayed (and batched!) to a different moment in time. I’m quite certain that storage of large amounts of real-time measurement data will scale substantially better this way than any traditional RRDB, while producing the same end results for access. The same argument applies to indexed B-Trees.

The second reason, is that data sorted on key and time is bound to be highly compressible: temperature, light levels, etc – they all tend to vary slowly in time. Even a fairly crude compression algorithm will work well when readings are laid out in that particular order.

And thirdly: when the keys have the time stamp at the end, we end up with the data laid out in an optimal way for graphing: any graph series will be a consecutive set of entries in the database, and that’s where LevelDB’s traversal works at its best. This will be the case for raw as well as aggregated data.

One drawback could be the delayed writing. This is the same issue with Redis, which only saves its data to file once in a while. But this is no big deal in HouseMon, since all the raw data is being appended to plain-text log files anyway – so on startup, all we have to do is replay the end of the last log file, to make sure everything ends up in LevelDB as well.

Stay tuned for a few other reasons why LevelDB works well in Node.js …

Small Gravity Plug update

In Hardware on Sep 25, 2013 at 00:01

The Gravity Plug contains a BMA020 3-axis accelerometer from Bosch Sensortec, which is being replaced by a newer BMA150 chip. Luckily, it’s fully compatible in both hardware and software, so the GravityPlug implementation in JeeLib will continue to work just fine with either one.

But the new chip does have more features: you can override its calibration parameters and defaults, by changing values in a new EEPROM area added to the chip. That in itself is probably not too useful for simple setups, but the new chip also offers access to its built-in temperature sensor, so I’ve added some extra code and expanded the gravity_demo sketch to take advantage of that:

#include <JeeLib.h>

PortI2C myBus (1);
GravityPlug sensor (myBus);
MilliTimer measureTimer;

struct {
  int axes[3];
  int temp;
} payload;

void setup () {
    Serial.begin(57600);
    Serial.println("\n[gravity_demo]");

    if (sensor.isPresent()) {
      Serial.print("sensor version ");
      sensor.send();
      sensor.write(0x01);
      sensor.receive();
      Serial.println(sensor.read(1), HEX);
      sensor.stop();
    }

    rf12_initialize(7, RF12_868MHZ, 42);
    sensor.begin();
}

void loop () {
    if (measureTimer.poll(1000)) {
        memcpy(payload.axes, sensor.getAxes(), sizeof payload.axes);
        payload.temp = sensor.temperature();

        rf12_sendNow(0, &payload, sizeof payload);

        Serial.print("GRAV ");
        Serial.print(payload.axes[0]);
        Serial.print(' ');
        Serial.print(payload.axes[1]);
        Serial.print(' ');
        Serial.print(payload.axes[2]);
        Serial.print(' ');
        Serial.println(payload.temp);
    }
}

Here is some sample output:

GRAV 2 -10 130 41
GRAV 0 -9 133 41
GRAV 4 -14 127 41
GRAV 0 -12 134 41
GRAV 4 -15 130 41
GRAV 4 -13 129 42
GRAV 3 -8 128 41

The reported temperature is in units of 0.5°C, so the above is reporting 20.5 .. 21.0 °C.

Unfortunately, the old chip isn’t reporting anything consistent as temperature:

GRAV 51 -31 -225 2
GRAV 50 -29 -225 0
GRAV 51 -30 -224 1
GRAV 52 -30 -227 1
GRAV 51 -30 -229 1
GRAV 52 -31 -223 2
GRAV 51 -31 -225 1

I have not found a way to distinguish these two chips at runtime – both report 0x02 in register 0 and 0x12 in register 1. Makes you wonder what the point is of having a chip “ID” register in the first place…

Anyway – in the near future, the Gravity Plug will be shipped with the new BMA150 chips. It should have no impact on your projects if you use these accelerometers, but at least you’ll be able to “sort of” tell which is which from the temperature readout. Note that the boards will probably still be marked as “BMA020” on the silkscreen for a while – there’s little point in replacing them, given that the change is so totally compatible either way…

In search of a good graph

In Software on Sep 24, 2013 at 00:01

I’ve been using the DyGraphs package for some time now in HouseMon, but it’s not perfect. Here is a generated graph for this morning, comparing two different ways to measure the same solar power generation:

Screen Shot 2013-09-23 at 09.20.52

The green series comes from a pulse counter which generates a pulse every 0.5 Wh. These values then get sent to HouseMon every 3 seconds. A fairly detailed set of readings, as you can see – this is a typical morning in autumn on a cloudy day.

The blue series comes from the SMA inverter, which I’m reading every 5..6 minutes. It might be more accurate, but there is far less data coming in.

The key is that the area (i.e. Wh) under these two lines should be identical, if both devices are accurately calibrated. But this doesn’t quite seem to be the case here. (see update)

One known problem is that DyGraphs plots these “step charts” incorrectly: the value of a step should be attached to the end of the step, but DyGraphs can only plot steps given a starting value. So the way to read these blue steps, I think, is to keep the points, but to think of the step lines as being drawn horizontally preceding each point, not trailing it. I’m not fully convinced that this explains all the differences in the above example, but it would seem to be a better match.

I’ve been looking at Highcharts and Highstock as an alternative graphing library to use. They seem to support steps-with-end-values. But this sort of use really has quite a few other requirements as well, such as: being able to draw large datasets efficiently, support for multiple independent time series (DyGraphs was a bit limited here as well), interactive and intuitive zooming, and good automatic choice of axis scales and fonts.

It would also be nice to be able to efficiently redraw a graph when new data points come in. Graphs used to be live in HouseMon, but at some point I ran into a snag and never bothered to fix it. For me, right now, live updating is less important than being able to generate accurate graphs. What’s the point of showing results on screen after all, if those results are not really represented truthfully…

I’ll stay on the lookout for a better graphing solution. All suggestions welcome!

Update – See @ward’s comments below for the proper explanation of what’s going on here. It’s not a graphing problem after all (or rather: a different one…).

Animation in the browser

In Software on Sep 23, 2013 at 00:01

If you’ve programmed for Mac OS X (my actual experience in that area is fairly minimal), then you may know that it has a feature called Core Animation – as a way to implement all sorts of flashy smooth UI transitions – sliding, fading, colour changes, etc.

The basic idea of animation in this context is quite simple: go from state A to state B by varying one or more values gradually over a specified amount of time.

It looks like web browsers are latching onto this trend as well, with CSS Animations. It has been possible to implement all sorts of animations for some time, using timed JavaScript loops running in the background to simulate this sort of stuff, as jQuery has been doing. Then again, getting such flashy features into a whole set of independently-developed (and competing) web browsers is bound to take more time, but yeah… it’s happening. Here’s an illustration of what text can do in today’s browsers.

In Angular, browser-side animation has recently become quite a bit simpler.

I’m no big fan of attention-drawing screen effects. There was a time when HTML’s BLINK tag got so much, ehm, attention that every page tried to look like a neon sign. Oh, look, a new toy tag, and it’s BLINKING… neat, let’s use it everywhere!

Even slide transitions (will!) get boring after a while.

But with dynamic web pages updating in real time, it seems to me that there is a case to be made for subtle hints that something has just happened. As a trial, I used a fade transition (from jQuery) in the previous version of HouseMon to briefly change the background of a changed row to very light yellow, which then fades back to white:

Screen Shot 2013-09-22 at 20.26.32

The effect seems to work well – for me anyway: it’s easy to spot the latest changes in a table, without getting too distracted when looking at some specific data.

In the latest version of HouseMon, I wanted to try the same effect using the new Angular animation feature, and I must say: the basic mechanism is really simple – you make animations happen by defining the CSS start settings, the finish settings, and the time period plus transition style to use. In the case of adding a row to a table or deleting a row, it’s all built-in and done by just adding some CSS definitions, as described here.

In this particular case, things are slightly more involved because it’s not the addition or deletion of a row I wanted to animate, but the change of a timestamp on a row. This requires defining a custom Angular “directive”. Directives are the cornerstone of Angular: they can (re-) define the behaviour of HTML elements or attributes, and let you invent new ones to do just about anything possible in the DOM.

Unfortunately, directives can also be a bit obscure at times. Here is the one I ended up defining for HouseMon:

ng.directive 'highlightOnChange', ($animate) ->                                  
  (scope, elem, attrs) ->                                                        
    scope.$watch attrs.highlightOnChange, ->                                     
      $animate.addClass elem, 'highlight', ->                                    
        attrs.$removeClass 'highlight'   

With it, we can now add a custom highlight-on-change attribute to an element to implement highlighting when a value on the page changes.

Here is how it gets used (in Jade) on the status table rows shown in yesterday’s post:

tr(highlight-on-change='row.time')

In prose: when row.time changes, start the highlight transition, which adds a custom highlight class and removes it again at the end of the animation sequence.

The CSS then defines the actual animation (using Stylus notation):

.highlight-add
  transition 1s linear all
  background lighten(yellow, 75%)
.highlight-add.highlight-add-active
  background white

I’m leaving out some finicky details, but that’s the essence of it all: set up an $animate call to trigger on the proper conditions, and then define what is going to happen in CSS.

Note that Angular picks the name suffixes for the CSS classes during the different phases of animation, and that this can then be used for numerous types of animation: you pick the CSS setting(s) at start and finish, and the browser’s CSS engine will automatically “run” the selected animation – covering motions, colour changes, and lots more (could fonts be emboldened gradually, or even be morphed in more sophisticated ways? I haven’t tried it).

The result of all this is a table with subtle row highlights wherever a new reading comes in. Triggered by some Angular magic, but the actual highlighting is a (not-yet-100%-standardised) browser effect.

Working with a CSS grid

In Software on Sep 22, 2013 at 00:01

As promised yesterday, here is some information on how you can get page layouts off the ground real quick with the Zurb Foundation 4 CSS framework. If you’re more into Twitter Bootstrap, see this excellent comparison and overview. They are very similar.

It took a little while to sink in, until I read this article about how it’s supposed to be used, and skimmed through a 10-part series on webdesign tuts+. Then the coin dropped:

  • you create rows of content, using a div with class row
  • each row has 12 columns to put stuff in
  • you put the sections of your page content in divs with class large-6 and columns
  • each section goes next to the preceding one, so 6 and 6 puts the split in the middle
  • any number from 1 to 12 goes, as long as you use at most 12 columns per row

So far so good, and pretty trivial. But here’s where it gets interesting:

  • each column can have more rows, and each row again offers 12 columns to lay out
  • for mobile use, you can add small-N classes as well, with other column allocations

I haven’t done much with this yet, but this page was trivial to generate:

Screen Shot 2013-09-20 at 00.57.16

The complete definition of that inner layout is in app/status/view.jade:

.row
  .large-9.columns
    .row
      .large-8.columns
        h3 Status
      .large-4.columns
        h3
        input(type='text',ng-model='statusQuery',placeholder='Search...')
    .row
      .large-12.columns
        table(width='100%')
          tr
            th Driver
            th Origin
            th Name
            th Title
            th Value
            th Unit
            th Time
          tr(ng-repeat='row in status | filter:statusQuery | orderBy:"key"')
            td {{row.type}}
            td {{row.tag}}
            td {{row.name}}
            td {{row.title}}
            td(align='right') {{row.value}}
            td {{row.unit}}
            td {{row.time | date:"MM-dd &nbsp; HH:mm:ss"}}

In prose:

  • the information shown uses the first 9 columns (the remaining 3 are unused)
  • in those 9 columns, we have a header row and then the actual data table as row
  • the header is an h3 header on the left and an input on the right, split as 2:1
  • the table takes the full width (of those 9 columns in which it resides)
  • the rest is standard HTML in Jade notation, and an Angular ng-repeat loop
  • there’s also live row filtering in there, using some Angular conventions
  • the alternating row backgrounds and general table style are Foundation’s defaults

Actually, I left out one thing: the entries automatically get highlighted whenever the time stamp changes. This animation is a nice way to draw attention to updates – stay tuned.

Responsive CSS layouts

In Software on Sep 21, 2013 at 00:01

One of the things today’s web adds to the mix is the need to address devices with widely varying screen sizes. Seeing a web site made for an “old” 1024×768 screen layout squished onto a mobile phone screen is not really great if everything just ends up smaller on-screen.

Enter grid-based CSS frameworks and responsive CSS.

There’s a nice site to quickly try designs at different sizes, at cybercrab.com/screencheck. Here’s how my current HouseMon screen looks on two different simulated screens:

Screen Shot 2013-09-19 at 23.45.02

Screen Shot 2013-09-20 at 00.17.54

It’s not just reflowing CSS elements (the bottom line stacks vertically on a small screen), it also completely alters the behaviour of the navigation bar at the top. In a way which works well on small screens: click in the top right corner to fold out the different menu choices.

Both Zurb Foundation and Twitter Bootstrap are “CSS frameworks” which provide a huge set of professionally designed UI elements – and now also do this responsive thing.

The key point here is that you don’t need to do much to get all this for free. It takes some fiddling and googling to set all the HTML sections up just right, but that’s it. In my case, I’m using Angular’s “ng-repeat” to generate the navigation bar menu entries on-the-fly:

nav.top-bar(ng-controller='NavCtrl')
  ul.title-area
    li.name
      h1
        a(ng-href='{{appInfo.home}}')
          img(src='/main/logo.png',height=22,width=22)
          | &nbsp; {{appInfo.name}}
    li.toggle-topbar.menu-icon
      a(href='#')
        span
  section.top-bar-section
    ul.left
      li.myNav(ng-repeat='nav in navbar')
        a(ng-href='{{nav.route}}') {{nav.title}}
      li.divider
    ul.right
      li.myNav
        a(href='/admin') Admin

Yeah, ok, pretty tricky stuff. But it’s all documented, and nowhere near the effort it’d take to do this sort of thing from scratch. Let alone get the style and visual layout consistent and working across all the different browsers and versions.

Tomorrow, I’ll describe the way grid layouts work with Zurb Foundation 4 – it’s sooo easy!

In praise of AngularJS

In Software on Sep 20, 2013 at 00:01

This is how AngularJS announces itself on its home page:

HTML enhanced for web apps!

Hm, ok, but not terribly informative. Here’s the next paragraph:

HTML is great for declaring static documents, but it falters when we try to use it for declaring dynamic views in web-applications. AngularJS lets you extend HTML vocabulary for your application. The resulting environment is extraordinarily expressive, readable, and quick to develop.

That’s better. Let me try to ease you into this “framework” on which HouseMon is based:

  • Angular (“NG”) is client-side only. It doesn’t exist on the server. It’s all browser stuff.
  • It doesn’t take over. You could use NG for parts of a page. HouseMon is 100% NG.
  • It comes with many concepts, conventions, and terms you must get used to. They rock.

I can think of two main stumbling blocks: “Dependency Injection” and all that “service”, “factory”, and “controller” stuff (not to mention “directives”). And Model-View-Something.

It’s not hard. It wasn’t designed to be hard. The terminology is pretty clever. I’d say that it was invented to produce a mental model which can be used and extended. And like a beautiful clock, it doesn’t start ticking until all the little pieces are positioned just right. Then it will run (and evolve!) forever.

Dependency injection is nothing new, it’s essentially another name for “require” (Node.js), “import” (Python), or “#include” (C/C++). Let’s take a fictional piece of CoffeeScript code:

value = 123
myCode = (foo, bar) -> foo * value + bar

console.log myCode(100, 45)

The output will be “12345”. But with dependency injection, we can do something else:

value = 123
myCode = (foo, bar) -> foo * value + bar

define 'foo', 100
define 'bar', 45
console.log inject(myCode)

This is not working code, but it illustrates how we could define things in some more global context, then call a function and give it access to that context. Note that the “inject” call does something very magical: it looks at the names of myCode’s arguments, and then somehow looks them up and finds the values to pass to myCode.

That’s dependency injection. And Angular uses it all over the place. It will often call functions through a special “injector” and get hold of all args expected by the function. One reason this works is because closures can be used to get local scope information into the function. Note how myCode had access to “value” via a normal JavaScript closure.
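For the curious, here is one way to actually implement that fictional define / inject pair – a toy sketch, but roughly the trick Angular’s injector uses, reading argument names out of the function’s source text:

registry = {}
define = (name, value) -> registry[name] = value

inject = (fn) ->
  # extract the argument names from the function's source text
  args = fn.toString().match(/\(([^)]*)\)/)[1].split(',').map (s) -> s.trim()
  fn.apply null, (registry[a] for a in args)

value = 123
myCode = (foo, bar) -> foo * value + bar

define 'foo', 100
define 'bar', 45
console.log inject(myCode)   # → 12345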

In short: dependency injection is a way for functions to import all the pieces they need. This makes them extremely modular (and it’s great for test suites, because you can inject special versions and mock objects to test each piece of code outside its normal context).

The other big hurdle I had to overcome when starting out with Angular, is all those services, providers, factories, controllers, and directives. As it turns out, that too is much simpler than it might seem:

  • Angular code is packaged as “modules” (I actually only use a single one in HouseMon)
  • in these modules, you define services, etc – more on this in a moment
  • the whole startup is orchestrated in a specific way: boot, config, run, fly
  • ok, well, not “fly” – that’s just my name for it…

The startup sequence is key – to understand what goes where, and what gets called when:

  • the browser starts by loading an HTML page with <script> tags in it (doh)
  • all the code needed on the browser side has to be loaded up front
  • what this code does is define modules, services, and so on
  • nothing is really “running” at this point, these are all function definitions
  • the last step is to call angular.bootstrap(document, ['myApp']);
  • at this point, all the config sections defined in the modules are run
  • once that is over, all the run sections are run, this is the real application start
  • when that is done, we in essence enter The Big Event Loop in the browser: lift off!

So on the client side, i.e. the browser, the entire application needs to be structured as a set of definitions. A few things need to be done early on (config), a few more once everything has been defined and Angular’s “$rootScope” has been put in place (read: the main page context is ready, a bit like the DOM’s “document.onload”), and then it all starts doing real work through the controllers tied to various HTML elements in the DOM.

Providers, factories, services, and even constants and values, are merely variations on a theme. There is an excellent page by Matt Haggard describing these differences. It’s all smoke and mirrors, really. Everything except directives is a provider in Angular.
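To illustrate with real Angular calls – three ways to define an injectable value, each one a thin layer over the one below it:

ng = angular.module 'myApp'

ng.value 'answer', 42              # simplest: a plain value
ng.factory 'lazyAnswer', -> 42     # a function whose return value gets injected
ng.provider 'configurableAnswer',  # the general form underneath it all...
  $get: -> 42                      # ...this is what the injector ends up calling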

Angular is like the casing of a watch, with a powerful pre-loaded spring already in place. You “just” have to figure out the role of the different pieces, put a few of them in place, and it’ll run all by itself from then on. It’s one of the most elegant frameworks I’ve ever seen.

With HouseMon, I’ve put the main pieces in place to show a web page in the browser with a navigation top bar, and the hooks to tie into Primus and RPC. The rest is all up to the modules, i.e. the subfolders added to the app/ folder. Each client.coffee file is an Angular module. HouseMon will automatically load them all into the browser (via a /primus/primus.js file generated on-the-fly by Primus). This has turned out to be extremely modular and flexible – and it couldn’t have been done without Angular.

If you’re looking for a “way in” for the HouseMon client-side code, then the place to start is app/index.jade, which uses Jade’s extend mechanism to tie into main/layout.jade, so that would be the second place to start. After that, it’s all Angular-structured code, i.e. views, view templates with controllers, and the services they call.

I’m quite excited by all this, because I’ve never before been able to develop software in such a truly modular fashion. The interfaces are high-level, clean (and clear) and all future functionality can now be added, extended, replaced, or even removed again – step by step.

Ground control to… browser

In Software on Sep 19, 2013 at 00:01

After yesterday’s RPC example, which makes the server an easy-to-use service in client code, let’s see how to do things in the other direction.

I should note that there is usually much less reason to do this, since at any point in time there can be zero, one, or occasionally more clients connected to the server in the first place. Still, it could be useful for a server to collect client status info, or to periodically save some state. The benefit of server-initiated requests is that it turns authentication around, assuming we trust the server.

Ok, so here’s some silly code on the client which I’d like to be able to call from the server:

ng = angular.module 'myApp'

ng.config ($stateProvider, navbarProvider, primus) ->
  $stateProvider.state 'view1',
    url: '/'
    templateUrl: 'view1/view.html'
    controller: 'View1Ctrl'
  navbarProvider.add '/', 'View1', 11
  
  primus.client.view1_twice = (x) ->
    2 * x

ng.controller 'View1Ctrl', ->

This is the “View 1” module, similar to the Admin module, exposing code for server use:

  primus.client.view1_twice = (x) ->
    2 * x

On the server side, the first thing to note is that we can’t just call this at any point in time. It’s only available when a client is connected. In fact, if more than one client is connected, then there will be multiple instances of this code. Here is app/view1/host.coffee:

module.exports = (app, plugin) ->
  app.on 'running', (primus) ->
    primus.on 'connection', (spark) ->
      
      spark.client('view1_twice', 123).then (res) ->
        console.log 'double', res

The output on the server console, whenever a client connects, is – rather unsurprisingly:

    double 246

What’s more interesting here, is the logic of that server-side code:

  • the app/view1/host.coffee exports one function, accepting two args
  • when loaded on startup, it will be called with the app and Primus plugin object
  • we’re in setup mode at this point, so this is a good time to define event handlers
  • the app.on 'running' handler gets called when all server setup is complete
  • now we can set up a handler to capture all new client connections
  • when such a connection comes in, Primus will hand us a “spark” to talk to
  • a spark is essentially a WebSocket session, it supports a few calls and streaming
  • now we can call the ‘view1_twice’ code on the client via RPC
  • this creates and returns a promise, since the RPC will take a little bit of time
  • we set it up so that when the promise is fulfilled, console.log ... gets called

And that’s it. As with yesterday’s setup, all the communication and handling of asynchronous callbacks is taken care of behind the scenes.

There is some asymmetry in naming conventions and the uses of promises, but here’s the summary:

  • to define a function ‘foo’ on the host, define it as app.host.foo = …
  • to call that function on the client, use host ‘foo’, …
  • the result is an Angular promise, use “.then …” to pick it up if needed
  • to define a function ‘bar’ on the client, define it as primus.client.bar = …
  • to call that function from the server, use spark.client ‘bar’, …
  • the result is a q promise, use “.then …” to pick it up

Ok, so there are some slight differences between the two promise APIs. But the main benefit of this all is that the passing of time and networking involved is completely abstracted away. Maybe I’ll be able to unify this further, some day.

Promise-based RPC

In Software on Sep 18, 2013 at 00:01

Or maybe this post should have been titled: “Taming asynchronous RPC”.

Programming with asynchronous I/O is tricky. The proper solution IMO, is to have full coroutine or generator support in the browser and in the server. This has been in the works for some time now (see ES6 Harmony). In short: coroutines and generators can “yield”, which means something that takes time can suspend the current stack state until it gets resumed. A bit like green (non-preemptive) threads, and with very little overhead.

But we ain’t there yet, and waiting for all the major browsers to reach that point might be a bit of a stretch (see the “Generators (yield)” entry on this page). The latest Node.js v0.11.7 development release does seem to support it, but that’s only half the story.

So promises it is for now. On the server side, this is available via Kris Kowal’s q package. On the client side, AngularJS includes promises as $q. And as mentioned before, I’m also using q-connection as a promise-based Remote Procedure Call (RPC) mechanism.

The result is really neat, IMO. First, RPC from the client, calling a function on the server. This is app/admin/client.coffee for the client side:

ng = angular.module 'myApp'

ng.config ($stateProvider) ->
  $stateProvider.state 'admin',
    url: '/admin'
    templateUrl: 'admin/view.html'
    controller: 'AdminCtrl'

ng.controller 'AdminCtrl', ($scope, host) ->
  $scope.hello = host 'admin_dbinfo'

This Angular boilerplate defines an Admin page with app/admin/view.jade template:

h3 Storage use
table
  tr
    th Group
    th Size
  tr(ng-repeat='(group,size) in hello')
    td {{group}}
    td(align='right') ≈ {{size}} b

The key is this one line:

$scope.hello = host 'admin_dbinfo'

That’s all there is to doing RPC with a round-trip over the network. And here’s the result, as shown in the browser (formatted with the Zurb Foundation CSS framework):

Screen Shot 2013-09-17 at 13.37.33

There’s more to it than meets the eye, but it’s all nicely tucked away: the host call does the request, and returns a promise. Angular knows how to deal with promises, so it will fill in the “hello” field when the reply arrives, asynchronously. IOW, the above code looks like synchronous code, but does lots of things at another point in time than you might think.

On the server, this is the app/admin/host.coffee code:

Q = require 'q'

module.exports = (app, plugin) ->
  app.on 'setup', ->

    app.host.admin_dbinfo = ->
      q = Q.defer()
      app.db.getPrefixDetails q.resolve
      q.promise

It’s slightly more involved because promises are more exposed. First, the RPC call is defined on application setup. When called, it creates a new promise, calls getPrefixDetails with a callback, and immediately returns this promise to fill in the result later.

At some point, getPrefixDetails will call the q.resolve callback with the result as argument, and the promise becomes fulfilled. The reply is then sent to the client by the q-connection package.

Note that there are three asynchronous calls involved in total:

    rpc call -> network send -> database action -> network reply -> show
                ============    ===============    =============

Each of the underlined steps is asynchronous and based on callbacks. Yet the only step we had to deal with explicitly on the server was that custom database action.

Tomorrow, I’ll show RPC in the other direction, i.e. the server calling client code.

Flow – the inside perspective

In Software on Sep 17, 2013 at 00:01

This is the last of a four-part series on designing big apps (“big” as in not embedded, not necessarily many lines of code – on the contrary, in fact).

The current 0.8 version of HouseMon is my first foray into the world of streams and promises, on the host server as well as in the browser client.

First a note about source code structure: HouseMon consists of a set of subfolders in the app folder, and differs from the previous SocketStream-based design in that dependency info, client-side code, client-side layout files + images, and host-side code are now all grouped per module, instead of strewn across client, static, and server folders.

The “main” module starts the ball rolling, the other modules mostly just register themselves in a global app “registry”, which is simply a tree of nested key/value mappings. Here’s a somewhat stripped down version of app/main/host.coffee:

module.exports = (app, plugin) ->
  app.register 'nodemap.rf12-868,42,2', 'testnode'
  app.register 'nodemap.rf12-2', 'roomnode'
  app.register 'nodemap.rf12-3', 'radioblip'
  # etc...

  app.on 'running', ->
    Serial = @registry.interface.serial
    Logger = @registry.sink.logger
    Parser = @registry.pipe.parser
    Dispatcher = @registry.pipe.dispatcher
    ReadingLog = @registry.sink.readinglog

    app.db.on 'put', (key, val) ->
      console.log 'db:', key, '=', val

    jeelink = new Serial('usb-A900ad5m').on 'open', ->

      jeelink # log raw data to file, as timestamped lines of text
        .pipe(new Logger) # sink, can't chain this further

      jeelink # log decoded readings as entries in the database
        .pipe(new Parser)
        .pipe(new Dispatcher)
        .pipe(new ReadingLog app.db)

This is all server-side stuff and many details of the API are still in flux. It’s reading data from a JeeLink via USB serial, at which point we get a stream of messages of the form:

{ dev: 'usb-A900ad5m', msg: 'OK 2 116 254 1 0 121 15' }

(this can be seen by inserting “.on('data', console.log)” in the above pipeline)

These messages get sent to a “Logger” stream, which saves things to file in raw form:

L 17:18:49.618 usb-A900ad5m OK 2 116 254 1 0 121 15

One nice property of Node streams is that you can connect them to multiple outputs, and they’ll each receive these messages. So in the code above, the messages are also sent to a pipeline of “transform” streams.

The “Parser” stream understands RF12demo’s text output lines, and transforms it into this:

{ dev: 'usb-A900ad5m',
  msg: <Buffer 02 74 fe 01 00 79 0f>,
  type: 'rf12-868,42,2' }

Now each message has a type and a binary payload. The next step takes this through a “Dispatcher”, which looks up the type in the registry to tag it as a “testnode” and then processes this data using the definition stored in app/drivers/testnode.coffee:

module.exports = (app) ->

  app.register 'driver.testnode',
    in: 'Buffer'
    out:
      batt:
        title: 'Battery status', unit: 'V', scale: 3, min: 0, max: 5

    decode: (data) ->
      { batt: data.msg.readUInt16LE 5 }

A lot of this is meta information about the results decoded by this particular driver. The result of the dispatch process is a message like this:

{ dev: 'usb-A900ad5m',
  msg: { batt: 3961 },
  type: 'testnode',
  tag: 'rf12-868,42,2' }

The data has now been decoded. The result is one parameter in this case. At the end of the pipeline is the “ReadingLog” write stream which timestamps the data and saves it into the database. To see what’s going on, I’ve added an “on ‘put'” handler, which shows all changes to the database, such as:

db: reading~testnode~rf12-868,42,2~1379353286905 = { batt: 3961 }

The “~” separator is a convention used with LevelDB to partition the key namespace. Since the timestamp is included in the key, every entry ends up being stored separately, i.e. this acts as historical storage.

This was just a first example. The “StatusTable” stream takes a reading such as this:

db: reading~roomnode~rf12-868,5,24~1379344595028 = {
  "light": 20,
  "humi": 68,
  "moved": 1,
  "temp": 143
}

… and stores each value separately:

db: status~roomnode/rf12-868,5,24/light = {
  "key":"roomnode/rf12-868,5,24/light",
  "name":"light",
  "value":20,
  "type":"roomnode",
  "tag":"rf12-868,5,24",
  "time":1379344595028
}
// and 3 more...

Here, the information is placed inside the message, including the time stamp. Keys must be unique, so in this case only the last value will be kept in the database. In other words: this fills a “status” table with the latest value of each measurement value.

As last example, here is a pipeline which replays a saved log file, decodes it, and treats it as new readings (great for testing, but useless in production, clearly):

createLogStream = @registry.source.logstream

createLogStream('app/replay/20121130.txt.gz')
  # .pipe(new Replayer)
  .pipe(new Parser)
  .pipe(new Dispatcher)
  .pipe(new StatusTable app.db)

Unlike the previous HouseMon design, this now properly deals with back-pressure.

The “Replayer” stream is a fun one: it takes each message and delays passing it through (with an updated timestamp), so that this stream will simulate a real-time feed with new messages trickling in. Without it, the above pipeline processes the file as fast as it can.
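Here’s a rough sketch of how such a stream could be built, as a Node “transform” stream – an illustration rather than HouseMon’s actual code, and it assumes each replayed message carries its original timestamp in a time field:

{Transform} = require 'stream'

class Replayer extends Transform
  constructor: ->
    super objectMode: true   # we pass along message objects, not bytes
    @prev = null

  _transform: (msg, enc, done) ->
    delay = if @prev? then msg.time - @prev else 0
    @prev = msg.time
    msg.time = Date.now()    # pass it on with an updated timestamp
    setTimeout =>
      @push msg
      done()
    , delay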

The next step will be to connect a change stream through the WebSocket to the client, so that live status information can be displayed on-screen, using exactly the same techniques.

As you can see, it’s streams all the way down. Onwards!

Flow – the application perspective

In Software on Sep 16, 2013 at 00:01

This is the third of a four-part series on designing big apps (“big” as in not embedded, not necessarily many lines of code – on the contrary, in fact).

Because I couldn’t fit it in three parts after all.

Let’s talk about the big picture, in terms of technologies. What goes where, and such.

A decade or so ago, this would have been a decent model of how web apps work, I think:

app1

All the pages, styles, images, and some code are fetched as HTTP requests and rendered on the server, which sits between the persistent state on the server (files and databases) and the network-facing end.

A RESTful design would then include a clean structure w.r.t. how that state on the server is accessed and altered. Then we throw in some behind-the-scenes logic with Ajax to make pages more responsive. This evolved into general-purpose client-side JavaScript libraries like jQuery to get lots of the repetitiveness out of the developer’s workflow. Great stuff.

The assumption here is that servers are big and fast, and that they work reliably, whereas clients vary in capabilities, versions, and performance, and that network connection stability needs to stay out of the “essence” of the application logic.

In a way, it works well. This is the context in which database-backed websites became all the rage: not just a few files and templating to tweak the website, but the essence of the application is stored in a database, with “widgets”, “blocks”, “groups”, and “panes” sewn together by an ever-more elaborate server-side framework – Ruby on Rails, Django, Drupal, etc. WordPress and Redmine too, which I gratefully rely on for the JeeLabs sites.

But there’s something odd going on: a few days ago, the WordPress server VM which runs this daily weblog here at JeeLabs crashed on an out-of-memory error. I used to reserve 512 MB RAM for it, but had scaled it back to 256 MB before the summer break. So 256 MB is apparently not enough to present a couple of weblog pages and images, and handle some text searches and a simple commenting system.

(We just passed 6000 comments the other day. It’s great to see y’all involved – thanks!)

Ok, so what’s a measly 512 MB, eh?

Well, to me it just doesn’t add up. Ok, so there’s a few hundred MB of images by now. Total. The server is running off an SSD disk, so we should be able to easily serve images with say 25 MB of RAM. But why does WP “need” hundreds of MB of RAM to serve a few dozen MB of daily weblog posts? Total.

It doesn’t add up. Self-inflicted overhead, for what is 95% a trivial task: serving the same texts and images over and over again to visitors of this website (and automated spiders).

The Redmine project setup is even weirder: currently consuming nearly 700 MB of RAM, yet this ought to be a trivial task, which could probably be served entirely out of say 50 MB of RAM. A tenfold resource consumption difference.

In the context of the home monitoring and automation I’d like to take a bit further, this sort of resource waste is no longer a luxury I can afford, since my aim is to run HouseMon on a very low-end Linux board (because of its low power consumption, and because it really doesn’t do much in terms of computation). Ok, so maybe we need a bit more than 640 KB, but hey… three orders of magnitude?

In this context, I think we can do better. Instead of a large server-side byte-shuffling engine, we can now do this – courtesy of modern browsers, Node.js, and AngularJS:

app2

The server side has been reduced to its bare minimum: every “asset” that doesn’t change gets served as is (from files, a database, whatever). This includes HTML, CSS, JavaScript, images, plain text, and any other “document” type blob of information. Nginx is a great example of how a tiny web server based on async I/O and written in C can take care of any load we down-to-earth netizens are likely to encounter.

Let me stress this point: there is no dynamic “templating” or “page generation” on the server side. This takes place in the browser – abstracted away by AngularJS and the DOM.

In parallel, there’s a “Real-Time Server” running, which handles the application logic and everything that needs to be dynamic: configuration! status! measurements! commands! I’m using Node.js, and I’ve started using the fascinating LevelDB database system for all data persistence. The clever bit of LevelDB is that it doesn’t just fetch and store data, it actually lets you keep flows running with changes streamed in and out of it (see the level-livefeed and multilevel projects).

So instead of copying data in and out of a persistent data store, this also becomes a way of staying up to date with all changes on that data. The fusion of a database and pubsub.

On the client side, AngularJS takes care of the bi-directional flow between state (model) and display (view), and Primus acts as generic connection library for getting all this activity across the net – with connection keep-alive and automatic reconnect. Lastly, I’m thinking of incorporating the q-connection library for managing all asynchronicity. It’s fascinating to see network round trip delays being woven into the fabric of an (async / event-driven) JavaScript application. With promises, client/server apps are starting to feel like a single-system application again. It all looks quite, ehm… promising :)

Lots of bits that need to work together. So far, I’m really delighted by the flexibility this offers, and the fact that the server side is getting simpler: just the autonomous data acquisition, a scalable database, a responsive WebSocket server to make the clients (i.e. browsers) come alive, and everything else served as static data. State is confined to the essence of the app. The rest doesn’t change (other than during development of course), needs no big backup solution, and the whole kaboodle should easily fit on a Raspberry Pi – maybe something even leaner than that, one day.

The last instalment tomorrow will be about how HouseMon is structured internally.

PS. I’m not saying this is the only way to go. Just glad to have found a working concoction.

Flow – the developer perspective

In Software on Sep 15, 2013 at 00:01

This is the second of a four-part series on designing big apps (“big” as in not embedded, not necessarily many lines of code – on the contrary, in fact).

Software development is all about “flow”, in two very different ways:

  • the flow of the development process itself, i.e. staying in “the zone” while going through the edit-run-debug cycle
  • the flow of data through the application, which was the topic of yesterday’s post

I’ve been looking at lots of options on how to address that first bit, i.e. how to set up an environment which can help me through the edit-run-debug cycle as smoothly and quickly as possible. I didn’t find anything that really fits my needs, so as any good ol’ software developer would do, I scratched this itch recently and ended up creating a new tool.

It’s all about flow. Iterating through the coding process without pushing more buttons or keys than needed. Here’s how it works – there are three kinds of files I need to deal with:

  • HTML – app pages and docs, in the form of Jade and Markdown text files
  • CSS – layout and visual design, in the form of Stylus text files
  • CODE – the logic of it all, host- and client-side, in the form of CoffeeScript text files

None of these files is the real thing, in the sense that they are all dialects which need to be translated to HTML, CSS, and JavaScript, respectively. But I don’t want a solution which introduces lots of extra steps into that oh-so-treasured edit-run-debug cycle of software development. No temporary files, please – let’s generate everything as needed on-the-fly, and let’s set up a single task to take care of it all.

The big issue here is when to do all that pre- (and re-) processing. Again, since staying in the flow is the goal, that really should happen whenever needed and as quickly as possible. The workflow I’ve found to suit me well is as follows (with a little code sketch after the list):

  • place all the development source files into one nested area (doh!)
  • have the system watch for changes to any files in this area
  • if it’s an HTML file change: reload the browser
  • if it’s a CSS file change: update the browser (no need to reload)
  • if it’s a CODE file change: restart the server and reload the browser
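Here’s a minimal sketch of that watch-and-dispatch logic in Node.js – the hook functions are hypothetical placeholders, and fs.watch’s recursive mode is platform-dependent, which is why real setups tend to use a watcher library instead:

var fs = require('fs');
var path = require('path');

// hypothetical hooks - the actual reload plumbing lives elsewhere
function reloadBrowser () { /* e.g. notify the client over a WebSocket */ }
function updateCss ()     { /* re-inject stylesheets, no full reload */ }
function restartServer () { process.exit(0); /* the supervisor restarts us */ }

// note: recursive watching is only supported on some platforms
fs.watch('./app', { recursive: true }, function (event, name) {
  switch (path.extname(name || '')) {
    case '.jade': case '.md': reloadBrowser(); break;
    case '.styl':             updateCss();     break;
    case '.coffee':           restartServer(); break;
  }
});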

The term for this is “live reload”. It’s been in use for ages by web developers. You save a file, and BAM … the browser updates itself. But that’s only half the story with a client + server setup, as you may also have to restart the server. In the past, I’ve done so using the nodemon utility. It works, but along with other tools to watch for changes and pre-compile to the desired format, it all got a bit clunky.

In the new HouseMon project, it’s all done behind the scenes in a single process. Well, two actually: when in development mode, HouseMon forks itself into a supervisor and a worker child. The supervisor just restarts the worker whenever it exits. And the worker watches for file changes and either reloads or adjusts the browser, or exits. The latter becomes a way for the worker to quickly restart itself from scratch after a code change.
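The supervisor half of that trick can be remarkably small. Here’s a sketch of the idea (not the actual HouseMon code – startApp is a hypothetical stand-in for the real server plus the watcher logic):

var child = require('child_process');

// hypothetical stand-in for the real server + file watcher
function startApp () { console.log('worker', process.pid, 'running'); }

if (process.argv.indexOf('--worker') < 0) {
  // supervisor: keep (re)launching the worker, whatever its exit reason
  (function spawnWorker () {
    child.fork(__filename, ['--worker']).on('exit', spawnWorker);
  })();
} else {
  startApp();
}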

The result works phenomenally well: I edit in MacVim, multiple files even, and nothing happens. I can bring up the Dash reference guides to look up something. Still nothing happens. But when I switch away from MacVim, it’s set up to save all changes, and everything then kicks in on auto-pilot. Instantly or a few seconds later (depending on the change), the latest changes will be running. The browser may lose a little of its context, but the URL stays the same, so I’m still looking at the same page. The console is open, and I can see what’s going on there. Without ever switching to the browser (ok, I need to switch away from MacVim to something else, but often that will be the command line).

It’s not yet the same as live in-app development (as was possible with Tcl), but it’s getting mighty close, because the app returns to where it was before without pushing any buttons.

There’s one other trick in this setup which is a huge time-saver: whenever the supervisor is launched (“node .”), it scans through all the folders inside the main ./app folder, and collects npm and bower package files as well as Makefile files. All dependencies and requirements are then processed as needed, making the use of 3rd party packages virtually effortless – on the server and in the client. Total automation. Look ma, no hands!

None of this machinery needs to be present in production mode. All this reloading and package dependency handling stuff can then be ignored. Deployment just needs a bunch of static files, served through Node.js, Nginx, Apache, whatever – plus a light-weight Node.js server for the host-side logic, using WebSockets or some fallback mechanism.

As for the big picture on how HouseMon itself is structured… more coming, tomorrow.

Flow – the user perspective

In Software on Sep 14, 2013 at 00:01

This is the first of a four-part series on designing big apps (“big” as in not embedded, not necessarily many lines of code – on the contrary, in fact).

Ok, so maybe what follows is not the user perspective, but my user perspective?

I’m not a fan of clicking buttons. It’s a computer invention, and it sucks.

Interfaces (I’m not keen on the term user interfaces either, but it’s more to the point than just “interfaces”) – anyway, interfaces need interaction to indicate what information you want to see, and to perform a real action, i.e. something that has effect on some permanent state. From setting up preferences or a configuration, to turning on a light.

But apart from that, interaction to fetch or update something is tedious, unwanted, silly, and a waste of time. Pushing a button to see the time is 2,718,281 times more inconvenient than looking at the time. Just like asking for something involves a lot more interaction (and social subtleties) than having the liberty to just do it.

When I look outside the window, I expect to see the actual weather. And when I see information on a screen, I expect it to be up-to-date. I don’t care if the browser does constant refreshes to pull changes from a server or whether it’s set up to respond to push notifications coming in behind the scenes. When I see “22.7°C”, it needs to be a real reading, with a known and acceptable lag and accuracy – unless it’s historical data.

This goes a lot further than numbers and graphs. When I see a button, then to me that is a promise (and invitation) that some action will be taken when I push it. Touch on mobile devices has the advantage here, although clicking via the mouse or typing a power-user keyboard shortcut is often perfectly acceptable – even if slightly more indirect.

What I don’t want is a button which does nothing, or generates an error message. If that button isn’t intended to be used, then it should be disabled or it shouldn’t be there at all:

Screen Shot 2013-09-13 at 11.56.54

Screen Shot 2013-09-13 at 11.57.17

That’s the “Graphs” install page in HouseMon (the wording could have been better).

When a list or graph of readings is shown on-screen, this needs to auto-update in real time. That’s the whole point of HouseMon – access to specific information (that choice is a preference, and will require interaction) to see what’s going on. Now – or within the last few minutes, in the case of sensors which are periodically sending out their readings. A value without an indication of its “up-to-date-ness” is useless. That could be either implicit by displaying it in some special way if the information is stale (slowly turning red?), or explicit, by displaying a time stamp next to it (a bit ugly and tedious).
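As a sketch of the implicit approach (AngularJS 1.x, with made-up model fields): periodically re-check each reading’s age, and let a CSS class take care of the visual cue:

angular.module('app').controller('ReadingCtrl', function ($scope, $interval) {
  $scope.reading = { value: 22.7, time: Date.now() }; // updated via push

  $interval(function () { // re-evaluate staleness every 5 seconds
    $scope.stale = Date.now() - $scope.reading.time > 5 * 60 * 1000;
  }, 5000);
});

// template: <span ng-class="{stale: stale}">{{reading.value}}</span>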

Would you want it any other way when doing online banking? Would you accept seeing your account balance without a guarantee that it is recent? Nah, me neither :) – so why do we put up with web pages with copies of some information, at some point in time, getting obsolete the very moment that page is shown?

When “on the web” – as a user – I want to deal with accurate information. When I see something tagged as “updated 2 minutes ago”, then I want to see that “2” change into a “3” within the next minute. See GitHub for an example of this. It works, it makes sense, and it makes the whole technology get out of the way: the web should be an intermediary between some information and me. All the stuff in between is secondary.

Information needs to update – we need to stop copying it into a web page and sending that copy off to the browser. All the lipstick in the world won’t help if what we see is a copy of what we’re looking for. As developers, we need to stop making web sites fancy – we should make them live first and then make them gorgeous. That’s what RIAs and SPAs are about.

One more thing – not only does information need to flow, these flows should be visualised:

Screen Shot 2013-09-13 at 11.02.15

Such a user interface can be used during development to manage data flows and to design software which ties each result shown on-screen back to its origin. In tomorrow’s apps, every change should propagate – all the way to the pixels shown on-screen by the browser.

Now let’s go back to static web servers to make it happen. Stay tuned…

Gearing up for the new web

In Software on Sep 13, 2013 at 00:01

Well… new for me, anyway. What a fascinating world in motion we live in:

  • first there was the “pre-web”, with email, BBSes, and very clunky modem links
  • then came Netscape Navigator – and the world would never be the same again
  • hey, we can make the results dynamic, let’s generate pages through CGI
  • and not just outgoing, we can have forms and page refreshes to interact!
  • we need lots of ways to generate page variations, let’s add modules to the server
  • nah, we can do better than that: let’s add PHP / Python / Ruby inside the server!
  • and so the first major web explosion of the internet was born…

It took a decade, and another decade to make fast internet access mainstream, which is where we are today. There are lots and lots and LOTS of “web frameworks” in existence now, for any programming language you care about.

The second major wave of evolution came with a whole bunch of new acronyms, such as RIA and SPA. This is very much Google’s turf, and this is the technology which powers Gmail, for example. Rich and responsive interactivity. It’s going everywhere by now.

It was made possible by the super-trio of HTML, CSS, and JS (JavaScript). They’ve all evolved to define the structure, style, and logic of everything you see in the browser.

It’s hard to keep up, but as you know, I’ve picked two very active technologies for doing all my new software development: AngularJS and Node.js. AngularJS comes from Google, and Node.js builds on Google’s V8 engine, by the way – and both are freely available as wide-open source (here and here). They’ve changed my life :)

It’s been a steep learning curve. I don’t mean just getting something going. I mean getting comfortable with it all. Understanding the idioms and quirks (there always are), and figuring out how to debug stuff (async is nasty, especially when not using promises). Can’t say I’m in the clear yet, but the fog is lifting and the fun level is rising!

I started coding HouseMon many months ago and it’s been running here for a long time now, giving me a chance to let it all sink in. The reality is: I’ve been doing it all wrong.

That’s not necessarily a bad thing, by the way: ya gotta learn stuff by doin’, right?

Ok, so I’ve been using Events as the main decoupling mechanism in HouseMon, i.e. publish and subscribe inside the app. The beauty of this is that it lets you keep the functionality of the application highly modular. Each subsystem subscribes to what it is interested in, and publishes results when they are available. No need to consider who needs this, or even how many subscribers there will be for what gets published.
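In Node.js, that pattern is readily available through the built-in EventEmitter. A tiny sketch (the event name and payload are invented):

var EventEmitter = require('events').EventEmitter;
var hub = new EventEmitter(); // one shared in-app “bus”

// any number of subscribers, each added without knowing about the others
hub.on('reading', function (r) { console.log('log:', r); });
hub.on('reading', function (r) { /* push to browsers, store in db, ... */ });

// a driver publishes a decoded reading, not caring who is listening
hub.emit('reading', { node: 3, temp: 22.7 });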

It sounds great, and in a way it is, but it is a bit like a loose cannon: events fire all over the place, with little insight into what is going on and when. For some apps, this is probably great, but with fairly stable continuous processing flows as in a home monitoring setup, it’s more chaotic than need be. It all works fine “in the small”, i.e. when writing little modules (“Briqs” in HouseMon-speak) and adding minor drivers / decoders, but I constantly felt lost with respect to the big picture. And then there’s this nasty back pressure dilemma.

But now the pieces are starting to fall into place. I’ve figured out the “big” structure of the HouseMon application as well as the “little” inner organisation of it all.

It’s all about flow. Four stories coming up – stay tuned…

My development setup – utilities

In Software on Sep 12, 2013 at 00:01

As promised, one final post about my development setup – these are all utilities for Mac OSX. There are no doubt a number of equivalents for other platforms. Here’s my top 10:

  • Alfred – Keyboard commands for what normally requires the mouse. I have shortcuts such as cmd-opt-F1 for iTerm, cmd-opt-F2 for MacVim. It launches the app if needed, and then either brings it to the front or hides it. Highly configurable, although I don’t really use that many features – just things like “f blah” to see the list of all folder names matching “blah”. Apart from that, it’s really just an alternative for Spotlight (local disk searches) + Google (or DuckDuckGo) + app launcher. Has earned the #1 keyboard shortcut spot: cmd+space. Top time-saver.

  • HomeBrew – This is the package manager which handles (almost) anything from the Unix world. Installing node.js (w/ npm) is brew install node. Installing MacVim is brew install macvim. Updates too. The big win is that all these installs are compartmentalised, so they can be uninstalled individually as well. Empowering.

  • Dash – Technical reference guides at your fingertips. New guides and updates added a few times a week. Looking for the Date API in JavaScript is a matter of typing cmd-opt-F8 to bring up Dash, and then “js:date”. My developer-pedia.

  • GitHub – Actually, it’s called “GitHub for Mac” – a tool to let you manage all the daily git tasks without ever having to learn the intricacies of git. It handles the 90% most common actions, and it does those well and safely. Push early, push often!

  • SourceTree – Since I’m no git guru, I use SourceTree to help me out with more advanced git actions, with the GUI helping me along and (hopefully) preventing silly mistakes. Every step is trackable, and undoable, but when you don’t know what you’re doing with git, it can become a real mess in the project history. This holds my hand.

  • Dropbox – I’m not into iCloud (too much of a walled garden for me), but for certain things I still need a way to keep things in sync between different machines, and with a few people around the world. Dropbox does that well (though BitTorrent Sync may well replace it one day). Mighty convenient.

  • 1Password – One of the many things that get synced through Dropbox are all my passwords, logins, bank details, and other little secrets. Securely protected by a strong password of course. Main benefit of 1P is that it integrates well with all the browsers, so logging in anywhere, or even coming up with new arbitrary passwords is a piece of cake. Also syncs with mobile. Indispensable.

  • DevonThink – This is my heavyweight external brain. I dump everything in there which needs to be retained. Although plain local search is getting so good and fast that DT is no longer an absolute must, it partitions and organises my files in separate databases (all plain files on disk, so no extra risk of data loss). Intelligent storage.

  • CrashPlan – Nobody should be without (automated, offline) backups. CrashPlan takes care of all the machines around the house. I sleep much better at night because of it. Its only big drawback is that there’s always a 500 MB background process to make this happen (fat Java worker, apparently), and that it burns some serious CPU cycles when checking, or catching up, or cleaning up – whatever. Still, essential.

  • Textual – Yeah, well, ok, this one’s an infrequent member on this list: IRC chat, on the #jeelabs channel for example. Not used much, but when it’s needed, it’s there to get some collaboration done. I’m thinking of hooking this into some sort of personal agents, maybe even have a channel where my house chats about the more public readings coming in. Social, on demand.

The point of all the things I’ve been mentioning in these past few posts is that it’s all about forming habits, so that a different part of the brain starts doing some of the work for you. There is a perfect French term for getting tools to work for you that way:

  • prolongements du corps – “extensions of the body” (a great example of this is driving a car)

This is my main motivation for adopting touch typing and in particular for using the command-line shell and a keyboard-command based text editor. It (still) doesn’t come easy, but every day I notice some little improvement. As always: Your Mileage May Vary.

Ok, that wraps it up for now. Any other tools you consider really useful? Let me know…

My development setup – hardware

In Hardware on Sep 11, 2013 at 00:01

After yesterday’s notes about my development software, some comments about hardware.

As you may have noticed, I use Apple’s hardware with Mac OS X. Have gone from big (clunky!) setups to the svelte 1 kg 11″ MacBook Air, and now I’m back on a 15″ model. It’s unlikely that I’ll ever go back to a desktop-only setup, and even the 1920 x 1200 pixel 24″ screen on my desk has been sitting idle for many months now (apart from its use in the FanBot project). One reason is that screen switching with multiple physical screens has not been convenient so far (the next OS revision this fall promises to fix that), but even as is, I find the constant switching between different pixel sizes disruptive. Nowadays, I’m often in house-nomad mode: switching places several times a day around the house – from the desk, to the couch, to a big comfy rotating chair, and back. Heck, even outside, at times!

So one setup it is for me. And these days, that’s a 15″ Retina MacBook Pro (“RMBP15”):

Screen Shot 2013-09-04 at 14.48.48

There’s a lot to say about this, but the essence boils down to: 1920 x 1200 with VMs.

To me it’s not essential which operating system runs on the host: pick one which you feel really at home with, and go for a laptop size and build quality that suits you and your budget.

Now the crazy thing about the RMBP15 is that its screen is not 1920 x 1200, but 2880 x 1800 pixels. And out of the box, the machine comes set to a 1440 x 900 “logical” screen size, i.e. doubling up all the pixels. Which, in my view, is too small as a main development environment – at least by today’s measures (hey, we’ve all been there – but it really is worth stepping up whenever you can).

So there’s this curious 1.5x magnification setting on this Mac laptop – does this mean that a 1-pixel thick line will end up getting drawn as “one pixel and a half”?

Obviously not. It’ll all be anti-aliased, as you can see in this close-up:

Screen Shot 2013-09-04 at 15.04.40

If you click on the image above, you’ll see a larger version. There is some interesting stuff going on behind the scenes when using a RMBP15 in 1920 x 1200 “interpolated” mode:

  • to the applications, the screen is reported as being 1920 x 1200
  • so that’s what apps deal with for screen area and placement
  • for lines, the Mac OSX graphics engine will do its anti-aliasing thing
  • for text, the graphics engine will draw the fonts at their optimal resolution

That last one does the trick: when drawing a 12-point text, the graphics engine will actually draw an 18-point version, using the full screen resolution. As a result, text comes out as sharp as the LCD display will allow, without the application having to do a thing.

I tend to go for the smallest font sizes in editors and command-line shells which my glass-assisted (no, not that one) eyes can still read well (Menlo 10, for monospaced fonts). This gives me a 100 line window height in MacVim, which is perfect. But on a real 1920 x 1200 display, that would actually be pushing it. However, due to the rendering going on inside a Retina Mac, what you actually get to see is text drawn in a 15-point font on a display with over 200 dpi resolution. The result is excellent.

These choices for screen, window, and font sizes are really hitting the sweet spot for me. A 100 x 80 character editing window with splits and tabs as needed, a decent area to see the browser’s console log (and command line), and a main HTML display area which is still almost exactly the 1366 x 768 size of a common small laptop screen. I rarely use the Mac’s multiple-desktop feature (called “spaces” in OSX), because I don’t have the patience to wait through its sweep-left-and-right animations. And because there’s no need: one “mode” with carefully-positioned windows, and other windows which can be moved to the front and back – a bit disruptively, but that’s because IMO that’s exactly what they are!

The second part of my laptop story is that since everything is running on an Intel 64-bit chip, virtualisation comes easy. This means both Windows and Linux can be run at the same time on the same machine (assuming you have enough RAM). I regularly fire up a Linux VM, and then ssh into it from the command line. For editing, I don’t even have to leave MacVim: just opening a file as scp://debianvm/housemon/ will open a directory window in MacVim and let me navigate from there. With the entire editor environment intact for fast file selection, tags, folding, etc. With Windows, it’s a matter of “mounting” my home directory on Windows, and all the local command line tools can be used for editing, git, diff, and so on (there’s a lot more possible, but I don’t use Windows much).

I’ll finish off tomorrow with some other handy software utilities I’ve come to rely on.

PS. No, this wasn’t intentionally made to coincide with Apple’s latest publicity blitz :)

My development setup – software

In Software on Sep 10, 2013 at 00:01

To follow up on yesterday’s post, here are the different software packages in my laptop-based development setup:

  • MacVim is my editor of choice these days. Now that I’ve switched to touch-typing mode, and given that I’ve used vi(m) for as long as I can remember, this is now growing on me every day. It’s all about muscle memory, and off-loading activities to have more spare brain cycles left for the task at hand.

    I chose MacVim over pure vim in the command line for one reason: it can be set to save all files on focus loss, i.e. when switching another app to the foreground. Tried a dark background for a while, but settled on a light one in the end. Dark backgrounds caused trouble with screen reflections when sitting with my back to a window.

  • TextMate is the second editor I use, but exclusively in browse mode. It can be fired up from the command line (e.g. cd housemon && mate .) and makes it very easy to skim and search across large source collections. Having two entirely different contexts means I can keep vim and its history focused on the code I’m actually working on. I only started using this dual-editor approach a few months ago, but by now it’s become a habit to start them both up – on different areas of the screen.

  • Google Chrome is my main development web browser. I settled on it (over Safari) for mostly one reason: it supports having the developer console pane open to the side of the main screen. There are plenty of dev tools included, from browsing the DOM structure to various performance graphs. Using a dedicated browser for development lets me flush the cache and muck with settings without disrupting other web activities.

  • Safari is my secondary browser. This is the one which keeps history, knows how to log into sites, and ends up having lots of tabs and windows open to keep track of all the distractions on the web. Reference docs, searching for information, and all the other time sinks the web has to offer. It drops nicely to the background when I want to get back to the real thing, but it’s there alongside MacVim whenever I need it.

  • iTerm 2 is what I use as command line. The only reason I’m using this instead of the built-in Terminal application is that it supports split panes (I don’t use screen or tmux much). It’s set up to start up with two windows side-by-side, because tabs are not always convenient: sometimes you really need to see a few things at the same time.

    I do use tabs in all these apps, dragging them around between windows when needed. Sometimes, I’ll open a new iTerm window and make it much larger (all from the keyboard), to see all the output of a ps or grep command, for example.

  • nvAlt is a fork of Notational Velocity – a note-taking app which stores each note as a plain text file. Its main feature is that it starts up and searches everything instantly. Really. In the blink of an eye, I can find all the notes I wrote about anything. I use it for logging how I installed some tricky software and for lots and lots of lists with design ideas. Plain text. Instant. Clickable URLs. Shared through Dropbox. Perfect.

There’s a lot more to it, but these are the six tools I now use day in day out for software development. Even for embedded work, I’ll use the Arduino IDE only as compile + upload step, editing code in MacVim nowadays – because it has so much more functionality.

All of the above are either included in Mac OSX or open source software. I’m inclined to use OSS as a way to keep my learning investments safe. If ever one of these tools ends up going nowhere, or in an unwanted direction, then chances are that some developer(s) will fork the project to keep things on track (as has already happened with TextMate, iTerm, and nvAlt). In the long term, these core tools are simply too important…

My development setup

In Software on Sep 9, 2013 at 00:01

The tools I use for software development have evolved greatly over the years. Some have been replaced, others have simply gotten better, much better, over time. I’d like to describe the setup a bit in the next few posts – and why / how I made those choices. “YMMV”, as they say, but I think it could be of interest to other developers, whether soon-to-be or seasoned switchers. Context: I develop in (for?) JavaScript (server- and browser-side).

Ok, here goes… the main tools I use are:

  • 2 editors – yes, two…
  • 2 browsers – yes, two…
  • 2 command-line windows – yes, two…
  • a note-taking app (sorry, just one :)

Here is a snapshot of all the windows, taken at a fairly arbitrary moment in time:

dev-expose-small

And this is the setup in actual use:

dev-layout-annotated

Tomorrow, I’ll go over the different bits and pieces, and how they work out for me.

3 years on one set of batteries

In AVR, Hardware on Sep 8, 2013 at 00:01

Ok, so maybe it’s getting a bit boring to report these results, but one of the JeeNodes I installed long ago has just reached a milestone:

Screen Shot 2013-09-06 at 23.30.55

That “buro JC” node has been running on a single battery charge for 3 years now:

And it’s not even close to empty: this is a JeeNode USB with a 1300 mAh LiPo battery tied to its back, and (as I just measured) it’s still running at 3.74 V, go figure.

Let’s do the math on what’s going on here:

  • the battery is specified as 1300 mAh, i.e. 1300 mA for one hour and then it’s empty
  • in this case, it has been running for some 1096 x 24 = 26,304 hours total
  • so the average current consumption must have been under 1300 / 26304 = 50 µA
  • well… in that case, the battery should be empty by now, but it isn’t
  • in fact, I suspect that the average power consumption is more like 10..25 µA

Two things to note: 1) LiPo batteries pack a lot of energy, and 2) they have a really low self-discharge rate, so they are able to store that energy for a long time.

The other statistic worth working out is the amount of energy consumed by a single packet transmission (a quick numeric check follows the list below). Again, first assuming that the battery would be dead by now, and that the microcontroller and the rest of the circuit are not drawing any current:

  • 1,479,643 packets have been sent out, i.e. roughly 1300 / 1,500,000 ≈ 0.9 µAh – under 1 µAh per packet
  • since 60 packets are sent out per hour: about 60 µAh per hour, i.e. 60 µA continuous
  • energy can also be expressed in coulombs: under 1 µAh per packet is under 3.6 mC per packet, since 1 µAh = 3.6 mC – equivalently, the average 60 µA flowing for the 60 seconds between packets
  • so despite the fact that the RFM12B draws a substantial 25 mA of current during transmission, it does it so briefly that overall it’s still extremely low-power (a few ms out of every minute – a truly minute duty cycle)
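Those sums are easy to double-check – this little snippet just redoes the arithmetic above:

// 3 years on a 1300 mAh LiPo, one packet per minute
var mah = 1300, hours = 1096 * 24, packets = 1479643;

console.log(mah / hours * 1000);    // average current: ≈ 49.4 µA
console.log(mah / packets * 1000);  // charge per packet: ≈ 0.88 µAh
console.log(mah / packets * 3600);  // same, in millicoulombs: ≈ 3.16 mC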

The conclusion here is: for these types of uses, with occasional brief wireless sensor data transmissions, the power consumption of the wireless module is not the main issue. It’s far more important to keep the idle (i.e. sleep mode) of the entire circuit under control.

The 2nd result is also a record: a JeeNode Micro, running over 6 months on a coin cell:

This one is running the newer radioBlip2 sketch, which also measures and reports the battery voltage before and after packet transmission. As you can see, the coin cell is struggling a bit, but its voltage level is still fine: it drops to 2.74 V right after sending out a packet (drawing 25 mA), and then recovers the rest of the time to a fairly high 2.94 V. This battery sure isn’t empty yet. Let’s see how many more months it can keep this up.

The 3rd result (penlight test) is this setup, based on the latest JNµ v3:

The timing values are way off though: it has also been running for over 6 months, but I accidentally caused it to reset when moving things around earlier this summer. This one is running with a switching boost regulator. The Eneloop battery started out at 1.3 V and has now dropped slightly to 1.28 V – it should be fine for quite some time, as these batteries tend to run down gradually to 1.2V before they start getting depleted. This is a rechargeable battery, but Eneloop is known to hold on to its charge for a surprisingly long time (losing 20% over 2 years due to self-discharge, if I remember correctly).

You can see the boost regulator doing its thing, as the output voltage sent to the ATtiny is the same 3.04 V as it was on startup. That’s the whole idea: it regulates to a fixed level, while sucking the battery dry along the way…

Note that all these nodes are not sensing anything. They’re just bleeping once a minute.

Anyway… so much for the progress report on a pretty long-running experiment :)

My software

In Software on Sep 7, 2013 at 00:01

Most of the software I write, and most of what I’ve written in the past, has gradually been making its way into GitHub. It’s a fantastic free resource for managing code (with no risk of lock-in – ever – since all git history gets copied), and it’s becoming more and more effective at collaboration as well, with a superb issue tracker and excellent searching and statistics facilities. Over the years, each developer home page has evolved into an excellent intro on what he/she has been up to lately:

Screen Shot 2013-09-03 at 14.58.03

Recently, I’ve turned things up to 11 by giving everyone who submits a pull request full developer access (it’s a manual process, but effortless). As a result, if you submit a pull request, then whenever I merge it into the official repository, I’ll also add you as a developer, so that you no longer have to fork to get changes in. Just clone the original repo, and push changes back in.

It may sound crazy, but it really isn’t (and it’s not my idea, although I forgot where I first read about it). The point being that contributors are scarce, and good contributors who actually help take a project forward are scarce as gold. Not because there aren’t any, but because few people take the time to actually contribute and participate in a project created by someone else (shame on me for often being like that!). Time is the new scarce resource, so helping others become more productive by not having to go through the pull-request tango to add and tweak stuff really makes sense to me. Let alone giving real credit to people, by not only accepting their gifts but also giving back visibility and responsibility.

I don’t think there is a risk. Every serious developer will check with the team members before committing a contentious or disruptive change, and even then, all it takes is to create a branch and do the work there (while still letting others see what it’s about).

The other change is that I recently discovered the DocumentUp site, which can generate a very nice front page for any project on GitHub, from just its README page – like this one.

This is so convenient that I’ve set up a website with pointers for most of my code:

Screen Shot 2013-09-03 at 13.45.33

There! That URL ought to be fairly easy to remember :)

Another site (service?) which presents (remixes?) GitHub issues in a neat way is waffle.io:

Screen Shot 2013-09-06 at 00.16.16

There’s a lot more going on with GitHub. Did you know that all issue trackers support live chat? Just keep an issue page open, and you’ll see new comments come in – in real time! Perfect for a quick back-and-forth discussion about some intricate problem. Adding images like screen shots is equally easy: just drag and drop the image into the discussion page.

Did you know that each GitHub project can manage releases, either as is or with detailed information per release? Either way, you get a list of versioned ZIP and TAR archives, simply by adding public tags in Git. And even without tagged releases, there’s always a “Download ZIP” link on the right side of each project page, so you don’t even have to install or use git at all to obtain the latest version of that code.

Screen Shot 2013-09-03 at 15.23.37

There are some 4 million public projects on GitHub. Not all exciting, but tons of them are!

Decoding bit fields – part 2

In Software on Sep 6, 2013 at 00:01

To follow up on yesterday’s post, here is some code to extract a bit field. First, a refresher:

bytes5

The field marked in blue is the value we’d like to extract. One way to do this is to “manually” pick each of the pieces and assemble them into the desired form:

bytes4

Here is a little CoffeeScript test version:

buf = new Buffer [23, 125, 22, 2]

for i in [0..3]
  console.log "buf[#{i}] = #{buf[i]} = #{buf[i].toString(2)} b"

x = buf.readUInt32LE 0
console.log "read as 32-bit LE = #{x.toString 2} b"

b1 = buf[1] >> 7      # top bit of byte 1 = bit 0 of the field
b2 = buf[2]           # all 8 bits of byte 2 = bits 1..8
b3 = buf[3] & 0b111   # low 3 bits of byte 3 = bits 9..11
w = b1 + (b2 << 1) + (b3 << 9)
console.log "extracted from buf: #{w} = #{w.toString 2} b"

v = (x >> 15) & 0b111111111111
console.log "extracted from int: #{v} = #{v.toString 2} b"

Or, if you prefer JavaScript, the same thing:

var buf = new Buffer([23, 125, 22, 2]);

for (var i = 0; i <= 3; ++i) {
  console.log("buf["+i+"] = "+buf[i]+" = "+buf[i].toString(2)+" b");
}

var x = buf.readUInt32LE(0);
console.log("read as 32-bit LE = "+x.toString(2)+" b");

var b1 = buf[1] >> 7;
var b2 = buf[2];
var b3 = buf[3] & 0x7;
var w = b1 + (b2 << 1) + (b3 << 9);
console.log("extracted from buf: "+w+" = "+w.toString(2)+" b");

var v = (x >> 15) & 0xfff;
console.log("extracted from int: "+v+" = "+v.toString(2)+" b");

The output is:

    buf[0] = 23 = 10111 b
    buf[1] = 125 = 1111101 b
    buf[2] = 22 = 10110 b
    buf[3] = 2 = 10 b
    read as 32-bit LE = 10000101100111110100010111 b
    extracted from buf: 1068 = 10000101100 b
    extracted from int: 1068 = 10000101100 b

This illustrates two ways to extract the data: “w” was extracted as described above, but “v” used a trick: first use built-in logic to extract an integer field which is too big (but with all the bits in the right order), then use bit shifting and masking to pull out the required field. The latter is much simpler, more concise, and usually also a little faster.

Here’s an example in Python, illustrating that second approach via “unpack”:

import struct

buf = struct.pack('BBBB', 23, 125, 22, 2)
(x,) = struct.unpack('<l', buf)

v = (x >> 15) & 0xFFF
print(v)

And lastly, the same in Lua, another popular language:

local struct = require 'struct'  -- LuaRocks "struct" module
local bit = require 'bit'        -- bit operations (built into LuaJIT)

local buf = struct.pack('BBBB', 23, 125, 22, 2)
local x = struct.unpack('<I4', buf)

local v = bit.band(bit.rshift(x, 15), 0xFFF)
print(v)

So there you go, lots of ways to extract useful data from the RF12demo sketch output!

Decoding bit fields

In Software on Sep 5, 2013 at 00:01

Prompted by a recent discussion on the forum, and because it’s a recurring theme, I thought I’d go a bit (heh…) into how bit fields work.

One common use-case in Jee-land is with packets coming in from a roomNode sketch, which are reported by the RF12demo sketch on USB serial as something like:

OK 3 23 125 22 2

The roomNode sketch tightly packs its results into a packet using this C structure:

struct {
    byte light;     // light sensor: 0..255
    byte moved :1;  // motion detector: 0..1
    byte humi  :7;  // humidity: 0..100
    int temp   :10; // temperature: -500..+500 (tenths)
    byte lobat :1;  // supply voltage dropped under 3.1V: 0..1
} payload;

So how do you get the values back out, given just those 4 bytes (plus a header byte)?

To illustrate, I’ll use a slightly more complicated example. But first, some diagrams:

bytes1

bytes2

That’s 4 consecutive bytes in memory, and two ways a 32-bit long int can be mapped to it:

  • little-endian (LE) means that the first byte (at #00) points to the little end of the int
  • big-endian (BE) means that the first byte (at #00) points to the big end of the int

Obvious, right? On x86 machines, and also the ATmega, everything is stored in LE format, so that’s what I’ll focus on from now on (PIC and ARM are also LE, but MIPS is BE).

Let’s use this example, which is a little more interesting (and harder) to figure out:

struct {
    unsigned long before :15;
    unsigned long value :12;  // the value we're interested in
    unsigned long after :3;
} payload;

That’s 30 bits of payload. On an ATmega the compiler will use the layout shown on the left:

bytes3

But here’s where things start getting hairy. If you had used the definition below instead, then you’d have gotten a different packed structure, as shown above on the right:

struct {
    unsigned int before :15;
    unsigned int value :12;  // the value we're interested in
    unsigned int after :3;
} payload;

Subtle difference, eh? The reason for this is that the declared size (long vs int) determines the unit of memory in which the compiler tries to squeeze the bit field. Since an int is 16 bits, and 15 bits were already assigned to the “before” field, the next “value” field won’t fit so the compiler will restart on a fresh byte boundary! IOW, be sure to understand what the compiler does before trying to decode stuff. One way to figure out what’s going on, is to set the values to some distinctive bit patterns and see what comes out as bytes:

payload.before = 0b1;  // bit 0 set
payload.value = 0b11;  // bits 0 and 1 set
payload.after = 0b111; // bits 0, 1, and 2 set

Tomorrow, I’ll show a couple of ways to extract this bit field.

Back pressure

In Software on Sep 4, 2013 at 00:01

No, not the medical term, and not this carambolage (multi-car pile-up) either (though it’s related):

52-car-pile-up

What I’m talking about is the software kind: back pressure is what you need in some cases to avoid running into resource limits. There are many scenarios where this can be an issue:

  • In the case of highway pile-ups, the solution is to add road signalling, so drivers can be warned to slow down ahead of time – before that decision is taken from them…
  • Sending out more data than a receiver can handle – this is why traditional serial links had “handshake” mechanisms, either software (XON/XOFF) or hardware (CTS/RTS).
  • On the I2C bus, hardware-based clock stretching is often supported as mechanism for a slave to slow down the master, i.e. an explicit form of back pressure.
  • Sending out more packets than (some node in) a network can handle – which is why backpressure routing was invented.
  • Writing out more data to the disk than the driver/controller can handle – in this case, the OS kernel will step in and suspend your process until things calm down again.
  • Bringing a web server to its knees when it gets more requests than it can handle – crummy sites often suffer from this, unless the front end is clever enough to reject requests at some point, instead of trying to queue more and more work which it can’t possibly ever deal with.
  • Filling up memory in some dynamic programming languages, where the garbage collector can’t keep up and fails to release unused memory fast enough (assuming there is any memory to release, that is).

That last one is the one that bit me recently, as I was trying to reprocess my 5 years of data from JeeMon and HouseMon, to feed it into the new LevelDB storage system. The problem arises, because so much in Node.js is asynchronous, i.e. you can send off a value to another part of the app, and the call will return immediately. In a heavy loop, it’s easy to send off so much data that the callee never gets a chance to process it all.

I knew that this sort of processing would be hard in HouseMon, even for a modern laptop with oodles of CPU power and gigabytes of RAM. And even though it should all run on a Raspberry Pi eventually, I didn’t mind if reprocessing one year of log files would take, say, an entire day. The idea being that you only need to do this once, and perhaps repeat it when there is a major change in the main code.

But it went much worse than I expected: after force-feeding about 4 months of logs (a few hundred thousand converted data readings), the Node.js process RAM consumption was about 1.5 GB, and Node.js was frantically running its garbage collector to try and deal with the situation. At that point, all processing stopped with a single CPU thread stuck at 100%, and things locked up so hard that Node.js didn’t even respond to a CTRL-C interrupt.

Now 1.5 GB is a known limit in the V8 engine used in Node.js, and to be honest it really is more than enough for the purposes and contexts for which I’m using it in HouseMon. The problem is not more memory, the problem is that it’s filling up. I haven’t solved this problem yet, but it’s clear that some sort of back pressure mechanism is needed here – well… either that, or there’s some nasty memory leak in my code (not unlikely, actually).

Note that there are elegant solutions to this problem. One of them is to stop having a producer push data and calls down a processing pipeline, and switch to a design where the consumer pulls data when it is ready for it. This was in fact one of the recent big changes in Node.js 0.10, with its streams2 redesign.
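Here’s a minimal streams2 sketch of that pull-based style, with a made-up reading generator feeding a deliberately slow writer – the Readable only produces data when asked, and pipe() stops asking while the consumer’s buffer is full:

var stream = require('stream');
var util = require('util');

function Readings () { stream.Readable.call(this, { objectMode: true }); }
util.inherits(Readings, stream.Readable);

var n = 0;
Readings.prototype._read = function () {
  // only called when the pipeline wants more - this is the “pull”
  this.push(n < 100000 ? { seq: ++n } : null); // null ends the stream
};

var store = new stream.Writable({ objectMode: true });
store._write = function (chunk, enc, done) {
  setTimeout(done, 10); // simulate a slow database write
};

new Readings().pipe(store); // back pressure now happens automatically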

Even on an embedded system, back pressure may cause trouble in software. This is why there is an rf12_canSend() call in the RF12 driver: because of that, you cannot ever feed it more packets than the (relatively slow) wireless RFM12B module can handle.

Soooo… in theory, back pressure is always needed when you have some constraint further down the processing pipeline. In practice, this issue can be ignored most of the time due to the slack present in most systems: if we send out at most a few messages per minute, as is common with home monitoring and automation, then it is extremely unlikely that any part of the system will ever get into any sort of overload. Here, back pressure can be ignored.

Electricity usage patterns

In Hardware, Software on Sep 3, 2013 at 00:01

Given that electricity usage here is monitored with a smart meter which periodically phones home to the electricity company over GPRS, this is the sort of information they get to see:

Screen Shot 2013-09-02 at 11.20.38

Consumption in blue, production in green. Since these are the final meter readings, those two data series will never overlap – ya can’t consume and produce at the same time!

I’m reading out the P1 data and transmitting it wirelessly to my HouseMon monitoring setup (be sure to check the develop branch, which is where all new code is getting added).

There’s a lot of information to be gleaned from that. The recurring 2000+ W peaks are from a 7-liter kitchen boiler (3 min every 2..3 hours). Went out for dinner on Aug 31st, so no (inductive) home cooking, and yours truly burning lots of midnight oil into Sep 1st. Also, some heavy-duty cooking on the evening of the 1st (oven dish + stove).

During the day, it’s hard to tell if anyone is at home, but evenings and nights are fairly obvious (if only by looking at the lights lit in the house!). Here’s Sep 2nd in more detail:

Screen Shot 2013-09-02 at 11.23.50

This one may look a bit odd, but that double high-power blip is the dish washer with its characteristic two heating cycles (whoops, colours reversed: consumption is green now).

Note that whenever there is more sun, there would be fewer consumption cycles, and hence less information to glean from this single graph. But by matching this up with other households nearby, you’d still get the same sort of information out, i.e. from known solar power vs. returned power from this household. Cloudy patterns will still match up across a small area (you can even determine the direction of the clouds!).

I don’t think there’s a concern for (what little) privacy (we have left), but it’s quite intriguing how much can be deduced from this.

Here’s yet more detail, now including the true house usage and solar production values, as obtained from some pulse counters, this hardware and the homePower driver:

Screen Shot 2013-09-02 at 11.52.21

There is a slight lag in smart meter reporting (a value on the P1 port every 10s). This is not an issue of the smart meter though: the DyGraphs package is only able to plot step lines with values at the start of the step, even though these values pertain to the past 10 seconds.

Speaking of which – there was a problem with the way data got stored in Redis. This is no longer an issue in this latest version of HouseMon, because I’m switching over to LevelDB, a fascinating time- and space-efficient database engine.

Making software choices

In Musings on Sep 2, 2013 at 00:01

If there were just one video in the field of software and technology which I’ve watched this summer and would really like to recommend, then it’s this one from about a year ago:

Brian Ford: Is Node.js Better?

It’s 40 minutes long. It’s a presentation by Brian Ford, who has earned his marks in the Ruby world (not to be confused with another Brian Ford from the Angular.js community). He gets on stage at JSConf US 2012, a major conference on JavaScript and Node.js, and spends almost half an hour talking about everything but JavaScript.

At the end, he voices some serious concerns about Node.js in the high-end networking arena w.r.t. its single event-loop without threading, and how the Ruby community hit that wall long ago and made different choices. Interesting, also on a technical level.

But this is not really about language X vs language Y (yawn) – it’s about how we make choices in technology. No doubt your time is limited, and you may not even be interested in Node.js or Ruby, but all I can say is: it made me re-evaluate my positions and convictions, taught me about the power of honest argumentation, and got me to read that brilliant book by Daniel Kahneman, titled Thinking, Fast and Slow (heavy read – I’m halfway).

Elevating stuff…

It’s been a while…

In Musings on Sep 1, 2013 at 00:01

… since that last blog post. Time to get back into the fun – and I can’t wait!

This summer was spent in comfort and relaxation, a lot of it in and around the house, as everyone had left Houten, leaving behind a totally calm and quiet village, with lots of really nice summer days. As usual, when things are well, time passes quickly…

From a nice bike trip around Zwolle, to a very pleasant stay in Copenhagen with a trip to the splendid Louisiana Museum, we had a delightfully “slow” summer for a change.

Blaeu_1652_-_Zwolle

I spent days on end reading (eReader/iPad) and also had a great time at the OHM2013 hacker event. Made me feel old and young at the same time… I suppose that’s good :)

With a fantastic new discovery: a presentation and workshop on Molecular Cooking. Some pretty amazing stuff with food, such as spherification – we made “apple-juice caviar”, and a soup which makes one side of the mouth warm and the other side cold! (using fluid gels)

Here are some of the other things you can do with this (from the Molécule-R site):

Screen Shot 2013-08-31 at 16.29.46

Lots of fun and ideas to try out. It’s a charming mix between exploring original new tastes and playing tricks with the senses. It’s also called food hacking, which could explain why this topic came up at OHM2013 (alongside activism, banking, and fablabs).

A summer of contrasts… from a Hanseatic city to modern gastronomic creativity!