Computing stuff tied to the physical world

Posts Tagged ‘HouseMon’

LevelDB, MQTT, and Redis

In Software on Sep 27, 2013 at 00:01

To follow up on yesterday’s post, here are some more interesting properties of LevelDB, in the context of Node.js and HouseMon:

  • The levelup wrapper for Node supports streaming, i.e. traversing a set of entries for reading, as well as sending an entire set of PUTs and/or DELs as a stream of requests. This is a very nice match for Node.js and for interfaces which need to be sent across the network (including to/from browsers).

  • Events are generated for each change to the database. These can easily be translated into a “live stream” of changes. This is really nothing other than publish / subscribe, but again in a form which matches Node’s asynchronous design really well.

  • Standard encoding conventions can be defined for keys and/or values. In the case of HouseMon, I’m using UTF-8 as the default key encoding and JSON as the value encoding. The latter means that the default “stuff” stored in the database is simply a JavaScript object, and that is also what you get back on access and traversal. Individual cases can still override these defaults – see the sketch below.
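
To make that concrete, here is a minimal sketch using levelup – the database path and the key are made up for the example, and error handling is omitted:

levelup = require 'levelup'

# open (or create) a database with the encoding defaults described above
db = levelup './store.db',
  keyEncoding: 'utf8'
  valueEncoding: 'json'

db.put 'status~demo~temp', { value: 21.5 }, (err) ->
  db.get 'status~demo~temp', (err, data) ->
    console.log data.value  # => 21.5, i.e. a plain JavaScript object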

All these features and conventions end up playing together very nicely. One particularly interesting aspect is the “persistent pub-sub” model you can implement with this. This is an area where MQTT (“Message Queue Telemetry Transport”) has been gaining a lot of attention, for things like home automation and the “Internet of Things” (IoT).

MQTT is a message-based system. There are channels, named in a hierarchical manner, e.g. “/location/device/sensor”, to which one can publish measurement values (“publish /kitchen/roomnode/temp 17”). Other devices or software can subscribe to such channels, and get notified each time a value is published.

One interesting aspect of MQTT is its “RETAIN” functionality: for each channel, the last value can be retained, meaning that anyone subscribing to a channel will immediately be sent the last retained value, as if it just happened. This turns an ephemeral message-based setup into a state-based mechanism with change propagation, because you can subscribe to a channel after devices have published new values on it, yet still see what the best known value for that channel is, currently. With RETAIN, the MQTT system becomes a little database, with one value per channel. This can be very convenient after power-up as well.
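
For illustration, here is roughly what that flow looks like with the npm “mqtt” package and a broker on localhost – an assumed setup, not HouseMon code:

mqtt = require 'mqtt'

client = mqtt.connect 'mqtt://localhost'
client.on 'connect', ->
  # retain: true asks the broker to keep this as the channel's last value
  client.publish '/kitchen/roomnode/temp', '17', retain: true
  client.subscribe '/kitchen/+/temp'  # '+' matches one hierarchy level
client.on 'message', (topic, payload) ->
  console.log topic, payload.toString()  # fires even for late subscribers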

LevelDB comes from the other side, i.e. a database with change tracking, but the effect is exactly the same: you can subscribe to a set of keys, get all their latest values, and be notified from that point on whenever any key in this range changes. This makes LevelDB and MQTT functionally equivalent, although LevelDB can handle much larger key spaces (note also that MQTT has features which LevelDB lacks, such as QoS).
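
A rough sketch of that subscription pattern, using only levelup calls shown in these posts – the helper name and range bounds are illustrative, and the race between the initial scan and live changes is glossed over:

# replay the current state of a key range, then follow all further changes
watchRange = (db, from, upto, emit) ->
  db.createReadStream(start: from, end: upto)
    .on 'data', (entry) -> emit entry.key, entry.value
  db.on 'put', (key, value) ->  # levelup emits an event for every change
    emit key, value if from <= key <= upto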

It’s also interesting to compare this with the Redis database: Redis is another key-value store, but with more data structure options. The main difference from LevelDB is that Redis is memory-based, with periodic snapshots to disk, whereas LevelDB is disk-based, with caching in memory (and thus able to handle datasets which far exceed available RAM). Redis, LevelDB, and MQTT all support pub-sub in some form.

One reason for me to select LevelDB for HouseMon is that it’s an in-process embedded database. This means that there is no need for a separate process to run alongside Node.js – and hence no need to install anything else, or keep a background daemon running.

By switching to LevelDB, the HouseMon installation procedure has been simplified to:

  • download a recent version of HouseMon from GitHub
  • install Node (which includes the “npm” package manager)
  • run “npm install” to get everything needed by HouseMon

And that’s it. At this point, you just start up Node.js and everything will work.

But it’s not just convenience. LevelDB is also so compact, performant, and scalable, that HouseMon can now store all raw data as well as a complete set of aggregated per-hour values without running into resource limits, even on a Raspberry Pi. According to my rough estimates, it should all end up using less than 1 GB per year, so everything will easily fit on an 8 or 16 GB SD card.

The LevelDB database

In Software on Sep 26, 2013 at 00:01

In recent posts, I’ve mentioned LevelDB a few times – a database with some quite interesting properties.

Conceptually, LevelDB is ridiculously simple: a “key-value” store which stores all entries in sorted key order. Keys and values can be anything (including empty). LevelDB is not a toy database: it can handle hundreds of millions of keys, and does so very efficiently.

It’s completely up to the application how to use this storage scheme, and how to define key structure and sub-keys. I’m using a simple prefix-based mechanism: some name, a “~” character, and the rest of the key, with further fields separated by more “~” characters. The implicit sorting then makes it easy to get all the keys for any prefix.
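
Given an open levelup handle, such a prefix scan could look like this – the '\xff' end bound is a common idiom to cap the range, and the prefix is made up:

prefix = 'reading~testnode~'
db.createReadStream(start: prefix, end: prefix + '\xff')
  .on 'data', (entry) -> console.log entry.key, entry.value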

LevelDB is good at only a few things, but these work together quite effectively:

  • fast traversal in sorted and reverse-sorted order, over any range of keys
  • fast writing, regardless of which keys are being written, for add / replace / delete
  • compact key storage, by compressing common prefix bytes
  • compact data storage, by a periodic compaction using a fast compression algorithm
  • modifications can be batched, with a guaranteed all-or-nothing success on completion
  • traversal is done in an isolated manner, i.e. freezing the state w.r.t. further changes

Note that there are essentially just a few operations: GET, PUT, DEL, BATCH (of PUTs and DELs), and forward/backward traversal.
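
The BATCH operation deserves a tiny illustration, since it is the all-or-nothing one (keys invented for the example, db is an open levelup handle):

db.batch [
  { type: 'put', key: 'abc~new', value: { a: 1 } }
  { type: 'del', key: 'abc~old' }
], (err) ->
  # either both operations were applied, or neither was
  console.error 'batch failed, nothing was written' if err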

Perhaps the most limiting property of LevelDB is that it’s an in-process embedded library, and that therefore only a single process can have the database open. This is not a multi-user or multi-process solution, although one could be created on top by adding a server interface to it. This is the same as for Redis, which runs as one separate process.

To give you a rough idea of how the underlying log-structured merge-tree algorithm works, assume you have a datafile with a sorted set of key-value pairs, written sequentially to file. In the case of LevelDB, these files are normally capped at 2 MB each, so there may be a number of files which together constitute the entire dataset.

Here’s a trivial example with three key-value pairs:

[image: one-level – a single sorted file of key-value pairs]

It’s easy to see that one can do fast read traversals either way in this data, by locating the start and end position (e.g. via binary search).

The writing part is where things become interesting. Instead of modifying the files, we collect changes in memory for a bit, and then write a second-level file with changes – again sorted, but now not only containing the PUTs but also the DELs, i.e. indicators to ignore the previous entries with the same keys.

Here’s the same example, but with two new keys, one replacement, and one deletion:

[image: two-levels – the original file plus a second-level file with changes]

A full traversal would return the entries with keys A, AB, B, and D in this last example.

Together, these two levels of data constitute the new state of the database. Traversal now has to keep track of two cursors, moving through each level at the appropriate rate.

For more changes, we can add yet another level, but evidently things will degrade quickly if we keep doing this. Old data would never get dropped, and the number of levels would rapidly grow. The cleverness comes from the occasional merge steps inserted by LevelDB: once in a while, it takes two levels, runs through all entries in both of them, and produces a new merged level to replace those two levels. This cleans up all the obsolete and duplicate entries, while still keeping everything in sorted order.

The consequence of this process is that most of the data will end up getting copied, a few times even. It’s very much like a generational garbage collector with from- and to-spaces. So there’s a trade-off in the amount of writing this leads to. But there is also a compression mechanism involved, which reduces the file sizes, especially when there is a lot of repetition across neighbouring key-value pairs.

So why would this be a good match for HouseMon and time series data in general?

Well, first of all, in terms of I/O this is extremely efficient for time-series data collection: a bunch of values with different keys and a timestamp, getting inserted all over the place. With traditional round-robin storage and slots, you end up touching each file in some spot all the time – this leads to a few bytes written in a block, and requires a full block read-modify-write cycle for every single change. With the LevelDB approach, all the changes will be written sequentially, with the merge process delayed (and batched!) to a different moment in time. I’m quite certain that storage of large amounts of real-time measurement data will scale substantially better this way than any traditional RRDB, while producing the same end results for access. The same argument applies to indexed B-Trees.

The second reason, is that data sorted on key and time is bound to be highly compressible: temperature, light levels, etc – they all tend to vary slowly in time. Even a fairly crude compression algorithm will work well when readings are laid out in that particular order.

And thirdly: when the keys have the time stamp at the end, we end up with the data laid out in an optimal way for graphing: any graph series will be a consecutive set of entries in the database, and that’s where LevelDB’s traversal works at its best. This will be the case for raw as well as aggregated data.
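
In other words, fetching the data for one graph series boils down to a single range traversal – a sketch, using the reading~type~tag~timestamp key layout shown further down this page (the plot helper is hypothetical):

t0 = '1379300000000'  # start and end, as millisecond timestamps
t1 = '1379400000000'
series = 'reading~roomnode~rf12-868,5,24~'
db.createReadStream(start: series + t0, end: series + t1)
  .on 'data', (entry) -> plot entry.key, entry.value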

One drawback could be the delayed writing. This is the same issue as with Redis, which only saves its data to file once in a while. But this is no big deal in HouseMon, since all the raw data is being appended to plain-text log files anyway – so on startup, all we have to do is replay the end of the last log file, to make sure everything ends up in LevelDB as well.

Stay tuned for a few other reasons on why LevelDB works well in Node.js …

In search of a good graph

In Software on Sep 24, 2013 at 00:01

I’ve been using the DyGraphs package for some time now in HouseMon, but it’s not perfect. Here is a generated graph for this morning, comparing two different ways to measure the same solar power generation:

[screenshot: solar power graph, morning of 2013-09-23]

The green series comes from a pulse counter which generates a pulse every 0.5 Wh. These values then get sent to HouseMon every 3 seconds. A fairly detailed set of readings, as you can see – this is a typical morning in autumn on a cloudy day.

The blue series comes from the SMA inverter, which I’m reading every 5..6 minutes. It might be more accurate, but there is far less data coming in.

The key is that the area (i.e. Wh) under these two lines should be identical, if both devices are accurately calibrated. But this doesn’t quite seem to be the case here. (see update)

One known problem is that DyGraphs is plotting these “step charts” incorrectly: the value of the step should be at the end of the step, but DyGraphs can only plot steps given a starting value. So the way to read these blue steps, I think, is to keep the points, but to think of the step lines as being drawn horizontally preceding that point, not trailing it. I’m not fully convinced that this explains all the differences in the above examples, but it would seem to be a better match.

I’ve been looking at Highcharts and Highstocks as an alternative graphing library to use. They seem to support steps-with-end-values. But this sort of use really has quite a few other requirements as well, such as: being able to draw large datasets efficiently, support for multiple independent time series (DyGraphs was a bit limited here as well), interactive and intuitive zooming, good automatic choice of axis scales and fonts.

It would also be nice to be able to efficiently redraw a graph when new data points come in. Graphs used to be live in HouseMon, but at some point I ran into a snag and never bothered to fix it. For me, right now, live updating is less important than being able to generate accurate graphs. What’s the point of showing results on screen after all, if those results are not really represented truthfully…

I’ll stay on the lookout for a better graphing solution. All suggestions welcome!

Update – See @ward’s comments below for the proper explanation of what’s going on here. It’s not a graphing problem after all (or rather: a different one…).

Animation in the browser

In Software on Sep 23, 2013 at 00:01

If you’ve programmed for Mac OS X (my actual experience in that area is fairly minimal), then you may know that it has a feature called Core Animation – as a way to implement all sorts of flashy smooth UI transitions – sliding, fading, colour changes, etc.

The basic idea of animation in this context is quite simple: go from state A to state B by varying one or more values gradually over a specified amount of time.

It looks like web browsers are latching onto this trend as well, with CSS Animations. It has been possible to implement all sorts of animations for some time, using timed JavaScript loops running in the background to simulate this sort of stuff, as jQuery has been doing. Then again, getting such flashy features into a whole set of independently-developed (and competing) web browsers is bound to take more time, but yeah… it’s happening. Here’s an illustration of what text can do in today’s browsers.

In Angular, browser-side animation has recently become quite a bit simpler.

I’m no big fan of attention-drawing screen effects. There was a time when HTML’s BLINK tag got so much, ehm, attention that every page tried to look like a neon sign. Oh, look, a new toy tag, and it’s BLINKING… neat, let’s use it everywhere!

Even slide transitions (will!) get boring after a while.

But with dynamic web pages updating in real time, it seems to me that there is a case to be made for subtle hints that something has just happened. As a trial, I used a fade transition (from jQuery) in the previous version of HouseMon to briefly change the background of a changed row to very light yellow, which then fades back to white:

[screenshot: table row highlighted in light yellow]

The effect seems to work well – for me anyway: it’s easy to spot the latest changes in a table, without getting too distracted when looking at some specific data.

In the latest version of HouseMon, I wanted to try the same effect using the new Angular animation feature, and I must say: the basic mechanism is really simple – you make animations happen by defining the CSS start settings, the finish settings, and the time period plus transition style to use. In the case of adding a row to a table or deleting a row, it’s all built-in and done by just adding some CSS definitions, as described here.

In this particular case, things are slightly more involved because it’s not the addition or deletion of a row I wanted to animate, but the change of a timestamp on a row. This requires defining a custom Angular “directive”. Directives are the cornerstone of Angular: they can (re-) define the behaviour of HTML elements or attributes, and let you invent new ones to do just about anything possible in the DOM.

Unfortunately, directives can also be a bit obscure at times. Here is the one I ended up defining for HouseMon:

ng.directive 'highlightOnChange', ($animate) ->
  (scope, elem, attrs) ->
    scope.$watch attrs.highlightOnChange, ->
      $animate.addClass elem, 'highlight', ->
        attrs.$removeClass 'highlight'

With it, we can now add a custom highlightOnChange attribute to an element to implement highlighting when a value on the page changes.

Here is how it gets used (in Jade) on the status table rows shown in yesterday’s post:

tr(highlight-on-change='row.time')

In prose: when row.time changes, start the highlight transition, which adds a custom highlight class and removes it again at the end of the animation sequence.

The CSS then defines the actual animation (using Stylus notation):

.highlight-add
  transition 1s linear all
  background lighten(yellow, 75%)
.highlight-add.highlight-add-active
  background white

I’m leaving out some finicky details, but that’s the essence of it all: set up an $animate call to trigger on the proper conditions, and then define what is going to happen in CSS.

Note that Angular picks name suffixes for the CSS classes during the different phases of animation, but then it can all be used for numerous types of animation: you pick the CSS setting(s) at start and finish, and the browser’s CSS engine will automatically “run” the selected animation – covering motions, colour changes, and lots more (could fonts be emboldened gradually or even be morphed in more sophisticated ways? I haven’t tried it).

The result of all this is a table with subtle row highlights wherever a new reading comes in. Triggered by some Angular magic, but the actual highlighting is a (not-yet-100%-standardised) browser effect.

Working with a CSS grid

In Software on Sep 22, 2013 at 00:01

As promised yesterday, here is some information on how you can get page layouts off the ground real quick with the Zurb Foundation 4 CSS framework. If you’re more into Twitter Bootstrap, see this excellent comparison and overview. They are very similar.

It took a little while to sink in, until I read this article about how it’s supposed to be used, and skimmed through a 10-part series on webdesign tuts+. Then the coin dropped:

  • you create rows of content, using a div with class row
  • each row has 12 columns to put stuff in
  • you put the sections of your page content in divs with class large-6 and columns
  • each section goes next to the preceding one, so 6 and 6 puts the split in the middle
  • any number from 1 to 12 goes, as long as you use at most 12 columns per row

So far so good, and pretty trivial. But here’s where it gets interesting:

  • each column can have more rows, and each row again offers 12 columns to lay out
  • for mobile use, you can add small-N classes as well, with other column allocations

I haven’t done much with this yet, but this page was trivial to generate:

[screenshot: the HouseMon status page, laid out with the grid]

The complete definition of that inner layout is in app/status/view.jade:

.row
  .large-9.columns
    .row
      .large-8.columns
        h3 Status
      .large-4.columns
        h3
        input(type='text',ng-model='statusQuery',placeholder='Search...')
    .row
      .large-12.columns
        table(width='100%')
          tr
            th Driver
            th Origin
            th Name
            th Title
            th Value
            th Unit
            th Time
          tr(ng-repeat='row in status | filter:statusQuery | orderBy:"key"')
            td {{row.type}}
            td {{row.tag}}
            td {{row.name}}
            td {{row.title}}
            td(align='right') {{row.value}}
            td {{row.unit}}
            td {{row.time | date:"MM-dd   HH:mm:ss"}}

In prose:

  • the information shown uses the first 9 columns (the remaining 3 are unused)
  • in those 9 columns, we have a header row and then the actual data table as a row
  • the header is an h3 on the left and an input on the right, split as 2:1
  • the table takes the full width (of those 9 columns in which it resides)
  • the rest is standard HTML in Jade notation, and an Angular ng-repeat loop
  • there’s also live row filtering in there, using some Angular conventions
  • the alternating row backgrounds and general table style are Foundation’s defaults

Actually, I left out one thing: the entries automatically get highlighted whenever the time stamp changes. This animation is a nice way to draw attention to updates – stay tuned.

Responsive CSS layouts

In Software on Sep 21, 2013 at 00:01

One of the things today’s web adds to the mix, is the need to address devices with widely varying screen sizes. Seeing a web site made for an “old” 1024×768 screen layout squished onto a mobile phone screen is not really great if everything just ends up smaller on-screen.

Enter grid-based CSS frameworks and responsive CSS.

There’s a nice site to quickly try designs at different sizes, at cybercrab.com/screencheck. Here’s how my current HouseMon screen looks on two different simulated screens:

[screenshot: HouseMon on a desktop-sized screen]

[screenshot: HouseMon on a phone-sized screen]

It’s not just reflowing CSS elements (the bottom line stacks vertically on a small screen), it also completely alters the behaviour of the navigation bar at the top. In a way which works well on small screens: click in the top right corner to fold out the different menu choices.

Both Zurb Foundation and Twitter Bootstrap are “CSS frameworks” which provide a huge set of professionally designed UI elements – and now also do this responsive thing.

The key point here is that you don’t need to do much to get all this for free. It takes some fiddling and googling to set all the HTML sections up just right, but that’s it. In my case, I’m using Angular’s “ng-repeat” to generate the navigation bar menu entries on-the-fly:

nav.top-bar(ng-controller='NavCtrl')
  ul.title-area
    li.name
      h1
        a(ng-href='{{appInfo.home}}')
          img(src='/main/logo.png',height=22,width=22)
          |   {{appInfo.name}}
    li.toggle-topbar.menu-icon
      a(href='#')
        span
  section.top-bar-section
    ul.left
      li.myNav(ng-repeat='nav in navbar')
        a(ng-href='{{nav.route}}') {{nav.title}}
      li.divider
    ul.right
      li.myNav
        a(href='/admin') Admin

Yeah, ok, pretty tricky stuff. But it’s all documented, and nowhere near the effort it’d take to do this sort of thing from scratch. Let alone get the style and visual layout consistent and working across all the different browsers and versions.

Tomorrow, I’ll describe the way grid layouts work with Zurb Foundation 4 – it’s sooo easy!

In praise of AngularJS

In Software on Sep 20, 2013 at 00:01

This is how AngularJS announces itself on its home page:

HTML enhanced for web apps!

Hm, ok, but not terribly informative. Here’s the next paragraph:

HTML is great for declaring static documents, but it falters when we try to use it for declaring dynamic views in web-applications. AngularJS lets you extend HTML vocabulary for your application. The resulting environment is extraordinarily expressive, readable, and quick to develop.

That’s better. Let me try to ease you into this “framework” on which HouseMon is based:

  • Angular (“NG”) is client-side only. It doesn’t exist on the server. It’s all browser stuff.
  • It doesn’t take over. You could use NG for parts of a page. HouseMon is 100% NG.
  • It comes with many concepts, conventions, and terms you must get used to. They rock.

I can think of two main stumbling blocks: “Dependency Injection” and all that “service”, “factory”, and “controller” stuff (not to mention “directives”). And Model-View-Something.

It’s not hard. It wasn’t designed to be hard. The terminology is pretty clever. I’d say that it was invented to produce a mental model which can be used and extended. And like a beautiful clock, it doesn’t start ticking until all the little pieces are positioned just right. Then it will run (and evolve!) forever.

Dependency injection is nothing new, it’s essentially another name for “require” (Node.js), “import” (Python), or “#include” (C/C++). Let’s take a fictional piece of CoffeeScript code:

value = 123
myCode = (foo, bar) -> foo * value + bar

console.log myCode(100, 45)

The output will be “12345”. But with dependency injection, we can do something else:

value = 123
myCode = (foo, bar) -> foo * value + bar

define 'foo', 100
define 'bar', 45
console.log inject(myCode)

This is not working code, but it illustrates how we could define things in some more global context, then call a function and give it access to that context. Note that the “inject” call does something very magical: it looks at the names of myCode’s arguments, and then somehow looks them up and finds the values to pass to myCode.

That’s dependency injection. And Angular uses it all over the place. It will often call functions through a special “injector” and get hold of all args expected by the function. One reason this works is because closures can be used to get local scope information into the function. Note how myCode had access to “value” via a normal JavaScript closure.
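
To make the fictional snippet above tangible, here is one way such an injector could be implemented – a toy version, not Angular’s actual code, which is far more robust:

registry = {}
define = (name, val) -> registry[name] = val

inject = (fn) ->
  # read the argument names out of the function's own source text,
  # then look each one up in the registry – that's the "magic" part
  names = fn.toString().match(/\(([^)]*)\)/)[1].split(',')
  fn.apply null, (registry[n.trim()] for n in names when n)

value = 123
myCode = (foo, bar) -> foo * value + bar

define 'foo', 100
define 'bar', 45
console.log inject(myCode)  # => 12345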

In short: dependency injection is a way for functions to import all the pieces they need. This makes them extremely modular (and it’s great for test suites, because you can inject special versions and mock objects to test each piece of code outside its normal context).

The other big hurdle I had to overcome when starting out with Angular, is all those services, providers, factories, controllers, and directives. As it turns out, that too is much simpler than it might seem:

  • Angular code is packaged as “modules” (I actually only use a single one in HouseMon)
  • in these modules, you define services, etc – more on this in a moment
  • the whole startup is orchestrated in a specific way: boot, config, run, fly
  • ok, well, not “fly” – that’s just my name for it…

The startup sequence is key – to understand what goes where, and what gets called when:

  • the browser starts by loading an HTML page with <script> tags in it (doh)
  • all the code needed on the browser side has to be loaded up front
  • what this code does is define modules, services, and so on
  • nothing is really “running” at this point, these are all function definitions
  • the last step is to call angular.bootstrap(document, ['myApp']);
  • at this point, all the config sections defined in the modules are run
  • once that is over, all the run sections are run, this is the real application start
  • when that is done, we in essence enter The Big Event Loop in the browser: lift off!

So on the client side, i.e. the browser, the entire application needs to be structured as a set of definitions. A few things need to be done early on (config), a few more once everything has been defined and Angular’s “$rootScope” has been put in place (read: the main page context is ready, a bit like the DOM’s “document.onload”), and then it all starts doing real work through the controllers tied to various HTML elements in the DOM.
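
A bare-bones module showing those phases – just enough to see the ordering, nothing HouseMon-specific:

ng = angular.module 'myApp', []

ng.config ->
  console.log 'config: wiring only, nothing is running yet'

ng.run ($rootScope) ->
  console.log 'run: everything is defined - lift off'

angular.bootstrap document, ['myApp']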

Providers, factories, services, and even constants and values, are merely variations on a theme. There is an excellent page by Matt Haggard describing these differences. It’s all smoke and mirrors, really. Everything except directives is a provider in Angular.

Angular is like the casing of a watch, with a powerful pre-loaded spring already in place. You “just” have to figure out the role of the different pieces, put a few of them in place, and it’ll run all by itself from then on. It’s one of the most elegant frameworks I’ve ever seen.

With HouseMon, I’ve put the main pieces in place to show a web page in the browser with a navigation top bar, and the hooks to tie into Primus and RPC. The rest is all up to the modules, i.e. the subfolders added to the app/ folder. Each client.coffee file is an Angular module. HouseMon will automatically load them all into the browser (via a /primus/primus.js file generated on-the-fly by Primus). This has turned out to be extremely modular and flexible – and it couldn’t have been done without Angular.

If you’re looking for a “way in” for the HouseMon client-side code, then the place to start is app/index.jade, which uses Jade’s extend mechanism to tie into main/layout.jade, so that would be the second place to start. After that, it’s all Angular-structured code, i.e. views, view templates with controllers, and the services they call.

I’m quite excited by all this, because I’ve never before been able to develop software in such a truly modular fashion. The interfaces are high-level, clean (and clear) and all future functionality can now be added, extended, replaced, or even removed again – step by step.

Ground control to… browser

In Software on Sep 19, 2013 at 00:01

After yesterday’s RPC example, which makes the server an easy-to-use service in client code, let’s see how to do things in the other direction.

I should note that there is usually much less reason to do this, since at any point in time there can be zero, one, or occasionally more clients connected to the server in the first place. Still, it could be useful for a server to collect client status info, or to periodically save some state. The benefit of server-initiated requests is that it turns authentication around, assuming we trust the server.

Ok, so here’s some silly code on the client which I’d like to be able to call from the server:

ng = angular.module 'myApp'

ng.config ($stateProvider, navbarProvider, primus) ->
  $stateProvider.state 'view1',
    url: '/'
    templateUrl: 'view1/view.html'
    controller: 'View1Ctrl'
  navbarProvider.add '/', 'View1', 11
  
  primus.client.view1_twice = (x) ->
    2 * x

ng.controller 'View1Ctrl', ->

This is the “View 1” module, similar to the Admin module, exposing code for server use:

  primus.client.view1_twice = (x) ->
    2 * x

On the server side, the first thing to note is that we can’t just call this at any point in time. It’s only available when a client is connected. In fact, if more than one client is connected, then there will be multiple instances of this code. Here is app/view1/host.coffee:

module.exports = (app, plugin) ->
  app.on 'running', (primus) ->
    primus.on 'connection', (spark) ->
      
      spark.client('view1_twice', 123).then (res) ->
        console.log 'double', res

The output on the server console, whenever a client connects, is – rather unsurprisingly:

    double 246

What’s more interesting here, is the logic of that server-side code:

  • the app/view1/host.coffee exports one function, accepting two args
  • when loaded on startup, it will be called with the app and Primus plugin object
  • we’re in setup mode at this point, so this is a good time to define event handlers
  • the app.on 'running' handler gets called when all server setup is complete
  • now we can set up a handler to capture all new client connections
  • when such a connection comes in, Primus will hand us a “spark” to talk to
  • a spark is essentially a WebSocket session, it supports a few calls and streaming
  • now we can call the ‘view1_twice’ code on the client via RPC
  • this creates and returns a promise, since the RPC will take a little bit of time
  • we set it up so that when the promise is fulfilled, console.log ... gets called

And that’s it. As with yesterday’s setup, all the communication and handling of asynchronous callbacks is taken care of behind the scenes.

There is some asymmetry in naming conventions and the uses of promises, but here’s the summary:

  • to define a function ‘foo’ on the host, define it as app.host.foo = …
  • to call that function on the client, use host ‘foo’, …
  • the result is an Angular promise, use “.then …” to pick it up if needed
  • to define a function ‘bar’ on the client, define it as primus.client.bar = …
  • to call that function from the server, use spark.client ‘bar’, …
  • the result is a q promise, use “.then …” to pick it up

Ok, so there are some slight differences between the two promise API’s. But the main benefit of this all is that the passing of time and networking involved is completely abstracted away. Maybe I’ll be able to unify this further, some day.

Promise-based RPC

In Software on Sep 18, 2013 at 00:01

Or maybe this post should have been titled: “Taming asynchronous RPC”.

Programming with asynchronous I/O is tricky. The proper solution, IMO, is to have full coroutine or generator support in the browser and in the server. This has been in the works for some time now (see ES6 Harmony). In short: coroutines and generators can “yield”, which means something that takes time can suspend the current stack state until it gets resumed. A bit like green (non-preemptive) threads, and with very little overhead.

But we ain’t there yet, and waiting for all the major browsers to reach that point might be a bit of a stretch (see the “Generators (yield)” entry on this page). The latest Node.js v0.11.7 development release does seem to support it, but that’s only half the story.

So promises it is for now. On the server side, this is available via Kris Kowal’s q package. On the client side, AngularJS includes promises as $q. And as mentioned before, I’m also using q-connection as the promise-based Remote Procedure Call (RPC) mechanism.

The result is really neat, IMO. First, RPC from the client, calling a function on the server. This is app/admin/client.coffee for the client side:

ng = angular.module 'myApp'

ng.config ($stateProvider) ->
  $stateProvider.state 'admin',
    url: '/admin'
    templateUrl: 'admin/view.html'
    controller: 'AdminCtrl'

ng.controller 'AdminCtrl', ($scope, host) ->
  $scope.hello = host 'admin_dbinfo'

This Angular boilerplate defines an Admin page with app/admin/view.jade template:

h3 Storage use
table
  tr
    th Group
    th Size
  tr(ng-repeat='(group,size) in hello')
    td {{group}}
    td(align='right') ≈ {{size}} b

The key is this one line:

$scope.hello = host 'admin_dbinfo'

That’s all there is to doing RPC with a round-trip over the network. And here’s the result, as shown in the browser (formatted with the Zurb Foundation CSS framework):

[screenshot: the Storage use table on the Admin page]

There’s more to it than meets the eye, but it’s all nicely tucked away: the host call does the request, and returns a promise. Angular knows how to deal with promises, so it will fill in the “hello” field when the reply arrives, asynchronously. IOW, the above code looks like synchronous code, but does lots of things at another point in time than you might think.
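
For comparison, here is the same call with the promise handled explicitly, in case the implicit resolution in the template feels too magical:

host('admin_dbinfo').then (info) ->
  $scope.hello = info  # runs later, once the server reply has arrived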

On the server, this is the app/admin/host.coffee code:

Q = require 'q'

module.exports = (app, plugin) ->
  app.on 'setup', ->

    app.host.admin_dbinfo = ->
      q = Q.defer()
      app.db.getPrefixDetails q.resolve
      q.promise

It’s slightly more involved because promises are more exposed. First, the RPC call is defined on application setup. When called, it creates a new promise, calls getPrefixDetails with a callback, and immediately returns this promise to fill in the result later.

At some point, getPrefixDetails will call the q.resolve callback with the result as argument, and the promise becomes fulfilled. The reply is then sent to the client by the q-connection package.

Note that there are three asynchronous calls involved in total:

    rpc call -> network send -> database action -> network reply -> show
                ============    ===============    =============

Each of the underlined steps is asynchronous and based on callbacks. Yet the only step we had to deal with explicitly on the server was that custom database action.

Tomorrow, I’ll show RPC in the other direction, i.e. the server calling client code.

Flow – the inside perspective

In Software on Sep 17, 2013 at 00:01

This is the last of a four-part series on designing big apps (“big” as in not embedded, not necessarily many lines of code – on the contrary, in fact).

The current 0.8 version of HouseMon is my first foray into the world of streams and promises, on the host server as well as in the browser client.

First a note about source code structure: HouseMon consists of a set of subfolders in the app folder, and differs from the previous SocketStream-based design in that dependency info, client-side code, client-side layout files + images, and host-side code are now all grouped per module, instead of strewn across client, static, and server folders.

The “main” module starts the ball rolling, the other modules mostly just register themselves in a global app “registry”, which is simply a tree of nested key/value mappings. Here’s a somewhat stripped down version of app/main/host.coffee:

module.exports = (app, plugin) ->
  app.register 'nodemap.rf12-868,42,2', 'testnode'
  app.register 'nodemap.rf12-2', 'roomnode'
  app.register 'nodemap.rf12-3', 'radioblip'
  # etc...

  app.on 'running', ->
    Serial = @registry.interface.serial
    Logger = @registry.sink.logger
    Parser = @registry.pipe.parser
    Dispatcher = @registry.pipe.dispatcher
    ReadingLog = @registry.sink.readinglog

    app.db.on 'put', (key, val) ->
      console.log 'db:', key, '=', val

    jeelink = new Serial('usb-A900ad5m').on 'open', ->

      jeelink # log raw data to file, as timestamped lines of text
        .pipe(new Logger) # sink, can't chain this further

      jeelink # log decoded readings as entries in the database
        .pipe(new Parser)
        .pipe(new Dispatcher)
        .pipe(new ReadingLog app.db)

This is all server-side stuff and many details of the API are still in flux. It’s reading data from a JeeLink via USB serial, at which point we get a stream of messages of the form:

{ dev: 'usb-A900ad5m', msg: 'OK 2 116 254 1 0 121 15' }

(this can be seen by inserting “.on('data', console.log)” in the above pipeline)

These messages get sent to a “Logger” stream, which saves things to file in raw form:

L 17:18:49.618 usb-A900ad5m OK 2 116 254 1 0 121 15

One nice property of Node streams is that you can connect them to multiple outputs, and they’ll each receive these messages. So in the code above, the messages are also sent to a pipeline of “transform” streams.
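
Each of those transform steps is a standard Node.js stream in object mode. A minimal sketch of the shape such a step has (illustrative, not the actual Parser code):

{Transform} = require 'stream'

class Annotate extends Transform
  constructor: -> super objectMode: true  # whole objects flow through
  _transform: (msg, encoding, done) ->
    msg.note = 'seen'  # modify or augment the message here
    @push msg          # hand it to the next stream in the pipeline
    done()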

The “Parser” stream understands RF12demo’s text output lines, and transforms it into this:

{ dev: 'usb-A900ad5m',
  msg: <Buffer 02 74 fe 01 00 79 0f>,
  type: 'rf12-868,42,2' }

Now each message has a type and a binary payload. The next step takes this through a “Dispatcher”, which looks up the type in the registry to tag it as a “testnode” and then processes this data using the definition stored in app/drivers/testnode.coffee:

module.exports = (app) ->

  app.register 'driver.testnode',
    in: 'Buffer'
    out:
      batt:
        title: 'Battery status', unit: 'V', scale: 3, min: 0, max: 5

    decode: (data) ->
      { batt: data.msg.readUInt16LE 5 }

A lot of this is meta information about the results decoded by this particular driver. The result of the dispatch process is a message like this:

{ dev: 'usb-A900ad5m',
  msg: { batt: 3961 },
  type: 'testnode',
  tag: 'rf12-868,42,2' }

The data has now been decoded. The result is one parameter in this case. At the end of the pipeline is the “ReadingLog” write stream which timestamps the data and saves it into the database. To see what’s going on, I’ve added an “on ‘put'” handler, which shows all changes to the database, such as:

db: reading~testnode~rf12-868,42,2~1379353286905 = { batt: 3961 }

The ~ separator is a convention with LevelDB to partition the key namespace. Since the timestamp is included in the key, every reading ends up under its own unique key – i.e. this acts as historical storage.
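
The write end of the pipeline can be sketched along the same lines – an illustrative stand-in for ReadingLog, using the key convention above:

{Writable} = require 'stream'

class ReadingSink extends Writable
  constructor: (@db) -> super objectMode: true
  _write: (msg, encoding, done) ->
    key = "reading~#{msg.type}~#{msg.tag}~#{Date.now()}"
    @db.put key, msg.msg, done  # done fires once the db write completes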

This was just a first example. The “StatusTable” stream takes a reading such as this:

db: reading~roomnode~rf12-868,5,24~1379344595028 = {
  "light": 20,
  "humi": 68,
  "moved": 1,
  "temp": 143
}

… and stores each value separately:

db: status~roomnode/rf12-868,5,24/light = {
  "key":"roomnode/rf12-868,5,24/light",
  "name":"light",
  "value":20,
  "type":"roomnode",
  "tag":"rf12-868,5,24",
  "time":1379344595028
}
// and 3 more...

Here, the information is placed inside the message, including the time stamp. Keys must be unique, so in this case only the last value will be kept in the database. In other words: this fills a “status” table with the latest value of each measurement value.

As last example, here is a pipeline which replays a saved log file, decodes it, and treats it as new readings (great for testing, but useless in production, clearly):

createLogStream = @registry.source.logstream

createLogStream('app/replay/20121130.txt.gz')
  # .pipe(new Replayer)
  .pipe(new Parser)
  .pipe(new Dispatcher)
  .pipe(new StatusTable app.db)

Unlike the previous HouseMon design, this now properly deals with back-pressure.

The “Replayer” stream is a fun one: it takes each message and delays passing it through (with an updated timestamp), so that this stream will simulate a real-time feed with new messages trickling in. Without it, the above pipeline processes the file as fast as it can.
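
Here is a guess at what a Replayer-like stage could look like – not the real implementation, and reusing the Transform import from the sketch earlier:

class Replayer extends Transform
  constructor: -> super objectMode: true
  _transform: (msg, encoding, done) ->
    setTimeout =>
      msg.time = Date.now()  # pretend the reading just arrived
      @push msg
      done()  # holding off done() is what throttles the flow
    , 100     # fixed delay; the real one could use the logged timestamps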

The next step will be to connect a change stream through the WebSocket to the client, so that live status information can be displayed on-screen, using exactly the same techniques.

As you can see, it’s streams all the way down. Onwards!

Flow – the application perspective

In Software on Sep 16, 2013 at 00:01

This is the third of a four-part series on designing big apps (“big” as in not embedded, not necessarily many lines of code – on the contrary, in fact).

Because I couldn’t fit it in three parts after all.

Let’s talk about the big picture, in terms of technologies. What goes where, and such.

A decade or so ago, this would have been a decent model of how web apps work, I think:

[diagram: app1 – the classic server-centric web app]

All the pages, styles, images, and some code fetched as HTTP requests, rendered on the server, which sits between the persistent state on the server (files and databases), and the network-facing end.

A RESTful design would then include a clean structure w.r.t. how that state on the server is accessed and altered. Then we throw in some behind-the-scenes logic with Ajax to make pages more responsive. This evolved to general-purpose client-side JavaScript libraries like jQuery to get lots of the repetitiveness out of the developer’s workflow. Great stuff.

The assumption here is that servers are big and fast, and that they work reliably, whereas clients vary in capabilities, versions, and performance, and that network connection stability needs to stay out of the “essence” of the application logic.

In a way, it works well. This is the context in which database-backed websites became all the rage: not just a few files and templating to tweak the website, but the essence of the application is stored in a database, with “widgets”, “blocks”, “groups”, and “panes” sewn together by an ever-more elaborate server-side framework – Ruby on Rails, Django, Drupal, etc. WordPress and Redmine too, which I gratefully rely on for the JeeLabs sites.

But there’s something odd going on: a few days ago, the WordPress server VM which runs this daily weblog here at JeeLabs crashed on an out-of-memory error. I used to reserve 512 MB RAM for it, but had scaled it back to 256 MB before the summer break. So 256 MB is not enough apparently, to present a couple of weblog pages and images, and handle some text searches and a simple commenting system.

(We just passed 6000 comments the other day. It’s great to see y’all involved – thanks!)

Ok, so what’s a measly 512 MB, eh?

Well, to me it just doesn’t add up. Ok, so there’s a few hundred MB of images by now. Total. The server is running off an SSD disk, so we should be able to easily serve images with say 25 MB of RAM. But why does WP “need” hundreds of MB’s of RAM to serve a few dozen MB’s of daily weblog posts? Total.

It doesn’t add up. Self-inflicted overhead, for what is 95% a trivial task: serving the same texts and images over and over again to visitors of this website (and automated spiders).

The Redmine project setup is even weirder: currently consuming nearly 700 MB of RAM, yet this ought to be a trivial task, which could probably be served entirely out of say 50 MB of RAM. A tenfold resource consumption difference.

In the context of the home monitoring and automation I’d like to take a bit further, this sort of resource waste is no longer a luxury I can afford, since my aim is to run HouseMon on a very low-end Linux board (because of its low power consumption, and because it really doesn’t do much in terms of computation). Ok, so maybe we need a bit more than 640 KB, but hey… three orders of magnitude?

In this context, I think we can do better. Instead of a large server-side byte-shuffling engine, we can now do this – courtesy of modern browsers, Node.js, and AngularJS:

[diagram: app2 – static assets plus a real-time server]

The server side has been reduced to its bare minimum: every “asset” that doesn’t change gets served as is (from files, a database, whatever). This includes HTML, CSS, JavaScript, images, plain text, and any other “document” type blob of information. Nginx is a great example of how a tiny web server based on async I/O and written in C can take care of any load we down-to-earth netizens are likely to encounter.

Let me stress this point: there is no dynamic “templating” or “page generation” on the server side. This takes place in the browser – abstracted away by AngularJS and the DOM.

In parallel, there’s a “Real-Time Server” running, which handles the application logic and everything that needs to be dynamic: configuration! status! measurements! commands! I’m using Node.js, and I’ve started using the fascinating LevelDB database system for all data persistence. The clever bit of LevelDB is that it doesn’t just fetch and store data, it actually lets you keep flows running with changes streamed in and out of it (see the level-livefeed and multilevel projects).

So instead of copying data in and out of a persistent data store, this also becomes a way of staying up to date with all changes on that data. The fusion of a database and pubsub.

On the client side, AngularJS takes care of the bi-directional flow between state (model) and display (view), and Primus acts as generic connection library for getting all this activity across the net – with connection keep-alive and automatic reconnect. Lastly, I’m thinking of incorporating the q-connection library for managing all asynchronicity. It’s fascinating to see network round trip delays being woven into the fabric of an (async / event-driven) JavaScript application. With promises, client/server apps are starting to feel like a single-system application again. It all looks quite, ehm… promising :)

Lots of bits that need to work together. So far, I’m really delighted by the flexibility this offers, and the fact that the server side is getting simpler: just the autonomous data acquisition, a scalable database, a responsive WebSocket server to make the clients (i.e. browsers) come alive, and everything else served as static data. State is confined to the essence of the app. The rest doesn’t change (other than during development of course), needs no big backup solution, and the whole kaboodle should easily fit on a Raspberry Pi – maybe something even leaner than that, one day.

The last instalment tomorrow will be about how HouseMon is structured internally.

PS. I’m not saying this is the only way to go. Just glad to have found a working concoction.

Flow – the developer perspective

In Software on Sep 15, 2013 at 00:01

This is the second of a three-part (make that four-part) series on designing big apps (“big” as in not embedded, not necessarily many lines of code – on the contrary, in fact).

Software development is all about “flow”, in two very different ways: the developer staying in the flow during the edit-run-debug cycle, and the data flowing through the application itself.

I’ve been looking at lots of options on how to address that first bit, i.e. how to set up an environment which can help me through the edit-run-debug cycle as smoothly and quickly as possible. I didn’t find anything that really fits my needs, so as any good ol’ software developer would do, I scratched this itch recently and ended up creating a new tool.

It’s all about flow. Iterating through the coding process without pushing more buttons or keys than needed. Here’s how it works – there are three kinds of files I need to deal with:

  • HTML – app pages and docs, in the form of Jade and Markdown text files
  • CSS – layout and visual design, in the form of Stylus text files
  • CODE – the logic of it all, host- and client-side, in the form of CoffeeScript text files

Each of these are not the real thing, in the sense that they are all dialects which need to be translated to HTML, CSS, and JavaScript, respectively. But I don’t want a solution which introduces lots of extra steps into that oh-so treasured edit-run-debug cycle of software development. No temporary files, please – let’s generate everything as needed on-the-fly, and let’s set up a single task to take care of it all.

The big issue here is when to do all that pre- (and re-) processing. Again, since staying in the flow is the goal, that really should happen whenever needed and as quickly as possible. The workflow I’ve found to suit me well is as follows:

  • place all the development source files into one nested area (doh!)
  • have the system watch for changes to any files in this area
  • if it’s a HTML file change: reload the browser
  • if it’s a CSS file change: update the browser (no need to reload)
  • if it’s a CODE file change: restart the server and reload the browser

The term for this is “live reload”. It’s been in use for ages by web developers. You save a file, and BAM … the browser updates itself. But that’s only half the story with a client + server setup, as you also may have to restart the server. In the past, I’ve done so using the nodemon utility. It works, but along with other tools to watch for changes and pre-compile to the desired format, it all got a bit clunky.

In the new HouseMon project, it’s all done behind the scenes in a single process. Well, two actually: when in development mode, HouseMon forks itself into a supervisor and a worker child. The supervisor just restarts the worker whenever it exits. And the worker watches for files changes and either reloads or adjusts the browser, or exits. The latter becomes a way for the worker to quickly restart itself from scratch due to a code change.
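
The fork-and-restart part can be done with Node’s built-in cluster module – a stripped-down sketch of the supervisor/worker idea (the real HouseMon code also handles the file watching and browser reloads; startApp is a hypothetical entry point):

cluster = require 'cluster'

if cluster.isMaster
  cluster.fork()                        # supervisor: start one worker
  cluster.on 'exit', -> cluster.fork()  # and revive it whenever it exits
else
  startApp()  # the actual app: watch files, serve clients, maybe exit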

The result works phenomenally well: I edit in MacVim, multiple files even, and nothing happens. I can bring up the Dash reference guides to look up something. Still nothing happens. But when I switch away from MacVim, it’s set up to save all changes, and everything then kicks in on auto-pilot. Instantly or a few seconds later (depending on the change), the latest changes will be running. The browser may lose a little of its context, but the URL stays the same, so I’m still looking at the same page. The console is open, and I can see what’s going on there. Without ever switching to the browser (ok, I need to switch away from MacVim to something else, but often that will be the command line).

It’s not yet the same as live in-app development (as was possible with Tcl), but it’s getting mighty close, because the app returns to where it was before without pushing any buttons.

There’s one other trick in this setup which is a huge time-saver: whenever the supervisor is launched (“node .”), it scans through all the folders inside the main ./app folder, and collects npm and bower package files as well as Makefile files. All dependencies and requirements are then processed as needed, making the use of 3rd party packages virtually effortless – on the server and in the client. Total automation. Look ma, no hands!

None of this machinery needs to be present in production mode. All this reloading and package dependency handling stuff can then be ignored. Deployment just needs a bunch of static files, served through Node.js, Nginx, Apache, whatever – plus a light-weight Node.js server for the host-side logic, using WebSockets or some fallback mechanism.

As for the big picture on how HouseMon itself is structured… more coming, tomorrow.

Flow – the user perspective

In Software on Sep 14, 2013 at 00:01

This is the first of a three-part (make that four-part) series on designing big apps (“big” as in not embedded, not necessarily many lines of code – on the contrary, in fact).

Ok, so maybe what follows is not the user perspective, but my user perspective?

I’m not a fan of pushing – er, clicking – buttons. It’s a computer invention, and it sucks.

Interfaces (I’m not keen on the term user interfaces either, but it’s more to the point than just “interfaces”) – anyway, interfaces need interaction to indicate what information you want to see, and to perform a real action, i.e. something that has effect on some permanent state. From setting up preferences or a configuration, to turning on a light.

But apart from that, interaction to fetch or update something is tedious, unwanted, silly, and a waste of time. Pushing a button to see the time is 2,718,281 times more inconvenient than looking at the time. Just like asking for something involves a lot more interaction (and social subtleties) than having the liberty to just do it.

When I look outside the window, I expect to see the actual weather. And when I see information on a screen, I expect it to be up-to-date. I don’t care if the browser does constant refreshes to pull changes from a server or whether it’s set up to respond to push notifications coming in behind the scenes. When I see “22.7°C”, it needs to be a real reading, with a known and acceptable lag and accuracy – unless it’s historical data.

This goes a lot further than numbers and graphs. When I see a button, then to me that is a promise (and invitation) that some action will be taken when I push it. Touch on mobile devices has the advantage here, although clicking via the mouse or typing a power-user keyboard shortcut is often perfectly acceptable – even if slightly more indirect.

What I don’t want is a button which does nothing, or generates an error message. If that button isn’t intended to be used, then it should be disabled or it shouldn’t be there at all:

[two screenshots: the “Graphs” install page in HouseMon]

That’s the “Graphs” install page in HouseMon (the wording could have been better).

When a list or graph of readings is shown on-screen, this needs to auto-update in real time. That’s the whole point of HouseMon – access to specific information (that choice is a preference, and will require interaction) to see what’s going on. Now – or within the last few minutes, in the case of sensors which are periodically sending out their readings. A value without an indication of its “up-to-date-ness” is useless. That could be either implicit by displaying it in some special way if the information is stale (slowly turning red?), or explicit, by displaying a time stamp next to it (a bit ugly and tedious).

Would you want it any other way when doing online banking? Would you accept seeing your account balance without a guarantee that it is recent? Nah, me neither :) – so why do we put up with web pages with copies of some information, at some point in time, getting obsolete the very moment that page is shown?

When “on the web” – as a user – I want to deal with accurate information. When I see something tagged as “updated 2 minutes ago”, then I want to see that “2” change into a “3” within the next minute. See GitHub for an example of this. It works, it makes sense, and it makes the whole technology get out of the way: the web should be an intermediary between some information and me. All the stuff in between is secondary.

Information needs to update – we need to stop copying it into a web page, and send that copy off to the browser. All the lipstick in the world won’t help if what we see is a copy of what we’re looking for. As developers, we need to stop making web sites fancy – we should make them live first and then make them gorgeous. That’s what RIA‘s and SPA‘s are about.

One more thing – not only does information need to flow, these flows should be visualised:

[screenshot: data-flow visualisation in HouseMon]

Such a user interface can be used during development to manage data flows and to design software which ties each result shown on-screen back to its origin. In tomorrow’s apps, every change should propagate – all the way to the pixels shown on-screen by the browser.

Now let’s go back to static web servers to make it happen. Stay tuned…

Gearing up for the new web

In Software on Sep 13, 2013 at 00:01

Well… new for me, anyway. What a fascinating world in motion we live in:

  • first there was the “pre-web”, with email, BBS’es, and very clunky modem links
  • then came Netscape Navigator – and the world would never be the same again
  • hey, we can make the results dynamic, let’s generate pages through CGI
  • and not just outgoing, we can have forms and page refreshes to interact!
  • we need lots of ways to generate page variations, let’s add modules to the server
  • nah, we can do better than that: let’s add PHP / Python / Ruby inside the server!
  • and so the first major web explosion of the internet was born…

It took a decade, and another decade to make fast internet access mainstream, which is where we are today. There are lots and lots and LOTS of “web frameworks” in existence now, for any programming language you care about.

The second major wave of evolution came with a whole bunch of new acronyms, such as RIA and SPA. This is very much Google’s turf, and this is the technology which powers Gmail, for example. Rich and responsive interactivity. It’s going everywhere by now.

It was made possible by the super-trio of HTML, CSS, and JS (JavaScript). They’ve all evolved to define the structure, style, and logic of everything you see in the browser.

It’s hard to keep up, but as you know, I’ve picked two very active technologies for doing all my new software development: AngularJS and Node.js. AngularJS comes from Google and Node.js is built on Google’s V8 engine, by the way – and both are freely available as wide-open source (here and here). They’ve changed my life :)

It’s been a steep learning curve. I don’t mean just getting something going. I mean getting comfortable with it all. Understanding the idioms and quirks (there always are), and figuring out how to debug stuff (async is nasty, especially when not using promises). Can’t say I’m in the clear yet, but the fog is lifting and the fun level is rising!

I started coding HouseMon many months ago and it’s been running here for a long time now, giving me a chance to let it all sink in. The reality is: I’ve been doing it all wrong.

That’s not necessarily a bad thing, by the way: ya gotta learn stuff by doin’, right?

Ok, so I’ve been using Events as the main decoupling mechanism in HouseMon, i.e. publish and subscribe inside the app. The beauty of this is that it lets you keep the functionality of the application highly modular. Each subsystem subscribes to what it is interested in, and publishes results when they are available. No need to consider who needs this, or even how many subscribers there will be for what gets published.
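
In a nutshell, it’s the same pattern as Node’s own EventEmitter. A tiny sketch of the idea – the actual plumbing in HouseMon differs in its details:

    {EventEmitter} = require 'events'
    hub = new EventEmitter

    # any number of subsystems can subscribe, independently of each other
    hub.on 'reading', (r) -> console.log 'got', r

    # a driver publishes decoded readings, without knowing who is listening
    hub.emit 'reading', {id: 1, type: 'roomNode', value: 123}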

It sounds great, and in a way it is, but it is a bit like a loose cannon: events fire all over the place, with little insight into what is going on and when. For some apps, this is probably great, but with fairly stable continuous processing flows as in a home monitoring setup, it’s more chaotic than need be. It all works fine “in the small”, i.e. when writing little modules (“Briqs” in HouseMon-speak) and adding minor drivers / decoders, but I constantly felt lost with respect to the big picture. And then there’s this nasty back pressure dilemma.

But now the pieces are starting to fall into place. I’ve figured out the “big” structure of the HouseMon application as well as the “little” inner organisation of it all.

It’s all about flow. Four stories coming up – stay tuned…

My development setup – utilities

In Software on Sep 12, 2013 at 00:01

As promised, one final post about my development setup – these are all utilities for Mac OSX. There are no doubt a number of equivalents for other platforms. Here’s my top 10:

  • Alfred – Keyboard commands for what normally requires the mouse. I have shortcuts such as cmd-opt-F1 for iTerm, cmd-opt-F2 for MacVim. It launches the app if needed, and then either brings it to the front or hides it. Highly configurable, although I don’t really use that many features – just things like “f blah” to see the list of all folder names matching “blah”. Apart from that, it’s really just an alternative for Spotlight (local disk searches) + Google (or DuckDuckGo) + app launcher. Has earned the #1 keyboard shortcut spot: cmd+space. Top time-saver.

  • HomeBrew – This is the package manager which handles (almost) anything from the Unix world. Installing node.js (w/ npm) is brew install node. Installing MacVim is brew install macvim. Updates too. The big win is that all these installs are compartmentalised, so they can be uninstalled individually as well. Empowering.

  • Dash – Technical reference guides at your fingertips. New guides and updates added a few times a week. Looking for the Date API in JavaScript is a matter of typing cmd-opt-F8 to bring up Dash, and then “js:date”. My developer-pedia.

  • GitHub – Actually, it’s called “GitHub for Mac” – a tool to let you manage all the daily git tasks without ever having to learn the intricacies of git. It handles the 90% most common actions, and it does those well and safely. Push early, push often!

  • SourceTree – Since I’m no git guru, I use SourceTree to help me out with more advanced git actions, with the GUI helping me along and (hopefully) preventing silly mistakes. Every step is trackable, and undoable, but when you don’t know what you’re doing with git, it can become a real mess in the project history. This holds my hand.

  • Dropbox – I’m not into iCloud (too much of a walled garden for me), but for certain things I still need a way to keep things in sync between different machines, and with a few people around the world. Dropbox does that well (though BitTorrent Sync may well replace it one day). Mighty convenient.

  • 1Password – One of the many things that get synced through Dropbox are all my passwords, logins, bank details, and other little secrets. Securely protected by a strong password of course. Main benefit of 1P is that it integrates well with all the browsers, so logging in anywhere, or even coming up with new arbitrary passwords is a piece of cake. Also syncs with mobile. Indispensable.

  • DevonThink – This is my heavyweight external brain. I dump everything in there which needs to be retained. Although plain local search is getting so good and fast that DT is no longer an absolute must, it partitions and organises my files in separate databases (all plain files on disk, so no extra risk of data loss). Intelligent storage.

  • CrashPlan – Nobody should be without (automated, offline) backups. CrashPlan takes care of all the machines around the house. I sleep much better at night because of it. Its only big drawback is that there’s always a 500 MB background process to make this happen (fat Java worker, apparently), and that it burns some serious CPU cycles when checking, or catching up, or cleaning up – whatever. Still, essential.

  • Textual – Yeah, well, ok, this one’s an infrequent member on this list: IRC chat, on the #jeelabs channel for example. Not used much, but when it’s needed, it’s there to get some collaboration done. I’m thinking of hooking this into some sort of personal agents, maybe even have a channel where my house chats about the more public readings coming in. Social, on demand.

The point of all the things I’ve been mentioning in these past few posts, is that it’s all about forming habits, so that a different part of the brain starts doing some of the work for you. There is a perfect French term for getting tools to work for you that way:

  • prolongements du corps – “extensions of the body” (a great example of this is driving a car)

This is my main motivation for adopting touch typing and in particular for using the command-line shell and a keyboard-command based text editor. It (still) doesn’t come easy, but every day I notice some little improvement. As always: Your Mileage May Vary.

Ok, that wraps it up for now. Any other tools you consider really useful? Let me know…

My development setup – hardware

In Hardware on Sep 11, 2013 at 00:01

After yesterday’s notes about my development software, some comments about hardware.

As you may have noticed, I use Apple’s hardware with Mac OSX. Have gone from big (clunky!) setups to the svelte 1 kg 11″ MacBook Air, and now I’m back on a 15″ model. It’s unlikely that I’ll ever go back to a desktop-only setup, and even the 1920 x 1200 pixel 24″ screen on my desk has been sitting idle for many months now (apart from its use in the FanBot project). One reason is that screen switching with multiple physical screens has not been convenient so far (the next OS revision this fall promises to fix that), but even as is, I find the constant switching between different pixel sizes disruptive. Nowadays, I’m often in house-nomad mode: switching places several times a day around the house – from the desk, to the couch, to a big comfy rotating chair, and back. Heck, even outside, at times!

So one setup it is for me. And these days, that’s a 15″ Retina MacBook Pro (“RMBP15”):

Screen Shot 2013-09-04 at 14.48.48

There’s a lot to say about this, but the essence boils down to: 1920 x 1200 with VM’s.

To me it’s not essential which operating system runs on the host: pick one which you feel really at home with, and go for a laptop size and build quality that suits you and your budget.

Now the crazy thing about the RMBP15 is that its screen is not 1920 x 1200, but 2880 x 1800 pixels. And out of the box, the machine comes set to a 1440 x 900 “logical” screen size, i.e. doubling up all the pixels. Which, in my view, is too small as a main development environment – at least by today’s measures (hey, we’ve all been there – but it really is worth stepping up whenever you can).

So there’s this curious 1.5x magnification setting on this Mac laptop – does this mean that a 1-pixel thick line will end up getting drawn as “one pixel and a half”?

Obviously not. It’ll all be anti-aliased, as you can see in this close-up:

Screen Shot 2013-09-04 at 15.04.40

If you click on the image above, you’ll see a larger version. There is some interesting stuff going on behind the scenes when using a RMBP15 in 1920 x 1200 “interpolated” mode:

  • to the applications, the screen is reported as being 1920 x 1200
  • so that’s what apps use for screen area and placement
  • for lines, the Mac OSX graphics engine will do its anti-aliasing thing
  • for text, the graphics engine will draw the fonts at their optimal resolution

That last one does the trick: when drawing a 12-point text, the graphics engine will actually draw an 18-point version, using the full screen resolution. As a result, text comes out as sharp as the LCD display will allow, without the application having to do a thing.

I tend to go for the smallest font sizes in editors and command-line shells which my glass-assisted (no, not that one) eyes can still read well (Menlo 10, for monospaced fonts). This gives me a 100 line window height in MacVim, which is perfect. But on a real 1920 x 1200 display, that would actually be pushing it. However, due to the rendering going on inside a Retina Mac, what you actually get to see is text drawn in a 15-point font on a display with over 200 dpi resolution. The result is excellent.

These choices for screen, window, and font sizes are really hitting the sweet spot for me. A 100 x 80 character editing window with splits and tabs as needed, a decent area to see the browser’s console log (and command line), and a main HTML display area which is still almost exactly the 1366 x 768 size of a common small laptop screen. I rarely use the Mac’s multiple-desktop feature (called “spaces” in OSX), because I don’t have the patience to wait through its sweep-left-and-right animations. And because there’s no need: one “mode” with carefully-positioned windows, plus other windows which can be moved to the front and back – a bit disruptive, but then IMO that’s exactly what they are!

The second part of my laptop story is that since everything is running on an Intel 64-bit chip, virtualisation comes easy. This means both Windows and Linux can be run at the same time on the same machine (assuming you have enough RAM). I regularly fire up a Linux VM, and then ssh into it from the command line. For editing, I don’t even have to leave MacVim: just opening a file as scp://debianvm/housemon/ will open a directory window in MacVim and let me navigate from there. With the entire editor environment intact for fast file selection, tags, folding, etc. With Windows, it’s a matter of “mounting” my home directory on Windows, and all the local command line tools can be used for editing, git, diff, and so on (there’s a lot more possible, but I don’t use Windows much).

I’ll finish off tomorrow with some other handy software utilities I’ve come to rely on.

PS. No, this wasn’t intentionally made to coincide with Apple’s latest publicity blitz :)

My development setup – software

In Software on Sep 10, 2013 at 00:01

To follow up on yesterday’s post, here are the different software packages in my laptop-based development setup:

  • MacVim is my editor of choice these days. Now that I’ve switched to touch-typing mode, and given that I’ve used vi(m) for as long as I can remember, this is now growing on me every day. It’s all about muscle memory, and off-loading activities to have more spare brain cycles left for the task at hand.

    I chose MacVim over pure vim in the command line for one reason: it can be set to save all files on focus loss, i.e. when switching another app to the foreground. Tried a dark background for a while, but settled on a light one in the end. Dark backgrounds caused trouble with screen reflections when sitting with my back to a window.

  • TextMate is the second editor I use, but exclusively in browse mode. It can be fired up from the command line (e.g. cd housemon && mate .) and makes it very easy to skim and search across large source collections. Having two entirely different contexts means I can keep vim and its history focused on the code I’m actually working on. I only started using this dual-editor approach a few months ago, but by now it’s become a habit to start them both up – on different areas of the screen.

  • Google Chrome is my main development web browser. I settled on it (over Safari) for mostly one reason: it supports having the developer console pane open to the side of the main screen. There are plenty of dev tools included, from browsing the DOM structure to various performance graphs. Using a dedicated browser for development lets me flush the cache and muck with settings without disrupting other web activities.

  • Safari is my secondary browser. This is the one which keeps history, knows how to log into sites, and ends up having lots of tabs and windows open to keep track of all the distractions on the web. Reference docs, searching for information, and all the other time sinks the web has to offer. It drops nicely to the background when I want to get back to the real thing, but it’s there alongside MacVim whenever I need it.

  • iTerm 2 is what I use as command line. The only reason I’m using this instead of the built-in Terminal application, is that it supports split panes (I don’t use screen or tmux much). It’s set up to start up with two windows side-by-side, because tabs are not always convenient: sometimes you really need to see a few things at the same time.

    I do use tabs in all these apps, dragging them around between windows when needed. Sometimes, I’ll open a new iTerm window and make it much larger (all from the keyboard), to see all the output of a ps or grep command, for example.

  • nvAlt is a fork of Notational Velocity – a note-taking app which stores each note as a plain text file. Its main feature is that it starts up and searches everything instantly. Really. In the blink of an eye, I can find all the notes I wrote about anything. I use it for logging how I installed some tricky software and for lots and lots of lists with design ideas. Plain text. Instant. Clickable URLs. Shared through Dropbox. Perfect.

There’s a lot more to it, but these are the six tools I now use day in day out for software development. Even for embedded work, I’ll use the Arduino IDE only as compile + upload step, editing code in MacVim nowadays – because it has so much more functionality.

All of the above are either included in Mac OSX or open source software. I’m inclined to use OSS as a way to keep my learning investments safe. If ever one of these tools ends up going nowhere, or in an unwanted direction, then chances are that some developer(s) will fork the project to keep things on track (as has already happened with TextMate, iTerm, and nvAlt). In the long term, these core tools are simply too important…

My development setup

In Software on Sep 9, 2013 at 00:01

The tools I use for software development have evolved greatly over the years. Some have been replaced, others have simply gotten better, much better, over time. I’d like to describe the setup a bit in the next few posts – and why / how I made those choices. “YMMV”, as they say, but I think it could be of interest to other developers, whether soon-to-be or seasoned-switchers. Context: I develop in (for?) JavaScript (server- and browser-side).

Ok, here goes… the main tools I use are:

  • 2 editors – yes, two…
  • 2 browsers – yes, two…
  • 2 command-line windows, yes, two…
  • a note-taking app (sorry, just one :)

Here is a snapshot of all the windows, taken at a fairly arbitrary moment in time:

dev-expose-small

And this is the setup in actual use:

dev-layout-annotated

Tomorrow, I’ll go over the different bits and pieces, and how they work out for me.

MPPT hunting

In Hardware on May 21, 2013 at 00:01

Solar panels are funny power sources: for each panel, if you draw no power, the voltage will rise to 15..40 V (depending on the type of panel), and when you short them out, a current of 5..12 A will flow (again, depending on type). My panels will do 30V @ 8A.

Note that in both cases just described, the power output will be zero: power = volts x amps, so when either one is zero, there’s no energy transfer! – to get power out of a solar panel, you have to adjust those parameters somewhere in between. And what’s even worse, that optimal point depends on the amount of sunlight hitting the panels…

That’s where MPPT comes in, which stands for Maximum Power Point Tracking. Here’s a graph, copied from www.kitenergie.com, with thanks:

MPPT_knee_diagram

As you draw more current, there’s a “knee”: up to that point the output is predominantly voltage-controlled, but once the panel is asked to supply more than it has, the output voltage drops very rapidly.

Power is the product of V and A, which corresponds to the area of the rectangle to the left of and below the current operating point on the curve.

But how do you adjust the power demand to match that optimal point in the first place?

The trick is to vary the demand a bit (i.e. the current drawn) and then to closely watch what the voltage is doing. What we’re after is the slope of the line – or in mathematical terms, its derivative. If it’s too flat, we should increase the load current, if it’s too steep, we should back off a bit. By oscillating, we can estimate the slope – and that’s exactly what my inverter seems to be doing here (but only on down-slopes, as far as I can tell):

Screen Shot 2013-05-14 at 15.35.03

As the PV output changes due to the sun intensity and incidence angle changing, the SMA SB5000TL inverter adjusts the load it places on the panels to get the most juice out of ’em.

Neat, eh?
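
For illustration, here’s a minimal sketch of the textbook “perturb and observe” variant of this idea, tracking power directly rather than estimating the slope – definitely not what the SMA firmware actually does:

    lastPower = 0
    dir = 1      # +1 = draw more current, -1 = back off
    step = 0.1   # perturbation size in amps (arbitrary value)

    # call this on every measurement cycle with fresh panel readings
    mpptStep = (volts, amps, setCurrent) ->
      power = volts * amps
      dir = -dir if power < lastPower   # power dropped: we stepped past the peak
      lastPower = power
      setCurrent amps + dir * step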

Update – I just came across a related post at Dangerous Prototypes – synchronicity!

Tricky graphs – part 2

In Software on May 20, 2013 at 00:01

As noted in a recent post, graphs can be very tricky – “intuitive” plots are a funny thing!

Screen Shot 2013-05-14 at 10.35.43

I’ve started implementing the display of multiple series in one graph (which is surprisingly complex with Dygraphs, because of the way it wants the data points to be organised – see the sketch after the list below). And as you can see, it works – here with solar power generation measured two different ways:

  • using the pulse counter downstairs, reported up to every 3 seconds (green)
  • using the values read out from the SB5000TL inverter via Bluetooth (blue)
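
The complexity comes from Dygraphs wanting one row per timestamp, with one column per series and null wherever a series has no reading at that moment. A hedged sketch of merging two series into that shape – a hypothetical helper, not the actual HouseMon code:

    merge = (pulses, inverter) ->   # both are arrays of [time, watts] pairs
      rows = {}
      rows[t] = [t, w, null] for [t, w] in pulses
      for [t, w] in inverter
        if rows[t] then rows[t][2] = w else rows[t] = [t, null, w]
      (row for own t, row of rows).sort (a, b) -> a[0] - b[0]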

In principle, these represent the same information, and they do match more-or-less, but the problem mentioned before makes the rectangles show up after each point in time:

Screen Shot 2013-05-14 at 10.40.11

Using a tip from a recent comment, I hacked in a way to shift each rectangle to the left:

Screen Shot 2013-05-14 at 10.40.34

Now the intuition should be that the area in the coarser green rectangles should be the same as the blue area they overlap. But as you can see, that’s not the case. Note that the latter is the proper way to represent what was measured: a measurement corresponds to the area preceding it, i.e. that hack is an improvement.
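
For reference, the hack boils down to re-stamping each reading with its predecessor’s timestamp before handing the rows to the graph – a sketch, not the actual patch:

    # drop the first reading, and draw each value over the interval it covers
    shiftLeft = (rows) ->
      ([rows[i - 1][0], w] for [t, w], i in rows when i > 0)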

So what’s going on, eh?

Nice puzzle. The explanation is that the second set of readings is incomplete: the smaRelay sketch only obtains a reading from the inverter every 5 minutes or so, and that’s merely an instantaneous power estimate, not an averaged reading since the previous readout!

So there’s not really enough data in the second series to produce an estimate over each 5-minute period. All we have is a sample of the power generation, taken once every so often.

Conclusion: there’s no way to draw the green rectangles to match our intuition – not with the data as it is being collected right now, anyway. In fact, it’s probably more appropriate to draw an interpolated line graph for that second series, or better still: dots.

Update – Oops, the graphs are coloured differently, my apologies for the extra confusion.

Measurement intervals and graphs

In Software on May 13, 2013 at 00:01

One more post about graphs in this little mini-series … have a look at this example:

Screen Shot 2013-05-10 at 19.33.51

Apart from some minor glitches, it’s in fact an accurate representation of the measurement data. Yet there’s something odd: some parts of the graph are more granular than others!

The reason for this is actually completely unrelated to the issues described yesterday.

What is happening is that the measurement is based on measuring the time between pulses in my pulse counters, which generate 2000 pulses per kWh. A single pulse represents 0.5 Wh of energy. So if we get one pulse per hour, then we know that the average consumption during that period was 0.5 W. And when we get one pulse per second, then the average (but near-instantaneous) power consumption over that second must have been 1800 W.

So the proper way to calculate the actual power consumption, is to use this formula:

    power = 1,800,000 / time-since-last-pulse-in-milliseconds

Which is exactly how I’ve been measuring power for the past few years here at JeeLabs. The pulses represent energy consumption (i.e. kWh), whereas the time between pulses represents estimated actual power use (i.e. W).
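
In code form the conversion is a one-liner – a quick sketch:

    # 2000 pulses/kWh, i.e. each pulse represents 0.5 Wh = 1800 joules
    wattsFromPulse = (msSinceLastPulse) -> 1800000 / msSinceLastPulse

    console.log wattsFromPulse 18000   # → 100 W, i.e. one pulse every 18 s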

This type of measurement has the nice benefit of being more accurate at lower power levels (because we then divide by a larger and relatively more accurate number of milliseconds).

But this is at the same time also somewhat of a drawback: at low power levels, pulses are not coming in very often. In fact, at 100 W, we can expect one pulse every 18 seconds. And that’s exactly what the above graph is showing: less frequent pulses at low power levels.

Still, the graph is absolutely correct: the shaded area corresponds exactly to the energy consumption (within the counter’s measurement tolerances, evidently). And the line drawn as the boundary at the top of the area represents the best estimate we have of instantaneous power consumption across the entire time period.

Odd, but accurate. This effect goes away once aggregated over longer periods of time.

Tricky graphs

In Software on May 12, 2013 at 00:01

Yesterday’s post introduced the new graph style I’ve added to HouseMon. As mentioned, I am using step-style graphs for the power (i.e. Watt) based displays. Zooming in a bit on the one shown yesterday, we get:

Screen Shot 2013-05-10 at 19.21.18

There’s something really odd going on here, as can be seen in the blocky lines up until about 15:00. The blocks indicate that there are many missing data points in the measurement data (normally there should be several measurements per minute). But in itself these rectangular shapes are in fact very accurate: the area is Watt times duration, which is the amount of energy (usually expressed in Joules, Wh, or kWh).

So rectangles are in fact exactly right for graphing power levels: we want to represent each measurement as an area which matches the amount of energy used. If there are fewer measurements, the rectangles will be wider, but the area will still represent our best estimate of the energy consumption.

But not all is well – here’s a more detailed example:

Screen Shot 2013-05-10 at 19.17.03

The cursor in the screenshot indicates that the power usage is 100.1 W at 19:16:18. The problem is that the rectangle drawn is wrong: it should precede the cursor position, not follow it, because the measurement is the result of sampling power and returning the accumulated level at the end of that measurement period. So the rectangle which is 100.1 W high should really be drawn from 19:16:00 to 19:16:18, i.e. from the time of the previous measurement.

I have yet to find a graphing system which gets this right.

Tomorrow: another oddity, not in the graph, but related to how power is measured…

New (Dy)graphs in HouseMon

In Software on May 11, 2013 at 00:01

I’ve switched to the Dygraphs package in HouseMon (see the development branch). Here’s what it now looks like:

Screen Shot 2013-05-10 at 19.15.08

(click for the full-scale resolution images)

Several reasons to move away from Flotr2, actually – it supports more line-chart modes, it’s totally geared towards time-series data (no need to adjust/format anything), and it has splendid cursor and zooming support out of the box. Oh, and swipe support on tablets!

Zooming in both X and Y directions can be done by dragging the mouse inside the graph:

Screen Shot 2013-05-10 at 19.16.19

You’re looking at step-style line drawing, by the way. There are some quirks in this dataset which I need to figure out, but this is a much better representation for energy consumption than connecting dots by straight lines (or what would be even worse: bezier curves!).
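
Switching Dygraphs to this mode is a single option – roughly as follows, with the element id and labels made up for illustration:

    g = new Dygraph document.getElementById('chart'), rows,
      labels: ['time', 'power']
      stepPlot: true   # draw horizontal steps instead of connecting the dots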

Temperatures and other “non-rate” entities are better represented by interpolation:

Screen Shot 2013-05-10 at 19.17.40

BTW, it’s surprisingly tricky to graphically present measurement data – more tomorrow…

HouseMon 0.5.1

In Software on Feb 27, 2013 at 00:01

It feels a bit odd to give this piece of software a version number, since it’s all so early still, but I’m releasing it as HouseMon 0.5.1 anyway, to set a baseline for future development.

Note that the instructions so far were all about development. To get an idea of the true performance on a Raspberry Pi, you should start up as follows (make sure HouseMon is not running already): cd ~/housemon && SS_ENV=production node app.js – this’ll take a few minutes while SocketStream combines and “minifies” everything the first time around. After that, all web accesses will be a lot snappier than in development mode.

But let’s not get ahead of ourselves… tons of work to do before HM becomes useful!

I suspect that this project will lead to more questions than answers at this stage, but I guess that’s what you get when following the “release early, release often” approach of open source software. There seems to be no other way to go through this than to just bite the bullet, get a basic release out in the wild, and see how it holds up when installed in more places.

Here’s an attempt to map out what’s in the 0.5.x release:

hm050

The dotted lines indicate that logged and archived data is being saved but not used yet (logged data is the raw incoming text from serial interfaces, etc – archived data is per-sensor information, aggregated hourly).

This diagram is still too complicated for my tastes. Part of the process is getting to grips with complexity: multiple sensor values per reading, “de-multiplexing” different types of packets from the ookRelay, mapping node ID’s to locations, ignoring some repeated data which contains no extra information, and more real-world messiness…

There is currently very little error checking and error messages can (will!) be cryptic.

Which is all a long-winded way of saying that HouseMon is still in its very early stages. Let’s see how it evolves, and please do comment and hack on it if you are interested.

Note that HouseMon is a moving target. The “develop” branch is already moving ahead:

Screen Shot 2013-02-24 at 10.11.57

As you can see, it’s all made up of “briqs” which can be added and enabled as needed.

PS. I’ve added some documentation on how to track changes in the development branch.

DIJN.12 – Final checks and unattended use

In Uncategorized on Feb 22, 2013 at 00:01

Welcome to the last instalment of Dive Into JeeNodes. Let’s make an unattended setup!

Everything is working now. The only step that remains is to automate things a bit further, so that the RPi will automatically start up HouseMon when powered up. This turns it into a fire-and-forget system, so that it becomes a permanent service on your LAN.

Auto-startup is convenient, but it means we also have to think a bit about how to upgrade HouseMon. This is where the nodemon utility comes in: it can be used to start up a Node.js application, and restart it whenever certain source files change. This is mostly intended as development tool, but at this stage where HouseMon is still so young and evolving rapidly, it’s actually going to be quite practical – even in an unattended mode.

Install the nodemon package by entering the command: npm install nodemon -g

The “-g” flag causes nodemon to be installed in a central location instead of as part of HouseMon, so that we can type in “nodemon” from the command line.

Now we’re ready to configure an unattended setup. Copy and paste these commands:

    cd ~/housemon
    echo 'cd ~/housemon' >go.sh
    echo 'PATH=/usr/local/bin:$PATH' >>go.sh
    echo 'nodemon >nohup.out 2>nohup.err &' >>go.sh
    chmod +x go.sh
    echo '@reboot ~/housemon/go.sh' | crontab

This creates a “go.sh” script which will be used to start up HouseMon while you’re not around and sets up an entry in the “cron” table which will run that script right after reboot, even when you are not logged in.

Warning: this loses any previous crontab entries, if this is not a fresh Raspbian install.

Reboot now, using sudo reboot to start the ball rolling. After a few minutes, the RPi will be up and running again, and you will be able to visit the HouseMon server via your web browser – no need to log in for that!

Screen Shot 2013-02-10 at 13.57.11

Congratulations: you have created your own personal Wireless Sensor Network, with a JeeNode sending out light readings once a second over wireless, to a JeeLink connected to a stand-alone Raspberry Pi, and via HouseMon running on Node.js, you’re able to watch the current light level in real time, from any web browser with access to your LAN.

Is this a home monitoring / home automation system? Heh.. not quite, but it is definitely an important first step towards such a system. All the foundations are in place, yearning to be filled-in and extended in numerous directions. The load on a Raspberry Pi looks fine:

Screen Shot 2013-02-21 at 15.36.54

This also concludes this initial series of “Dive Into JeeNodes”. The goal was to set up a basic – but fully functional – system, as a baseline for lots and lots of further explorations. As far as I’m concerned, there will be many more posts building upon everything that has been accomplished so far. For now, I’d like to leave this to settle down a bit, and to reconcile loose ends – such as going through these 12 instalments using Windows. Since I don’t use Windows myself, I’m hoping that someone else will chime in with details, so that the exact steps to get going can be documented in a follow-up post or how-to page in the project.

Cheers for now, I hope you’ve enjoyed this “PhysComp+WSN fun pack” DIJN series!

(This series of posts is also available from the Dive Into JeeNodes page on the Café wiki.)

PS – I’ve set up an SD card image pre-configured with Raspbian and HouseMon 0.5.1, so if you want to bypass all the setup work, download the hm051.img.gz file (550 MB!), and follow the instructions in DIJN.05 to set up that SD card. Then insert the JeeLink and SD card, and power up the RPi. It’ll start with HouseMon running – including the demo page.

PPS – Another milestone, this is weblog post #1250. Onwards! :)

Who needs a database?

In Software on Feb 18, 2013 at 00:01

Yesterday’s post was about the three (3!) ways HouseMon manages data. Today I’ll describe the initial archive design in HouseMon. Just to reiterate its main properties:

  • archives are redundant – they can be reconstructed from scratch, using the log files
  • data in archives is aggregated, with one data point per hour, and optimised for access
  • each hourly aggregation contains: a count, a sum, a minimum, and a maximum value

The first property implies that the design of this stuff is absolutely non-critical: if a better design comes up later, we can easily migrate HouseMon to it: just implement the new code, and replay all the log files to decode and store all readings in the new format.

The second decision was a bit harder to make, but I’ve chosen to save only hourly values for all data older than 48 hours. That means today and yesterday remain accessible from Redis with every single detail still intact, and for things like weekly, monthly, and yearly graphs it always resolves to hourly values. The simplification over progressive RRD-like aggregation, is that there really are only two types of data, and that the archived data access is instant, based on time range: simple file seeking.

Which brings me to the data format: I’m going to use Plain Old Binary Data Files for all archive data. No database, nothing – although a filesystem is just as much a database as anything, really (as “git” also illustrates).

Note that this doesn’t preclude also storing the data in a “real” database, but as far as HouseMon is concerned, that’s an optional export choice (once the code for it exists).

Here’s a sample “data structure” of the archive on disk:

    archive/
      index.json
      p123/
        p123-1.dat
        p123-2.dat
      p124/
        p124-1.dat

The “index.json” file is a map from parameter names to a unique ID (id’s 1 and 2 are used in this example). The “p123” directory has the data for time slot 123. This is in multiples of 1024 hours, as each data file holds 1024 hourly slots. So “p123” would be unix time 123 x 1024 x 3600 = 453427200 (or “Tue May 15 00:00:00 UTC 1984”, if you want to know).

Each file has 1024 hourly data points and each data point uses 16 bytes, so each data file is 16,384 bytes long. The reason they have been split up is to make varying dataset collections easy to manage, and to allow optional compression. Conceptually, the archive is simply one array per parameter, indexed by the absolute hour since Jan 1st, 1970 at midnight UTC.

The 16 bytes of each data point contain the following information in little-endian order:

  • a 4-byte header
  • the 32-bit sum of all measurement values in this hour
  • the 32-bit minimum value of all measurements in this hour
  • the 32-bit maximum value of all measurements in this hour

The header contains a 12-bit count of all measurement values (i.e. 0..4095). The other 20 bits are reserved for future use (such as a few extra bits for the sum, if we ever need ’em).

The average value for an hour is of course easy to calculate from this: the sum divided by the count. Slots with a zero count represent missing values.

Not only are these files trivial to create, read, modify, and write – it’d also be very easy to implement code which does this in any programming language. Simplicity is good!
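
To back up that claim, here’s a hedged CoffeeScript sketch of reading one hourly slot back – all names invented for illustration, and the real “archive” briq may well differ:

    fs = require 'fs'

    # hour = absolute hour since Jan 1st 1970 UTC, 1024 slots per file
    readSlot = (dir, id, hour, cb) ->
      period = Math.floor hour / 1024
      path = "#{dir}/p#{period}/p#{period}-#{id}.dat"
      buf = new Buffer 16
      fs.open path, 'r', (err, fd) ->
        return cb err if err
        fs.read fd, buf, 0, 16, (hour % 1024) * 16, (err) ->
          fs.close fd, ->
          return cb err if err
          header = buf.readUInt32LE 0
          count = header & 0xFFF   # count assumed to sit in the low 12 bits
          cb null,
            count: count
            min: buf.readInt32LE 8
            max: buf.readInt32LE 12
            avg: if count then buf.readInt32LE(4) / count else null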

Note that there are several trade-offs in this design. There can be no more than 4095 measurements per hour, i.e. one per second, for example. And measurements have to be 32-bit signed ints. But I expect that these choices will work out just fine, and as mentioned before: if the format isn’t good enough, we can quickly switch to a better one. I’d rather start out really simple (and fast), than to come up with a sophisticated setup which adds too little value and too much complexity.

As far as access goes, you can see that no searching at all is involved to access any time range for any parameter. Just open a few files, send the extracted data points to the browser (or maybe even entire files), and process the data there. If this becomes I/O bound, then per-file compression could be introduced – readings which are only a small integer or which hardly ever change will compress really well.

Time will tell. Premature optimisation is one of those pitfalls I’d rather avoid for now!

Data, data, data

In Software on Feb 17, 2013 at 00:01

If all good things come in threes, then maybe everything that is a triple is good?

This post is about some choices I’ve just made in HouseMon for the way data is managed, stored, and archived (see? threes!). The data in this case is essentially everything that gets monitored in and around the house. This can then be used for status information, historical charts, statistics, and ultimately also some control based on automated rules.

The measurement “readings” exist in the following three (3!) forms in HouseMon:

  • raw – serial streams and byte packets, as received via interfaces and wireless networks
  • decoded – actual values (e.g. “temperature”), as integers with a fixed decimal point
  • formatted – final values as shown on-screen, formatted and with units, i.e. “17.8 °C”

For storage, I’ll be using three (3!) distinct mechanisms: log files, Redis, and archive files.

The first decision was to store everything coming in as raw text in daily log files. With rollover at midnight (UTC), and tagged with the interface and a time stamp in milliseconds. This format has been in use here at JeeLabs since 2008, and has served me really well. Here is an example, taken from the file called “20130211.txt”:

    L 01:29:55.605 usb-AH01A0GD OK 19 115 113 98 25 39 173 123 1
    L 01:29:58.435 usb-AH01A0GD OK 9 4 50 68 235 251 166 232 234 195 72 251 24
    L 01:29:58.435 usb-AH01A0GD DF S 6373 497 68
    L 01:29:59.714 usb-AH01A0GD OK 19 96 13 2 11 2 30 0

Easy to read and search through, but clearly useless for seeing the actual values, since these are the RF12 packets before being decoded. The benefit of this format is precisely that it is as raw as it gets: by storing this on file, I can improve the decoders and fix the inevitable bugs which will crop up from time to time, then simply re-parse the files and run them through the decoders again. Given that the data comes from sketches which change over time, and which can also contain bugs, the mapping of which decoders to apply to which packets is an important one, and will in fact depend on the timeline: the same node may have been re-used for a different sketch, with a different packet format (though this has rarely happened here once a node has been put to permanent use).

In HouseMon, the “logger” briq now ties into serial I/O and creates such daily log files.

The second format is more meaningful. This holds “readings” such as:

    { id: 1, group: 5, band: 868, type: 'roomNode', value: 123, time: ... }

… which might represent a 12.3°C reading in the living room, for example.

This is now stored in Redis, using the new “history” briq (a “briq” is simply an installable module in HouseMon). There is one sorted set per parameter, to which new readings are added (i.e. integers) as they come in. To support staged archive storage, the sorted sets are segmented per 32 hours, i.e. there is one sorted set per parameter per 32-hour period. At most two periods are needed to store full details of every reading from at least the past 24 hours. With two periods saved in Redis for each parameter, even a setup with a few hundred parameters will require no more than a few dozen megabytes of RAM. This is essential, given that Redis keeps all its data in memory.
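
Here’s a rough sketch of that storage step, using the node_redis client – the key naming is invented for illustration:

    redis = require('redis').createClient()

    # one sorted set per parameter per 32-hour segment, scored by time;
    # the member includes the time, so equal values remain distinct entries
    saveReading = (param, time, value) ->
      segment = Math.floor time / (32 * 3600 * 1000)
      redis.zadd "hist:#{param}:#{segment}", time, "#{time}:#{value}"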

And lastly, there is a process which runs periodically, to move data older than two periods ago into “archival storage”. These are not round-robin databases, in the sense of a circular buffer which gets overwritten as new data comes in and wraps around, but they do use a somewhat similar format on disk. Archival storage can grow infinitely, here I expect to end up with about 50..100 MB per year once all the log files have been re-processed. Less, if compression is used (to be decided once speed trade-offs of the RPi have been measured).

The files produced by the “archive” briq have the following three (3!) properties:

  • archives are redundant – they can be reconstructed from scratch, using the log files
  • data in archives is aggregated, with one data point per hour, and optimised for access
  • each hourly aggregation contains: a count, a sum, a minimum, and a maximum value

I’ll describe the archive design choices and tentative file format in the next post.

DIJN.09 – Install the HouseMon server

In Uncategorized on Feb 16, 2013 at 00:01

Welcome to the ninth instalment of Dive Into JeeNodes. We’re going to install HouseMon!

If you’ve followed all the steps up to this point, then getting HouseMon running will be a piece of cake. We need to do exactly three things:

  1. Download the HouseMon source code
  2. Install the packages used by HouseMon
  3. Launch the HouseMon server

All of this can be accomplished by typing just a few commands, as described here:

    git clone https://git.jeelabs.org/housemon.git
    cd housemon
    git checkout 0.7.x    # <== this is ESSENTIAL
    npm install
    npm start

Here’s what the above will look like in the terminal:

Screen Shot 2013-02-08 at 23.33.55

I haven’t typed the final return yet, as this will generate lots of gibberish while npm does its thing. There is some package compilation involved, which means that the “npm install” step will in fact take about 10 minutes to complete.

The real fun begins when we enter that final command to start the server: “npm start”. Here’s what you’ll see after a few seconds, while the system initialises itself:

Screen Shot 2013-02-08 at 23.53.47

Now we can use the web browser to connect to HouseMon. In my case, the RPi is running as IP address 192.168.1.127, so I can point my browser to http://192.168.1.127:3333/, and something like this will appear:

Screen Shot 2013-02-08 at 23.59.45

You’ll need to replace the IP address by the one assigned to your own RPi setup, but apart from that it should all work in exactly the same way. HouseMon is now up and running.

Onwards!

(This series of posts is also available from the Dive Into JeeNodes page on the Café wiki.)

PS. Note that HouseMon is still evolving a lot (read: things change and things do break, sometimes). So be prepared to update it often, with: cd ~/housemon && git pull

HouseMon resources

In AVR, Hardware, Software, Musings, Linux on Feb 6, 2013 at 00:01

As promised, a long list of resources I’ve found useful while starting off with HouseMon:

JavaScript – The core of what I’m building now is centered entirely around “JS”, the language behind many sites on the web nowadays. There’s no way around it: you have to get to grips with JS first. I spent several hours watching most of the videos on Douglas Crockford’s site. The big drawback is the time it takes…

Best book on the subject, IMO, if you know the basics of JavaScript, is “JavaScript: The Good Parts” by the same author, ISBN 0596517742. Understanding what the essence of a language is, is the fastest way to mastery, and his book does exactly that.

CoffeeScript – It’s just a dialect of JS, really – and the way HouseMon uses it, “CS” automatically gets compiled (more like “expanded”, if you ask me) to JS on the server, by SocketStream.

The most obvious resource, http://coffeescript.org/, is also one of the best ways to understand it. Make sure you are comfortable with JS, even if not in practice, before reading that home page top to bottom. For an intriguing glimpse of how CS code can be documented, see this example from the CS compiler itself (pretty advanced stuff!).

But the impact of CS goes considerably deeper. To understand how Scheme-like functional programming plays a role in CS, there is an entertaining (but fairly advanced) book called CoffeeScript Ristretto by Reginald Braithwaite. I’ve read it front-to-back, and intend to re-read it in its entirety in the coming days. IMO, this is the book that cuts to the core of how functions and objects work together, and how CS lets you write on a high conceptual level. It’s a delightful read, but be prepared to scratch your head at times…

For a much simpler introduction, see The Little Book on CoffeeScript by Alex MacCaw, ISBN 1449321046. Also available on GitHub.

Node.js – I found the Node.js in Action book by Mike Cantelon, TJ Holowaychuk and Nathan Rajlich to be immensely useful, because of how it puts everything in context and introduces all the main concepts and libraries one tends to use in combination with “Node”. It doesn’t hurt that one of the most prolific Node programmers also happens to be one of the authors…

Another useful resource is the API documentation of Node itself.

SocketStream – This is what takes care of client-server communication, deployment, and it comes with many development conveniences and conventions. It’s also the least mature of the bunch, although I’ve not really encountered any problems with it. I expect “SS” to evolve a bit more than the rest, over time.

There’s a “what it is and what it does” type of demo tour, and there is a collection on what I’d call tech notes, describing a wide range of design docs. As with the code, these pages are bound to change and get extended further over time.

Redis – This is a little database package which handles a few tasks for HouseMon. I haven’t had to do much to get it going, so the README plus Command Summary were all I’ve needed, for now.

AngularJS – This is the most framework-like component used in HouseMon, by far. It does a lot, but the challenge is to understand how it wants you to do things, and although “NG” is not really an opinionated piece of software, there is simply no other way to get to grips with it than to take the dive and learn, learn, learn… Let me just add that I really think it’s worth it – NG can be magic on the client side, and once you get the hang of it, it’s in fact an extremely expressive way to create a responsive app in the browser, IMO.

There’s an elaborate tutorial on the NG site. It covers a lot of ground, and left me a bit overwhelmed – probably because I was trying to learn too much as quickly as possible…

There’s also a video, which gives a very clear idea of NG, what it is, how it is used, etc. Only downside is that it’s over an hour long. Oh, and BTW, the NG FAQ is excellent.

For a broader background on this sort of JS frameworks, see Rich JavaScript Applications by Steven Sanderson. An eye opener, if you’ve not looked into RIA’s before.

Arduino – Does this need any introduction on this weblog? Let me just link to the Reference and the Tutorial here.

JeeNode – Again, not really much point in listing much here, given that this entire weblog is more or less dedicated to that topic. Here’s a big picture and the link to the hardware page, just for completeness.

RF12 – This is the driver used for HopeRF’s wireless radio modules, I’ll just mention the internals weblog posts, and the reference documentation page.

Vim – My editor of choice, lately. After many years of using TextMate (which I still use as code browser), I’ve decided to go back to MacVim, because of the way it can be off-loaded to your spine, so to speak.

There’s a lot of personal preference involved in this type of choice, and there are dozens of blog posts and debates on the web about the pro’s and con’s. This one by Steve Losh sort of matches the process I am going through, in case you’re interested.

Best way to get into vim? Install it, and type “vimtutor“. Best way to learn more? Type “:h<CR>” in vim. Seriously. And don’t try to learn it all at once – the goal is to gradually migrate vim knowledge into your muscle memory. Just learn the base concepts, and if you’re serious about it: learn a few new commands each week. See what sticks.

To get an idea of what’s possible, watch some videos – such as the vim entries on the DAS site by Gary Bernhardt (paid subscription). And while you’re at it: take the opportunity to see what Behaviour Driven Design is like, he has many fascinating videos on the subject.

For a book, I very much recommend Practical Vim by Drew Neil. He covers a wide range of topics, and suggests reading up on them in whatever order is most useful to you.

While learning, this cheatsheet and wallpaper may come in handy.

Raspberry Pi – The little “RPi” Linux board is getting a lot of attention lately. It makes a nice setup for HouseMon. Here are some links for the hardware and the software.

Linux – Getting around on the command line in Linux is also something I get asked about from time to time. This is important when running Linux as a server – the RPi, for example.

I found the linuxcommand.org resource which appears to do a good job of explaining all the basic and intermediate concepts. It’s also available as a book, called “The Linux Command Line” by William E. Shotts, Jr. (PDF).

There… the above list ought to get you a long way with all the technologies I’m currently messing around with. Please feel free to add pointers and tips in the comments, if you think of other resources which can be of use to fellow readers in this context.

Dive Into JeeNodes

In AVR, Hardware, Software, Linux on Feb 1, 2013 at 00:01

Welcome to a new series of limited-edition posts from JeeLabs! Read ’em while they last!

Heh… just kidding. They’ll last forever of course, as does everything on this thing called internet. But what I’m going to describe in probably a dozen posts or so is the following:

dijn-essence

Hm, that doesn’t quite explain it, I guess. Let me try again:

JC's Grid, page 63

So this is to announce a new “DIJN” series of weblog posts, describing how to set up your own Wireless Sensor Network with JeeNodes, as well as the infrastructure to report a measured light-level somewhere in your house, in real time. The end result will be fully automated and autonomous – you could take your mobile phone, point it to your web server via WiFi, and see the light level as it is that very moment, adjusting as it changes.

This is a far cry from a full-fledged home monitoring or home automation system, clearly – but on the other hand, it’ll have all the key pieces in place to explore whatever direction interests you: ready-made sensors, DIY sensors, your own firmware on the remote nodes, your own web pages and automation logic on the central server… it’s up to you!

Everything is open source, which in this context matters a lot, because that also means that you can really dive into any aspect of this to learn and explore the truly magical world of Physical Computing, Wireless Sensor Networks, environmental sensing and control, as well as state-of-the art web technologies.

The focus will be on describing every step needed to implement this from scratch. I’ll cover setting up all the necessary software and hardware, in such a way that if you know next to nothing about any of the domains involved, you can still follow along and try it out – whether your background is in software, electronics, wireless, or none of these.

If technology interests you, and if I can bring across even a small fraction of the fun there is in tinkering with this stuff and making new things up as you g(r)o(w) along, then that would be a very nice reward for everyone involved, as far as I’m concerned.

PS. “Dijn” is also old-Dutch for “your” (thy, to be precise). Quite a fitting name in my opinion, as this sort of knowledge is really yours for the taking – if you want it…

PPS. For reference: here is the first post in the series, and here is the overview.

Now what?

In Musings on Jan 31, 2013 at 00:01

(Warning, this post is a bit about playing the devil’s advocate…)

Ok, so now I have this table with incoming data, updated in real time – Ajax polling is so passé – with unit conversions, proper labeling, and locations associated with each device:

Screen Shot 2013-01-28 at 16.19.55 copy 2

As I’ve said before: neat as a gimmick, but… yawn, who wants to look at this sort of stuff?

Which sort of begs the question: what’s the point of home monitoring?

(Automation is a different story: rules and automation could certainly be convenient)

What’s the point? Flashy / fancy dashboards with clever fonts? Zoomable graphs? Statistics? Knowing which was the top day for the solar panels? Calculating average temperatures? Predicting the utility bill? Collecting bragging rights or brownie points?

Don’t get me wrong: I’ve got a ton of plans for all this stuff (some of them in other directions than what you’d probably expect, but it’s way too early to talk about that).

But for home monitoring… what’s the point? Just because we can?

The only meaningful use I’ve been able to come up with so far is to save on energy (oh wait, that’s now called “reducing the carbon footprint”). And that has already been achieved to a fairly substantial degree, here at JeeLabs. For that, all I need really, are a few indicators to see the main energy consumption patterns on a day-to-day basis. Heck, a couple of RGB LEDs might be enough – so who needs all these figures, once you’ve interpreted them, drawn some conclusions, and adjusted your behaviour?

Remote compilation

In Software on Jan 30, 2013 at 00:01

Now we’re cookin’ …

I’ve implemented the following with a new “compile” module in HouseMon – based on Arduino.mk, which is also available for the Raspberry Pi via Debian apt:

JC's Grid, page 62

In other words, this allows uploading the source code of an Arduino sketch through a web browser, manage these uploaded files from the browser, and compile + upload the sketch to a server-side JeeNode / Arduino. All interaction is browser based…

Only proof of concept, but it opens up some fascinating options. Here’s an example of use:

Screen Shot 2013-01-29 at 21.54.13

(note that both avr-gcc + avrdude output and the exit status show up in the browser!)

The processing flow is fairly complex, given the number of different tasks involved, yet the way Node’s asynchronous system calls, SocketStream’s client-server connectivity, and Angular’s two-way webpage bindings work together is absolutely mind-blowing, IMO.

Here’s the main code, running on the server as briqs/jcw-compile.coffee:

exports.info =
  name: 'jcw-compile'
  description: 'Compile for embedded systems'
  menus: [
    title: 'Compile'
    controller: 'CompileCtrl'
  ]

state = require '../server/state'
fs = require 'fs'
child_process = require 'child_process'
ss = require 'socketstream'

# TODO hardcoded paths for now
SKETCHDIR = switch process.platform
  when 'darwin' then '/Users/jcw/Tools/sketch'
  when 'linux' then '/home/pi/sketchbook/sketch'

# callable from client as: rpc.exec 'host.api', 'compile', path
ss.api.add 'compile', (path, cb) ->
  # TODO totally unsafe, will accept any path as input file
  wr = fs.createWriteStream "#{SKETCHDIR}/sketch.ino"
  fs.createReadStream(path).pipe wr
  wr.on 'close', ->
    make = child_process.spawn 'make', ['upload'], cwd: SKETCHDIR
    make.stdout.on 'data', (data) ->
      ss.api.publish.all 'ss-output', 'stdout', "#{data}"
    make.stderr.on 'data', (data) ->
      ss.api.publish.all 'ss-output', 'stderr', "#{data}"
    make.on 'exit', (code) ->
      cb? null, code

# triggered when bodyParser middleware finishes processing a file upload
state.on 'upload', (url, files) ->
  for file, info of files
    state.store 'uploads',
      key: info.path
      file: file
      name: info.name
      size: info.size
      date: Date.parse(info.lastModifiedDate)

# triggered when the uploads collection changes, used to clean up files
state.on 'store.uploads', (obj, oldObj) ->
  unless obj.key # only act on deletions
    fs.unlink oldObj.key

The client-side code needed to implement all this is even shorter: see the controller code and the page definition on GitHub.

There’s no error handling in here yet, so evidently this code is not ready for prime time.

PS – Verified to also work on the Raspberry Pi – yippie!

Blink Plug meets NG

In Hardware, Software on Jan 29, 2013 at 00:01

Last month, I described how to hook up a JeeNode with a Blink Plug to Node.js via SocketStream (“SS”) and a USB connection. Physical Computing with a web interface!

That was before AngularJS (“NG”), using traditional HTML and CoffeeScript code.

With NG and SS, there’s a lot of machinery in place to interface a web browser with a central Node process. I’ve added a “blink-serial” module to HouseMon to explore this approach. Here’s the resulting web page (very basic, but fully dynamic in both directions):


Screen Shot 2013-01-28 at 19.17.40


The above web page is generated from this client/templates/blink.jade definition:

Screen Shot 2013-01-28 at 18.42.02

There are two more files involved – client/code/modules/blink.coffee on the client:

Screen Shot 2013-01-28 at 18.43.41

… and briqs/blink-serial.coffee, which drives the whole shebang on the server:

Screen Shot 2013-01-28 at 20.17.40
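
Screenshots don’t copy/paste well, so here’s an approximate sketch of the shape of that last file – not the real code, just an illustration assuming the node-serialport package and the same publish / RPC conventions as the compile briq (the port name is made up):

# approximate sketch, not the actual briqs/blink-serial.coffee
serialport = require 'serialport'
ss = require 'socketstream'

# port name is an assumption - pick whatever the JeeNode shows up as
port = new serialport.SerialPort '/dev/ttyUSB0',
  baudrate: 57600
  parser: serialport.parsers.readline '\n'

# each line coming from the JeeNode (e.g. a button event) goes to all browsers
port.on 'data', (line) ->
  ss.api.publish.all 'blink', line

# and the browser can toggle a LED through this RPC call
ss.api.add 'blink', (cmd, cb) ->
  port.write "#{cmd}\n", (err) -> cb? err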

I’m not quite happy yet with the exact details of all this, but these 3 files are all there is to it, and frankly I can’t quite imagine how a bidirectional real-time interface could be made any simpler – given all the pieces involved: serial interface, server, and web browser.

The one thing that can hopefully be improved upon, is the terminology of it all and the actual API conventions. But that really is a matter of throwing lots of different stuff into HouseMon, to see what sticks and what gets messy.

Onwards!

PS – What’s your take on these screenshots? Ok? Or was the white background preferable?

PPS – Here’s another test, code inserted as HTML – suitable for copying and pasting:

# Define a single-page client called 'main'
ss.client.define 'main',
  view: 'index.jade'
  css: ['libs', 'app.styl']
  code: ['libs', 'app', 'modules']

Update – Yet another approach, using a WordPress plugin (no Jade / Stylus support?):

# Define a single-page client called 'main'
ss.client.define 'main',
  view: 'index.jade'
  css: ['libs', 'app.styl']
  code: ['libs', 'app', 'modules']

Who needs numbers?

In Software on Jan 19, 2013 at 00:01

I’ve always been intrigued by those “control panels” with lots and lots of numbers on them. It seems so much like old science-fiction movies with unmarked buttons all over the place.

That and the terror of the linear scale felt like a good reason to try out something new:

Screen Shot 2013-01-17 at 18.02.20   Screen Shot 2013-01-17 at 18.04.34   Screen Shot 2013-01-17 at 18.09.28   Screen Shot 2013-01-17 at 18.08.48

Ok, so here’s the idea:

  • you’re looking at four copies of the display next to each other, with different readings
  • the vertical scale is power production (top half) and power consumption (bottom half)
  • green is production (+), red is consumption (-), and blue is the difference (+ or -)
  • it’s logarithmic, top and bottom are 5000 W produced and consumed, respectively
  • the green and red dots are also log-proportional, the blue circle has a fixed diameter
  • the center has a ± 50 W dead zone, i.e. everything in that range is collapsed to a line

Now the examples, from left to right:

  1. end of the afternoon, no solar power, about 260 W consumption
  2. induction cooking, consuming about 2500 W
  3. night-time, house consumption is 80 W
  4. daytime, 1500 W production, 200 W consumption

Those last two readouts were simulated since I didn’t want to wait for a sunny winter day :)

The data is shown LIVE, and I’m going to keep it around to see whether this is an intuitive way of presenting this sort of information. It’s all implemented using the HTML5 canvas.

PS – I’ve since switched to “sqrt” instead of “log” for scaling. Looks nicer with large values.
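
For the curious, the whole mapping from watts to a vertical pixel offset boils down to one small function. Here’s a sketch with made-up pixel dimensions, showing both the original log variant and the sqrt one:

# sketch: map a power reading (watts) to a vertical pixel offset
# positive = production, drawn upwards; negative = consumption, downwards
HALF = 200    # half the canvas height, i.e. pixels per direction (made up)
DEAD = 50     # the +/- 50 W dead zone, collapsed onto the center line
FULL = 5000   # reading which should land exactly at the top or bottom

scale = (watts) ->
  w = Math.abs watts
  return 0 if w <= DEAD
  # log variant, as used at first:
  #   f = Math.log(w / DEAD) / Math.log(FULL / DEAD)
  # sqrt variant, which looks nicer with large values:
  f = Math.sqrt((w - DEAD) / (FULL - DEAD))
  Math.round HALF * f * (if watts < 0 then -1 else 1)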

Still crude, but oh so cool…

In Software on Jan 18, 2013 at 00:01

Good progress on the HouseMon front. I’m still waggling from concept to concept like a drunken sailor, but every day it sinks in just a little bit more. Adding a page to the app is now a matter of adding a CoffeeScript and a Jade file, and then adding an entry to the “routes” list to make a new “Readings” page appear in the main menu:

Screen Shot 2013-01-17 at 01.01.00
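
For anyone wondering what that routes entry amounts to: here’s a sketch of the idea, assuming the list maps onto Angular’s standard $routeProvider underneath (the module name and template path are placeholders, not HouseMon’s actual names):

# sketch, assuming the routes list maps onto Angular's $routeProvider
# 'app' and the template path are placeholders
app.config ['$routeProvider', ($routeProvider) ->
  $routeProvider.when '/readings',
    templateUrl: 'readings.html'
    controller: 'ReadingsCtrl'
]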

The only “HTML” I wrote for this page is readings.jade, using Foundation classes:

Screen Shot 2013-01-17 at 01.03.14

And the only “JavaScript” I wrote, is in readings.coffee to embellish the output slightly:

Screen Shot 2013-01-17 at 00.18.51

The rest is a “readings” collection, which is updated automatically on the server, with all changes propagated to the browser(s). There’s already quite a bit of architecture in place, including a generic RF12demo driver which connects to a serial port, and a decoder for some of the many packets flying around the house here at JeeLabs.
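
To give an idea of what such a decoder involves: RF12demo reports each received packet as a line of numbers, i.e. a header byte followed by the payload bytes. Decoding is then mostly bit-fiddling, along these lines (a sketch – the byte layout below is illustrative, the real decoders are in the repo):

# sketch of a room-node style decoder - the layout here is illustrative
decodeRoomNode = (line) ->
  return unless /^OK /.test line
  [hdr, b...] = (+x for x in line.split(' ')[1..])
  light: b[0]                              # 0..255
  moved: b[1] & 0x01                       # PIR motion bit
  humi:  b[1] >> 1                         # 0..100 %
  temp:  ((b[3] & 0x03) << 8 | b[2]) / 10  # tenths of a degree
  lobat: (b[3] >> 2) & 0x01                # low-battery flag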

But the exciting bit is that the server setup is managed from the browser. There’s an Admin page, just enough to install and remove components (“briqlets” in HouseMon-speak):

Screen Shot 2013-01-17 at 00.14.09

In this example, I installed a fake packet generator (which simply replays one of my logfiles in real time), as well as a small set of decoders for things like room node sketches.

So, just to sum up what there is: a basic way of managing server components via the browser, and a basic way of seeing the live data updates, again in the browser. Oh, and all sorts of RF12 capturing and decoding… even support for the Announcer idea, described recently.

Did I mention that the data shown is LIVE? Some of this stuff sure feels like magic… fun!

Gotta start somewhere…

In Software on Jan 9, 2013 at 00:01

Ok, well, baby steps or not – there’s just no way to avoid it… I need to start coding!

I’ve set up a new project on GitHub, called HouseMon – to have a place to put everything. It does almost nothing, other than what you’ll recognise after yesterday’s weblog post:

Screen Shot 2013-01-08 at 23.34.09

This was started with the idea of creating the whole setup as installable “Briqs” modules, which can then be configured for use. And indeed, some of that will be unavoidable right from the start – after all, to connect this to a JeeNode or JeeLink, I’m going to have to specify the serial port (even if serial ports are enumerated automatically one day, the proper one will still need to be selected when there is more than one).
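
(As an aside: node-serialport can already enumerate ports, so a future setup page could simply present a pick list – roughly like this sketch:)

# sketch: list candidate serial ports, so the user can pick the right one
serialport = require 'serialport'

serialport.list (err, ports) ->
  throw err if err
  for p in ports
    console.log p.comName, '-', p.manufacturer or '(unknown)'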

However, this immediately adds an immense amount of complexity over just hard-coding all the choices in the app for my specific installation. In itself, that doesn’t worry me too much – but with so little experience with Node.js and the rest, I just can’t make these kinds of decisions yet. There’s too big a gap between what I know and understand today, and what I need to know to come up with a good modular system. The whole thing could be built completely out of npm packages, or perhaps even the lighter-weight Components mechanism. Too early to tell, and definitely too early to make such key decisions!

Just wanted to get this repository going to be able to actually build stuff. And for others to see what’s going on as this evolves over time. It’s real code, so you can play with it if you like. And of course make suggestions and point out things on the issue tracker. I’ll no doubt keep posting about this stuff here whenever the excitement gets too high, but this sort of concludes nearly two weeks of continuous immersion and blogging about this topic.

It’s time for me to get back to some other work. Never a dull moment here at JeeLabs! :)

Web page layouts

In Software on Jan 8, 2013 at 00:01

Wow – maybe I’m the last guy in the room to find out about this, but responsive CSS page layouts really have come a long way. Here are a bunch of screen shots at different widths:


Screen Shot 2013-01-07 at 21.32.15


Screen Shot 2013-01-07 at 21.32.53


Screen Shot 2013-01-07 at 21.33.44


Screen Shot 2013-01-07 at 21.34.00


Note that the relative scale of the above snapshots is all over the map, since I had to fit them into the size of this weblog, but as you can see, there’s a lot of automatic shuffling going on. With no effort whatsoever on my part!

Here’s the Jade definition to generate the underlying HTML:

Screen Shot 2013-01-07 at 21.47.21

How cool is that?

This is based on the Foundation HTML framework (and served from Node.js on a Raspberry Pi). It looks like this might be an even nicer, eh, foundation to start from than Twitter Bootstrap (here’s a factual comparison, with examples of F and B).

Sigh… so many choices!

Plotting again, at last

In Software on Jan 4, 2013 at 00:01

The new code is progressing nicely. First step was to get a test table, updating in real time:

Screen Shot 2013-01-03 at 11.29.07

(etc…)

It was a big relief to figure out how to produce graphs again – e.g. power consumption:

Screen Shot 2013-01-03 at 10.01.55

The measurement resolution from the 2000 pulse/kWh counters is excellent. Here is an excerpt of power consumption vs solar production on a cloudy and wet winter morning:

Screen Shot 2013-01-03 at 12.00.02

There is a fascinating little pattern in there, which I suspect comes from the central heating – perhaps from the boiler and pump, switching on and off in a peculiar 9-minute cycle?

Here are a bunch of temperature sensors (plus the central heating set-point, in brown):

Screen Shot 2013-01-03 at 10.02.14

There is no data storage yet (I just left the browser running on the laptop collecting data overnight), nor proper scaling, nor any form of configurability – everything was hard-coded, just to check that the basics work. It’s a far cry from being able to define and configure such graphs in the browser, but hey – one baby step at a time…

Although the D3 and NVD3 SVG-based graphing packages look stunning, they are a bit overkill for simple graphing, and they mean more code for the browser to load and run. Maybe some other time – for now I’m using the Flot package for these graphs, again.
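
Flot keeps the client side minimal: given series data as [timestamp, value] pairs, a graph is essentially a single call. A sketch, assuming jQuery plus Flot are loaded and the two data arrays exist:

# sketch: a Flot power graph from [msTimestamp, watts] data pairs
# consumptionData and productionData are assumed to exist
series = [
  { label: 'consumption', data: consumptionData, color: 'red' }
  { label: 'production',  data: productionData,  color: 'green' }
]
$.plot $('#power'), series,
  xaxis: { mode: 'time' }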

Will share the code as soon as I can make up my mind about the basic app structure!