Computing stuff tied to the physical world

Archive for December 2012

Playing back logfiles

In Software on Dec 31, 2012 at 00:01

After yesterday’s reading and decoding exploration, here’s some code which will happily play back my daily log files, of which I now have over four years’ worth.

Screen Shot 2012-12-29 at 15.55.11

Sample output:

Screen Shot 2012-12-29 at 15.59.52

As you can see, this supports scanning entire log files, both plain text and gzipped. In fact, JeeMonLogParser@parseStream should also work fine with sockets and pipes:

Screen Shot 2012-12-29 at 15.55.46

The beauty – again – is total modularity: both the real serial interface module and this log-replay module generate the same events, and can therefore be used interchangeably. As the decoders work independently of either one, there is no dependency (“coupling”) whatsoever between these modules.
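
In spirit, such a parseStream boils down to something like this – a rough JavaScript sketch, with function and event names made up rather than taken from the real module:

    var events = require('events');
    var fs = require('fs');
    var zlib = require('zlib');

    exports.emitter = new events.EventEmitter();

    // parse any readable stream line by line: files, sockets, pipes...
    exports.parseStream = function (stream) {
      var pending = '';
      stream.on('data', function (chunk) {
        pending += chunk.toString();
        var lines = pending.split('\n');
        pending = lines.pop(); // keep the last, possibly partial, line
        lines.forEach(function (line) {
          exports.emitter.emit('line', line); // same event as the serial module
        });
      });
      stream.on('end', function () {
        exports.emitter.emit('done');
      });
    };

    // ...and gzipped logfiles are just a gunzip transform away
    exports.parseFile = function (file) {
      var stream = fs.createReadStream(file);
      if (/\.gz$/.test(file))
        stream = stream.pipe(zlib.createGunzip());
      exports.parseStream(stream);
    };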

Not to worry: from now on I won’t bore you with every new JavaScript / CoffeeScript snippet I come up with – just wanted to illustrate how asynchronous I/O and events are making this code extremely easy to develop and try out in small bites.

I wish you a Guten Rutsch ins neue Jahr and a safe, healthy, and joyful 2013!

Decoding RF12demo with Node.js

In Software on Dec 30, 2012 at 00:01

I’m starting to understand how things work in Node.js. Just wrote a little module to take serial output from the RF12demo sketch and decode its “OK …” output lines:

Screen Shot 2012-12-29 at 01.04.48

Sample output:

Screen Shot 2012-12-29 at 00.44.21

This quick demo only has decoders for nodes 3 and 9 so far, but it shows the basic idea.

This relies on the EventEmitter class, which offers a very lightweight mechanism of passing around objects on channels, and adding listeners to get called when such “events” happen. A very efficient in-process pub-sub mechanism, in effect!
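
In its simplest form, that mechanism looks like this (channel name and payload invented for illustration):

    var EventEmitter = require('events').EventEmitter;
    var hub = new EventEmitter();

    // a subscriber: gets called whenever something appears on this "channel"
    hub.on('packet', function (obj) {
      console.log('received:', obj);
    });

    // a publisher: pushes an object to all current listeners
    hub.emit('packet', { node: 3, bytes: [ 120, 22, 2 ] });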

Here is the “serial-rf12demo” module which does the rest of the magic:

Screen Shot 2012-12-29 at 00.46.13
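
The gist of such a module, as a rough sketch – the event names, decoder table, and header masking here are illustrative, not the actual code:

    var decoders = {}; // e.g. decoders[3] = function (bytes) { ... }

    exports.process = function (line, emitter) {
      var tokens = line.split(' ');
      if (tokens.shift() === 'OK') {
        var bytes = tokens.map(Number);
        var node = bytes[0] & 0x1F; // node id sits in the low 5 header bits
        emitter.emit('packet', node, bytes.slice(1));
        if (decoders[node]) // only nodes 3 and 9 have decoders so far
          emitter.emit('reading', node, decoders[node](bytes.slice(1)));
      }
    };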

And that’s really all there is to it – very modular!

RFM12B startup power consumption

In Hardware on Dec 29, 2012 at 00:01

For quite some time, I’ve wanted to know just how much current the RFM12B module draws on power-up. Well, time for a test using the power booster described recently:

JC's Grid, page 51

So the idea is to apply a sawtooth signal to the RFM12B, rising from 0 to 3V at a rate of say 10 Hz, and to measure the voltage drop across a 100 Ω resistor at the same time. This will have a slight effect on measurement accuracy – but no more than 2% (at the 650 µA peak seen below, the 100 Ω resistor drops a mere 65 mV out of 3 V), so I’m ok with it.

Here is the outcome:

SCR51

The yellow trace is VCC, the supply voltage – from 0..3V. The magenta trace is the current consumption, which turns out to be 0..650 µA. As you can see, the current draw quickly rises between 1 and 2V, and then continues to increase sort of linearly.

Note that this power consumption can’t be reduced: we don’t have the ability to send any commands to the RFM12B until it has started up!

This type of analysis can also be done using the X-Y mode on most oscilloscopes:

SCR48

It’s essentially the same picture as before, because the sawtooth is a straight line, and so voltage rise is the same thing as time in this case. Here’s what happens when the input signal is switched to a sine wave:

SCR49

As expected, the essence of the curve hasn’t changed one bit, because it really doesn’t matter how we vary VCC over time. But there’s an intriguing split in the curve – this is most likely caused by a different current consumption when VCC is rising vs when it is dropping. Keep in mind that the changes are occurring at 10 Hz, so there’s bound to be some residual charge in the on-board capacitors of the RFM12B module.

Anyway. It’s a bit of a silly distraction to do things this way, but now I do have a better idea of how current consumption increases on startup. This relatively high 0.65 mA current draw was the main reason for including a MOSFET in the new JeeNode Micro v2, BTW.

Assembling the LED Node v2

In AVR, Hardware on Dec 28, 2012 at 00:01

After yesterday’s little mistake, here’s a walk-through of assembling the LED Node v2:

DSC_4332

Note that the LED Node comes with pre-soldered SMD MOSFETs so you don’t have to fiddle with ’em.

The LED Node is really just a JeeNode with a different layout and 3 high-power MOSFET drivers, to control up to 72W of RGB LED strips through the ATmega’s hardware PWM. Since there’s an RFM12B wireless module on board, as well as two free JeePorts, you can do all sorts of funky things with it.

As usual, the build progresses from the flattest to the highest components, so that you can easily flip the PCB over and press it down while soldering each wire and pin.

Let’s get started! So we begin with 7 resistors and 1 diode (careful, the diode is polarised):

DSC_4333

Be sure to get the values right: 3x 1 kΩ, 3x 1 MΩ, and 1x 10 kΩ (next to the ATmega).

(note: I used three 100 kΩ resistors instead of the 1 MΩ ones, as that’s what I had lying around)

Next, add the 4x 0.1 µF capacitors and the IC socket – lots of soldering to do on that one:

DSC_4334

Then the MCP1702 regulator and the electrolytic capacitor (both are polarised, so here too, make sure you put them in the right way around), as well as the male 6-pin FTDI header:

DSC_4335

Soldering the RFM12B wireless radio module takes a bit of care. It’s easiest if you start off by adding a small solder dot and hold the radio while making the solder melt again:

DSC_4336

Then solder the remaining pins (I tend to get lazy and skip those which aren’t used, hence not all of them have solder). I also added the 3-pin orange 16 MHz ceramic resonator, the antenna wire, the two port headers, and the big screw terminal for connecting power:

DSC_4337

Celebration time – we’ve completed the assembly of the LED Node v2!

Here’s a side view, with the ATmega328 added – as you can see it’s much flatter than v1:

DSC_4338

And here’s a top view of the completed LED Node v2, in all its glory:

DSC_4339

You can now connect the FTDI header via a USB BUB, and you should see the greeting of the RF12demo sketch, which has been pre-loaded onto the ATmega328.

To get some really fancy effects, check out the Color-shifting LED Node post from a while back on this weblog. You can adjust it as needed and then upload it through FTDI.

Next step is to attach your RGB strip (it should match the 4-pin connector on the far left). Be sure to use fairly sturdy wires as there are up to 2 amps going through each color pin and a maximum of 6 amps total through the “+” connector pin!

Lastly, connect a 12V DC power supply (making absolutely sure to get the polarity right!) and you will have a remote-controllable LED strip. Enjoy!

Murphy strikes the silkscreen

In Hardware on Dec 27, 2012 at 00:01

Uh, oh – silly mistake time! Here’s an excerpt of the new LED Node v2:


These are the top and bottom views of the FTDI connector in the middle of the board, flipped horizontally.

The bottom view has the “GND” label on the wrong pin!

Drat. Will do a re-spin with the corrected silkscreen, but the first few units will be like this so make sure you use the alignment shown on the top of the board.

The good news is that connecting the FTDI cable or BUB the wrong way is harmless.

An eventful year

In Musings on Dec 26, 2012 at 00:01

Maybe it’s a bit soon-ish to talk about this, but I often like to go slightly against the grain, so with everybody planning to look back at 2012 a few days from now, and coming up with interesting things to say about 2013 – heck, why not travel through time a bit early, eh?

The big events for me this year were the shop hand-over to Martyn and Rohan Judd (who continue to do a magnificent job), and a gradual but very definitive re-focusing on home energy saving and software development. Product development, i.e. physical computing hardware, is taking place in somewhat less public ways, but let me just say that it’s still as much part of what I do as ever. The collaboration with Paul Badger of Modern Device is not something you hear from me about very much, but we’re in regular and frequent discussion about what we’re both doing and where we’d like to go. For 2012, I’m very pleased with how things have worked out, and mighty proud to be part of this team.

The year 2012 was also the year which brought us large-scale online courses, such as Udacity and Coursera. I have to admit that I signed up for several of their courses, but never completed them. Did enough to learn some really useful things, but also realised that it would take probably 2 full days per week to actually complete this (assuming it wouldn’t all end up being above my head…). At the time – in the summer – I just didn’t have the peace of mind to see it through. So this is back on the TODO list for now.

My shining light is Khan Academy, an initiative which was started in 2006 by one person:

Screen Shot 2012-12-25 at 00.07.58

Here’s an important initiative from 2012 which I’d really like to single out at this point:

Khan Academy Computer Science Launch with Salman Khan and John Resig

To me, this isn’t about the Khan Academy, Salman Khan, John Resig, or JavaScript. What is happening here is that education is changing in major ways, and now the tools are changing in equally fundamental ways. This world is becoming a place for people who take their future into their own hands. And there’s nothing better than the above to illustrate what that means for a domain such as Computer Science. This isn’t about a better teacher or a better book – this is about a new way of learning. On a global scale.

The message is loud and clear: “Wanna go somewhere? Go! What’s holding you back?” – and 2012 is where it all switched into a higher gear. There are more places to go and learn than ever, and the foundations of that learning are more and more based on open source – meaning that you can dive in as deep as you like. Given the time, I’d actually love to have a good look inside Node.js one day… but nah, not quite yet :)

I’ve been rediscovering this path recently, trying to understand even the most stupid basic aspects of this new (for me) programming language called JavaScript, iterating between total despair at the complexity and the breadth of all the material on the one hand, and absolute delight and gratitude as someone answered my question and helped me reach the next level. Wow. Everything is out there. BSD/MIT-licensed. Right in front of our nose!

All we need is fascination, perseverance, and time. None of these are a given. But we must fight for them. Because they matter, and because life’s too short for anything less.

So – yes, a bit early – for 2013, I wish you lots of fascination, perseverance… and time.

Data wants to be dynamic

In Software on Dec 25, 2012 at 00:01

It’s all about dynamics, really. When software becomes so dynamic that you see the data, then all that complex code will vanish into the background:

Screen Shot 2012-12-21 at 23.34.20   DSC_4327

This is the transformation we saw a long time ago when going from teletype-based interaction to direct manipulation with the mouse, and the same is happening here: if the link between physical devices and the page shown in the web browser is immediate, then the checkboxes and indicators on the web page become essentially the same as the buttons and the LEDs. The software becomes invisible – as it should be!

That demo from a few days back really has that effect. And of course then networking kicks in to make this work anywhere, including tablets and mobile phones.

But why stop there? With Tcl, I have always enjoyed the fact that I can develop inside a running process, i.e. modify code on a live system – by simply reloading source files.

With JavaScript, although the mechanism works very differently, you can get similar benefits. When launching the Node.js based server, I use this command:

    nodemon app.coffee

This not only launches a web server on port 3000 for use with a browser, it also starts watching the files in the current directory for changes. In combination with the logic of SocketStream, this leads to the following behavior during development:

  • when I change a file such as app.coffee or any file inside the server/ directory, nodemon will stop and relaunch the server app, thus picking up all the changes – and SocketStream is smart enough to make all clients re-connect automatically
  • when changing a file anywhere inside the clients/ area, the server sends a special request via WebSockets for the clients, i.e. the web browser(s), to refresh themselves – again, this causes all client-side changes to be picked up
  • when changing CSS files (or rather, the Stylus files that generate it), the same happens, but in this case the browser state does not get lost – so you can instantly view the effects of twiddling with CSS

Let me stress that: the browser updates on each save, even if it’s not the front window!

The benefits for the development workflow are hard to overstate – it means that you can really build a full client-server application in small steps and immediately see what’s going on. If there is a problem, just insert some “console.log()” calls and watch the server-side (stdout) or client-side (browser console window).

There is one issue, in that browser state gets lost with client-side code changes (current contents of input boxes, button state, etc), but this can be overcome by moving more of this state into Redis, since the Redis “store” can just stay running in the background.

All in all, I’m totally blown away by what’s possible and within reach today, and by the way this type of software development can be done. Anywhere and by anyone.

Onwards!

Setting up a SocketStream app

In Software on Dec 24, 2012 at 00:01

As shown yesterday, the SocketStream framework takes care of a lot of things for real-time web-based apps. It’s at version 0.3 (0.4 coming up), but already pretty effective.

Here’s what I did to get the “ss-blink” demo app off the ground on my Mac notebook:

  • no wallet needed: everything is either free (Xcode) or open source (the rest)
  • make sure the Xcode command-line dev tools have been installed (gcc, make, etc)
  • install the Homebrew package manager using this scary-looking one-liner:
    ruby -e "$(curl -fsSkL raw.github.com/mxcl/homebrew/go)"
  • using Homebrew, install Node.js – brew install node
  • that happens to include NPM, the Node Package Manager – all I had to do was add the NPM bin dir to my PATH (in .bash_profile, for example), so that globally installed commands will be found – PATH=/usr/local/share/npm/bin:$PATH

Not there yet, but I want to point out that Xcode plus Homebrew (on the Mac – other platforms have their own variants) form the base, with Node.js and NPM as the foundation for everything else. Once you have those installed and working smoothly, everything else is a matter of obtaining packages through NPM as needed and running them with Node.js – a truly amazing software combo. NPM can also handle uninstalls & cleanup.

Let’s move on, shall we?

  • install SocketStream globally – npm install -g socketstream
    (the “-g” is why PATH needs to be properly set after this point)
  • install the nodemon utility – npm install -g nodemon
    (this makes development a breeze, by reloading the server whenever files change)
  • create a fresh app, using – socketstream new ss-blink
  • this creates a dir called ss-blink, so first I switched to it – cd ss-blink
  • use npm to fetch and build all the dependencies in ss-blink – npm install
  • that’s it, start it up – nodemon app.js (or node app.js if you insist)
  • navigate to http://localhost:3000 and you should see a boilerplate chat app
  • open a second browser window on the same URL, and marvel at how a chat works :)

So there’s some setup involved, and it’s bound to be a bit different on Windows and Linux, but still… it’s not that painful. There’s a lot hidden behind the scenes of these installation tools. In particular npm is incredibly easy to use, and the workhorse for getting tons and tons of packages from GitHub or elsewhere into your project.

The way this works is that you add one line per package you want to the “package.json” file inside the project directory, and then simply re-run “npm install”. I did exactly that – adding “serialport” as a dependency, which caused npm to go out, fetch, and compile all the necessary bits and pieces.
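
For example, the relevant part of the file ends up looking something like this (version specifiers illustrative):

    {
      "name": "ss-blink",
      "version": "0.0.1",
      "dependencies": {
        "socketstream": "0.3.x",
        "serialport": "*"
      }
    }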

Note that none of the above require “root” privileges: no superuser == better security.

For yesterday’s demo, the above was my starting point. However, I did want to switch to CoffeeScript and Jade instead of JavaScript and HTML, respectively – which is very easy to do with the js2coffee and html2jade tools.

These were installed using – npm install -g js2coffee html2jade

And then hours of head-scratching, reading, browsing the web, watching videos, etc.

But hey, it was a pretty smooth JavaScript newbie start as far as I’m concerned!

Connecting a Blink Plug to a web browser

In Hardware, Software on Dec 23, 2012 at 00:01

Here’s a fun experiment – using Node.js with SocketStream as web server to directly control the LEDs on a Blink Plug and read out the button states via a JeeNode USB:

JC's Grid, page 51

This is the web interface I hacked together:

Screen Shot 2012-12-21 at 23.34.20

The red background comes from pressing button #2, and LED 1 is currently on – so this is bi-directional & real-time communication. There’s no polling: signalling is instant in both directions, due to the magic of WebSockets (this page lists supported browsers).

I’m running blink_serial.ino on the JeeNode, which does nothing more than pass some short messages back and forth over the USB serial connection.

The rest is a matter of getting all the pieces in the right place in the SocketStream framework. There’s no AngularJS in here yet, so getting data in and out of the actual web page is a bit clumsy. The total code is under 100 lines of CoffeeScript – the entire application can be downloaded as ZIP archive.

Here’s the main client-side code from the client/code/app/app.coffee source file:

Screen Shot 2012-12-22 at 00.48.12

(some old stuff and weird coding in there… hey, it’s just an experiment, ok?)

The client side, i.e. the browser, can receive “blink:button” events via WebSockets (these are set up and fully managed by SocketStream, including reconnects), as well as the usual DOM events such as changing the state of a checkbox element on the page.
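
Roughly, in plain JavaScript (the real code is CoffeeScript, and the event and command names here are made up):

    // react to button events pushed from the server over WebSockets
    ss.event.on('blink:button', function (state) {
      $('body').css('background', state ? 'red' : 'white');
    });

    // and send checkbox changes to the server as RPC calls
    $('#led1').on('change', function () {
      ss.rpc('serial.sendCommand', this.checked ? 'led 1 on' : 'led 1 off');
    });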

And this is the main server-side logic, contained in the server/rpc/serial.coffee file:

Screen Shot 2012-12-22 at 00.54.07

The server uses the node-serialport module to gain access to serial ports on the server, where the JeeNode USB is plugged in. And it defines a “sendCommand” which can be called via RPC by each connected web browser.
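
Again as a rough JavaScript sketch – the port name, baud rate, and command strings are placeholders:

    var serialport = require('serialport');
    var port = new serialport.SerialPort('/dev/tty.usbserial', {
      baudrate: 57600
    });

    // each connected browser can invoke this via ss.rpc('serial.sendCommand', ...)
    exports.actions = function (req, res, ss) {
      return {
        sendCommand: function (cmd) {
          port.write(cmd + '\n');
          res(true); // acknowledge the call
        }
      };
    };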

Most of the work is really figuring out where things go and how to get at the different bits of data and code. It’s all CoffeeScript – i.e. JavaScript – on both client and server, but you still need to know all the concepts to get to grips with it – there is no magic pill!

Tomorrow, I’ll describe how I created this app, and how to run it.

Update – The code is now on GitHub.

Dynamic web pages

In Software on Dec 22, 2012 at 00:01

There are tons of ways to make web pages dynamic, i.e. have them update in real-time. For many years, constant automatic full-page refreshes were the only game in town.

But that’s more or less ignoring the web evolution of the past decade. With JavaScript in the browser, you can manipulate the DOM (i.e. the structure underlying each web page) directly. This has led to an explosion of JavaScript libraries in recent years, of which the most widespread one by now is probably jQuery.

In jQuery, you can easily make changes to the DOM – here is a tiny example:

Screen Shot 2012-12-20 at 23.40.23
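
The kind of snippet being discussed is something like this (content invented):

    // find an element and rewrite a bit of the DOM on the fly
    $('#demo').append('<p>Hello from jQuery!</p>');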

And sure enough, the result comes out as:

Screen Shot 2012-12-20 at 23.40.32

But there is a major problem – with anything non-trivial, this style quickly ends up becoming a huge mess. Everything gets mixed up – even if you try to separate the JavaScript code into its own files, you still need to deal with things like loops inside the HTML code (to create a repeated list, depending on how many data items there are).

And there’s no automation – the more little bits of dynamic info you have spread around the page, the more code you need to write to keep all of them in sync. Both ways: setting items to display as well as picking up info entered via the keyboard and mouse.

There are a number of ways to get around this nowadays – with a very nice overview of seven of the mainstream solutions by Steven Sanderson.

I used Knockout for the RFM12B configuration generator to explore its dynamics. And while it does what it says, and leads to delightfully dynamically-updating web pages, I still found myself mixing up logic and presentation and having to think about template expansion more than I wanted to.

Then I discovered AngularJS. At first glance, it looks like just another JavaScript all-in-the-browser library, with all the usual expansion and looping mechanisms. But there’s a difference: AngularJS doesn’t mix concepts, it embeds all the information it needs in HTML elements and attributes.

AngularJS manipulates the DOM structure (better than XSLT did with XML, I think).

Here’s the same example as above, in Angular (with apologies for abusing ng-init a bit):

Screen Shot 2012-12-20 at 23.40.53
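
In flavour, something like this (markup invented): the page stays plain HTML, with a few extra attributes doing all the work:

    <!-- "ng-app" activates AngularJS; "ng-init" seeds the model (abused a bit) -->
    <div ng-app ng-init="greeting = 'Hello from Angular!'">
      <h1>{{greeting}}</h1>
    </div>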

The “ng-app” attribute is the key. It tells AngularJS to go through the element tree and do its magic. It might sound like a detail, but as a result, this page remains 100% HTML – it can still be created by a graphics designer using standard HTML editing tools.

More importantly, this sort of coding can grow without ever becoming a mix of concepts and languages. I’ve seen my share of JavaScript / HTML mashups and templating attempts, and it has always kept me from using JavaScript in the browser. Until now.

Here’s a better example (live demo):

Screen Shot 2012-12-20 at 23.35.12

Another little demo I just wrote can be seen here. More physical-computing related. As with any web app, you can check the page source to see how it’s done.

For an excellent introduction about how this works, see John Lindquist’s 15-minute video on YouTube. There will be a lot of new stuff here if you haven’t seen AngularJS before, but it shows how to progressively create a non-trivial app (using WebStorm).

If you’re interested in this, and willing to invest some hours, there is a fantastic tutorial on the AngularJS site. As far as I’m concerned (which doesn’t mean much) this is just about the best there is today. I don’t care too much about syntax (or even languages), but AngularJS absolutely hits the sweet spot in the browser, on a conceptual level.

AngularJS is from Google, with MIT-licensed source on GitHub, and documented here.

And to top it all off, there is now also a GitHub demo project which combines AngularJS on the client with SocketStream on the server. Lots of reading and exploring to do!

JavaScript reading list

In Software on Dec 21, 2012 at 00:01

As I dive into JavaScript, and prompted by a recent comment on the weblog, it occurred to me that it might be useful to create a small list of resources, for those of you interested in going down the same rabbit hole and starting out along a similar path.

Grab some nice food and drinks, you’re gonna need ’em!

First off, I’m assuming you have a good basis in some common programming language, such as C, C++, or Java, and preferably also one of the scripting languages, such as Lua, Perl, Python, Ruby, or Tcl. This isn’t a list about learning to program, but a list to help you dive into JavaScript, and all the tools, frameworks, and libraries that come with it.

Because JavaScript is just the enabler, really. My new-found fascination with it is not the syntax or the semantics, but the fast-paced ecosystem that is evolving around JS.

One more note before I take off: this is just my list. If you don’t agree, or don’t like it, just ignore it. If there are any important pointers missing (of course there are!), feel free to add tips and suggestions in the comments.

JavaScript

There’s JavaScript (the language), and there are the JavaScript environments (in the browser: the DOM, and on the server: Node). You’ll want to learn about them all.

  • JavaScript: The Good Parts by Douglas Crockford
    2008, 1st ed, 176 pages, ISBN 0596517742
  • JavaScript: The Definitive Guide by David Flanagan
    2011, 6th ed, 1100 pages, ISBN 0596805527

Videos: again by Douglas Crockford, there’s an excellent list at Stack Overflow. Going through them will take many hours, but they are really excellent. I watched all of these.

Don’t skim over prototypes, “==” vs “===”, and how “this” gets passed to functions.
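
A small taste of why “==” deserves the attention – it coerces types, whereas “===” compares without converting:

    '' == '0'    // false
    0  == ''     // true
    0  == '0'    // true
    '' === '0'   // false, and consistently so:
    0  === ''    // false
    0  === '0'   // false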

Being able to understand every single notation in JavaScript is essential. Come back here if you can’t. Or google for stuff. Just don’t cut corners – it’s bound to bite you.

If you want to dive in really deep, check out this page about JavaScript and Scheme.

In the browser

Next on the menu: the DOM, HTML, and CSS. This is the essence of what happens inside a browser. Can be consumed in small doses, as the need arises. Simply start with the just-mentioned Wikipedia links.

Not quite sure what to recommend here – I’ve picked this up over the years. Perhaps w3schools this or this. Focus on HTML5 and CSS3, as these are the newest standards.

On the server

There are different implementations of JavaScript, but on the server, by far the most common implementation seems to be Node.js. This is a lot more than “just some JS implementation”. It comes with a standard API, full of useful functions and objects.

Node.js is geared towards asynchronous & event-driven operation. Nothing blocks, not even a read from a local disk – because in CPU terms, blocking takes too long. This means that you tend to call a “read” function and give it a “callback” function which gets called once the read completes. Very very different frame of mind. Deeply frustrating at times, but essential for any non-trivial app which needs to deal with networking, disks, and other “slow” peripherals. Including us mortals.
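
A minimal example of that style (file name invented):

    var fs = require('fs');

    // the read is merely *started* here - nothing blocks
    fs.readFile('data.txt', function (err, data) {
      if (err) throw err;
      console.log('got', data.length, 'bytes');
    });

    console.log('this line prints first!');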

  • Learning Node by Shelley Powers
    2012, 1st ed, 396 pages, ISBN 1449323073

See also this great (but fairly long) list of tutorials, videos, and books at Stack Overflow.

SPA and MVC

Note that JavaScript on the server replaces all sorts of widespread approaches: PHP, ASP, and such. Even advanced web frameworks such as Rails and Django don’t play a role here. The server no longer acts as a templating system generating dynamic web pages – instead it just serves static HTML, CSS, JavaScript, and image files, and responds to requests via Ajax or WebSockets (often using JSON in both directions).

The term for this is Single-page web application, even though it’s not about staying on a single page (i.e. URL) at all costs. See this website for more background – also as PDF.

The other concepts bound to come up are MVC and MVVM. There’s an article about MVC at A List Apart. And here’s an online book with probably more than you want to know about this topic and about JavaScript design patterns in general.

In a nutshell: the model is the data in your app, the view is its presentation (i.e. while browsing), and the controller is the logic which makes changes to the model. Very (VERY!) loosely speaking, the model sits in the server, the view is the browser, and the controller is what jumps into action on the server when someone clicks, drags, or types something. This simplification completely falls apart in more advanced uses of JS.

Dialects

I am already starting to become quite a fan of CoffeeScript, Jade, and Stylus. These are pre-processors for JavaScript, HTML, and CSS, respectively. Totally optional.

Here are some CoffeeScript tutorials and a cookbook with recipes.

CoffeeScript is still JavaScript, so a good grasp of the underlying semantics is important.

It’s fairly easy to read these notations with only minimal exposure to the underlying language dialects, in my (still limited) experience. No need to use them yourself, but if you do, the above links are excellent starting points.

Just the start…

The above are really just pre-requisites to getting started. More on this topic soon, but let me just stress that good foundational understanding of JavaScript is essential. There are crazy warts in the language (which Douglas Crockford frequently points out and explains), but they’re a fact of life that we’ll just have to live with. This is what you get with a language which has now become part of every major web browser in the world.

Graphics, oh là là!

In Software on Dec 20, 2012 at 00:01

Graphs used to be made with gnuplot or RRDtool, both generated on the server and then presented as images in the browser. This used to be called state of the art!

But that’s sooo last-century …

Then came JavaScript libraries such as Flot, which uses the HTML5 Canvas, allowing you to draw the graph in the browser. The key benefit is that these graphs can be made dynamic (updating through real-time data feeds) and interactive (so you can zoom in and show details).

But that’s sooo last-decade …

Now there is this, using the latest HTML5 capabilities and resolution-independent SVG:

Screen Shot 2012-12-18 at 22.15.15

See http://selection.datavisualization.ch/ (click through on each one to get details).

That picture doesn’t really do justice to the way some of these tools adjust dynamically and animate on change. All in the web browser. Stunning – in features and in variety!

I’ve been zooming in a bit (heh) on tools such as Rickshaw and NVD3 – both with lots of fascinating examples. Some parts are just window dressing, but the dynamics and real-time behaviour will definitely help gain more insight into the underlying datasets. Which is what all the visualisation should be about, of course.

For an interesting project using SocketStream, Flot, and DataTables, see DaisyCentral. There’s a great write-up on the Architectural Overview, and another page on graphically setting up automation rules:

Screen Shot 2012-12-18 at 22.48.27

This editor is based on jsPlumb for drawing.

Another interesting project is Dashku, based on SocketStream and Raphaël. It’s a way to build a live dashboard – the essence only became clear to me after seeing this YouTube video. As you build and adjust it in edit mode, you can keep a second view open which shows the final result. Things automatically get synced, due to SocketStream.

Now, if only I knew how to build up my fu-level and find a way into all this magic…

Getting my feet wet

In Software on Dec 19, 2012 at 00:01

It all starts with baby steps. Let me just say that it feels very awkward and humbling to stumble around in a new programming language without knowing how things should be done. Here’s the sort of gibberish I’m currently writing:

Screen Shot 2012-12-18 at 17.49.18

This must be the ugliest code I’ve ever written. Not because the language is bad, but because I’m trying to convert existing code in a hurry, without knowing how to do things properly in JavaScript / CoffeeScript. Of course it’s unreadable, but all I care for right now, is to get a real-time data source up and running to develop the rest with.

I’m posting this so that one day I can look back and laugh at all this clumsiness :)

The output appears in the browser, even though all this is running on the server:

Screen Shot 2012-12-18 at 17.11.04

Ok, so now there’s a “feed” with readings coming in. But that’s just the tip of the iceberg:

  • What should the pubsub naming structure be, i.e. what are the keys / topic names?
  • Should readings be managed per value (temperature), or per device (room node)?
  • What format should this data have, since inserting a decimal point is locale-specific?
  • How to manage new values, given that previous ones can be useful to have around?
  • Are there easy choices to make w.r.t. how to store the history of all this data?
  • How to aggregate values, but more importantly perhaps: when to do this?

And that’s just incoming data. There will also need to be rules for automation and outgoing control data. Not to mention configuration settings, admin front-ends, live development, per-user settings, access rights, etc, etc, etc.

I’m not too interested yet in implementing things for real. Would rather first spend more time understanding the trade-offs – and learning JavaScript. By doodling as I’m doing now and by examining a lot of code written by others.

If you have any suggestions on what I should be looking into, let me know!

“Experience is what you get while looking for something else.” – Federico Fellini

Real-time out of the box

In Software on Dec 18, 2012 at 00:01

I recently came across SocketStream, which describes itself as “A fast, modular Node.js web framework dedicated to building single-page realtime apps”.

And indeed, it took virtually no effort to get this self-updating page in a web browser:

Screen Shot 2012-12-17 at 00.45.50

The input comes from the serial port, I just added this code:

Screen Shot 2012-12-17 at 15.48.50

That’s not JavaScript, but CoffeeScript – a dialect with a concise functional notation (and significant white-space indentation), which gets turned into JavaScript on the fly.

The above does a lot more than collect serial data: the “try” block converts the text to a binary buffer in the form of a JavaScript DataView, ready for decoding and then publishes each packet on its corresponding channel. Just to try out some ideas…
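
That conversion step, sketched in plain JavaScript (the real code is CoffeeScript, and the details here are illustrative):

    // turn an "OK 3 120 22 2" text line into a DataView, ready for decoding
    function toDataView (line) {
      var bytes = line.split(' ').slice(1).map(Number);
      var buf = new ArrayBuffer(bytes.length);
      var view = new DataView(buf);
      bytes.forEach(function (b, i) { view.setUint8(i, b); });
      return view;
    }
    // ... after which each packet gets published on its own channel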

I’m also using Jade here, a notation which gets transformed into HTML – on the fly:

Screen Shot 2012-12-17 at 15.53.56

And this is Stylus, a shorthand notation which generates CSS (yep, again on the fly):

Screen Shot 2012-12-17 at 15.54.25

All of these are completely gone once development is over: with one command, you generate a complete app which contains only pure JavaScript, HTML, and CSS files.

I’m slowly falling in love with all these notations – yeah, I know, very unprofessional!

Apart from installing SocketStream using “npm install -g socketstream”, adding the SerialPort module to the dependencies, and scratching my head for a few hours to figure out how all this machinery works, that is virtually all I had to do.

Development is blindingly fast when it comes to client side editing: just save any file and the browser(s) will automatically reload. With a text editor that saves changes on focus loss, the process becomes instant: edit and switch to the browser. Boom – updated!

Server-side changes require nodemon (as described in the faq). After that, server reload becomes equally automatic during development. Pretty amazing stuff.

The trade-off here is learning to understand these libraries and tools and playing by their rules, versus having to write a lot more yourself. But from what I’ve seen so far, SocketStream with Express, CoffeeScript, Jade, Stylus, SocketIO, Node.js, SerialPort, Redis, etc. take a staggering amount of work off my shoulders – all 100% open source.

There’s a pubsub-over-WebSockets mechanism inside SocketStream. Using Redis.

Wow. Creating responsive real-time apps hasn’t been this much fun in a long time!

It’s all about MOM

In Software on Dec 17, 2012 at 00:01

Home monitoring and home automation have some very obvious properties:

  • a bunch of sensors around the house are sending out readings
  • with actuators to control lights and appliances, driven by secure commands
  • all of this within and around the home, i.e. in a fairly confined space
  • we’d like to see past history, usually in the form of graphs
  • we want to be able to control the actuators remotely, through control panels
  • and lastly, we’d like to automate things a bit, using configurable rules

In information processing terms, this stuff is real-time, but only barely so: it’s enough if things happen within say a tenth of a second. The amount of information we have to deal with is also quite low: the entire state of a home at any point in time is probably no more than perhaps a kilobyte (although collected history will end up being a lot more).

The challenge is not the processing side of things, but the architecture: centralised or distributed, network topology for these readings and commands, how to deal with a plethora of physical interfaces and devices, and how to specify and manage the automation rules. Oh, and the user interface. The setup should also be organic, in that it allows us to grow and evolve all the aspects of our system over time.

It’s all about state and messages: the state of the home, current, sensed, and desired, and the events which change that state, in the form of incoming and outgoing messages.

What we need is MOM, i.e. Message-oriented middleware: a core which represents that state and interfaces through messages – both incoming and generated. One very clean model is to have a core process which allows some processes to “publish” messages to it and others to “subscribe” to specific changes. This mechanism is called pubsub.

Ideally, the core process should be launched once and then kept running forever, with all the features and functions added (at least initially) as separate processes, so that we can develop, add, fix, refine, and even tear down the different functions as needed without literally “bringing down the house” at every turn.

There are a couple of ways to do this, and as you may recall, I’ve been exploring the option of using ZeroMQ as the core foundation for all message exchanges. ZeroMQ bills itself as “the intelligent transport layer” and it supports pubsub as well as several other application interconnect topologies. Now, half a year later, I’m not so sure it’s really what I want. While ZeroMQ is definitely more than flexible and scalable enough, it also is fairly low-level in many ways. A lot will need to be built on top, even just to create that central core process.

Another contender which seems to be getting a lot of traction in home automation these days is MQTT, with an open source implementation of the central core called Mosquitto. In MOM terms, this is called a “broker”: a process which manages incoming message traffic from publishers by re-routing it to the proper subscribers. The model is very clean and simple: there are “channels” with hierarchical names such as perhaps “/kitchen/roomnode/temperature” to which a sensor publishes its temperature readings, and then others can subscribe to say “/+/+/temperature” to get notified of each temperature report around the house, the moment it comes in.

MQTT adds a lot of useful functionality, and optionally supports a quality-of-service (QoS) level as a way to handle messages that need reliable delivery (QoS level 0 messages use best-effort delivery, but may occasionally get dropped). The “retain” feature can hold on to the last message sent on each channel, so that when the system shuts down and comes back up or when a connection has been interrupted, a subscriber immediately learns about the last value. The “last will and testament” lets a publisher prepare a message to be sent out to a channel (not necessarily the same one) when it drops out for any reason.

All very useful, but I’m not convinced this is a good fit. In my perception, state is more central than messages in this context. State is what we model with a home monitoring and automation system, whereas messages come and go in various ways. When I look at the system, I’m first of all interested in the state of the house, and only in the second place interested in how things have changed until now or will change in the future. I’d much rather have a database as the centre of this universe. With excellent support for messages and pubsub, of course.

I’ve been looking at Redis lately, a “key-value store” which is not only small and efficient, but which also has explicit support for pubsub built in. So the model remains the same: publishers and subscribers can find each other through Redis, with wildcards to support the same concept of channels as in MQTT. But the key difference is that the central setup is now based on state: even without any publishers active, I can inspect the current temperature, switch setting, etc. – just like MQTT’s “retain”.
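
Here’s a quick sketch of that idea using the node_redis client – key and channel names invented:

    var redis = require('redis');
    var db = redis.createClient();
    var sub = redis.createClient(); // subscribing ties up a connection

    // a publisher stores the latest value *and* announces the change
    function report (key, value) {
      db.set(key, value);     // state persists - like MQTT's "retain"
      db.publish(key, value); // event goes out to all live subscribers
    }

    // subscribers can use glob-style patterns, much like MQTT wildcards
    sub.psubscribe('*/temperature');
    sub.on('pmessage', function (pattern, channel, message) {
      console.log(channel, '=>', message);
    });

    report('kitchen/roomnode/temperature', '21.5');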

Furthermore, with a database-centric core, we automatically also have a place to store configuration settings and even logic, in the form of scripts, if needed. This approach can greatly simplify publishers and subscribers, as they no longer need local storage for configuration. Not a big deal when everything lives on a single machine, but with a central general-purpose store that is no longer a necessity. Logic can run anywhere, yet operate off the same central configuration.

The good news is that with any of the above three options, programming language choice is irrelevant: they all have numerous bindings and interfaces. In fact, because interconnections take place via sockets, there is not even a need to use C-based interface code: even the language itself can be used to handle properly-formatted packets.

I’ve set up a basic installation on the Mac, using Homebrew. The following steps are not 100% precise, but this is more or less all that’s needed on Windows, MacOSX, or Linux:

    brew install node redis
    npm install -g express
    redis-server               (starts the database server)
    express                    (creates a demo app)
    npm install connect-redis
    node app.js                (starts the demo app on port 3000)

There are many examples around the net on how to get started, such as this one, which is already a bit dated.

Let’s see where this leads to…

Ahavascript

In Software on Dec 16, 2012 at 00:01

I learned to program in C a long time ago, on a PDP11 running Unix (one of the first installations in the Netherlands). That’s over 30 years ago and guess what… that knowledge is still applicable. Back in full force on all of today’s embedded µC’s, in fact.

I’ll spare you the list of languages I learned before and after that time, but C has become what is probably the most widespread programming language ever. Today, it is the #1 implementation language, in fact. It powers the gcc toolchain, the Linux operating system, most servers and browsers, and … well, just about everything we use today.

It’s pretty useful to learn stuff which lasts… but also pretty hard to predict, alas!

Not just because switching means you have to start all over again, but because you only become really productive at programming after spending years and years (or perhaps just 10,000 hours) learning the ins and outs, learning from others, and getting really familiar with all the programming language’s idioms, quirks, tricks, and smells.

C (and in its wake C++ and Objective-C) has become irreplaceable and timeless.

Fast-forward to today and the scenery sure has changed: there are now hundreds of programming languages, and so many people programming, that lots and lots of them can thrive alongside each other within their own communities.

While researching a bit how to move forward with a couple of larger projects here at JeeLabs, I’ve spent a lot of time looking around recently, to decide on where to go next.

The web and dynamic languages are here to stay, and that inevitably leads to JavaScript. When you look at GitHub, the most used programming language is JavaScript. This may be skewed by the fact that the JavaScript community prefers GitHub, or that people make more and smaller projects, but there is no denying that it’s a very active trend:

Screen Shot 2012-12-14 at 15.53.14

In a way, JavaScript went where Java once tried to go: becoming the de-facto standard language inside the browser, i.e. on the client side of the web. But there’s something else going on: not only is it taking over the client side of things, it’s also making inroads on the server end. If you look at the most active projects, again on GitHub, you get this list:

Screen Shot 2012-12-14 at 15.57.09

There’s something called Node.js in each of these top-5 charts. That’s JavaScript on the server side. Node.js has an event-based asynchronous processing model and is based on Google’s V8 engine. It’s also phenomenally fast, due to its just-in-time compilation for x86 and ARM architectures.

And then the Aha-Erlebnis set in: JavaScript is the next C !

Think about it: it’s on all web browsers on all platforms, it’s complemented by a DOM, HTML, and CSS which bring it into an ever-richer visual world, and it’s slowly getting more and more traction on the server side of the web.

Just as with C at the time, I don’t expect the world to become mono-lingual, but I think that it is inevitable that we will see more and more developments on top of JavaScript.

With JavaScript comes a free text-based “data interchange protocol”. This is where XML tried to go, but failed – and where JSON is now taking over.

My conclusion (and prediction) is: like it or not, client-side JavaScript + JSON + server-side JavaScript is here to stay, and portable / efficient / readable enough to become acceptable for an ever-growing group of programmers. Just like C.

Node.js is implemented in C++ and can be extended in C++, which means that even special-purpose C libraries can be brought into the mix. So one way of looking at JavaScript, is as a dynamic language on top of C/C++.

I have to admit that it’s quite tempting to consider building everything in JavaScript from now on – because having the same language on all sides of a network configuration will probably make things a lot simpler. Actually, I’m also tempted to use pre-processors such as CoffeeScript, Jade, and Stylus, but these are really just optional conveniences (or gimmicks?) around the basic JavaScript, HTML, and CSS trio, respectively.

It’s easy to dismiss JavaScript as yet another fad. But doing so by ignorance would be a mistake – see the Blub Paradox by Paul Graham. Features such as list comprehensions are neat tricks, but easily worked around. Prototypal inheritance and lexical closures on the other hand, are profound concepts. Closures in combination with asynchronous processing (and a form of coding called CPS) are fairly complex, but the fact that some really smart guys can create libraries using these techniques and hide it from us mere mortals means you get a lot more than a new notation and some hyped-up libraries.
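
For the record, a closure in its tiniest form:

    // "count" stays alive inside the returned function, long after
    // makeCounter() itself has returned - that's a lexical closure
    function makeCounter() {
      var count = 0;
      return function () { return ++count; };
    }

    var tick = makeCounter();
    tick(); // 1
    tick(); // 2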

I’m not trying to scare you or show off. Nor am I cherry-picking features to bring out arguments in favour of JavaScript. Several languages offer similar – and sometimes even more powerful – features. Based on conceptual power alone, I’d prefer Common Lisp or Scheme, in fact. But JavaScript is dramatically more widespread, and very active / vibrant w.r.t. what is currently being developed in it and for it.

For more about JavaScript’s strengths and weaknesses, see Douglas Crockford’s page.

So where does this leave me? Easy: a JS novice, tempted to start learning from scratch!

Tomorrow, some new considerations for middleware…

The price of electrons

In Musings on Dec 15, 2012 at 00:01

Came across this site recently, thanks to a link from Ard about his page on peak shaving.

They sell electricity at an hourly rate. Here’s an example:

Screen Shot 2012-12-14 at 12.35.52

The interesting bit is the predictive aspect: you get a predicted price for the entire day ahead, which means you can plan your consumption! A win-win all around, since that sort of behavioural adjustment is probably what the energy company wants in the first place. Their concern is always (only?) the peak.

Is this our future? I’d definitely prefer it to “smart” grids taking decisions about my appliances and home. Better options, letting me decide whether to use, store, or pass along the solar energy production, for example.

Here’s another graph from that same site, showing this year’s trend in the Chicago area:

Screen Shot 2012-12-14 at 10.45.47

It’s pretty obvious that air-conditioners run on electricity, eh?

But look also at those rates… this is about an order of magnitude lower than the current rates in the Netherlands (and I suspect Western Europe).

Here are the rates I get from my provider, including huge taxes:

Screen Shot 2012-12-14 at 14.44.11

You can probably guess the Dutch in there – two tariffs: high is for weekdays during daytime, low is for weekends and at night. Hardly a difference, due to taxes :(

Here are the rates for natural gas, btw – just for completeness:

Screen Shot 2012-12-14 at 14.44.51

No wonder really, that different parts of the world, with their widely different income levels and energy prices, end up making completely different choices.

Solar panels are currently profitable after about 7..8 years in the Netherlands – which is reflected by a strong increase in adoption lately. But seeing the above graphs, I doubt that this would make much sense in any other part of the world right now!

Meet the LED Node v2

In Hardware on Dec 14, 2012 at 00:01

The LED Node has been around for a while, but I wasn’t 100% happy with it. In principle, the LED Node v1 is a JeeNode plus 1.5 MOSFET Plugs plus an optional Room Board.

There is a small but significant difference with regular JeeNodes (apart from their very different shape), in that all three MOSFETs are tied to pins with hardware PWM support. This is important to get flicker-free dimming, i.e. if you want to have clean and calm color effects. Software PWM doesn’t give you that (unless you turn all other interrupt sources off), and even with hardware PWM it requires a small tweak of the standard Arduino library code to work well.

The neat thing about the LED Node is the wireless capability, so you can control the unit in all sorts of funky ways.

But I didn’t like the very sharp pulses this board generates, which can cause problems with color shifts over long strips and also can produce a lot of RF interference, due to the LED driving current ringing. The other thing which didn’t turn out to be as useful as I thought was the room board part.

So here’s the new LED Node v2:

jlpcb-146

The big copper areas on the left are extra-wide traces and cooling pads, dimensioned to support at least 2 Amps for each of the RGB colors, for a total of 6 A, i.e. 72 W LED strips @ 12 V. But despite the higher specs, this board will actually be lower profile, because it uses a different type of MOSFETs. They are surface mounted and come pre-soldered so you don’t have to fiddle with them (soldering such small components on relatively large copper surfaces requires a good soldering iron and some expertise).

This new revision has the extra resistors to reduce ringing, and replaces the room board interface with two standard 6-pin port headers: one at the very end, and one on the side. These are ports 1 and 4, respectively, matching a standard JeeNode and any plugs you like. If you want, you could still hook up a Room Board, but this is now no longer the only way to use the LED Node.

Wanna add an accelerometer or compass to make your LED strips orientation aware? Well… now you can! And then place them inside your bike wheels? Could be fun :)

Details to be posted on the Café wiki soon, as well as in the Shop.

The world of audio

In Hardware on Dec 13, 2012 at 00:01

There’s a huge world out there which I’ve never looked into: audio. And it has changed.

It used to be analog (and before my time: vacuum tubes, or “valves” as the British say).

Nowadays, it’s all digital and integrated. The common Class-D amplifier is made of digitally switching MOSFETs, with some cutoff filters to get rid of the residual high-frequency switching noise this generates – leaving just the “pure” audible portion to drive the speakers.

With the recent switch to a new small TV, away from the Mac Mini, for our TV & music system, I lost the original hook-up we had, which was a (far too cheap) little analog amplifier driving (far too expensive) speakers we’ve had here for a long time.

So now we have this TV with built-in tiny 2.5W speakers blasting to the rear – a far cry from the sound we had before. And no music playback capability at all in the living room right now. Not good!

Our needs are simple: CD-quality music (we’re no audiophiles) and decent TV sound. I am going to need a setup soon, as the Christmas vacation time nears.

Trouble is: the sound source for our music is on the Mac Mini server, which is in an impossible place w.r.t. the TV and the speakers. So my first thought was: an Airport Express. It can play over WiFi, and has optical audio output. But… the AE draws 4W in standby. And turning it on for each use is awkward: waiting a minute or more to get sound from the TV is not so great.

The other options for music are an Apple TV or a specially-configured Raspberry Pi.

The only remaining issue is how to get sound from line-level analog audio or (preferably) digital audio to the speakers. I ended up choosing something fairly simple and low-end, a component from miniDSP called “miniAMP”:

DSC 4302

This takes all-digital I²S signals and produces 4x 10W audio. It needs a 12..24V @ 4A supply, i.e. a simple “brick” should do. But that’s just half a solution: it needs I²S…

This is where the “miniDSP” component comes in (the SOIC chip at the top is a PIC µC):

DSC 4301

So the whole setup becomes as follows – and I’ll double up the miniAMP (one for each channel) if the output is not powerful enough:

Screen Shot 2012 12 12 at 23 11 57

The miniDSP takes 2x analog in, and produces up to 4x digital I²S out. The nice part is that it’s fully configurable, i.e. it can do all sorts of fancy sound processing:

Screen Shot 2012 12 12 at 23 21 17

This is perfect for our setup, which includes old-but-incredibly-good separate speakers for the highs and the lows. So a fully configurable cross-over setup is just what we need:

Screen Shot 2012 12 12 at 23 23 11

The way this works is that you set it up, burn the settings into the DSP front-end via USB, and then insert it into the audio chain.

It’s tempting to start tinkering with this stuff at an even lower level, but nah… enough other things to do. Although I do want to look into auto shut-off at some point, to further lower power consumption when no audio is being played. But for now this will have to do.

Data storage and backups

In Musings on Dec 12, 2012 at 00:01

Having just gone through some reshuffling here, I thought it might be of interest to describe my setup, and how I got there.

Let’s start with some basics – apologies if this all sounds too trivial:

  • backups are not archives: backups are about redundancy, archives are about history
  • I don’t want backups, but the real world keeps proving that things can fail – badly!
  • archives are for old stuff I want to keep around for reference (or out of nostalgia…)

If you don’t set up a proper backup strategy, then you might as well go jump off a cliff.

If you don’t set up archives, fine: some hold onto everything, others prefer to travel light – I used to collect lots of movies and software archives. No more: there’s no end to it, and especially movies take up large amounts of space. Dropping all that gave me my life back.

We do keep all our music, and our entire photo collection (each 100+ GB). Both include digitised collections of everything before today’s bits-and-bytes era. So about 250 GB in all.

Now the deeply humbling part: everything I’ve ever written or coded in my life will easily fit on a USB stick. Let’s be generous and assume it will grow to 10 GB, tops.

What else is there? Oh yes, operating systems, installed apps, that sort of thing. Perhaps 20..50 GB per machine. The JeeLabs Server, with Mac OSX Server, four Linux VM’s, and everything else needed to keep a bunch of websites going, clocks in at just over 50 GB.

For the last few years, my main working setup has been a laptop with a 128 GB SSD, and it has been fairly easy to keep disk usage under 100 GB, even including a couple of Linux and Windows VM’s. Music and photo’s were stored on the server.

I’m rambling about this to explain why our entire “digital footprint” (for Liesbeth and me) is substantially under 1 TB. Some people will laugh at this, but hey – that’s where we stand.

Backup…

Ah, yes, back to the topic of this post. How to manage backups of all this. But before I do, I have to mention that I used to think in terms of “master disks” and “slave disks”, i.e. data which was the real thing, and copies on other disks which existed merely for convenience, off-line / off-site security, or just “attics” with lots of unsorted old stuff.

But that has changed in the past few months.

Now, with an automatic off-site backup strategy in place, there is no longer a need to worry so much about specific disks or computers. Any one of them could break down, and yet it would be no more than the inconvenience of having to get new hardware and restore data – it’d probably take a few days.

The key to this: everything that matters, now exists in at least three places in the world.

I’m running a mostly-Mac operation here, so that evidently influences some of the choices made – but not all, and I’m sure there are equivalent solutions for Windows and Linux.

This is the setup at JeeLabs:

  • one personal computer per person
  • a central server

Sure, there are lots of other older machines around here (about half a dozen, all still working fine, and used for various things). But our digital lives don’t “reside” on those other machines. Three computers, period.

For each, there are two types of backups: system recovery, and vital data.

System recovery is about being able to get back to work quickly when a disk breaks down or some other physical mishap. For that, I use Carbon Copy Cloner, which does full disk tree copying, and is able to create bootable images. These copies include the O/S, all installed apps, everything to get back up to a running machine from scratch, but none of my personal data (unless you consider some of the configuration settings to be personal).

These copies are made once a day, a week, or a month – some of these copies are fully automatic, others require me to hook up a disk and start the process. So it’s not 100% automated, but I know for sure I can get back to a running system which is “reasonably” close to my current one. In a matter of hours.

That’s 3 computers with 2 system copies for each. One of the copies is always off-site.

Vital data is of course just that: the stuff I never want to lose. For this, I now use CrashPlan+, with an unlimited 10-computer paid plan. There are a couple of other similar services, such as BackBlaze and Carbonite. They all do the same: you keep a process running in the background, which pumps changes out over the internet.

In my case, one of the copies goes to the CrashPlan “cloud” itself (in the US), the other goes to a friend who also has fast internet and a CrashPlan setup. We each bought a 2.5″ USB-powered disk with lots of storage, placed our initial backups on them, and then swapped the drives to continue further incremental backups over the net.

The result: within 15 minutes, every change on my disk ends up in two other places on this planet. And because these backups contain history, older versions continue to be available long after each change, and even after deletion (I limit the history to 90 days).

That’s 1 TB of data, always in good shape. Virtually no effort, other than an occasional glance at the menu bar to see that the backup is operating properly. If any of these backup streams fails for 3 or more days, a warning email arrives in my inbox (which is at an ISP, i.e. off-site). Once a week I get a concise backup status report, again via email.

The JeeLabs server VM’s get their own daily backup to Amazon S3, which means I can re-launch them as EC2 instances in the cloud if there is a serious problem with the Mac Mini used as server here. See an older post for details.

Yes, this is all fairly obvious: get your backups right and you get to sleep well at night.

But what has changed is that I no longer use the always-on server as “stable disk” for my laptop. I used to try putting more and more data on the central server here, since it was always on and available anyway – which means that for really good performance, you need a 1 Gbit wired ethernet connection. Trivial stuff, but not so convenient when sitting on the couch in the living room. And frankly also a bit silly, since I’m the only person using those large PDF and code collections I’m relying on more and more these days.

So now, I’ve gone back to the simplest possible setup: one laptop, everything I need on there (several hundred GB in total), and an almost empty server again. On the server, just our music collection (which is of course shared) and the really always-on stuff, i.e. the JeeLabs server VM’s. Oh, and the extra hard disk for my friend’s backups…

Using well under 1 TB for an entire household will probably seem ridiculous. But I’m really happy to have a (sort of) NAS-less, and definitely RAID-less, setup here.

Now I just need to sort out all the old disks….

Inventing on Principle

In Musings on Dec 11, 2012 at 00:01

It’s going to take almost an hour of your time to watch this presentation:

Bret Victor – Inventing on Principle from CUSEC.

Let me just say: this is sooo worth it, from the beginning all the way to the very end. No need to view it now (it’s been out for 10 months) – but when you do, you’ll enjoy it.

O n e   h o u r   o f   m i n d b l o w i n g   i n s i g h t s   . . .

Bret Victor’s site is here. My fault for having seen it before, but never paying proper attention.

Stumbled onto this via a related fascinating development, called CodeBook.

Idiots and the Universe

In Musings on Dec 10, 2012 at 00:01

Check out this quote:

Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning. — Rick Cook, The Wizardry Compiled

The latest trend is to add comments to weblog posts, praising me in all sorts of truly wonderful (but totally generic) ways. The only purpose is to get that comment listed, with a reference to some site peddling some stuff. Fortunately, the ploy is trivial to detect – so easy, in fact, that filtering can be fully automated via the Akismet web service, plus a WordPress plug-in by that same name.

Here’s the trend on this weblog (snapshot taken about a week ago):

Screen Shot 2012 12 03 at 18 33 14

The drop comes from the fact that all posts on this weblog are automatically closed for comments after two weeks, and there were no new posts in July and August. So it’s just a bunch of, eh, slightly desperate people pounding on the door.

One of them got through in the past six months. The other 326 just wasted their time.

Something similar is happening on the discussion forum. And behind the scenes, some new work is now being done to make those constant attempts there just as futile :)

And it’s not even Christmas yet!

In Uncategorized on Dec 9, 2012 at 00:01

Winter has set in here – it’s down to minus 15°C at night, with this view from JeeLabs:

DSC 4296

Speaking of Christmas: gives me an excuse to talk about some administrative details…

We are running the Shop and shipping orders as fast as we can right up until the Christmas break – including some new products as they become available. If you’re planning to receive items in time for Christmas, we recommend you place your order before the following dates:

UK

  • Standard 1st class: 18th Dec
  • Special Delivery Request: 21st Dec

Mainland Europe

  • Airmail/Airsure: 12th Dec
  • Special Delivery Request: 20th Dec

Outside Europe

  • Airmail: right now please!
  • Special Delivery Request: 18th Dec

If you don’t manage to get your order in before these dates, we will still process it right up until the 22nd Dec – but since the sleigh and reindeer are out on a rush job, you take your chances…

Fourier analysis

In Hardware on Dec 8, 2012 at 00:01

The three scope shots shown yesterday illustrated how the output signal moves further and further away from the “ideal” input sine waves, near the limits of the AD8532 op-amp.

This was all based on a vague sense of how “clean” the wave looks. Let’s now investigate a bit deeper and apply an FFT to these signals. First, the 500 KHz signal from my frequency generator:

SCR09

You can see that peak #1 is the 500 KHz signal, but there’s also a peak #2 harmonic at 1 MHz, i.e. twice that frequency, and an even weaker one at 1.5 MHz.

My frequency generator is not perfect, but let’s not forget to put things in perspective:

  • peak #1 is at roughly 10 dBm
  • peak #2 is at roughly -40 dBm, i.e. 50 dB less

First off: I really should have set the scope to dBV. But the image would have looked the same in this case – just a different scale, so let’s figure out this dBm thing first:

  • 0 dBm is defined as 1 mW of power
  • the generator was set to drive a 50 Ω load, but I forgot to enable it
  • therefore the “effective load” is 100 Ω (off by a factor of two, long story)
  • the signal is swinging ± 1 V around a 2V base level, i.e. 0.707 V (RMS)
  • so the signal is driving ± 7.07 mA into the load (plus 14.14 mA DC)
  • power is I x V, i.e. 7.07 mA x 0.707 V x 2 (for the termination mistake) = 10 mW

Next thing to note is that dB and dBm (decibels) use a logarithmic scale. That’s a fancy way of saying that each step of 10 means a factor of 10 more (or less) power. From 0 to 10 dBm is a factor 10, i.e. from 1 mW to 10 mW. From 10 to 20 dBm is again a factor 10, i.e. 10 mW to 100 mW, etc. Likewise, -10 dBm is one tenth of 0 dBm (0.1 mW), etc.

The 500 KHz signal (peak #1) is therefore 10 mW (10 dBm), and the 1 MHz harmonic is roughly 100,000 times as weak at 0.1 µW (-40 dBm). It looks like a huge peak on the screen, but each vertical division down is one tenth of the value. The vertical scale on screen covers a staggering 1:100,000,000 power level ratio.
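
For those who prefer code over mental arithmetic, here’s a tiny sketch of the conversions used above – the signal values are the ones worked out in this post, the helper function is just my own:

    #include <stdio.h>
    #include <math.h>

    // convert a power level in watts to dBm (0 dBm is defined as 1 mW)
    static double wattsToDbm(double watts) {
        return 10.0 * log10(watts / 0.001);
    }

    int main() {
        double vrms = 0.707;                 // +/- 1 V sine, i.e. 0.707 V RMS
        double load = 100.0;                 // the "effective load", see above
        double p1 = vrms * vrms / load * 2;  // x2 for the termination mistake
        printf("peak #1: %.1f dBm\n", wattsToDbm(p1));      // 10.0 dBm
        printf("peak #2: %.1f dBm\n", wattsToDbm(0.1e-6));  // 0.1 uW -> -40.0 dBm
        return 0;
    }

Which lines up with the two peak levels listed above.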

That 500 KHz sine wave is in fact very clean, despite the extra peaks seen at this scale.

Now let’s look at the same signal, on the output of the op-amp:

SCR10

Not too bad (the second peak is still less than 1/30,000 of the original) – which is why the output shape at 500 KHz still looks very much like a pure sine wave.

At 1 MHz, the secondary peaks become a bit more pronounced:

SCR05 SCR06

And at 2 MHz, you can see that the output harmonics are again a lot stronger:

SCR07 SCR08

Not only has the level of the 2 MHz signal dropped from 9.23 dBm to 6.59 dBm, the second harmonic at 4 MHz is now only a bit under 1/100th of the main peak. And that shows itself as a severely distorted sine wave in yesterday’s weblog post.

In case you’re wondering: those other smaller peaks around 1 MHz come from public AM radio – there are some strong transmitters, located only a few km from here!

Anyway – I hope you were able to distill some basic intuition from this sort of signal analysis, if this is all new to you. It’s quite a valuable technique, and well within reach these days, since most recent scopes include an FFT capability – the bread and butter of the analog electronics world…

Let’s now get back to digital again. Ah, bits and bytes, sooo much simpler!

Op-amp limits

In Hardware on Dec 7, 2012 at 00:01

Let’s look at that AD8532 dual op-amp mentioned yesterday and start with its “specs”:

Screen Shot 2012 11 24 at 22 54 50

The slew rate is relatively low for this unit: its output voltage can only rise 5V per µs. In a way, this explains the ≈ 0.1 µs phase shift in the image, which I’ll repeat here:

SCR17

As you can see, the 500 KHz sine wave takes about 200 ns to rise 1 division, i.e. 0.5V, so it’s definitely nearing the limit of this op-amp. Let’s push it a bit with 1 and 2 MHz sine waves:

SCR19

SCR20

Whoa! As you can see, the output cannot quite reproduce a 1 MHz input signal faithfully (there’s an odd little ripple), let alone 2 MHz in the second screen, which starts to diverge badly in both shape and amplitude. The vertical scale is 0.5V per division.
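
There’s a quick back-of-the-envelope check which confirms this – a little sketch of my own, using the standard slew-rate formula for a full-swing sine wave:

    #include <stdio.h>

    int main() {
        // the highest frequency a slew-rate-limited op-amp can reproduce
        // for a sine of amplitude A is: f_max = SR / (2 * pi * A)
        double slewRate = 5e6;   // 5 V/us, expressed in V/s
        double amplitude = 1.0;  // the +/- 1 V swing used in these tests
        double fMax = slewRate / (2 * 3.14159265 * amplitude);
        printf("max clean sine: %.0f kHz\n", fMax / 1e3);  // ~ 796 kHz
        return 0;
    }

That’s why 500 KHz still looks fine, 1 MHz shows that odd ripple, and 2 MHz falls apart.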

Sine waves are “pure frequencies” – in a vague manner of speaking. It’s the natural way for things to oscillate (not just electrical signals – sine waves are everywhere!). The field of Fourier analysis is based on one of the great mathematical discoveries: that all repetitive signals (or motions) can be re-interpreted as a sum of sines and cosines with different amplitudes and frequencies.
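
For a signal f(t) repeating with angular frequency ω, the classic form of this decomposition is:

    f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos(n \omega t) + b_n \sin(n \omega t) \right)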

You don’t have to dive into the math to benefit from this. Most modern oscilloscopes support an FFT mode, an amazing computed transformation which decomposes a repetitive signal into those sine waves. One of the simplest uses of FFT is to get a feel for how “pure” signals are, i.e. how close to a pure sine wave.

Unfortunately, I have too many FFT scope shots for one post, so tomorrow I’ll post the rest and finish this little diversion into signal analysis. It’ll allow us to compare the above three signals in a more quantitative way.

Power booster

In Hardware on Dec 6, 2012 at 00:01

The trouble with the Arbitrary Waveform Generator I use is that it has a fairly limited output drive capability. I thought it was broken, and returned it to TTi, but they tested it and couldn’t find any problem. It’ll drive a 50 Ω load, but my habit of raising the signal to stay above 0V (for single-supply use) probably pushed it too far via that extra DC offset.

I’d like to use a slow ramp as sort of a controllable power supply for JeeNodes and the AA Power Board to find out how they behave with varying input voltages. A simple sawtooth running from 0.5V to 4V would be very convenient – as long as it can drive 50 mA or so.

Here’s one way to do it:

Volt follower

This is an op-amp, connected in such a way that the output will follow exactly what the input is doing – hence the name buffer amplifier or “voltage follower”.

Quick summary of how it works – an op-amp always increases its output when “+” is above “-“, and vice versa. So whatever the output is right now, if you raise the “+” pin, the output will go up, until the “-” pin is at the same value.

It seems pointless, but the other key property of an op-amp is that its inputs have a very high impedance. In other words: they draw nearly no current – the load on the input signal is negligible.

The output current is determined by the limits of the op-amp. And the AD8532 from Analog Devices can drive up to 250 mA – pretty nice for a low-power supply, in fact!

Here’s the experimental setup (only one of the two op-amps is being used here):

DSC 4273

Here you can see that the input voltage is exactly the same as the output:

SCR17

(yellow = input signal, blue = output signal, a 500 KHz sine wave between 1V and 3V)

Well, almost…

As you can see, there’s a phase shift. It’s not really a big deal – keep in mind that the signal used here is a high-frequency wave, and that shift is in fact less than 0.1 µs. Irrelevant for a power supply with a slow ramp.

Tomorrow I’ll bombard you with scope shots, to illustrate how this op-amp based voltage follower behaves when gradually pushed beyond its capabilities. Nasty stuff…

Keep in mind that the point of this whole setup is to drive more current than the function generator can provide. As a test, I connected a 100 Ω resistor over the output, and sure enough nothing changes. The AD8532 will simply drive the 10..30 mA through the resistor and still maintain its output voltage.

The beauty of op-amps is that all this just works!

But there is a slight problem: the AD8532 can drive up to 250 mA, but it’s not short-circuit proof. If we ever draw over 250 mA, we’ll probably damage it. The solution is simple, once you think about how op-amps really work (from the datasheet):

Screen Shot 2012 11 24 at 20 26 21

The extra resistor limits the output current to a safe value, but the side-effect is that the more current you draw, the less “headroom” you end up with: if we draw 100 mA, then that resistor will drop about 2V, so the maximum output voltage will be around 3V when the supply voltage is 5V.

If you look at my experimental setup above, you’ll see a 22 Ω resistor tied to each output.
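
A quick sanity check on that 22 Ω value – my own arithmetic, using the 250 mA limit from the datasheet:

    #include <stdio.h>

    int main() {
        double vsupply = 5.0;   // op-amp supply voltage
        double r = 22.0;        // series resistor on each output
        // worst case: output shorted to ground at full output swing
        printf("short-circuit current: %.0f mA\n", vsupply / r * 1e3);  // ~227 mA
        // headroom lost when driving a 100 mA load
        printf("voltage drop at 100 mA: %.1f V\n", 0.100 * r);          // ~2.2 V
        return 0;
    }

So even a dead short stays just under the 250 mA limit, at the cost of a couple of volts of headroom under load.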

That’s it. This simple setup should make it possible to explore how simple circuits work with varying supply voltages. A great way to simulate battery limits, I hope!

Ringing MOSFETs

In Hardware on Dec 5, 2012 at 00:01

The LED Node uses MOSFETs to drive the red, green, and blue LED strings, respectively.

Here’s the circuit (note that the LED strips must also include current-limiting resistors):

JC s Grid page 39

Well… in the LED Node v1, input pin B and resistor R2 are missing, and R1 is 10 kΩ.

This leads to a fair amount of electrical trouble – have a look:

SCR31

The yellow line is the input, a 6V signal in this case (not 3.3V, as used in the LED Node). The blue line is the voltage over the MOSFET. The input is a 1000 Hz square wave with 20% duty cycle, i.e. 200 µs high, 800 µs low.

When the input voltage goes low, the N-MOSFET switches off. In this case, I don’t use an actual LED strip as load, but a 1 Ω power resistor, driven from a 2V power supply line to keep the heat production manageable during these tests. So that’s 2 A of current going through the MOSFET, and when it switches off that happens so quickly that the current simply has nowhere to go (the power supply is not a very nice conductor for such high-frequency events, alas).

As you can see, this signal ringing is so strong in this case that the voltage overshoots the 2V supply rail many times over.

Here are the leading edge (MOSFET turns on & starts to draw 2 A) and the trailing edge (MOSFET turns off & breaks the 2 A current) of that cycle again, in separate screenshots:

SCR34 SCR36

The horizontal time scale is 1 µs per division.

The vertical scales are 0.5 V and 5 V (!) per division for the input (yellow) and MOSFET voltage (blue), respectively. Note the 30V overshoot when turning that MOSFET off!

This has all sorts of nasty consequences. For one, such high frequency signals will vary across the length of the LED strip, which will affect the intensities and color balance.

But what’s much worse, is the electromagnetic interference these signals will generate. There’s probably a strong 5..10 MHz component in there. Yikes!
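
Before looking at fixes, it’s worth estimating where that 30V spike comes from: it’s classic inductive kickback from interrupting 2 A through the loop inductance of the wiring. Working backwards – with the switch-off time a rough guess on my part, so treat the result as an order of magnitude:

    #include <stdio.h>

    int main() {
        double di = 2.0;      // current being interrupted, in amps
        double dt = 100e-9;   // estimated switch-off time (rough guess)
        double v = 30.0;      // overshoot observed on the scope
        // V = L * di/dt  ->  L = V * dt / di
        printf("implied loop inductance: %.2f uH\n", v * dt / di * 1e6);  // ~1.5 uH
        return 0;
    }

A microhenry or so is quite plausible for ordinary hookup wiring – no exotic explanation needed.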

There are various solutions. One is to simply dampen the turn-on / turn-off slopes by inserting a resistor in series between the µC’s output pin and the MOSFET’s gate. If you recall the schematic above, I switched the output signal to pin B, made R1 = 1 MΩ and R2 = 1 kΩ. Here’s the effect – keeping all other conditions the same as before:

SCR35 SCR37

What a difference! Sure, the flanks have become quite soft, but that ringing has also been reduced to one fifth of the original case. And those soft flanks (about 2 µs on the blue line) will probably just make it easier to dim the LED strips to very low levels.
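
The softness of those flanks is roughly what you’d expect from the RC filter formed by the new gate resistor and the MOSFET’s gate capacitance. A rough estimate – note that the ~1 nF figure is my assumption, not a value from the STN4NF03L datasheet:

    #include <stdio.h>

    int main() {
        double r2 = 1000.0;    // the 1 kOhm series gate resistor
        double cGate = 1e-9;   // effective gate capacitance, ~1 nF (assumed)
        double tau = r2 * cGate;
        // a few RC time constants per edge lines up with the ~2 us
        // soft flanks visible on the blue trace
        printf("gate RC time constant: %.1f us\n", tau * 1e6);  // 1.0 us
        return 0;
    }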

The little hump at about 1V is when this particular MOSFET starts to switch – these units were specifically selected to switch at very low voltages, so that they would be fully switched on at 3.3V. This helps reduce heat generation in the MOSFETs – an important detail when you’re switching up to 2 Amps. And indeed, the STN4NF03L MOSFETs used here don’t get more than hand-warm @ 2A – pretty amazing technology!

The new LED Node v2 will include those extra resistors in the MOSFET gate, obviously. And that 1 kΩ value for R2 seems just about right.

The other resistor (R1) is a pull-down, it only serves to avoid unpleasant power-up spikes – by keeping the MOSFET off until the µC enables its I/O pins and starts driving it.

In case you’re wondering about the ringing on the yellow input trace: there’s something called the Miller effect, which amplifies the capacitance between the drain and the gate, causing strong signals on the output to leak back through to the gate. The signal generator driving the input has a certain output impedance, and can’t fully suppress these glitches.

Oh, by the way, have a nice Sinterklaas! :)

Meet the Color Plug

In Hardware on Dec 4, 2012 at 00:01

Yet another plug designed by Lennart Herlaar:

DSC 4291

It contains the TAOS TCS3414 color sensor. JeeLib now includes a new ColorPlug class which simplifies reading out this chip, as well as a colorDemo.ino sketch:

Screen Shot 2012 12 03 at 15 17 47

Sample output:

Screen Shot 2012 12 03 at 14 03 40

One nice use for this sensor and code is to determine the color temperature of white light sources, such as incandescent lamps, CFL’s, and LED’s. I’m trying to find a pleasant replacement for a few remaining warm white halogen lights around the house here and such a unit (especially portable) could be very handy when shopping for alternatives.
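
As an aside on how such a portable unit might compute this: a common approach is McCamy’s approximation, which maps CIE (x, y) chromaticity to a correlated color temperature. This is just a sketch – it assumes the raw sensor readings have already been converted to CIE XYZ using the matrix from the TCS3414 datasheet (the demo values below roughly correspond to daylight):

    #include <stdio.h>

    int main() {
        // assumed to come from the sensor, via the datasheet's RGB->XYZ matrix
        double X = 95.0, Y = 100.0, Z = 108.0;
        double x = X / (X + Y + Z);
        double y = Y / (X + Y + Z);
        // McCamy's approximation for correlated color temperature
        double n = (x - 0.3320) / (0.1858 - y);
        double cct = 449*n*n*n + 3525*n*n + 6823.3*n + 5520.33;
        printf("approx color temperature: %.0f K\n", cct);  // ~6450 K
        return 0;
    }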

Hardware description in the Café to follow soon, as well as in the JeeLabs shop.

Onwards!

Meet the Precision RTC Plug

In Hardware on Dec 3, 2012 at 00:01

Here’s another new board, the Precision RTC Plug – this is a revision of a design by Lennart Herlaar from almost a year ago – my, my, this year sure went by quickly:

DSC 4292

The current RTC Plug from JeeLabs will be kept as a low-end option, but this one reduces drift by an order of magnitude if you need it: at most ≈ 1 second per week off, over a temperature range of 0 .. 40°C. Or one minute per year.
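
Those two figures are really the same spec, by the way. Assuming a ±2 ppm clock accuracy – which is what the numbers above work out to – the arithmetic goes like this:

    #include <stdio.h>

    int main() {
        double ppm = 2e-6;  // assumed +/- 2 ppm clock accuracy
        printf("drift per week: %.1f s\n", ppm * 7 * 24 * 3600);    // ~1.2 s
        printf("drift per year: %.0f s\n", ppm * 365 * 24 * 3600);  // ~63 s, about a minute
        return 0;
    }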

Drift can go up to twice that for the full -40 .. +85°C range, but that’s still one sixth of the drift of the crystal used in the original RTC Plug – and considerably better than that, in fact, since a plain crystal gets much worse at the temperature extremes. Here’s a comparison between both plugs, from the datasheet:

Screen Shot 2012 12 02 at 12 58 23

The way the Precision RTC works is with a Temperature Compensated Crystal Oscillator (TCXO): once a minute, the approximate temperature is determined and the capacitance used by the crystal oscillator is adjusted ever so slightly, to try and keep the 32,768 Hz frequency right on the dot. Since the chip also knows how long it has been running, it can even apply an “aging” correction to compensate for this small effect in every crystal.

The temperature can be read out, but it’s only specified as accurate to ± 3°C.

No need to use any special software for this – all the normal clock functions are available through the same code as used with the original RTC Plug. If you want to use the fancy functions, or perhaps calibrate things further for an even lower drift, you can access all the registers via normal I2C read and write commands.

The board will be added to the shop in a few days, and the wiki page on the Café updated.

Meet the JeeNode Micro v2

In Hardware on Dec 2, 2012 at 00:01

Just in yesterday, haven’t even had the time yet to assemble it!

DSC 4294

Dig that JeeLabs logo on there! :)

As you can see, the shape and layout have not changed much in this revision:

JMv2 traces

Here’s the main part of the new JeeNode Micro v2 schematic:

Screen Shot 2012 12 01 at 16 23 13

Several major changes:

  • the power to the RFM12B module is now controlled via a MOSFET
  • the PWR pin is connected to the +3V pin with 2 diodes
  • there’s room for an optional boost regulator (same as on the AA Power Board)
  • and there’s even room for a RESET button

When you look at the PCB’s, you’ll see that the extra headers have all been removed, there is just one 9-pin header left – the “IOX” signal from v1 now controls power to the RFM12B.

Through a sneaky placement of the ISP header, there is still a way to connect a single-cell AA or AAA battery to opposite ends of the board.

This extra power control is intended to reduce the current consumption during startup, but I haven’t tried it yet. The idea is that the RFM12B will not be connected to the power source before the ATtiny starts and verifies that the voltage level is high enough to do so. After that, it can be turned on and immediately put to sleep – in practice, its power probably never needs to be turned off again.
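
Here’s a minimal sketch of that startup logic – my own guess, not the actual firmware: the pin number is hypothetical, and the Vcc readout uses the usual AVR trick of measuring the internal 1.1V bandgap against the supply:

    #include <Arduino.h>

    const byte RFM_POWER_PIN = 8;  // hypothetical pin - gate of the power MOSFET

    // measure Vcc in millivolts via the internal 1.1V bandgap reference
    // (register value for an ATtiny84; a real version would discard the
    // first reading after switching the ADC channel)
    static int vccRead() {
        ADMUX = 33;               // select the bandgap channel
        delay(2);                 // let the reference settle
        ADCSRA |= bit(ADSC);      // start a conversion
        while (ADCSRA & bit(ADSC))
            ;
        return (1100L * 1023) / ADC;
    }

    void setup() {
        if (vccRead() >= 2200) {                // enough voltage for the radio?
            digitalWrite(RFM_POWER_PIN, HIGH);  // assuming HIGH = power on
            pinMode(RFM_POWER_PIN, OUTPUT);
            // ... then initialise the RFM12B and put it to sleep right away
        }
    }

    void loop() {}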

The other main change has to do with the different power options:

  • 2.2 .. 3.8V through the +3V pin, intended for 2-cell batteries of various kinds
  • 3.5 .. 5.1V through the PWR pin, for 5V and LiPo use
  • 0.9 .. 5.1V through the PWR pin when the boost regulator is present

The latter might seem the most flexible one, but keep in mind that the boost regulator has a 15 .. 30 µA idle current draw, even when the rest of the circuit is powered down, so this is not always the best option (and the extra switching supply components add to the cost).

As you can imagine, I’ll be running some final tests on all this in the next few days – but the new unit is now available for pre-order in the shop (“direct power” version only for now, the boost version will be available later this month). Design files are in the Café.

Extracting data from P1 packets

In Software on Dec 1, 2012 at 00:01

Ok, now that I have serial data from the P1 port with electricity and gas consumption readings, I would like to do something with it – like sending it out over wireless. The plan is to extend the homePower code in the node which is already collecting pulse data. But let’s not move too fast here – I don’t want to disrupt a running setup before it’s necessary.

So the first task ahead is to scan / parse those incoming packets shown yesterday.

There are several sketches and examples floating around the web on how to do this, but I thought it might be interesting to add a “minimalistic sauce” to the mix. The point is that an ATmega (let alone an ATtiny) is very ill-suited to string parsing, due to its severely limited memory. These packets consist of several hundreds of bytes of text, and if you want to do anything else alongside this parsing, then it’s frighteningly easy to run out of RAM.

So let’s tackle this from a somewhat different angle: what is the minimal processing we could apply to the incoming characters to extract the interesting values from them? Do we really have to collect each line and then apply string processing to it, followed by some text-to-number conversion?

This is the sketch I came up with (“Look, ma! No string processing!”):

Screen Shot 2012 11 29 at 20 44 16

This is a complete sketch, with yesterday’s test data built right into it. You’re looking at a scanner implemented as a hand-made Finite State Machine. The first quirk is that the “state” is spread out over three global variables. The second twist is that the above logic ignores everything it doesn’t care about.

Here’s what comes out, see if you can unravel the logic (see yesterday’s post for the data):

Screen Shot 2012 11 29 at 20 44 49

Yep – that’s just about what I need. This scanner requires no intermediate buffer (just 7 bytes of variable storage) and also very little code. The numeric type codes correspond to different parameters, each with a certain numeric value (I don’t care at this point what they mean). Some values have 8 digits precision, so I’m using a 32-bit int for conversion.
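
Since the sketch itself is only shown as a screenshot, here’s a minimal re-creation of the same idea – my own code, not the original, but with the same three globals (7 bytes of state in all) and the same “ignore everything else” attitude:

    #include <stdio.h>
    #include <stdint.h>

    static int32_t value;  // value being accumulated (32 bits, 8 digits fit)
    static int16_t type;   // parameter code, built from the digits before '('
    static uint8_t inVal;  // are we between '(' and ')' ?

    static void scan(char c) {
        if (c == '\n' || c == '\r')
            type = 0;                   // new line: start a fresh type code
        else if (c == '(') {
            inVal = 1;
            value = 0;
        } else if (c == ')') {
            if (inVal)
                printf("type %d value %ld\n", type, (long) value);
            inVal = 0;
        } else if (c >= '0' && c <= '9') {
            if (inVal)
                value = value * 10 + (c - '0');  // decimal point ignored, i.e. fixed-point
            else
                type = type * 10 + (c - '0');
        }
        // every other character is simply ignored
    }

    int main() {
        const char* test = "1-0:1.8.1(00123.456*kWh)\r\n";
        for (const char* p = test; *p; ++p)
            scan(*p);   // prints: type 10181 value 123456
        return 0;
    }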

This will easily fit, even in an ATtiny. The moral of this story is: when processing data – even textual data – you don’t always have to think in terms of strings and parsing. Although regular expressions are probably the easiest way to parse such data, most 8-bit microcontrollers simply don’t have the memory for such “elaborate” tools. So there’s room for getting a bit more creative. There’s always a time to ask: can it be done simpler?

PS. I had a lot of fun coming up with this approach. Minimalism is an addictive game.