Computing stuff tied to the physical world

Archive for the ‘Musings’ Category

Getting back in the groove

In Musings on Sep 30, 2015 at 00:01

This will be the last post in “summer mode”. Next week, I’ll start posting again with articles that will end up in the Jee Book, as before – i.e. trying to create a coherent story again.

The first step has just been completed: clearing up my workspace at JeeLabs. Two days ago, every flat surface in this area was covered with piles of “stuff”. Now it’s cleaned up:

IMG 0154

On the menu for the rest of this year: new products, and lots of explorations / experiments in Physical Computing, I hope. I have an idea of where to go, but no definitive plans. There is a lot going on, and there’s a lot of duplication when you surf around on the web. But this weblog will always be about trying out new things, not just repeating what others are doing.

My focus will remain aimed at “Computing stuff tied to the physical world” as the JeeLabs byline says, in essentially two ways: 1) to improve our living environment in and around the house, and 2) to have fun and tinker with low-cost hardware and open source software.

For one, I’d like to replace the wireless sensor network I’ve been running here, or at least gradually evolve all of the nodes to new ARM-based designs. Not for the sake of change but to introduce new ideas and features, get even better battery lifetimes, and help me further in my quest to reduce energy consumption. I’d also like to replace my HouseMon 0.6 setup which has been running here for years now, but with virtually no change or evolution.

An idea I’d love to work on is to sprinkle lots of new room-node-like sensors around the house, to find out where the heat is going – then correlate that with outside temperature and wind direction, for example. Is there some window we can replace, or some other measure we could take to reduce our (still substantial) gas consumption during the cold months? Perhaps the heat loss is caused by the cold rising from our garage, below the living room?
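For what it’s worth, the kind of correlation I have in mind is simple enough to sketch in a few lines of JavaScript – the numbers and the pearson() helper below are made up, purely to illustrate the idea:

```javascript
// Hypothetical sketch: correlate heating demand with outside temperature.
// The data arrays and the pearson() helper are illustrative, not JeeLabs code.
function pearson(xs, ys) {
  const n = xs.length;
  const mean = a => a.reduce((s, v) => s + v, 0) / a.length;
  const mx = mean(xs), my = mean(ys);
  let num = 0, dx2 = 0, dy2 = 0;
  for (let i = 0; i < n; i++) {
    const dx = xs[i] - mx, dy = ys[i] - my;
    num += dx * dy;
    dx2 += dx * dx;
    dy2 += dy * dy;
  }
  return num / Math.sqrt(dx2 * dy2);
}

// Daily averages: outside temperature vs. heating gas used (made-up numbers).
const outsideTemp = [2, 5, 8, 12, 15];
const gasUsed     = [9, 7, 6,  3,  2];
console.log(pearson(outsideTemp, gasUsed).toFixed(2)); // strongly negative
```

A strongly negative coefficient would just confirm the obvious (colder outside means more gas), but computed per room it could point at the one window or wall leaking the most.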

Another long-overdue topic is to start controlling some appliances over wireless, not just collecting data from what are essentially send-only nodes. That’s very different, since there is usually power nearby for these nodes, and they need good security against replay attacks.

I’ll want to be able to see the basic “health” indicators of the house at a glance, perhaps shown inconspicuously on a screen on the wall somewhere (as well as on a mobile device).

As always, all my work at JeeLabs will be fully open source for anyone to inspect, adopt, re-use, extend, modify, whatever. You do what you like with it. If you learn from it and enjoy it, that’d be wonderful. And if you share and give back your ideas, time, or code: better still!

Stay tuned. Lots of fun with bits, electrons, and molecules ahead :)

Shedding weight

In Musings on Sep 23, 2015 at 00:01

I’ve been on a weight loss diet lately. In more ways than one…

As an old fan of the Minimal Mac weblog (now extinct), I’ve always been intrigued by simplification. Fewer applications, a less cluttered desk (oops, not there yet!), simpler tools, and leaner workflows. And with every new laptop over the years, I’ve been toning down the use of tons of apps, widgets, RSS feeds, note taking systems, and reminders.

Life is about flow and zen, not about interruptions or being busy. Not for me, anyway.

One app for all my documents (DevonThink), one app for all my quick notes (nvAlt), one programming-editor convention (vim/spacemacs), one on-line backup system (Arq), one off-line backup (Time Machine), one app launcher / search tool (Spotlight) … and so on.

I’ve recently gone back to doing everything on a single (high-end Mac) laptop. No more tinkering with two machines, Dropbox, syncing, etc. Everything in one place, locally, with a nice monitor plugged in when at my desk. That’s 1920×1200 pixels when on the move, and 2560×1600 otherwise, all gorgeously retina-sharp. I find it amazing how much calmer life becomes when things remain the same every time you come back to them.

I don’t have a smartphone, which probably puts me in the freaky Luddite category. So be it. I now only keep a 4 mm thin credit-card sized junk phone in my pocket for emergency use.

We’ve gone from an iPad each to a shared one for my wife Liesbeth and me. It’s mostly used for internet access and stays in the living room, like newspapers did in the old days.

I’ve gone back to using an e-paper based reader for when I want to sit on the couch or go outside and read. It’s better than an iPad because it’s smaller, lighter, and reflective rather than backlit, which is dramatically better in daylight than an LCD screen. At night I read less, because in the end it’s much nicer to wake up early and go enjoy daylight again. What a concept, eh?

While reading, I regularly catch myself wanting to access internet. Oops, can’t do. Great!

As for night-time habits: it’s astonishing how much better I sleep when not looking at that standard blueish LCD screen in the evening. Sure, I do still burn the midnight oil banging away on the keyboard, but thanks to a utility called f.lux the screen white balance follows the natural reddening colour shift of the sun across the day. Perfect for a healthy sleep!

Our car sits unused for weeks on end sometimes, as we take the bike and train for almost everything nowadays. It’s too big a step to get rid of it – maybe in a few years from now. So there’s no shedding weight there yet, other than in terms of reducing our CO2 footprint.

And then there’s the classical weight loss stuff. For a few months now, I’ve been following the practice of intermittent fasting, combined with picking up my old habit of going out running again, 2..3 times per week. With these two combined, losing real weight has become ridiculously easy – I’ve shed 5 kg, with 4 more to go until the end of the year.

Eat less and move more – who would have thought that it actually works, eh?

But hey, let me throw in some geek notes as well. Today, I received the Withings Pulse Ox:

DSC 5144

(showing the heart rate sensor on the back – the front has an OLED + touch display)

It does exactly what I want: tell the time, count my steps, and measure my running activity, all in a really small package which should last well over a week between charges. It sends its collected data over BLE to a mobile device (i.e. our iPad), with tons of statistics.

Time will tell, but I think this is precisely the one gadget I want to keep in my pocket at all times. And when on the move: keys, credit cards, and that tiny usually-off phone, of course.

Except for one sick detail: why does the Withings “Health Mate” app insist on sending out all my personal fitness tracking data to their website? It’s not a show-stopper, but I hate it. This means that Withings knows all about my activity, and whenever I sync: my location.

So here’s an idea for anyone looking for an interesting privacy-oriented challenge: set up a Raspberry Pi as firewall + proxy which logs all the information leaking out of the house. It won’t address mobile use, but it ought to provide some interesting data for analysis over a period of a few months. What sort of info is being “shared” by all the apps and tools we’ve come to rely on? Although unfortunately, it won’t be of much use with SSL-based sessions.

Bandwagons and islands

In Musings on Sep 16, 2015 at 00:01

I’ve always been a fan of the Arduino ecosystem, hook, line, and sinker: that little board with its AVR microcontroller, the extensibility through those headers and shields, and the multi-platform IDE with its simple runtime library and access to all the essential hardware.

So much so, that the complete range of JeeNode products has been derived from it.

But I wanted a remote node, a small size, a wireless radio, flexible sensor options, and better battery lifetimes, which is why several trade-offs came out differently: the much smaller physical dimension, the RFM radio, the JeePort headers, and the FTDI interface as an alternative to a built-in USB bridge. JeeNodes owe a lot to the Arduino ecosystem.

That’s the thing with big (even at the time) “standards”: they create a common ground, around which lots of people can flock, form a community, and extend it all in often quite surprising and innovative ways. Being able to acquire and re-use knowledge is wonderful.

The Arduino “platform” has a bandwagon effect, whereby synergy and cross-pollination of ideas lead to a huge explosion of projects and add-ons, on both the hardware and the software side. Just google for “Arduino” … need I say more?

Yet sometimes, being part of the mainstream and building on what has become the “baseline” can be limiting: the 5V convention of the early Arduinos doesn’t play well with most of the newer sensor chips these days, nor is it optimal for ultra low-power use. Furthermore, the Wiring library on which the Arduino IDE’s runtime is based is not terribly modular or suitable for today’s newer µC’s. And to be honest, the Arduino IDE itself is really quite limited compared to many other editors and IDE’s. Last but definitely not least, C++ support in the IDE is severely crippled by the pre-processing applied to turn .ino files into normal .cpp files before compilation.

It’s easy to look back and claim 20-20 vision in hindsight, so in a way most of these issues are simply the result of a platform which has evolved far beyond the original designer’s wildest dreams. No one could have predicted today’s needs at that point in time.

There is also another aspect to point out: there is in fact a conflict w.r.t. what this ecosystem is for. Should it be aimed at the non-techie creative artist, who just wants to get some project going without becoming an embedded microelectronics engineer? Or is it a playground for the tech geek, exploring the world of physical computing, diving in to learn how it works, tinkering with every aspect of this playground, and tracing / extending the boundaries of the technology to expand the user’s horizon?

I have decades of software development experience under my belt (and by now probably another decade of physical computing), so for me the Arduino and JeeNode ecosystem has always been about the latter. I don’t want a setup which has been “dumbed down” to hide the details. Sure, I crave abstraction, to not always have to think about all the low-level stuff, but the fascination for me is that it’s truly open all the way down. I want to be able to understand what’s under the hood, and if necessary tinker with it.

The Arduino technology doesn’t have that many secrets any more for me, I suspect. I think I understand how the chips work, how the entire circuit works, how the IDE is set up, how the runtime library is structured, how all the interrupts work together, yada, yada, yada.

And some of it I’m no longer keen to stick to: the basic editing + compilation setup (“any editor + makefiles” would be far more flexible), the choice of µC (so many more fascinating ARM variants out there than what Atmel is offering), and in fact the whole edit-compile-upload-run cycle seems limiting (over-the-air uploads or visual system construction, anyone?).

Which is why for the past year or so, I’ve started bypassing that oh-so-comfy Arduino ecosystem for my new explorations, starting from scratch with an ARM gcc “toolchain”, simple “makefiles”, and using the command-line to drive everything.

Jettisoning everything on the software side has a number of implications. First of all, things become simpler and faster: fewer tools to use, (much) lower startup delays, and a new runtime library which is small enough to show the essence of what a runtime is. No more.

A nice benefit is that the resulting builds are considerably smaller. Which was an important issue when writing code for that lovely small LPC810 ARM chip, all in an 8-pin DIP.

Another aspect I very much liked is that this has allowed me to learn, and subsequently write about, how the inside of a runtime library really works: how you actually set up a serial port, or a timer, or a PWM output. Even just setting up an I/O pin is closer to the silicon than the digitalWrite(...) abstraction provided by the Arduino runtime.

… but that’s also the flip side of this whole coin: ya gotta dive very deep!

By starting from scratch, I’ve had to figure out all the nitty gritty details of how to control the hardware peripherals inside the µC, tweaking bit settings in some very specific way before it all started to work. Which was often quite a trial-and-error ordeal, since there is nothing you can do other than to (re-) read the datasheet and look at proven example code. Tinker till your hair falls out, and then (if you’re lucky) all of a sudden it starts to work.

The reward for me was a better understanding, which is indeed what I was after. And for you: working examples, with minimal code, explained in various weblog posts.

Most of all this deep-diving and tinkering can now be found in the embello repository on GitHub, and this will grow and extend further over time, as I learn more tricks.

Embello is also a bit of an island, though. It’s not used or known widely, and it’s likely to stay that way for some time to come. It’s not intended to be an alternative to the Arduino runtime, it’s not even intended to become the ARM equivalent of JeeLib – the library which makes it easy to use the ATMega-based JeeNodes with the Arduino IDE.

As I see it, Embello is a good source of fairly independent examples for the LPC8xx series of ARM µC’s, small enough to be explored in full detail when you want to understand how such things are implemented at the lowest level – and guess what: it all includes a simple Makefile-based build system, plus all the ready-to-upload firmware.bin binary images. With the weblog posts and the Jee Book as “all-in-one” PDF/ePub documentation.

Which leaves me at a bit of a bifurcation point as to where to go from here. I may have to row back from this “Embello island” approach to the “Arduino mainland” world. It’s no doubt a lot easier for others to “just fire up the Arduino IDE” and load a library for the new developments here at JeeLabs planned for later this year. Not everyone is willing to learn how to use the command line, just to be able to power up a node and send out wireless radio packets as part of a sensor network. Even if that means making the code a bit bulkier.

At the same time, I really want to work without having to use the Arduino IDE + runtime. And I suspect there are others who do too. Once you’ve developed other software for a while, you probably have adopted a certain work style and work environment which makes you productive (I know I have!). Being able to stick to it for new embedded projects as well makes it possible to retain that investment (in routine, knowledge, and muscle memory).

Which is why I’m now looking for a way to get the best of both worlds: retain my own personal development preferences (which a few of you might also prefer), while making it easy for everyone else to re-use my code and projects in that mainstream roller coaster fashion called “the Arduino ecosystem”. The good news is that the Arduino IDE has finally evolved to the point where it can actually support alternate platforms, including ARM.

We’ll see how it goes… all suggestions and pointers welcome!


In Musings on Sep 9, 2015 at 00:01

No techie post this time, just some pictures from a brief trip last week to Magdeburg:

IMG 0727

… and on the inside, even more of a little playful fantasy world:

IMG 0700

This was designed by the architect Friedensreich Hundertwasser at the turn of this century. It was the last project he worked on, and the building was in fact completed after his death.

Feels a bit like an Austrian (and more restrained) reincarnation of Antoni Gaudí to me.

A playful note added to a utilitarian construction – I like it!

Space tools

In Musings on Sep 2, 2015 at 00:01

It’s a worrisome sign when people start to talk about tools. No real work to report on?

With that out of the way, let’s talk about tools :) – programming tools.

Everyone has their favourite programmer’s editor and operating system. Mine happens to be Vim (MacVim) and Mac OS X. Yours will likely be different. Whatever works, right?

Having said that, I found myself a bit between a rock and a hard place lately, while trying out ClojureScript, that Lisp’y programming language I mentioned last week. The thing is that Lispers tend to use something called the REPL – constantly so, during editing in fact.

What’s a REPL for?

Most programming languages use a form of development based on frequent restarts: edit your code, save it, then re-run the app, re-run the test suite, or refresh the browser. Some development setups have turned this into a very streamlined and convenient fine art. This works well – after all, why else would everybody be doing things this way, right?

Edit file

But there’s a drawback: when you have to stop the world and restart it, it takes some effort to get back to the exact context you’re working on right now. Either by creating a good set of tests, with “mocks” and “spies” to isolate and analyse the context, or by repeating the steps to get to that specific state in case of interactive GUI- or browser-based apps.

Another workaround, depending on the programming language support for it, is to use a debugger, with “breakpoints” and “watchpoints” set to stop the code just where you want it.

But what if you could keep your application running – assuming it hasn’t locked up, that is? So it’s still running, but just not yet doing what it should. What if we could change a few lines of code and see if that fixes the issue? What if we could edit inside a running app?

What if we could in fact build an app from scratch this way? Take a small empty app, define a function, load it in, see if it works, perhaps call the function from a console-style session running inside the application? And then iterate, extend, tweak, fix, add code… live?

This is what people have been doing with Lisp for over half a century. With a “REPL”:

Edit repl

A similar approach has been possible for some time in a few other languages (such as Tcl). But it’s unfortunately not mainstream. It can take quite some machinery to make it work.

While a traditional edit-save-run cycle takes a few seconds, REPL-based coding is instant.

A nice example of this in action is in Tim Baldridge’s videos about Clojure. He never starts up an application in fact: he just fires up the REPL in an editor window, and then starts writing little pieces of code. To try it out, he hits a key combination which sends the parenthesised form currently under the cursor to the REPL, and that’s it. Errors in the code can be fixed and resent at will. Definitions, but also little test calls, anything.

More substantial bits of code are “require”d in as needed. So what you end up with is a REPL context running at all times, with stuff being loaded into it. This isn’t limited to server-side code, it also works in the browser: enter “(js/alert "Hello")” and up pops a dialog. All it takes is the REPL to be running inside the browser, and some websocket magic. In the browser, it’s a bit like typing everything into the developer console, but unlike that setup, you get to keep all the code and trials you write – in the editor, with all its conveniences.


Another recent development in ClojureScript land is Figwheel by Bruce Hauman. There’s a 6-min video showing an example of use, and a very nice 45-min video where he goes into things in a lot more detail.

In essence, Figwheel is a file-driven hot reloader: you edit some code in your editor, you save the file, and Figwheel forces the browser (or node.js) to reload the code of just that file. The implementation is very different, but the effect is similar to Dan Abramov’s React Hot Loader – which works for JavaScript in the browser, when combined with React.

There are some limitations for what you can do in both the REPL-based and the Figwheel approach, but if all else fails you can always restart things and have a clean slate again.

The impact of these two approaches on the development process is hard to overstate: it’s as if you’re inside the app, looking at things and tweaking them as the app runs. App restarts are far less common, which means server-side code can just keep running as you develop pieces of it further. Likewise, browser-side, you can navigate to a specific page and context, and change the code while staying on that page and in that context. Even a scroll position or the contents of an input box will stay the same as you edit and reload code.

For an example Figwheel + REPL setup running both in the browser and in node.js at the same time, see this interesting project on GitHub. It’s able to do hot reloads on the server as well as on (any number of) browsers – whenever code changes. Here’s a running setup:

Edit figwheel

And here’s what I see when typing “(fig-status)” into Figwheel’s REPL:

Figwheel System Status
Autobuilder running? : true
Focusing on build ids: app, server
Client Connections
     server: 1 connection
     app: 1 connection

This uses two processes: a Figwheel-based REPL (JVM), and a node-based server app (v8). And then of course a browser, and an editor for actual development. Both Node.js and the browser(s) connect into the Figwheel JVM, which also lets you type in ClojureScript.


So what do we need to work in this way? Well, for one, the language needs to support it and someone needs to have implemented this “hot reload” or “live code injection” mechanism.

For Figwheel, that’s about it. You need to write your code files in a certain way, allowing it to reload what matters without messing up the current state – “defonce” does most of this.
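The JavaScript equivalent of that “defonce” trick is a guarded assignment: initialise the state only if a previous load hasn’t already done so, so that a reload replaces the code but leaves the running state alone. A minimal sketch:

```javascript
// The JavaScript analogue of ClojureScript's "defonce": initialise only if
// a previous load of this file hasn't done so already (illustrative sketch).
globalThis.appState = globalThis.appState ?? { clicks: 0, user: null };

// Functions, on the other hand, *should* be replaced on every reload:
function onClick() {
  globalThis.appState.clicks += 1;
}

onClick();
console.log(globalThis.appState.clicks); // 1

// Simulate a hot reload: re-running the "defonce" line keeps the state as-is.
globalThis.appState = globalThis.appState ?? { clicks: 0, user: null };
console.log(globalThis.appState.clicks); // still 1
```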

But the real gem is the REPL: having a window into a running app, and peeking and poking at its innards while in flight. If “REPL” sounds funny, then just think of it as “interactive command prompt”. Several scripting languages support this. Not C, C++, or Go, alas.

For this, the editor should offer some kind of support, so that a few keystrokes will let you push code into the app. Whether a function definition or a printf-type call, whatever.

And that’s where vim felt a bit inadequate: there are a few plugins which try to address this, but they all have to work around the limitation that vim has no built-in terminal.

In Emacs-land, there has always been “SLIME” for traditional Lisp languages, and now there is “CIDER” for Clojure (hey, I didn’t make up those names, I just report them!). In a long-ago past, I once tried to learn Emacs for a very intense month, but I gave up. The multi-key acrobatics is not for me, and I have tons of vim key shortcuts stashed into muscle memory by now. Some people even point to research to say that vim’s way works better.

For an idea of what people can do when they practically live inside their Emacs editor, see this 18-min video. Bit hard to follow, but you can see why some people call Emacs an OS…

Anyway, I’m not willing to unlearn those decades of vim conventions by now. I have used many other editors over the years (including TextMate, Sublime Text, and recently Atom), but I always end up going back. The mouse has no place in editing, and no matter how hard some editors try to offer a “vim emulation mode”, they all fail in very awkward ways.

And then I stumbled upon this thing. All I can say is: “Vim, reloaded”.

Wow – a 99% complete emulation, perhaps one or two keystrokes which work differently. And then it adds a whole new set of commands (based on the space bar, hence the name), incredibly nice pop-up help as you type the shortcuts, and… underneath, it’s… Emacs ???

Spacemacs comes with a ton of nice default configuration settings and plugins. Other than some font changes and a few extra language bindings, I hardly change it. My biggest config tweak so far has been to make it start up with a fixed position and window size.

So there you have it. I’m switching my world over to ClojureScript as main programming language (which sounds more dramatic than it is, since it’s still JavaScript + browser + node.js in the end), and I’m switching my main development tool to Emacs (but that too is less invasive than it sounds, since it’s Vim-like and I can keep using vim on remote boxes).

Clojure and ClojureScript

In Musings on Aug 26, 2015 at 00:01

I’m in awe. There’s a (family of) programming languages which solves everything. Really.

  • it works on the JVM, V8, and CLR, and it interoperates with what already exists
  • it’s efficient, it’s dynamic, and it has parallelism built in (threaded or cooperative)
  • it’s so malleable, that any sort of DSL can trivially be created on top of it

As this fella says at this very point in his video: “State. You’re doing it wrong.”

I’ve been going about programming in the wrong way for decades (as a side note: the Tcl language did get it right, up to a point, despite some other troublesome shortcomings).

The language I’m talking about re-uses the best of what’s out there, and even embraces it. All the existing libraries in JavaScript can be used when running in the browser or in Node.js, and similarly for Java or C# when running in those contexts. The VM’s, as I already mentioned, also get reused, which means that decades of research and optimisation are taken advantage of.

There’s even an experimental version of this (family of) programming languages for Go, so there again, it becomes possible to add this approach to whatever already exists out there, or is being introduced now or in the future.

Due to the universal reach of JavaScript these days, on browsers, servers, and even on some embedded platforms, that target interests me most, so what I’ve been sinking my teeth into recently is “ClojureScript”, which specifically targets JavaScript.

Let me point out that ClojureScript is not another “pre-processor” like CoffeeScript.

“State. You’re doing it wrong.”

As Rich Hickey, who spoke those words in the above video quickly adds: “which is ok, because I was doing it wrong too”. We all took a wrong turn a few decades ago.

The functional programming (FP) people got it right… Haskell, ML, that sort of thing.

Or rather: they saw the risks and went to a place where few people could follow (monads?).

FP is for geniuses

What Clojure and ClojureScript do, is to bring a sane level of FP into the mix, with “immutable persistent datastructures”, which makes it all very practical and far easier to build with and reason about. Code is a transformation: take stuff, do things with it, and return derived / modified / updated / whatever results. But don’t change the input data.
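In plain JavaScript terms (no persistent data structures here, just discipline), that transformation style looks like this – each step a pure function, the input left untouched:

```javascript
// Code as transformation: take data in, return a derived result, never
// mutate the input. A small made-up example with temperature readings.
const readings = [21.5, 22.0, 19.8, 23.1]; // degrees Celsius

// Each step is a pure function: input -> new output, input untouched.
const toFahrenheit  = xs => xs.map(c => c * 9 / 5 + 32);
const aboveFreezing = xs => xs.filter(f => f > 32);

const result = aboveFreezing(toFahrenheit(readings));
console.log(result.length); // 4
console.log(readings[0]);   // 21.5 - the original data is unchanged
```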

Why does this matter?

Let’s look at a recent project taking the world by storm: React, yet another library for building user interfaces (in the browser and on mobile). The difference with AngularJS is the conceptual simplicity. To borrow another image from a similar approach in CycleJS:

Screen Shot 2015 08 16 at 16 08 35

Things happen in a loop: the computer shows stuff on the screen, the user responds, and the computer updates its state. In a talk by CycleJS author Andre Staltz, he actually goes so far as to treat the user as a function: screen in, key+mouse actions out. Interesting concept!

Think about it:

  • facts are stored on the disk, somewhere on a network, etc
  • a program is launched which presents (some of it) on the screen
  • the user interface leads us, the monkeys, to respond and type and click
  • the program interprets these as intentions to store / change something
  • it sends out stuff to the network, writes changes to disk (perhaps via a database)
  • these changes lead to changes to what’s shown on-screen, and the cycle repeats

Even something as trivial as scrolling down is a change to a scroll position, which translates to a different part of a list or page being shown on the screen. We’ve been mixing up the view side of things (what gets shown) with the state (some would say “model”) side, which in this case is the scroll position – a simple number. The moment you take them apart, the view becomes nothing more than a function of that value. New value -> new view. Simple.
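Here’s that scroll-position example as code, with the view reduced to a pure function of a single number (plain JavaScript, purely illustrative):

```javascript
// The view as a pure function of the state: the "state" is just a scroll
// position, and the view shows whichever slice of a list it selects.
const items = Array.from({ length: 100 }, (_, i) => 'line ' + i);

const view = state => items.slice(state.scroll, state.scroll + 3).join('\n');

console.log(view({ scroll: 0 }));  // lines 0..2
console.log(view({ scroll: 42 })); // lines 42..44
// New value -> new view; no widget ever needs to be updated in place.
```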

Nowhere in this story is there a requirement to tie state into the logic. It didn’t really help that object orientation (OO) taught us to always combine and even hide state inside logic.

Yet I (we?) have been programming with variables which remember / change and loops which iterate and increment, all my life. Because that’s how programming works, right?

Wrong. This model leads to madness. Untraceable, undebuggable, untestable, unverifiable.

In a way, Test-Driven Development (TDD) shows us just how messy it got: we need to explicitly compare what a known input leads to with the expected outcome. Which is great, but writing code which is testable becomes a nightmare when there is state everywhere. So we invented “mocks” and “spies” and what-have-you, to be able to isolate that state again.

What if everything we implemented in code were easily reducible to small steps which cleanly compose into larger units? Each step being a function which takes one or more values as state and produces results as new values? Without side-effects or state variables?

Then again, purely functional programming with no side-effects at all is silly in a way: if there are zero side-effects, then the screen wouldn’t change, and the whole computation would be useless. We do need side-effects, because they lead to a screen display, physical-computing stuff such as motion & sound, saved results, messages going somewhere, etc.

What we don’t need, is state sprinkled across just about every single line of our code…

To get back to React: that’s exactly where it revolutionises the world of user interfaces. There’s a central repository of “the truth”, which is in fact usually nothing more than a deeply nested JavaScript data structure, from which everything shown on the web page is derived. No more messing with the DOM, putting all sorts of state into it, having to update stuff everywhere (and all the time!) for dynamic real-time apps.

React (a.k.a. ReactJS) treats an app as a pipeline: state => view => DOM => screen. The programmer designs and writes the first two, React takes care of the DOM and screen.

I’ll get back to ClojureScript, please hang in there…

What’s missing in the above, is user interaction. We’re used to the following:

    mouse/keyboard => DOM => controller => state

That’s the Model-View-Controller (MVC) approach, as pioneered by Smalltalk in the 80’s. In other words: user interaction goes in the opposite direction, traversing all those steps we already have in reverse, so that we end up with modified state all the way back to the disk.

This is where AngularJS took off. It was founded on the concept of bi-directional bindings, i.e. creating an illusion that variable changes end up on the screen, and screen interactions end up back in those same variables – automatically (i.e. all taken care of by Angular).

But there is another way.

Enter “reactive programming” (RP) and “functional reactive programming” (FRP). The idea is that user interaction still needs to be interpreted and processed, but that the outcome of such processing completely bypasses all the above steps. Instead of bubbling back up the chain, we take the user interaction, define what effect it has on the original central-repository-of-the-truth, period. No figuring out what our view code needs to do.

So how do we update what’s on screen? Easy: re-create the entire view from the new state.

That might seem ridiculously inefficient: recreating a complete screen / web-page layout from scratch, as if the app was just started, right? But the brilliance of React (and several designs before it, to be fair) is that it actually manages to do this really efficiently.

Amazingly so in fact. React is faster than Angular.

Let’s step back for a second. We have code which takes input (the state) and generates output (some representation of the screen, DOM, etc). It’s a pure function, i.e. it has no side effects. We can write that code as if there is no user interaction whatsoever.

Think – just think – how much simpler code is if it only needs to deal with the one-way task of rendering: what goes where, how to visualise it – no clicks, no events, no updates!

Now we need just two more bits of logic and code:

  1. we tell React which parts respond to events (not what they do, just that they do)

  2. separately, we implement the code which gets called whenever these events fire, grab all relevant context, and report what we need to change in the global state

That’s it. The concepts are so incredibly transparent, and the resulting code so unbelievably clean, that React and its very elegant API are taking the Web-UI world by storm.

Back to ClojureScript

So where does ClojureScript fit in, then? Well, to be honest: it doesn’t. Most people seem to be happy just learning “The React Way” in normal main-stream JavaScript. Which is fine.

There are some very interesting projects on top of React, such as Redux and React Hot Loader. This “hot loading” is something you have to see to believe: editing code, saving the file, and picking up the changes in a running browser session without losing context. The effect is like editing in a running app: no compile-run-debug cycle, instant tinkering!

Interestingly, Tcl also supported hot-loading. Not sure why the rest of the world never picked it up.

Two weeks ago I stumbled upon ClojureScript. Sure enough, its community is going wild over React as well (with Om and Reagent as the main wrappers right now). And with good reason: it looks like Om (built on top of React) is actually faster than React used from JavaScript.

The reason for this is their use of immutable data structures, which forces you to not make changes to variables, arrays, lists, maps, etc. but to return updated copies (which are very efficient through a mechanism called “structural sharing”). As it so happens, this fits the circular FRP / React model like a glove. Shared trees are ridiculously easy to diff, which is the essence of why and how React achieves its good performance. And undo/redo is trivial.

Hot-loading is normal in the Clojure & ClojureScript world. Which means that editing in a running app is not a novelty at all, it’s business as usual. As with any Lisp with a REPL.

Ah, yes. You see, Clojure and ClojureScript are Lisp-like in their notation. The joke used to be that LISP stands for: “Lots of Irritating Little Parentheses”. When you get down to it, though, it turns out that there are not really many more of them than the parentheses and braces in JavaScript.

But notation is not what this is all about. It’s the concepts and the design which matter.

Clojure (and ClojureScript) seem to be born out of necessity. It’s fully open source, driven by a small group of people, and evolving in a very nice way. The best introduction I’ve found is in the first 21 minutes of the same video linked to at the start of this post.

And if you want to learn more: just keep watching that same video, 2:30 hours of goodness. Better still: this 1 hour video, which I think summarises the key design choices really well.

There’s no static typing as in Go – which I often found myself fighting anyway (and type hints can be added back in where needed). No callback hell as in JavaScript & Node.js, because Clojure has implemented Go’s CSP, with channels and go-routines as a library. Which means that even in the browser, you can write code as if there were multiple processes, communicating via channels in either synchronous or asynchronous fashion. And yes, it really works.

All the libraries from the browser + Node.js world can be used in ClojureScript without special tricks or wrappers, because – as I said – CLJ & CLJS embrace their host platforms.

The big negative is that CLJ/CLJS are different and not main-stream. But frankly, I don’t care at this point. Their conceptual power is that of Lisp and functional programming combined, and this simply can’t be retrofitted into the popular languages out there.

A language that doesn’t affect the way you think about programming, is not worth knowing — Alan J. Perlis

I’ve been watching many 15-minute videos on Clojure by Tim Baldridge (it costs $4 to get access to all of them), and this really feels like it’s lightyears ahead of everything else. The amazing bit is that a lot of that (such as “core.async”) catapults into plain JavaScript.

As you can probably tell, I’m sold. I’m willing to invest a lot of my time in this. I’ve been doing things all wrong for a couple of decades (CLJ only dates from 2007), and now I hope to get a shot at mending my ways. I’ll report my progress here in a couple of months.

It’s not for the faint of heart. It’s not even easy (but it is simple!). Life’s too short to keep programming without the kind of abstractions CLJ & CLJS offer. Eh… In My Opinion.

A feel for numbers

In Musings on Aug 19, 2015 at 00:01

It’s often really hard to get a meaningful sense of what numbers mean – especially huge ones.

What is a terabyte? A billion euro? A megawatt? Or a thousand people, even?

I recently got our yearly gas bill, and saw that our consumption was about 1600 m3 – roughly the same as last year. We’ve insulated the house, we keep the thermostat set fairly low (19°C), and there is little more we can do – at least in terms of low-hanging fruit. Since the house has an open stairway to the top floors, it’s not easy to keep the heat localised.

But what does such a gas consumption figure mean?

For one, those 1600 m3/y are roughly 30,000 m3 in the next twenty years, which comes to about €20,000, assuming Dutch gas prices will stay the same (a big “if”, obviously).

That 30,000 m3 sounds like a huge amount of gas, for just two people to be burning up.

Then again, a volume of 31 x 31 x 31 m sounds a lot less ridiculous, doesn’t it?
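A quick sanity check on these figures – note that the €0.65/m³ gas price is my own assumption, back-figured from the €20,000 estimate:

```python
# twenty years of gas at the current rate
total_m3 = 20 * 1600          # = 32,000 m3, i.e. "roughly 30,000"
cost_eur = 30_000 * 0.65      # assuming ~0.65 EUR/m3 stays the norm
side_m = 30_000 ** (1 / 3)    # edge of a cube holding that volume
print(total_m3, round(cost_eur), round(side_m))  # -> 32000 19500 31
```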

Now let’s tackle it from another angle, using the Wolfram Alpha “computational knowledge engine”, which is a really astonishing free service on the internet, as you’ll see.

How much gas is estimated to be left on this planet? Wolfram Alpha has the answer:

Screen Shot 2015 08 18 at 11 36 14

How many people are there in the world?

Screen Shot 2015 08 18 at 11 39 09

Ok, let’s assume we give everyone today an equal amount of those gas reserves:

Screen Shot 2015 08 18 at 11 44 02

Which means that the two of us will have used up our “allowance” some 30 years from now. Now that is a number I can grasp. It does mean that in 30 years or so it’ll all be gone. Totally. Gone.

I don’t think our children and all future generations will be very pleased with this…

Oh, and for the geeks in us: note how incredibly easy it is to get at some numerical facts, and how accurately and easily Wolfram Alpha handles all the unit conversions. We now live in a world where the well-off western part of the internet-connected crowd has instant and free access to all the knowledge we’ve amassed (Wikipedia + Google + Wolfram Alpha).

Facts are no longer something you have to learn – just pick up your phone / tablet / laptop!

But let’s not stop at this gloomy result. Here’s another, more satisfying, calculation using figures from an interesting UK site, called Electropedia (thanks, Ard!):

[…] the total Sun’s power intercepted by the Earth is 1.740×10^17 Watts

When accounting for the earth’s rotation, seasonal and climatic effects, this boils down to:

[…] the actual power reaching the ground generally averages less than 200 Watts per square meter

Aha, that’s a figure I can relate to again, unlike the “10^17” metric in the total above.

Let’s google for “heat energy radiated by one person”, which leads to this page, and on it:

As I recall, a typical healthy adult human generates in the neighborhood of 90 watts.

Interesting. Now an average adult’s calorie intake of 2400 kcal/day translates to 2.8 kWh. Note how this nicely matches up (at least roughly): 2.8 kWh/day is 116 watt, continuously. So yes, since we humans just burn stuff, it’s bound to end up as mostly heat, right?
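That rough match is easy to verify:

```python
KCAL_TO_J = 4184                        # joules per kilocalorie
kwh_per_day = 2400 * KCAL_TO_J / 3.6e6  # 2400 kcal/day expressed in kWh
watts = kwh_per_day * 1000 / 24         # averaged over 24 hours
print(round(kwh_per_day, 1), round(watts))  # -> 2.8 116
```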

But there is more to be said about the total solar energy reaching our little blue planet:

Integrating this power over the whole year the total solar energy received by the earth will be: 25,400 TW X 24 X 365 = 222,504,000 TeraWatthours (TWh)

Yuck, those incomprehensible units again. Luckily, Electropedia continues, and says:

[…] the available solar energy is over 10,056 times the world’s consumption. The solar energy must of course be converted into electrical energy, but even with a low conversion efficiency of only 10% the available energy will be 22,250,400 TWh or over a thousand times the consumption.

That sounds promising: we “just” need to harvest it, and end all fossil fuel consumption.

And to finish it off, here’s a simple calculation which also very much surprised me:

  • take a world population of 7.13 billion people (2013 figures, but good enough)
  • place each person on his/her own square meter
  • put everyone together in one spot (tight, but hey, the subway is a lot tighter!)
  • what you end up with, is of course 7.13 billion square meters, i.e. 7,130,000,000 m2
  • sounds like a lot? how about an area of 70 by 100 km? (1/6th of the Netherlands)

Then, googling again, I found out that 71% of the surface of our planet is water.

And with a little more help from Wolfram Alpha, I get this result:

Screen Shot 2015 08 18 at 14 18 41

That’s 144 x 144 meters per person, for everyone on this planet. Although not every spot is inhabitable, of course. But at least these are figures I can fit into my head and grasp!
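Both of these surprising results are one-liners to check (using rounded figures: 7.13 billion people, 29% land, 510.1 million km² of total surface):

```python
population = 7.13e9
crowd_km2 = population / 1e6   # one m2 each -> ~7,130 km2, about 70 x 100 km
EARTH_SURFACE_KM2 = 510.1e6    # total surface area of the Earth
LAND_FRACTION = 0.29           # the other 71% is water
m2_per_person = EARTH_SURFACE_KM2 * LAND_FRACTION * 1e6 / population
print(round(crowd_km2), round(m2_per_person ** 0.5))  # -> 7130 144
```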

Now if only I could understand why we can’t solve this human tragedy. Maths won’t help.

Lessons from history

In Musings on Aug 12, 2015 at 00:01

(No, not the kind of history lessons we all got treated to in school…)

What I’d like to talk about, is how to deal with sensor readings over time. As described in last week’s post, there’s the “raw” data:

raw/rf12/868-5/3 "0000000000038d09090082666a"
raw/rf12/868-5/3 "0000000000038e09090082666a"
raw/rf12/868-5/3 "0000000000038f090900826666"

… and there’s the decoded data, i.e. in this case:

sensor/BATT-2 {"node":"rf12/868-5/3","ping":592269,
sensor/BATT-2 {"node":"rf12/868-5/3","ping":592270,
sensor/BATT-2 {"node":"rf12/868-5/3","ping":592271,

In both cases, we’re in fact dealing with a series of readings over time. This aspect tends to get lost a bit when using MQTT, since each new reading is sent to the same topic, replacing the previous data. MQTT is (and should be) 100% real-time, but blissfully unaware of time.

The raw data is valuable information, because everything else derives from it. This is why in HouseMon I stored each entry as timestamped text in a logfile. With proper care, the raw data can be an excellent way to “replay” all received data, whether after a major database or other system failure, or to import all the data into a new software application.

So much for the raw data, and keeping a historical archive of it all – which is good practice, IMO. I’ve been saving raw data for some 8 years now. It requires relatively little storage when saved as daily text files and gzip-compressed: about 180 Mb/year nowadays.

Now let’s look a bit more at that decoded sensor data…

When working on HouseMon, I noticed that it’s incredibly useful to have access to both the latest value and the previous value. In the case of these “BATT-*” nodes, for example, having both allows us to determine the elapsed time since the previous reception (using the “time” field), or to check whether any packets have been missed (using the “ping” counter).

With readings of cumulative or aggregating values, the previous reading is in fact essential to be able to calculate an instantaneous rate (think: gas and electricity meters).

In the past, I implemented this by having each entry store a previous and a latest value (and time stamp), but with MQTT we could actually simplify this considerably.

The trick is to use MQTT’s brilliant “RETAIN” flag:

  • in each published sensor message, we set the RETAIN flag to true
  • this causes the MQTT broker (server) to permanently store that message
  • when a new client connects, it will get all saved messages re-sent to it the moment it subscribes to a corresponding topic (or wildcard topic)
  • such re-sent messages are flagged, and can be recognised as such by the client, to distinguish them from genuinely new real-time messages
  • in a way, retained message handling is a bit like a store-and-forward mechanism
  • … but do keep in mind that only the last message for each topic is retained

What’s the point? Ah, glad you asked :)

In MQTT, a RETAINed message is one which can very gracefully deal with client connects and disconnects: a client need not be connected or subscribed at the time such a message is published. With RETAIN, the client will receive the message the moment it connects and subscribes, even if this is after the time of publication.

In other words: RETAIN flags a message as representing the latest state for that topic.

The best example is perhaps a switch which can be either ON or OFF: whenever the switch is flipped we publish either “ON” or “OFF” to topic “my/switch”. What if the user interface app is not running at the time? When it comes online, it would be very useful to know the last published value, and by setting the RETAIN flag we make sure it’ll be sent right away.

The collection of RETAINed messages can also be viewed as a simple key-value database.

For an excellent series of posts about MQTT, see this index page from HiveMQ.

But I digress – back to the history aspect of all this…

If every “sensor/…” topic has its RETAIN flag set, then we’ll receive all the last-known states the moment we connect and subscribe as MQTT client. We can then immediately save these in memory, as “previous” values.

Now, whenever a new value comes in:

  • we have the previous value available
  • we can do whatever we need to do in our application
  • when done, we overwrite the saved previous value with the new one

So in memory, our applications will have access to the previous data, but we don’t have to deal with this aspect in the MQTT broker – it remains totally ignorant of this mechanism. It simply collects messages, and pushes them to apps interested in them: pure pub-sub!
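Here’s what that could look like inside the application – a minimal sketch of the previous-value cache, with the MQTT plumbing left out (hooking `missed_packets` into a paho-mqtt `on_message` callback works just like in the decoder scripts elsewhere on this weblog). The topic and the “ping” field match the BATT-2 example above:

```python
import json

previous = {}  # last-seen decoded reading per MQTT topic, kept in memory

def track(topic, payload):
    """Update the cache; return (previous, latest) readings for this topic."""
    latest = json.loads(payload)
    old = previous.get(topic)
    previous[topic] = latest  # the new reading becomes the next "previous"
    return old, latest

def missed_packets(topic, payload):
    """Number of skipped packets, judging by the "ping" counter."""
    old, new = track(topic, payload)
    if old is None:
        return 0  # the first (possibly retained) message just primes the cache
    return new['ping'] - old['ping'] - 1

# a retained message primes the cache, real-time ones are compared to it:
print(missed_packets('sensor/BATT-2', '{"ping": 592269}'))  # -> 0
print(missed_packets('sensor/BATT-2', '{"ping": 592271}'))  # -> 1, one missed
```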

Doodling with decoders

In Musings on Aug 5, 2015 at 00:01

With plenty of sensor nodes here at JeeLabs, I’ve been exploring and doodling a bit, to see how MQTT could fit into this. As expected, it’s all very simple and easy to do.

The first task at hand is to take all those “OK …” lines coming out of a JeeLink running RF12demo, and push them into MQTT. Here’s a quick solution, using Python for a change:

import serial
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    print("Connected with result code "+str(rc))

def on_message(client, userdata, msg):
    # TODO pick up outgoing commands and send them to serial
    print(msg.topic+" "+str(msg.payload))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message

client.connect("localhost", 1883, 60) # TODO reconnect as needed
client.loop_start() # handle network traffic in a background thread

ser = serial.Serial('/dev/ttyUSB1', 57600)

while True:
    # read incoming lines and split on whitespace
    items = ser.readline().split()
    # only process lines starting with "OK"
    if len(items) > 1 and items[0] == 'OK':
        # convert each item string to an int
        bytes = [int(i) for i in items[1:]]
        # construct the MQTT topic to publish to
        topic = 'raw/rf12/868-5/' + str(bytes[0])
        # convert incoming bytes to a single hex string
        hexStr = ''.join(format(i, '02x') for i in bytes)
        # the payload has 4 extra prefix bytes and is a JSON string
        payload = '"00000010' + hexStr + '"'
        # publish the incoming message
        client.publish(topic, payload) #, retain=True)
        # debugging
        print(topic + ' = ' + hexStr)

Trivial stuff, once you install this MQTT library. Here is a selection of the messages getting published to MQTT – these are for a bunch of nodes running radioBlip and radioBlip2:

raw/rf12/868-5/3 "0000000000038d09090082666a"
raw/rf12/868-5/3 "0000000000038e09090082666a"
raw/rf12/868-5/3 "0000000000038f090900826666"

What needs to be done next, is to decode these to more meaningful results.

Due to the way MQTT works, we can perform this task in a separate process – so here’s a second Python script to do just that. Note that it subscribes and publishes to MQTT:

import binascii, json, struct, time
import paho.mqtt.client as mqtt

# raw/rf12/868-5/3 "0000000000030f230400"
# raw/rf12/868-5/3 "0000000000033c09090082666a"

# avoid having to use "obj['blah']", can use "obj.blah" instead
# see end of
C = type('type_C', (object,), {})

client = mqtt.Client()

def millis():
    return int(time.time() * 1000)

def on_connect(client, userdata, flags, rc):
    print("Connected with result code "+str(rc))
    client.subscribe("raw/#") # listen for all raw messages

def batt_decoder(o, raw):
    o.tag = 'BATT-0'
    if len(raw) >= 10:
        o.ping = struct.unpack('<I', raw[6:10])[0]
        if len(raw) >= 13:
            o.tag = 'BATT-%d' % (ord(raw[10]) & 0x7F)
            o.vpre = 50 + ord(raw[11])
            if ord(raw[10]) >= 0x80:
                o.vbatt = o.vpre * ord(raw[12]) / 255
            elif ord(raw[12]) != 0:
                o.vpost = 50 + ord(raw[12])
        return True

def on_message(client, userdata, msg):
    o = C();
    o.time = millis()
    o.node = msg.topic[4:]
    raw = binascii.unhexlify(msg.payload[1:-1])
    if msg.topic == "raw/rf12/868-5/3" and batt_decoder(o, raw):
        #print o.__dict__
        out = json.dumps(o.__dict__, separators=(',',':'))
        client.publish('sensor/' + o.tag, out) #, retain=True)

client.on_connect = on_connect
client.on_message = on_message

client.connect("localhost", 1883, 60)
client.loop_forever() # process incoming messages until interrupted

Here is what gets published, as a result of the above three “raw/…” messages:

sensor/BATT-2 {"node":"rf12/868-5/3","ping":592269,
sensor/BATT-2 {"node":"rf12/868-5/3","ping":592270,
sensor/BATT-2 {"node":"rf12/868-5/3","ping":592271,

So now, the incoming data has been turned into meaningful readings: it’s a node called “BATT-2”, the readings come in roughly every 64 seconds (as expected), and the received counter value is indeed incrementing with each new packet.

Using a dynamic scripting language such as Python (or Lua, or JavaScript) has the advantage that it will remain very simple to extend this decoding logic at any time.

But don’t get me wrong: this is just an exploration – it won’t scale well as it is. We really should deal with decoding logic as data, i.e. manage the set of decoders and their use by different nodes in a database. Perhaps even tie each node to a decoder pulled from GitHub?

Could a coin cell be enough?

In Musings on Jul 29, 2015 at 00:01

To state the obvious: small wireless sensor nodes should be small and wireless. Doh.

That means battery-powered. But batteries run out. So we also want these nodes to last a while. How long? Well, if every node lasts a year, and there are a bunch of them around the house, we’ll need to replace (or recharge) some battery somewhere several times a year.

Not good.

The easy way out is a fat battery: either a decent-capacity LiPo battery pack or say three AA cells in series to provide us with a 3.6 .. 4.5V supply (depending on battery type).

But large batteries can be ugly and distracting – even a single AA battery is large when placed in plain sight on a wall in the living room, for example.

So… how far could we go on a coin cell?

Let’s define the arena a bit first, as there are many types of coin cells. The smallest ones, a few mm in diameter and intended for hearing aids, have only a few dozen mAh of energy at most, which is not enough as you will see shortly. Here are some coin cell examples, from Wikipedia:

Coin cells

The most common coin cell is the CR2032 – 20 mm diameter, 3.2 mm thick. It is listed here as having a capacity of about 200 mAh.

A really fat one is the CR2477 – 24 mm diameter, 7.7 mm thick – and has a whopping 1000 mAh of capacity. It’s far less common than the CR2032, though.

These coin cells supply about 3.0V, but that voltage varies: it can be up to 3.6V unloaded (i.e. when the µC is asleep), down to 2.0V when nearly discharged. This is usually fine with today’s µCs, but we need to be careful with all the other components, and if we’re doing analog stuff then these variations can in some cases really throw a wrench into our project.

Then there are the AAA and AA batteries of 1.2 .. 1.5V each, so we’ll need at least two and sometimes even three of them to make our circuits work across their lifetimes. An AAA cell of 10.5×44.5 mm has about 800..1200 mAh, whereas an AA cell of 14.5×50.5 mm has 1800..2700 mAh of energy. Note that this value doesn’t increase when placed in series!


Let’s see how far we could get with a CR2032 coin cell powering a µC + radio + sensors:

  • one year is 365 x 24 = 8,760 hours
  • one CR2032 coin cell can supply 200 mAh of energy
  • it will last one year if we draw under 23 µA on average
  • it will last two years if we draw under 11 µA on average
  • it will last four years if we draw under 5 µA on average
  • it will last ten years if we draw under 2 µA on average
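These round numbers err slightly on the safe side – the exact break-even currents are quickly computed:

```python
CAPACITY_MAH = 200          # nominal CR2032 capacity
HOURS_PER_YEAR = 365 * 24   # 8,760 hours

def avg_current_ua(years):
    """Average draw (in uA) which empties the cell in the given number of years."""
    return CAPACITY_MAH * 1000.0 / (years * HOURS_PER_YEAR)

for years in (1, 2, 4, 10):
    print(years, round(avg_current_ua(years), 1))
    # -> 22.8, 11.4, 5.7 and 2.3 uA respectively
```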

An LPC8xx in deep sleep mode with its low-power wake-up timer kept running will draw about 1.1 µA when properly set up. The RFM69 draws 0.1 µA in sleep mode. That leaves us roughly a 10 µA margin for all attached sensors if we want to achieve a 2-year battery life.

This is doable. Many simple sensors for temperature, humidity, and pressure can be made to consume no more than a few µA in sleep mode. Or if they consume too much, we could tie their power supply pin to an output pin on the µC and completely remove power from them. This requires an extra I/O pin, and we’ll probably need to wait a bit longer for the chip to be ready if we have to power it up every time. No big deal – usually.

A motion sensor based on passive infrared detection (PIR) draws 60..300 µA however, so that would severely reduce the battery lifetime. Turning it off is not an option, since these sensors need about a minute to stabilise before they can be used.

Note that even a 1 MΩ resistor has a non-negligible 3 µA of constant current consumption. With ultra low-power sensor nodes, every part of the circuit needs to be carefully designed! Sometimes, unexpected consequences can have a substantial impact on battery life, such as grease, dust, or dirt accumulating on an openly exposed PCB over the years…

Door switch

What about sensing the closure of a mechanical switch?

In that case, we can in fact put the µC into deep power down without running the wake-up timer, and let the wake-up pin bring it back to life. Now, power consumption will drop to a fraction of a microamp, and battery life of the coin cell can be increased to over a decade.

Alternatively, we could use a contact-less solution, in the form of a Hall effect sensor and a small magnet. No wear, and probably easier to install and hide out of sight somewhere.

The Seiko S-5712 series, for example, draws 1..4 µA when operated at low duty cycle (measuring 5 times per second should be more than enough for a door/window sensor). Its output could be used to wake up the µC, just as with a mechanical switch. Now we’re in the 5 µA ballpark, i.e. about 4 years on a CR2032 coin cell. Quite usable!

It can pay off to carefully review all possible options – for example, if we were to instead use a reed relay as door sensor, we might well end up with the best of both worlds: total shut-off via mechanical switching, yet reliable contact-less activation via a small magnet.

What about the radio

The RFM69 draws from 15 to 45 mA when transmitting a packet. Yet I’m not including this in the above calculations, for good reason:

  • it’s only transmitting for a few milliseconds
  • … and probably less than once every few minutes, on average
  • this means its duty cycle can stay well under 0.001%
  • which translates to less than 0.5 µA – again: on average

Transmitting a short packet only every so often is virtually free in terms of energy requirements. It’s a hefty burst, but it simply doesn’t amount to much – literally!
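To put numbers on that claim – the packet length and interval below are assumptions, in line with the nodes described above:

```python
tx_ma = 30          # mid-range RFM69 transmit current (15..45 mA)
tx_s = 0.003        # assume a 3 ms packet
interval_s = 300    # assume one packet every 5 minutes

duty_cycle = tx_s / interval_s       # 0.001%
avg_ua = tx_ma * 1000 * duty_cycle   # average transmit current in uA
print('%.3f%% %.1f uA' % (duty_cycle * 100, avg_ua))  # -> 0.001% 0.3 uA
```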


Aiming for wireless sensor nodes which never need to listen to incoming RF packets, and only send out brief ones very rarely, we can see that a coin cell such as the common CR2032 will be able to support nodes for several years. Assuming that the design of both hardware and software was properly done, of course.

And if the CR2032 doesn’t cut it – there’s always the CR2477 option to help us further.

Forth on a DIP

In Musings on Jul 22, 2015 at 00:01

In a recent article, I mentioned the Forth language and the Mecrisp implementation, which includes a series of builds for ARM chips. As it turns out, the mecrisp-stellaris-... archive on the download page includes a ready-to-run build for the 28-pin DIP LPC1114 µC, which I happened to have lying around:

DSC 5132

It doesn’t take much to get this chip powered and connected through a modified BUB (set to 3.3V!) so it can be flashed with the Mecrisp firmware. Once that is done, you end up with a pretty impressive Forth implementation, with half of flash memory free for user code.

First thing I tried was to connect to it and list out all the commands it knows – known as “words” in Forth parlance, and listed by entering “words” + return:

$ lpc21isp -termonly -control x /dev/tty.usbserial-AH01A0EG 115200 0
lpc21isp version 1.97
Terminal started (press Escape to abort)

Mecrisp-Stellaris 2.1.3 with M0 core for LPC1114FN28 by Matthias Koch
words words
--- Mecrisp-Stellaris Core ---

2dup 2drop 2swap 2nip 2over 2tuck 2rot 2-rot 2>r 2r> 2r@ 2rdrop d2/ d2*
dshr dshl dabs dnegate d- d+ s>d um* m* ud* udm* */ */mod u*/ u*/mod um/mod
m/mod ud/mod d/mod d/ f* f/ 2!  2@ du< du> d< d> d0< d0= d<> d= sp@ sp!
rp@ rp!  dup drop ?dup swap nip over tuck rot -rot pick depth rdepth >r r>
r@ rdrop rpick true false and bic or xor not clz shr shl ror rol rshift
arshift lshift 0= 0<> 0< >= <= < > u>= u<= u< u> <> = min max umax umin
move fill @ !  +!  h@ h!  h+!  c@ c!  c+!  bis!  bic!  xor!  bit@ hbis!
hbic!  hxor!  hbit@ cbis!  cbic!  cxor!  cbit@ cell+ cells flash-khz
16flash!  eraseflash initflash hflash!  flushflash + - 1- 1+ 2- 2+ negate
abs u/mod /mod mod / * 2* 2/ even base binary decimal hex hook-emit
hook-key hook-emit?  hook-key?  hook-pause emit key emit?  key?  pause
serial-emit serial-key serial-emit?  serial-key?  cexpect accept tib >in
current-source setsource source query compare cr bl space spaces [char]
char ( \ ." c" s" count ctype type hex.  h.s u.s .s words registerliteral,
call, literal, create does> <builds ['] ' postpone inline, ret, exit
recurse state ] [ : ; execute immediate inline compileonly 0-foldable
1-foldable 2-foldable 3-foldable 4-foldable 5-foldable 6-foldable
7-foldable constant 2constant smudge setflags align aligned align4,
align16, h, , ><, string, allot compiletoram?  compiletoram compiletoflash
(create) variable 2variable nvariable buffer: dictionarystart
dictionarynext skipstring find cjump, jump, here flashvar-here then else if
repeat while until again begin k j i leave unloop +loop loop do ?do case
?of of endof endcase token parse digit number .digit hold hold< sign #> f#S
f# #S # <# f.  f.n ud.  d.  u.  .  evaluate interpret hook-quit quit eint?
eint dint ipsr nop unhandled reset irq-systick irq-fault irq-collection
irq-adc irq-i2c irq-uart

--- Flash Dictionary --- ok.

That’s over 300 standard Forth words, including all the usual suspects (I’ve shortened the above to only show their names, as Mecrisp actually lists these words one per line).

Here’s a simple way of making it do something – adding 1 and 2 and printing the result:

  • type “1 2 + .” plus return, and it types ” 3 ok.” back at you

Let’s define a new “hello” word:

: hello ." Hello world!" ;  ok.

We’ve extended the system! We can now type hello, and guess what comes out:

hello Hello world! ok.
----- + <CR>

Note the confusing output: we typed “hello” + a carriage return, and the system executed our definition of hello and printed the greeting right after it. Forth is highly interactive!

Here’s another definition, of a new word called “count-up”:

: count-up 0 do i . loop ;  ok.

It takes one argument on the stack, so we can call it as follows:

5 count-up 0 1 2 3 4  ok.

Again, keep in mind that the ” 0 1 2 3 4 ok.” was printed out, not typed in. We’ve defined a loop which prints increasing numbers. But what if we forget to provide an argument?

count-up 0 1 2 [...] Stack underflow

Whoops. Not so good: stack underflow was properly detected, but not before the loop actually ran and printed out a bunch of numbers (how many depends on what value happened to be in memory). Luckily, a µC is easily reset!

Permanent code

This post isn’t meant to be an introduction to Mecrisp (or Forth), you’ll have to read other documentation for that. But one feature is worth exploring: the ability to interactively store code in flash memory and set up the system so it runs that code on power up. Here’s how:

compiletoflash  ok.
: count-up 0 do i . loop ;  ok.
: init 10 count-up ;  ok.

In a nutshell: 1) we instruct the system to permanently add new definitions to its own flash memory from now on, 2) we define the count-up word as before, and 3) we (re-)define the special init word which Mecrisp Forth will automatically run for us when it starts up.

Let’s try it, we’ll reset the µC and see what it types out:

$ lpc21isp -termonly -control x /dev/tty.usbserial-AH01A0EG 115200 0
lpc21isp version 1.97
Terminal started (press Escape to abort)

Mecrisp-Stellaris 2.1.3 with M0 core for LPC1114FN28 by Matthias Koch
0 1 2 3 4 5 6 7 8 9 

Bingo! Our new code has been saved in flash memory, and starts running the moment the LPC1114 chip comes out of reset. Note that we can get rid of it again with “eraseflash”.

As you can see, it would be possible to write a full-blown application in Mecrisp Forth and end up with a standalone µC chip which then works as instructed every time it powers up.


Forth code runs surprisingly fast. Here is a delay loop which does nothing:

: delay 0 do loop ;  ok.

And this code:

10000000 delay  ok.

… takes about 3.5 seconds before printing out the final “ok.” prompt. That’s some 3 million iterations per second. Not too shabby, if you consider that the LPC1114 runs at 12 MHz!

RFM69s, OOK, and antennas

In Musings on Jul 15, 2015 at 00:01

Recently, Frank @ SevenWatt has been doing a lot of very interesting work on getting the most out of the RFM69 wireless radio modules.

His main interest is in figuring out how to receive weak OOK signals from a variety of sensors in and around the house. So first, you’ll need to extract the OOK information – it turns out that there are several ways to do this, and when you get it right, the bit patterns that come out snap into very clear-cut 0/1 groups – which can then be decoded:

FS20 histo 32768bps

Another interesting bit of research went into comparing different boards and builds to see how the setups affect reception. The good news is that the RFM69 is fairly consistent (no extreme variations between different modules).

Then, with plenty of data collection skills and tools at hand, Frank has been investigating the effect of different antennas on reception quality – which is a combination of getting the strongest signal and the lowest “noise floor”, i.e. the level of background noise that every receiver has to deal with. Here are the different antenna setups being evaluated:

RFM69 three antennas 750x410

Last but not least, is an article about decoding packets from the ELV Cost Control with an RFM69 and some clever tricks. These units report power consumption every 5 seconds:


Each of these articles is worth a good read, and yes… the choice of antenna geometry, its build accuracy, the quality of cabling, and the distance to the µC … they all do matter!

Greasing the “make” cycle on Mac

In Musings on Jul 8, 2015 at 00:01

I’m regularly on the lookout for ways to optimise my software development workflow. Anything related to editing, building, testing, uploading – if I can shave off a little time, or better still, find a way to automate more and get tasks into my muscle memory, I’m game.

It may not be everyone’s favourite, but I keep coming back to vim as my editor of choice, or more specifically the GUI-aware MacVim (when I’m not working on some Linux system).

And some tasks need to be really fast. Simply Because I Do Them All The Time.

Such as running “make”.

So I have a keyboard shortcut in vim which saves all the changes and runs make in a shell window. For quite some time, I’ve used the Vim Tmux Navigator for this. But that’s not optimal: you need to have tmux running locally, you need to type in the session, window, and pane to send the make command to (once after every vim restart), and things … break at times (occasional long delays, wrong tmux pane, etc). Unreliable automation is awful.

Time to look for a better solution. Using as few “moving parts” as possible, because the more components take part in these custom automation tricks, the sooner they’ll break.

The following is what I came up with, and it works really well for me:

  • hitting “,m” (i.e. “<leader>m“) initiates a make, without leaving my vim context
  • there needs to be a “Terminal” app running, with a window named “⌘1” open
  • it will receive this command line:  clear; date; make $MAKING

So all I have to do is leave that terminal window open – set to the proper directory. I can move around at will in vim or MacVim, run any number of them, and “,m” will run “make”.

By temporarily setting a value in the “MAKING” shell variable, I can alter the make target. This can be changed as needed, and I can also change the make directory as needed.

The magic incantation for vim is this one line, added to the ~/.vimrc config file:

    nnoremap <leader>m :wa<cr>:silent !makeit<cr>

In my ~/bin/ directory, I have a shell script called “makeit” with the following contents:

    exec osascript >/dev/null <<EOF
        tell application "Terminal"
            repeat with w in windows
                if name of w ends with "⌘1" then
                    do script "clear; date; make $MAKING" in w
                end if
            end repeat
        end tell
    EOF

The looping is needed to always find the proper window. Note that the Terminal app must be configured to include the “⌘«N»” command shortcut in each window title.

This all works out of the box with no dependency on any app, tool, or utility – other than what is present in a vanilla Mac OSX installation. Should be easy to adapt to other editors.

It can also be used from the command line: just type “makeit”.

That’s all there is to it. A very simple and clean convention to remember and get used to!

Low-power mode :)

In Musings on Jul 1, 2015 at 00:01

First, an announcement:

Starting now, the JeeLabs Weblog is entering “low-power mode” for the summer period. What this means: one weblog post every Wednesday, but no additional articles.

While in low-power mode, I’ll continue to write about fun stuff, things I care about, bump into, and come up with – and maybe even some progress reports about some projects I’m working on. But no daily article episodes, and hence no new material for the Jee Book:

Jeebook cover

Speaking of which: The Jee Book has been updated with all the articles from the weblog for this year so far and has tripled in size as a result. Only very light editing has been applied at this point – but in case you want it all in a single convenient e-book, there ya go!

Have a great, inspiring, and relaxing summer! (or winter, if you’re down under)

(For comments, visit the forum area)

Thank you

In Musings on Oct 31, 2014 at 00:01

Thank you for many dozens of kind, encouraging, honest, and very helpful responses and suggestions over the past few weeks. I started replying to each one individually, but had to give up at some point. Still, thank you for every single word you wrote – it’s not something to ask for all the time, but it sure helps to occasionally hear where you’re coming from, what interests you, what topics you’d like to read more about, and of course also that the fun and delight came across loud and clear.

I’m proud to see that the JeeLabs weblog has touched and inspired such a large and diverse group of people. There really is a huge and happy family out there, spread out across the entire planet, interested in physical computing and willing to learn, share insights, and create new stuff. Good vibes make the world go round!

Sometime next week, I’ll restart the weblog. It won’t be exactly the same as it was, but it will be on a regular weekly schedule (and sometimes more). It’ll take some time to “find my voice” and get back into a regular and consistent mode again, but I’m really looking forward to it. There is definitely no lack of directions to go into, both deep and broad!

Sooo… let’s see where this renewed adventure will take us, eh?

PS. Comments on this weblog will not be re-enabled, but there’s a new area on the forum.

Wrapping up

In AVR, Hardware, News, Software, Musings on Oct 6, 2013 at 00:01

I’m writing this post while one of the test JeeNode Micro’s here at JeeLabs is nearing its eighth month of operation on a single coin cell:


It’s running the radioBlip2 sketch, sending out packets with an incrementing long integer packet count, roughly once every minute:

Screen Shot 2013-10-04 at 15.44.58

The battery voltage is also tracked, using a nice little trick which lets the ATtiny measure its own supply voltage. As you can see, the battery is getting weaker, dropping in voltage after each 25 mA transmission pulse, but still recovering very nicely before the next transmission:

Screen Shot 2013-10-04 at 15.45.45

Fascinating stuff. A bit like my energy levels, I think :)

But this post is not just about reporting ultra low-power consumption. It’s also my way of announcing that I’ve decided to wrap up this daily weblog and call it quits. There will be no new posts after this one. But this weblog will remain online, and so will the forum & shop.

I know from the many emails I’ve received over the years that many of you have been enjoying this weblog – some of you even from the very beginning, almost 5 years ago. Thank you. Unfortunately, I really need to find a new way to push myself forward.

This is post # 1400, with over 6000 comments to date. Your encouragement, thank-you’s, insightful comments, corrections and additions – I’m deeply grateful for each one of them. I hope that the passion which has always driven me to explore this “computing stuff tied to the physical world” technology and to write about these adventures has helped you appreciate the creativity that comes with engineering and invention, and has maybe even tempted you to take steps to explore and learn beyond the things you already knew.

In fact, I sincerely hope that these pages will continue to encourage and inspire new visitors who stumble upon this weblog in the future. For those visitors, here’s a quick summary of the recent flashback posts, to help you find your way around on this weblog:

Please don’t ever stop exploring and pushing the boundaries of imagination and creativity – be it your own or that of others. There is infinite potential in each of us, and I’m certain that if we can tap even just a tiny fraction of it, the world will be a better place.

I’d like to think that I’ve played my part in this and wish you a lot of happy tinkering.

Take care,
Jean-Claude Wippler

PS. For a glimpse of what I’m considering doing next, see this page. I can assure you that my interests and passions have not changed, and that I’ll remain as active as ever w.r.t. research and product development. The whole point of this change is to allow me to invest more focus and time, and to take the JeeLabs projects and products further, in fact.

PPS. Following the advice of some friends I highly respect, I’m making this last weblog post open-ended: it’ll be the last post for now. Maybe the new plans don’t work out as expected after all, or maybe I’ll want to reconsider after a while, knowing how much joy and energy this weblog has given me over the years. So let’s just call this a break, until further notice :)

Update Dec 2013 – Check out the forum for the latest news about JeeLabs.

Making software choices

In Musings on Sep 2, 2013 at 00:01

If there were just one video in the field of software and technology which I’ve watched this summer and would really like to recommend, then it’s this one from about a year ago:

Brian Ford: Is Node.js Better?

It’s 40 minutes long. It’s a presentation by Brian Ford, who has earned his marks in the Ruby world (not to be confused with another Brian Ford from the Angular.js community). He gets on stage at JSConf US 2012, a major conference on JavaScript and Node.js, and spends almost half an hour talking about everything but JavaScript.

At the end, he voices some serious concerns about Node.js in the high-end networking arena w.r.t. its single event-loop without threading, and how the Ruby community hit that wall long ago and made different choices. Interesting, also on a technical level.

But this is not really about language X vs language Y (yawn) – it’s about how we make choices in technology. No doubt your time is limited, and you may not even be interested in Node.js or Ruby, but all I can say is: it made me re-evaluate my positions and convictions, taught me about the power of honest argumentation, and got me to read that brilliant book by Daniel Kahneman, titled Thinking, Fast and Slow (heavy read – I’m halfway).

Elevating stuff…

It’s been a while…

In Musings on Sep 1, 2013 at 00:01

… since that last blog post. Time to get back into the fun – and I can’t wait!

This summer was spent in comfort and relaxation, a lot of it in and around the house, as everyone had left Houten, leaving behind a totally calm and quiet village, with lots of really nice summer days. As usual, when things are well, time passes quickly…

From a nice bike trip around Zwolle, to a very pleasant stay in Copenhagen with a trip to the splendid Louisiana Museum, we had a delightfully “slow” summer for a change.


I spent days on end reading (eReader/iPad) and also had a great time at the OHM2013 hacker event. Made me feel old and young at the same time… I suppose that’s good :)

With a fantastic new discovery: a presentation and workshop on Molecular Cooking. Some pretty amazing stuff with food, such as spherification – we made “apple-juice caviar”, and a soup which makes one side of the mouth warm and the other side cold! (using fluid gels)

Here are some of the other things you can do with this (from the Molécule-R site):

Screen Shot 2013-08-31 at 16.29.46

Lots of fun and ideas to try out. It’s a charming mix between exploring original new tastes and playing tricks with the senses. It’s also called food hacking, which could explain why this topic came up at OHM2013 (alongside activism, banking, and fablabs).

A summer of contrasts… from a Hanseatic city to modern gastronomic creativity!

Meet the FanBot

In Hardware, Musings on Jun 27, 2013 at 00:01

The FanBot is a very simple robot, designed by Peter Brier, based on a small PCB with a microcontroller, some LEDs as eyes and mouth, and a servo to allow the robot to wave its arms:


Over a thousand boards have been produced, along with accessories to let children create a little cardboard robot with their own name and a little program to store a personalised sequence of LED blinks and servo movements. The µC is an ARM LPC11U24 chip, donated by NXP – which has plenty of power, but more importantly: can be programmed by powering it up as a USB memory stick.


Wednesday was the kick-off / trial day, with 120 kids dropping by and creating their own FanBot. The FanBots will all be used for the main event to cheer on the main RoboCup contestants. Here’s a quick impression of the first 80 “fans” (it’s a huge task to get them all up there, checked, stapled, and connected – not to mention the power/network setup!):


It’s a really wonderful project, IMO: lots of young kids get exposed to new technology, learning about robots by building their own, and creating a huge collection of truly individual and personal robots, all cheering together!

For more info, see Peter and Marieke’s KekBot weblog – there’s also a progress page.

The RoboCup championship itself uses more sophisticated robots, such as these:

BvOF RoboCup2013

Many more pictures of this event can already be found on the RoboCup 2013 website and on their Flickr group. The event has only just started so if you’re in the neighbourhood: it’s free, and bound to be oodles of fun for kids of any age!

Myra and I had a wonderful time, and I even had a chance to see Asimo in action, live!

And JeeLabs ended up getting a spot on the sponsor page – not bad, eh?

Update – Forgot to mention that one of the requirements of RoboCup is that everything must be made open source after the event. This means that any advances made can be used by anyone else in the next year. What a fantastic way to stimulate shared progress!

Storage – let’s do the math

In Musings on Jun 22, 2013 at 00:01

Doesn’t it ever end? Storage density, I mean:


That’s a 16 GB µSD card. There are also 64 GB cards, but they are a bit more expensive.

Ok, let’s see – that’s 0.7 x 11 x 15 mm = 115.5 mm3 in volume.

Now let’s take a 2.5″ hard disk. The dimensions are 9.5 x 70 x 100 mm = 66,500 mm3.

So an already-small 2.5″ disk drive (or cell phone, if you prefer) takes as much space as 575 µSD cards, if you don’t count the card slot or other mounting options.

Let’s go for the 64 GB cards, that’s a whopping 36.8 terabytes of solid-state data. At €50 each, that’d require a €28K investment for about €0.80 per gigabyte. I’ll pass for now :)

But how much is 36 TB? Well, you’d be able to store over 5 KB of data for each living person on this planet, for example. And carry it in your pocket.
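The whole back-of-the-envelope calculation can be reproduced in a few lines (dimensions and prices exactly as quoted above):

```python
# Volume of a µSD card vs. a 2.5" hard disk, as computed above.
usd_mm3  = 0.7 * 11 * 15        # 115.5 mm³
disk_mm3 = 9.5 * 70 * 100       # 66,500 mm³

cards = int(disk_mm3 / usd_mm3)  # how many cards fit in one disk's volume
print(cards, "cards")            # 575

total_gb = cards * 64            # using the 64 GB cards
cost     = cards * 50            # at €50 per card
print(f"{total_gb / 1000:.1f} TB for €{cost}, €{cost / total_gb:.2f}/GB")

# Spread over ~7 billion people, that's still several KB each:
per_person_kb = total_gb * 1e9 / 7e9 / 1e3
print(f"{per_person_kb:.2f} KB per person")
```

The per-gigabyte price rounds to the €0.80 mentioned above, and the per-person figure is where the “over 5 KB each” comes from.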

Hello sir. One millisec… let me check your name, travel history, family tree, and CV.

What a terrifying thought.

The Knapsack problem

In Musings on Jun 20, 2013 at 00:01

This is related to the power consumption puzzle I ran into the other day.

The Knapsack problem is a mathematical problem, which is perhaps most easily described using this image from Wikipedia:


Given a known sum, but without information about which items were used to reach that sum (weight in this example), can we deduce what went in, or at least reason about it and perhaps exclude certain items?

This is a one-dimensional (constraint) knapsack problem, i.e. we’re only dealing with one dimension (weight). Now you might be wondering what this has to do with anything on this weblog… did I lose my marbles?

The relevance here is that reasoning about this is very much like reasoning about which power consumers are involved when you measure the total house consumption: which devices and appliances are involved at any point in time, given a power consumption graph such as this one?

Screen Shot 2013-06-17 at 09.35.49

By now, I know for example that the blip around 7:00 is the kitchen fridge:

Screen Shot 2013-06-17 at 09.35.49

A sharp start pulse as the compressor motor powers up, followed by a relatively constant period of about 80 W power consumption. Knowing this means I could subtract the pattern from the total, leaving me with a cleaned up graph which is hopefully easier to reason about w.r.t. other power consumers. Such as that ≈ 100 W power segment from 6:35 to 7:45. A lamp? Unlikely, since it’s already light outside this time of year.

Figuring out which power consumers are active would be very useful. It would let us put a price on each appliance, and it would let us track potential degradation over time, or things like a fridge or freezer needing to be cleaned due to accumulated ice, or how its average power consumption relates to room temperature, for example. It would also let me figure out which lamps should be replaced – not all lamps are energy efficient around here, but I don’t want to discard lamps which are only used a few minutes a day anyway.

Obviously, putting an energy meter next to each appliance would solve it all. But that’s not very economical and also a bit wasteful (those meters are going to draw some current too, you know). Besides, not all consumers are easily isolated from the rest of the house.

My thinking was that perhaps we could use other sensors as circumstantial evidence. If a lamp goes on, the light level must go up as well, right? And if a fridge or washing machine turns on, it’ll start humming and generating heat in the back.

The other helpful bit of information, is that some devices have clear usage patterns. I can recognise our dishwasher from its very distinctive double power usage pulse. And the kitchen boiler from its known 3-minute 2000 W power drain. Over time, one could accumulate the variance and shape of several of these bigger consumers. Even normal lamps have a plain rectangular shape with fairly precise power consumption pattern.

Maybe prolonged data collection plus a few well thought-out sensors around the house can give just enough information to help solve the knapsack problem?
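To give a feel for what such reasoning could look like, here is a tiny brute-force sketch of the one-dimensional knapsack idea applied to power readings. The appliance wattages are made-up illustrative values, not measurements from this setup: given a measured total, enumerate which subsets of known loads could explain it, within some tolerance.

```python
from itertools import combinations

# Hypothetical known loads (watts) - illustrative values only.
appliances = {
    "fridge": 80,
    "boiler": 2000,
    "lamp_hall": 40,
    "lamp_desk": 25,
    "router": 15,
}

def explain(total_w, tolerance=10):
    """Return all subsets of known appliances whose summed power
    matches the measured total to within the given tolerance."""
    names = list(appliances)
    matches = []
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            if abs(sum(appliances[n] for n in combo) - total_w) <= tolerance:
                matches.append(combo)
    return matches

# A 120 W reading is ambiguous: two different subsets explain it.
print(explain(120))
```

The ambiguity in the output is exactly the knapsack problem: extra evidence (light sensors, usage patterns, time of day) is what would let us pick the right subset. Brute force is fine for a handful of appliances, but the number of subsets doubles with each one added, which is why the problem gets hard.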

Power consumption puzzle

In Musings on Jun 17, 2013 at 00:01

Recently I had to go abroad for a few days, and given a new access path I just added to my HouseMon setup, it let me track things while away from home. Is that good? Eh, sort of…

The nice thing is that I can now check the solar production in the evening while away from home. And obviously I can then also check on the energy consumption – here’s what I saw:

Screen Shot 2013-06-15 at 19.03.26

Very odd. Something was turning on and off all the time! Here’s a close-up:

Screen Shot 2013-06-15 at 19.00.26

As you can see, I did figure it out in the end – but only after this had been going on for two days, on the phone with Liesbeth to try and identify the source of this behaviour.

It was a bit worrying – that’s a 300..400 watt power draw, every 45 seconds. Going on for days. This is enough energy to cause serious heat problems and my worry was that it was some sort of serious malfunction, with a thermal safety constantly kicking in.

Yikes. Especially when you’re away, knowing there is no one at home during the day…

The answer to this riddle isn’t 100% conclusive, because I did find a way to stop the pulsing, but turning power on again did not cause the pulsing to resume:

Screen Shot 2013-06-15 at 19.29.32

This is crazy. The only explanation I found, was the laser printer: I had printed a few pages before going away (and as far as I can remember, I did turn the printer off after use). The printer was behaving erratically, and I had to turn it on and off a few times.

So my conclusion is: the laser printer (which has a toner heater built-in that can indeed easily draw a few hundred watts) got stuck in some weird mode. And kept me worried for a few days – all just because of me being a geek and checking my graphs!

I don’t know what lesson to take home from all this: stop monitoring? stop checking? stop traveling? – or perhaps just stop worrying when there’s nothing you can do about it ;)

What if you’re lost on this site?

In Musings on May 29, 2013 at 00:01

Welcome to the weekly What-If series, also available via the Café wiki.

With over 1300 posts on this weblog, it’s easy to get lost. Maybe you stumbled onto one of the posts after a web search, and then kept reading. Some people told me they just started reading it all from start to finish (gulp!).

It’s not always easy to follow the brain dump of a quirky Franco-Dutch maverick :)

Let me start by listing the resources related to all this JeeStuff:

  • This daily weblog is my train-of-thought, year-in, year-out. Some projects get started and never end (low-power tweaking?), others get started and never get finished (sad, but true), yet others are me going off on some tangent, and finally there are the occasional series and mini-series – diving a bit deeper into some topic, or trying to explain something (electronics, usually).
  • There’s a chronological index, which I update from time to time using a little script. It lists just the titles and the tags. It’s a quick way to see what sort of topics get covered.
  • Most posts are tagged, see the “tag cloud” at the bottom of each page. Clicking on a term leads to the corresponding posts, as one (large) page. This is probably a good way to read about certain topics when you come to this web site for the first time.
  • At the bottom of each weblog page is a list of posts, grouped by month. Frankly, I don’t think it’s that useful – it’s mostly there because WordPress makes it easy to add.
  • And there’s the search field, again at the bottom of each page. It works quite well, but if your search term is too vague, you’ll get a page with a huge list of weblog posts.

Apart from this weblog, there is also the community site and the shop. It’s a bit unfortunate that they all look different, and that they all use different software packages, but that’s the way it is.

The community site contains a number of areas:

  • The café is a public wiki, with reference materials, projects, and pointers to other related pages and sites. Note that although it’s a wiki, it is not open for editing without signing up – that’s just to keep out spammers. Everyone is welcome, cordially invited even, to participate.

  • The software I write – with the help and contributions of others – ends up on GitHub so that anyone can browse the code, see what is being added / changed / fixed over time, and also create a fork to hack on. My code tends to be MIT-licensed wherever possible, so it’s all yours to look at, learn from, re-use, whatever.

  • There is documentation for several of the more important packages and libraries on GitHub. Updating is a manual step here, so it can lag occasionally. These pages are generated by Doxygen.

  • The hardware area lists all the products which have escaped from JeeLabs, and are ending up all over the world. It’s a reference area, which should be the final word on what each product is and isn’t.

  • There are several forums for discussion, making suggestions, asking questions, and posting notes about anything somehow related (or at least relevant) to JeeLabs.

  • For real-time discussion, there’s a #jeelabs IRC channel, though I rarely leave my IRC client running very long. Doesn’t seem to be used much, but it’s there to be used.

If you’re new to electronics, you could go through the series called Easy electrons. For a write-up about setting up a sensor network at home, see the Dive Into JeeNodes series.

What else? Let me know, please. I find it very hard to get in the mindset of someone reaching this site for the first time. If you are lost, chances are that others will be too – so any tips and suggestions on how to improve this site for new visitors would be a big help.

You can always reach me via the info listed on the “About” page.

Wireless, the CAN bus, and enzymes

In Musings on May 27, 2013 at 00:01

How’s that for a title to get your attention, eh?

There’s an interesting mechanism in communication, which has kept me intrigued for quite some time now:

  • with JeeNode and RF12-based sensors, wireless packets are often broadcast with only a sender node ID (most network protocols use both a source and a destination)
  • CAN bus is a wired bus protocol from the car industry; its messages do not contain a destination address, there is just an 11- or 29-bit “message ID”

What both these systems do (most of the time, but not exclusively), is to tag transmitted packets with where they came from (or what their “meaning” is) and then just send this out to whoever happens to be interested. No acknowledgements: in the case of wireless, some messages might get lost – with CAN bus, the reliability is considerably higher.

It’s a bit like hormones and other chemicals in our blood stream, added for a specific purpose, but not really addressed to an area in the body. That’s up to various enzymes and other receptors to pick up (I know next to nothing about biology, pardon my ignorance).

Couple of points to note about this:

  • Communicating 1-to-N (i.e. broadcasting) is just as easy as communicating 1-to-1, in fact there is no such thing as privacy in this context – anyone / anything can listen-in on any conversation. The senders won’t know.
  • There is no guaranteed delivery, since the intended targets may not even be around or listening. The best you can do, is look for the effects of the communication, which could be an echo from the receiving end, or some observable side-effect.
  • You can still set up focused interactions, by agreeing on a code / channel to use for a specific purpose: A can say “let’s discuss X”, and B can say “I’ll be listening to topic X on channel C”. Then both A and B could agree to tag all their messages with “C”, and they’ll be off on their own (public) discussion.
  • This mode of communicating via “channels” or “topics” is quite common, once you start looking for it. The MQTT messaging system uses “channels” to support generic data exchange. Or take the human-centric IRC, for example. Or UDP’s multicast.
  • Note that everything which has to do with discovery on a network also must rely on such a “sender-id-centric” approach, since by definition it will be about finding a path to some sender which doesn’t know about us.

Having no one-to-one communication might seem limiting, but it’s not. First of all, the nature of both wireless and busses is such that everything reaches everyone anyway. It’s more about filtering out what we’re not interested in. The transmissions are the same, it’s just the receivers which apply different filtering rules.

But perhaps far more importantly, is that this intrinsic broadcasting behaviour leads to a different way of designing systems. I can add a new wireless sensor node to my setup without having to decide what to do with the measurements yet. Also, I will often set up a second listen-only node for testing, and it just picks up all the packets without affecting my “production” setup. For tests which might interfere, I pick a different net group, since the RF12 driver (and the RFM12B hardware itself) has implicit “origin-id-filtering” built in. When initialised for a certain net group, all other packets automatically get ignored.

Even N-to-1 communication is possible by having multiple nodes send out messages with the same ID (and their distinguishing details elsewhere in the payload). This is not allowed on the CAN bus, btw – there, each sender has to stick to unique IDs.

The approach changes from “hey YOU, let me tell you THIS”, to “I am saying THIS”. If no one is listening, then so be it. If we need to make sure it was received, we could extend the conventions so that B nods by saying “got THIS” and then we just wait for that message (with timeouts and retries, it’s very similar to a traditional ACK mechanism).

It’s a flexible and natural model – normal speech works the same, if you think about it…
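To make the “I am saying THIS” model concrete, here is a minimal sketch of sender-id-centric messaging. The names and structure are invented for illustration; this is not the actual RF12 driver API. Senders just tag what they say, everything reaches everyone, and each receiver applies its own filter:

```python
# Minimal sketch of sender-id-centric broadcasting: packets carry only
# an origin tag, and receivers decide for themselves what to keep.

def broadcast(packet, receivers):
    # Everything reaches everyone; filtering happens at the receiving end.
    for rx in receivers:
        rx.receive(packet)

class Receiver:
    def __init__(self, name, interested_in):
        self.name = name
        self.interested_in = interested_in   # set of node IDs we care about
        self.log = []

    def receive(self, packet):
        if packet["from"] in self.interested_in:
            self.log.append(packet)

# Node 3 announces a reading - it neither knows nor cares who listens.
display = Receiver("display", interested_in={3})
tester  = Receiver("test-node", interested_in={3, 7})   # listen-only extra

broadcast({"from": 3, "temp": 21.5}, [display, tester])
broadcast({"from": 7, "light": 180}, [display, tester])

print(len(display.log), len(tester.log))
```

Note how the second, listen-only receiver was added without touching the senders or the “production” receiver at all, which is precisely the design freedom described above.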

PS. The reason this is coming up, is that I’m looking for a robust way to implement JeeBoot auto-discovery.

What if the sun doesn’t shine?

In Musings on May 22, 2013 at 00:01

Welcome to the weekly What-If series, also available via the Café wiki.

Slightly different question this time – not so much about investigating, but about coming up with some ideas. Because, now that solar energy is being collected here at JeeLabs and winter is over, there’s a fairly obvious pattern appearing:

Screen Shot 2013-05-14 at 12.47.42

Surplus solar energy during the day, but none in the evenings and at night for cooking + lighting (it looks like the heater is still kicking in at the end of the day, BTW).

This particular example shows that the amount of surplus energy would be more or less what’s needed in the evening – if only there were a way to store this energy for 6 hours…

Looking at some counters over that same period, I can see that the amount of energy is about 2.5 kWh. The challenge is to store this amount of energy locally. Some thoughts:

  • A 12 V lead-acid battery could be used, with 2.5 kWh corresponding to some 208 Ah.
  • But that’s a lower bound: let’s assume 90% conversion efficiency in both directions, i.e. 81% for charge + discharge (i.e. 19% losses) – we’ll now need a 257 Ah battery.
  • But the lifetime of lead-acid batteries is only good if you don’t discharge them too far. So-called deep cycle batteries are designed specifically for cases like these, where the charge/discharge is going to happen day in day out. To use them optimally, you shouldn’t discharge them over 50%, so we’ll need a battery twice as large: 514 Ah.
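The sizing steps in the list above are easy to redo as a sketch (round-trip efficiency and depth-of-discharge as assumed above):

```python
# Lead-acid battery sizing for storing one day's solar surplus.
surplus_kwh = 2.5
voltage     = 12.0

ah_ideal  = surplus_kwh * 1000 / voltage   # lossless lower bound
ah_losses = ah_ideal / (0.9 * 0.9)         # 90% in, 90% out -> 81% round trip
ah_deep   = ah_losses / 0.5                # discharge to 50% at most

print(f"{ah_ideal:.0f} Ah ideal")                 # ≈ 208 Ah
print(f"{ah_losses:.0f} Ah with losses")          # ≈ 257 Ah
print(f"{ah_deep:.0f} Ah at 50% max discharge")   # ≈ 514 Ah
print("three 230 Ah units:", 3 * 230, "Ah")       # 690 Ah - enough headroom
```

Note how quickly the requirement more than doubles once conversion losses and battery-friendly discharge limits are taken into account.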

Let’s see… three of these 12V 230 Ah units could easily do the job:

Screen Shot 2013-05-14 at 13.14.23

Note that the cost of the batteries alone will be €2,000 and their total weight 200 kg!

There’s an interesting article about the energy shortage after the Fukushima disaster, including a good diagram about a somewhat similar issue (lowering evening peak use):


Although driven by a much harsher reality in that article, I wouldn’t be surprised to see new “one-day storage” solutions come out of all this, usable in the rest of the world as well.

For winter-time, I suppose one could heat up a large water tank, and then re-use it for heating in the evening. Except, ehm, that there’s a lot less surplus energy in winter.

Are there any other viable “semi off-grid” options out there? A flywheel in the basement?

PS. New milestone reached yesterday: total solar production so far has caught up with the consumption here at JeeLabs during that same period (since end October, that is).

Winding down

In News, Musings on Apr 22, 2013 at 00:01

The JeeDay 2013-04 event is over.

I would like to warmly thank the 40 or so people who attended on Friday and Saturday. It is clear to me from the kind follow-up emails that the event was appreciated by many of you and I really hope that everyone got something useful and stimulating out of this.

Allow me to also thank the “anonymous sponsor” at this point for funding the venue, the coffee and drinks, and Saturday’s lunch. I’ve passed on your and my appreciation, and it has gratefully been accepted. As several people have pointed out, this whole concept of an anonymous sponsor is really a contradiction in terms, so let’s all just cherish the fact that philanthropy (and mystery) still exists, even in today’s western societies.

This is probably the point where I’m expected to write sentences full of superlatives, self-congratulatory remarks, let’s-conquer-the-world type of pep-talk, congratulations for the speakers and their choice of interesting topics, all sorts of grandiose plans, and where I’d also describe how stimulating all the discussions on the side turned out to be.

I could, and it’d be true. But I won’t…

Instead, I’d like to give this a somewhat different (personal / philosophical) twist.

We’re focused on success. We crave rewards. We seek recognition. So when something good (for some definition of “good”) happens, we want to take it further.

Again. Better. More.

Yet to me, that’s not what JeeDay was about. Sure, we could do it again. In fact, I’d love to and I’ve even sort-of committed to organising another JeeDay a year from now. We’ll see.

But to me, JeeDay is not about the next step or some future trend. It’s about this event we just had. Some 10 talks from people describing what they like to do in their free time. That’s quite a special situation, when you stop and think about it: here we all are, a few dozen geeks with a common techie interest, and this is what we choose to spend our time, our creative energies, and our money on. We could do anything, yet this is what we want to do. In. Our. Free. Time.

Now of course, everyone’s reasons will differ. But to me, it’s pretty amazing: there’s rarely a financial reward (heck, it usually costs money!). There’s often not much recognition. These are not TED talks, we’re not working on some high-visibility successful project and showing the world. We just tinker in private, we come up with stuff, we learn, and we like doing it.

In my view, this is about the top two tiers of Maslow’s hierarchy of needs:


The basic idea being that you can’t really get to focus on the levels above before the levels underneath have more or less been covered.

This is – again, in my perception – not about success, and probably not even about peer recognition, but about the intrinsic fun of discovery, invention, creation, and problem-solving. And about finding out how others deal with this. It’s no accident that most of it happens as open source, either: open source (hardware + software) and sharing is what floats to the top when the intrinsic puzzles and their solutions dominate.

In a world where so much is about ownership, money, and time, I think that’s precious.

I hope JeeDay has helped you find and follow your passion. Everything else is secondary.

PS. The mystery topic in my presentation was JeeBoot – more to follow soon.

Energy savings…

In News, Musings on Mar 31, 2013 at 00:01

Here’s a view of the solar energy production earlier this week here at JeeLabs:

Screen Shot 2013-03-26 at 18.49.36

Best day so far … 19.3 kWh in one day: that’s some 2.5 x our average daily consumption!

This graph was made with HouseMon, which is still in the early stages but I’m viewing it on a daily basis – the Status and Graphs pages are already quite practical. Then again, it’s a constant reminder that the progress of this project is considerably slower than I had hoped when I started out. One reason for that is that I’m still hesitant to make some major design decisions – mostly because I don’t have enough experience and don’t feel confident enough with Node.js and CoffeeScript yet. So many things still feel awkward :(

Speaking of insufficient progress… it’s time to switch off:

wallpaper power symbol green

(image by TheBigDaveC, as found on this site)

I’m going to take a brief break, and interrupt this daily flow of weblog posts for a while.

It has happened before, and it will happen again: I want to clear my head and focus on some projects which take a bit more concentration than I seem to be finding these days.

But no worries: this daily weblog will resume before JeeDay (April 19 + 20), so there will still be enough time to get the latest info and news out to you.

Soooo… see you then, and more importantly: see you there !

(Pssst… in case you haven’t seen this… let that nano stuff inspire you… pretty amazing!)

Software development

In Software, Musings on Mar 28, 2013 at 00:01

As you probably know, I’m now doing all my software development in just two languages – C++ in the embedded world and JavaScript in the rest. Well, sort of, anyway:

  • The C++ conventions I use are very much watered down from where C++ has been going for some time (with all the STL machinery). I am deliberately not using RTTI, exceptions, namespaces, or templates, because they add far too much complexity.
  • Ehm… C++ templates are actually extremely powerful and would in fact be a big help in reducing code overhead on embedded processors, but: 1) they require a very advanced understanding of C++, 2) that would make my code even more impenetrable to others trying to understand it all, and 3) it can be very hard to understand and fix templating-related compiler error messages.
  • I also use C++ virtual functions sparingly, because they tend to generate more code, and what’s worse: VTables use up RAM, a scarce resource on the ATmega / ATtiny!
  • As for programming in JavaScript: I don’t, really. I write code in a dialect called CoffeeScript, which then gets compiled to JavaScript on-the-fly. The main reason is that it fixes some warts in the JavaScript syntax, and keeps the good parts. It’s also delightfully concise, although I admit that you have to learn to read it before you can appreciate the effect it has on making the essence of the logic stand out better.
  • There is an incredible book called CoffeeScript Ristretto by Reginald Braithwaite, which IMO is a must read. There is also a site which appears to have the entire content of that book online (although I find the PDF version more readable). Written in the same playful style as The Little Schemer, so be prepared to get your mind twisted by the (deep and valuable) concepts presented.
  • To those who dismiss today’s JavaScript, and hence CoffeeScript, on the basis of its syntax or semantics, I can only point to an article by Paul Graham, in which he describes The Blub Paradox. There is (much) more to it than meets the eye.
  • In my opinion, CoffeeScript + Node.js bring together the best ideas from Scheme (functions), Ruby (notation), Python (indentation), and Tcl (events).
  • If you’re craving for more background, check out How I Learned To Enjoy JavaScript and some articles it links to, such as JS: The Right Way and Idiomatic JavaScript.
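
To make the CoffeeScript-to-JavaScript relationship concrete: a one-liner such as `square = (x) -> x * x` compiles down to ordinary JavaScript along these lines (the actual compiler output differs in minor details, such as hoisting the `var` declarations and wrapping the file in a closure):

```javascript
// Roughly what the CoffeeScript compiler emits for: square = (x) -> x * x
// CoffeeScript functions implicitly return their last expression.
var square = function (x) {
  return x * x;
};

console.log(square(7)); // prints 49
```

The generated code stays readable, which is part of why switching back and forth between the two is painless.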

I’m quite happy with the above choices now, even though I still feel frustratingly inept at writing CoffeeScript and working with the asynchronous nature and libraries of Node.js – but at every turn, the concepts do “click” – this really feels like the right way to do things, staying away from all the silliness of statically compiled languages and datatypes, threads, and blocking system calls. The Node.js community members have taken some very bold steps, adopted what people found worthwhile in Ruby on Rails and other innovations, and lived through the pain of the all-async nature by coming up with libraries such as Async, as well as other great ones like Underscore, Connect cq. Express, Mocha, and Marked.
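
The all-async pain that a library such as Async smooths over can be sketched with a tiny "series" helper — this is an illustrative stand-in, not the real Async API:

```javascript
// Minimal sketch of a "series" helper in the spirit of the Async library
// (illustrative only): run callback-style tasks one after another,
// collecting their results, aborting on the first error.
function series(tasks, done) {
  var results = [];
  (function next(i) {
    if (i === tasks.length) return done(null, results);
    tasks[i](function (err, value) {
      if (err) return done(err);
      results.push(value);
      next(i + 1);
    });
  })(0);
}

series([
  function (cb) { cb(null, 'open port'); },
  function (cb) { cb(null, 'decode packet'); }
], function (err, results) {
  console.log(results); // [ 'open port', 'decode packet' ]
});
```

The whole point is that each task hands control back via its callback instead of blocking, which is what makes Node.js feel so different from thread-based servers.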

I just came across a very nice site about JavaScript, called SuperHero, via this weblog post. Will be going through it soon, to try and improve my understanding and (hopefully) skills. If you like videos, check out PeepCode, i.e. this one on Node.js + Express + CoffeeScript.

As for the client-side JavaScript framework, I’m using AngularJS. There’s a nice little music player example here, with the JavaScript here, illustrating the general style quite well, IMO.

Isn’t it amazing how much knowledge and tools we have at our fingertips nowadays?

It seems to me that for software technologies and languages to gain solid traction and momentum, it will all come down to how “learnable” things are. Because we all start from scratch at some point. And we all only have so many hours in the day to learn things well. There is this never-ending struggle between picking the tools which are instantly obvious (but perhaps limited in the long run) vs. picking very advanced tools (which may take too much time to become proficient in). Think BASIC vs. Common Lisp, for example. It gets even worse as the world is moving on – how can anyone decide to “dive in” and go for the deep stuff, when it might all be obsolete by the time the paybacks really set in?

In my case, I want to not only build stuff (could have stayed with Tcl + JeeMon for that), but also take advantage of what others are doing, and – with a bit of luck – come up with something which some of you find attractive enough to build upon and extend. Time will tell whether the choices made will lead there…

One other very interesting new development I’d like to mention in this context, is the Markdown-based Literate Programming now possible with CoffeeScript: see this weblog post by Jeremy Ashkenas, the inventor & implementor of CoffeeScript. I’m currently looking into this, alongside more traditional documentation tools such as Docco and Groc.

Would it make sense to write code for HouseMon as a story? I have no idea, yet…

Heat imaging at JeeLabs

In Musings on Mar 14, 2013 at 00:01

Yesterday’s images were just a diversion from the task of “measuring the house”, of course. Here are some outside thermal images, taken mid-February:



But the really interesting bit comes from sucking air out of the house using the blower door on a cold day, to see where cold air comes leaking back in:



You have to take the measurement scale into account, but some of the (over 200!) images made were absolutely shocking. There are some major air leaks in and around the house!

As a final test, smoke was used: you turn the blowers around to create a slight over-pressure, turn on a big smoke generator (of the kind used in shows on-stage), get out of the house, and watch the smoke appear – it was literally everywhere, even in the neighbours’ houses!

This sort of testing needs to be pre-announced to the local fire department…

Retro kitsch

In Musings on Mar 6, 2013 at 00:01

Here’s a crazy book for ya’ – one which you can’t quite judge by its cover, in fact:


Well… I fell for it, and decided to get one:


Just think of what this represents: 500 GB of storage in the space of a single book.

Not bad for compression: half a million books in the size of… a book? :)

JeeDay => April 20

In Musings on Feb 24, 2013 at 00:01

It’s been four and a half years of fun since I had this crazy idea to start JeeLabs, and it’s been four years also since the JeeNode was born. An excellent reason to celebrate, eh?

Coming April 19th and 20th (Friday evening and Saturday), I’m going to kick off JeeDay:

Meet face-to-face with fellow PhysComp / WSN / JeeStuff enthusiasts and JC + Martyn. Get the latest news, share your ideas and show off your project (or pictures of it). Discussions, presentations, hands-on sessions – it’s all possible, if we organise ourselves and our time appropriately!

The topics we could cover include things like:

  • Wireless Sensor Networks
  • Ultra-low power nodes in the Arduino world
  • Home monitoring and home automation
  • JeeLabs products Q & A
  • Solutions for dealing with AC mains
  • Funky sensors and clever displays
  • How to lower your energy bill
  • Soldering and measurement techniques
  • Hands-on with an oscilloscope
  • Designing and manufacturing PCBs
  • Enclosures, laser-cutting, 3D printing
  • Hack sessions? Debug sessions?
  • Bring and show your projects, especially if in-progress
  • Ideas for future projects and products
  • Presentations, presentations, presentations

Whoa, that list could go on forever… a huge set of topics!

The location will be in Utrecht or in Houten (5 min by train from Utrecht), which is located in the middle of the Netherlands. There is plenty of accommodation nearby for those who want to stay overnight. Come and visit the Netherlands, you’ll enjoy it!

We can extend this to Sunday, if I can find a suitable venue and if there is enough interest, although perhaps that’s a bit too ambitious for such a first event.

Fees would be just to cover costs, drinks, etc. Also some sandwiches or pizza to get us through the day. Should all be doable for €15 .. €25.

I have no idea yet how many people would be interested and might be able to come, so I’ve set up a meeting scheduler – if you’re considering participating, please, please, please do add your name and indicate the time range – 10? 20? 50? 100? people – Let’s find out!

Further details will be added to the JeeDay 13.04 wiki page, as preparations progress. The sooner you respond, the more chances that I can figure out a proper venue and how to make it all happen. And… if you have any tips or suggestions, please get in touch now!

It’ll be great to meet face-to-face, it can be informative for all, and it’ll definitely be fun! :)

a s d f – j k l ;

In Musings on Feb 19, 2013 at 00:01

What can I say? Some things are worth the pain and the dip, I guess…

Exactly two weeks ago, I mentioned that one of the things I wanted to try was to learn to touch-type. You know, using this curious contraption without looking at it all the time…


Actually, the one I use has an extra key, the “~” / “`” key is in a different place, and the Shift and Return keys have a different size. And to make matters, ehm, more “interesting”, I even followed the suggestion to change the Caps-lock key into an alternate Control key (consider for a moment what that does to typing “TODO” in capitals as a touch-typist!).

Right now, after two weeks, I’m still typing quite a bit slower than I used to, am making tons and tons of mistakes (especially for the hard / far-away keys, and all the tricky Ctrl / Shift combinations), and I’m probably hitting the backspace key almost as often as all the other keys combined to correct my never-ending mistakes.

The worst bit is that this way of typing is still much more distracting than it used to be.

But… you know what? I really, really think it’s going to work and pay off!

Weird as it sounds, and hard as it feels, the letter keys are slowly starting to fall into place. Even after a mere two weeks (I did have some prior experience, to be fair). The thing that is taking most time to adjust to, is not the change in overall speed and effort, but the fact that typing speed is much more variable right now, depending on whether I’m typing letters or hitting other keys, and especially whether the Ctrl and/or Shift keys are involved (not to mention the Mac’s Option and Command keys!). So what seems to be happening is that words already come out faster than I used to type, but everything else really really takes a lot of time and mental effort. In short: punctuation and symbols are a pain, and with programming languages that means lots and lots of keystrokes are… t e d i o u s !

If you write a lot of prose, it’s pretty obvious that touch typing will be a wise investment. But now I’m starting to think that even with something as different as typing code, i.e. a notation full of weird characters, and even typing commands in Vim (where all sorts of key combinations have to be used all the time, and where mistakes are awkward!), the task of learning to type without looking at your fingers or the keyboard is indeed worth the effort.

(Speaking of Vim: it’s easier to type Ctrl-[ than it is to hit the far-away Escape key!)

It’s not even really about looking less up and down, as I thought. It’s about muscle memory, about off-loading an activity to a different part of your brain, and about focus. While writing this weblog post, which is obviously mostly text, I already feel more “relaxed” while typing in this new way. It’s like having an extra assistant – albeit still clumsy ;)

Pretty crazy stuff, if you think about it. I’m not making it up. This IS working!

Up next: the final posts of the Dive Into JeeNodes series – stay tuned…


In Musings on Feb 12, 2013 at 00:01

Wow. What a lovely time of year (especially from inside a comfy home):


The next two pictures were taken by Liesbeth, in a little walk around the neighbourhood:


Not much solar power right now (150W), despite the sun – panels are covered up in snow!


Regular transmissions to resume tomorrow…


In Musings on Feb 7, 2013 at 00:01

About an hour’s drive south of Houten, in the Netherlands, lies the city of Eindhoven. This is where the Philips family started, and as a technological giant, the company has shaped the city over the decades. Now, it’s also the beating heart of a fascinating new “brainport”, with a University of Technology (TU/e) and a creative Design Academy (DAE).

Dutch Design is known around the world by now.

And then there’s Strijp – an area being developed where the Philips factories once were.

Wow – if only I could be 20 again, and live that life and learn from scratch today!

These trends fascinate me. The idea that future generations will grow up in such a fertile environment, changing everything in the world around them, down to the very fabric of living together and applying creativity and technology in ways never before imagined.

If you have 20 minutes, and turn down your speakers a bit… here’s a video describing the vision and work of the city planners and architects involved in the Strijp project:


Wow. Just wow. If this doesn’t lead to an explosion of creative makers and idealists applying technology to shape the world of the(ir) future, then I don’t know what will.

I do have to add that I have a special emotional bond to what is starting to happen there, as our daughter Myra and her boyfriend Pieter, both currently studying at the DAE, are about to settle into one of the first apartments being set up in those re-purposed Philips buildings at the “Torenallee”.

Yeah, ok, so I’m a proud papa, I guess… :)

But hey, whatever. It’s very exciting to see these trends, ehm, materialise in today’s world!

HouseMon resources

In AVR, Hardware, Software, Musings, Linux on Feb 6, 2013 at 00:01

As promised, a long list of resources I’ve found useful while starting off with HouseMon:

JavaScript – The core of what I’m building now is centered entirely around “JS”, the language behind many sites on the web nowadays. There’s no way around it: you have to get to grips with JS first. I spent several hours watching most of the videos on Douglas Crockford’s site. The big drawback is the time it takes…

Best book on the subject, IMO, if you know the basics of JavaScript, is “JavaScript: The Good Parts” by the same author, ISBN 0596517742. Understanding what the essence of a language is, is the fastest way to mastery, and his book does exactly that.

CoffeeScript – It’s just a dialect of JS, really – and the way HouseMon uses it, “CS” automatically gets compiled (more like “expanded”, if you ask me) to JS on the server, by SocketStream.

The most obvious resource, the CoffeeScript home page, is also one of the best ways to understand it. Make sure you are comfortable with JS, even if not in practice, before reading that home page top to bottom. For an intriguing glimpse of how CS code can be documented, see this example from the CS compiler itself (pretty advanced stuff!).

But the impact of CS goes considerably deeper. To understand how Scheme-like functional programming plays a role in CS, there is an entertaining (but fairly advanced) book called CoffeeScript Ristretto by Reginald Braithwaite. I’ve read it front-to-back, and intend to re-read it in its entirety in the coming days. IMO, this is the book that cuts to the core of how functions and objects work together, and how CS lets you write on a high conceptual level. It’s a delightful read, but be prepared to scratch your head at times…

For a much simpler introduction, see The Little Book on CoffeeScript by Alex MacCaw, ISBN 1449321046. Also available on GitHub.

Node.js – I found the Node.js in Action book by Mike Cantelon, TJ Holowaychuk and Nathan Rajlich to be immensely useful, because of how it puts everything in context and introduces all the main concepts and libraries one tends to use in combination with “Node”. It doesn’t hurt that one of the most prolific Node programmers also happens to be one of the authors…

Another useful resource is the API documentation of Node itself.

SocketStream – This is what takes care of client-server communication, deployment, and it comes with many development conveniences and conventions. It’s also the least mature of the bunch, although I’ve not really encountered any problems with it. I expect “SS” to evolve a bit more than the rest, over time.

There’s a “what it is and what it does” type of demo tour, and there is a collection on what I’d call tech notes, describing a wide range of design docs. As with the code, these pages are bound to change and get extended further over time.

Redis – This a little database package which handles a few tasks for HouseMon. I haven’t had to do much to get it going, so the README plus Command Summary were all I’ve needed, for now.

AngularJS – This is the most framework-like component used in HouseMon, by far. It does a lot, but the challenge is to understand how it wants you to do things, and although “NG” is not really an opinionated piece of software, there is simply no other way to get to grips with it, than to take the dive and learn, learn, learn… Let me just add that I really think it’s worth it – NG can be magic on the client side, and once you get the hang of it, it’s in fact an extremely expressive way to create a responsive app in the browser, IMO.

There’s an elaborate tutorial on the NG site. It covers a lot of ground, and left me a bit overwhelmed – probably because I was trying to learn too much as quickly as possible…

There’s also a video, which gives a very clear idea of NG, what it is, how it is used, etc. Only downside is that it’s over an hour long. Oh, and BTW, the NG FAQ is excellent.

For a broader background on this sort of JS frameworks, see Rich JavaScript Applications by Steven Sanderson. An eye opener, if you’ve not looked into RIA’s before.

Arduino – Does this need any introduction on this weblog? Let me just link to the Reference and the Tutorial here.

JeeNode – Again, not really much point in listing much here, given that this entire weblog is more or less dedicated to that topic. Here’s a big picture and the link to the hardware page, just for completeness.

RF12 – This is the driver used for HopeRF’s wireless radio modules, I’ll just mention the internals weblog posts, and the reference documentation page.

Vim – My editor of choice, lately. After many years of using TextMate (which I still use as code browser), I’ve decided to go back to MacVim, because of the way it can be off-loaded to your spine, so to speak.

There’s a lot of personal preference involved in this type of choice, and there are dozens of blog posts and debates on the web about the pros and cons. This one by Steve Losh sort of matches the process I am going through, in case you’re interested.

Best way to get into vim? Install it, and type “vimtutor“. Best way to learn more? Type “:h<CR>” in vim. Seriously. And don’t try to learn it all at once – the goal is to gradually migrate vim knowledge into your muscle memory. Just learn the base concepts, and if you’re serious about it: learn a few new commands each week. See what sticks.

To get an idea of what’s possible, watch some videos – such as the vim entries on the DAS site by Gary Bernhardt (paid subscription). And while you’re at it: take the opportunity to see what Behaviour Driven Development is like, he has many fascinating videos on the subject.

For a book, I very much recommend Practical Vim by Drew Neil. He covers a wide range of topics, and suggests reading up on them in whatever order is most useful to you.

While learning, this cheatsheet and wallpaper may come in handy.

Raspberry Pi – The little “RPi” Linux board is getting a lot of attention lately. It makes a nice setup for HouseMon. Here are some links for the hardware and the software.

Linux – Getting around on the command line in Linux is also something I get asked about from time to time. This is important when running Linux as a server – the RPi, for example.

I found a resource which appears to do a good job of explaining all the basic and intermediate concepts. It’s also available as a book, called “The Linux Command Line” by William E. Shotts, Jr. (PDF).

There… the above list ought to get you a long way with all the technologies I’m currently messing around with. Please feel free to add pointers and tips in the comments, if you think of other resources which can be of use to fellow readers in this context.

Orthogonal learning

In Musings on Feb 5, 2013 at 00:01

It’s been about 2 months now since I switched back to doing mostly software development – in an attempt to get a new home monitoring software going. That does not mean that electronics and hardware design are off the table – far from it, in fact – but the switch has turned out to be essential for me to break out of the box, and find the headroom needed to make things happen.

This is not just a post about HouseMon, however. What I’d like to describe, is the process I went through, and some first experiences and notes.

Picking a new programming language is no big deal, really, but in this case I intend to go in very deep. I don’t just want to program in CoffeeScript, I want to become productive in it. If this will take several months – as I expect – then so be it. As the saying goes:

“If a job’s worth doing, it’s worth doing well.”

Note that I didn’t set out to use CoffeeScript, or Node.js, or AngularJS. Half a year ago, I wanted to use ZeroMQ and Lua, in fact. But there were tiny gaps and little doubts in my mind, and I thought it best to let the whole issue rest. Turns out that this strategy works really well (in cases where one can afford the extra delay, of course) – if a decision doesn’t solidify once taken… wait, don’t rush: it may be trying to tell you something!

Of course every switch will be painful, to some degree. The habitual is, by definition, more convenient. A switch to something new and unfamiliar is a step back, sometimes even a huge step back. And at the end of this post I’ll tell you about one more switch which is also excruciatingly painful for me this very moment…

Not all changes turn out well over time. There are several forum choices to prove it, as all long-time readers and participants at JeeLabs know, and which I still feel bad about.

So how does one pick new technology, apart from choosing based on requirements?

My answer to this is now: follow your passion, but don’t let it blind you…

Go for what interests you, go read and surf like crazy, and try the things which you like. Because unconsciously, your mind will make lots of choices for you, leaving you free to apply your reasoning for making trade-offs which need to be made anyway.

The second guideline IMO, is to never go down a rabbit hole – if all the decisions you have to make end up becoming inter-dependent, to the point that you have to accept a single package deal, then… stop. Choices are compromises. There is no silver bullet. If something looks too good to be true – then it probably is (hmmm… where have I heard that before?).

I’ve taken quite a few very far-reaching decisions, lately, w.r.t. HouseMon. But watch this:

  • CoffeeScript is not essential – it interoperates 100% with standard JavaScript
  • Node.js is not essential – it could be replaced by Ruby On Rails, or Django
  • SocketStream is not essential – it merely streamlines the use of WebSockets
  • Redis is not essential – it could have been ZeroMQ, or Mosquitto, or some DB
  • AngularJS is not essential – it could be replaced by Knockout, or Backbone.js
  • Safari is not essential – everything will work fine with Chrome or Firefox
  • the Arduino IDE is not essential – underneath it’s all done with avr-gcc
  • the JeeNodes are not essential – there are several µC alternatives out there
  • the RFM12B is not essential – again, there are alternatives one could use
  • the Mac I use is not essential – it’s merely my personal platform choice
  • Vim is not essential – it just happens to be the editor I’ve chosen to work with

The key phrase is “is not essential”, and the key concept is orthogonality – the choices made above are to a large degree (but not 100%) independent of each other!

If any of the above turns out to be a disappointment, I can still get rid of it, without the whole plan unraveling and blowing up in my face (I hope…).

Which brings me to the main point of this post: by having a certain amount of de-coupling in there, something else also becomes quite an important benefit… the task of learning new stuff carries less risk! If any of the above hits a dead end, I will lose the time and energy invested in that part of the journey – but not all of it.

It’s really the same as in the real world: don’t put all your eggs in the same basket, as the saying goes. If one of ’em crashes and breaks, it won’t be the end of the world that way.

With so many eggs, eh, I mean technology choices, forcing me to re-learn, there is one more which I’ve decided to go for. Long ago, as a kid I took a course in touch typing, but never found the courage to carry it through for programming – all those nasty characters and punctuation marks were (and are) scaring the heck out of me! Well, no more excuses and no more hunt-and-peck: this post was written in touch-typing mode. Let me just say that my hands are hurting like crazy right now, and that it took – a g e s – to write!

Tomorrow, I’ll post an annotated list of pointers, for many of the items listed above, about the information I found which is really helping me forward right now. You may not make all the same choices, but with a bit of luck there will be something in there for everybody.

Now what?

In Musings on Jan 31, 2013 at 00:01

(Warning, this post is a bit about playing the devil’s advocate…)

Ok, so now I have this table with incoming data, updated in real time – Ajax polling is so passé – with unit conversions, proper labeling, and locations associated with each device:

Screen Shot 2013-01-28 at 16.19.55 copy 2

As I’ve said before: neat as a gimmick, but… yawn, who wants to look at this sort of stuff?

Which sort of begs the question: what’s the point of home monitoring?

(Automation is a different story: rules and automation could certainly be convenient)

What’s the point? Flashy / fancy dashboards with clever fonts? Zoomable graphs? Statistics? Knowing which was the top day for the solar panels? Calculating average temperatures? Predicting the utility bill? Collecting bragging rights or brownie points?

Don’t get me wrong: I’ve got a ton of plans for all this stuff (some of them in other directions than what you’d probably expect, but it’s way too early to talk about that).

But for home monitoring… what’s the point? Just because we can?

The only meaningful use I’ve been able to come up with so far is to save on energy (oh wait, that’s now called “reducing the carbon footprint”). And that has already been achieved to a fairly substantial degree, here at JeeLabs. For that, all I need really, are a few indicators to see the main energy consumption patterns on a day-to-day basis. Heck, a couple of RGB LEDs might be enough – so who needs all these figures, once you’ve interpreted them, drawn some conclusions, and adjusted your behaviour?

The key-value straightjacket

In Software, Musings on Jan 5, 2013 at 00:01

It’s probably me, but I’m having a hard time dealing with data as arrays and hashes…

Here’s what you get in just about every programming language nowadays:

  • variables (which is simply keyed access by name)
  • simple scalars, such as ints, floats, and strings
  • indexed aggregation, i.e. arrays: blah[index]
  • tagged aggregation, i.e. structs: blah.tag
  • arbitrarily nested combinations of the above

JavaScript structs are called objects and tag access can be blah.tag or blah['tag'].
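
A quick illustration of those access forms, and of how the aggregates nest (the sample data is made up):

```javascript
// Dot access and bracket access are interchangeable for plain keys,
// and arrays / objects nest freely (sample data is made up).
var node = {
  location: 'living room',
  readings: [ { tag: 'temp', value: 21.5 }, { tag: 'humi', value: 48 } ]
};

console.log(node.location === node['location']); // true
console.log(node.readings[0].tag);               // temp

// bracket access also works with computed keys:
var key = 'value';
console.log(node.readings[1][key]);              // 48
```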

It would seem that these are all one would need, but I find it quite limiting.

Simple example: I have a set of “drivers” (JavaScript modules), implementing all sorts of functionality. Some of these need only be set up once, so the basic module mechanism works just fine: require "blah" can give me access to the driver as an object.

But others really act like classes (prototypal or otherwise), i.e. there’s a driver to open a serial port and manage incoming and outgoing packets from say a JeeLink running the RF12demo sketch. There can be more than one of these at the same time, which is also indicated by the fact that the driver needs a “serial port” argument to be used.

Each of these serial port interfaces has a certain amount of configuration (the RF12 band/group settings, for example), so it really is a good idea to implement this as objects with some state in them. Each serial port ends up as a derived instance of EventEmitter, with raw packets flowing through it, in both directions: incoming and outgoing.

Then there are packet decoders, to make sense of the bytes coming from room nodes, the OOK relay, and so on. Again, these are modules, but it’s not so clear whether a single decoder object should decode all packets on any attached JeeLink or whether there should be one “decoder object” per serial interface object. Separate objects allow more smarts, because decoders can then keep per-sensor state.

The OOK relay, in turn, receives (“multiplexes”) data from different nodes (and different types of nodes), so this again leads to a bunch of decoders, each for a specific type of OOK transmitter (KAKU, FS20, weather, etc).

As you can see, there’s sort of a tree involved – taking incoming packet data and dissecting / passing it on to more specific decoders. In itself, this is no problem at all – it can all be represented as nested driver objects.

As a final step, the different decoders can all publish their readings to a common EventEmitter, which will act as a simple bus. Same as an MQTT broker with channels, with the same “nested key” strings to identify each reading.

So far so good. But that’s such a tiny piece of the puzzle, really.

Complexity sets in once you start to think about setup and teardown of this whole arrangement at run time (i.e. configuration in the browser).

Each driver object may need some configuration settings (the serial port name for the RF12demo driver was one example). To create a user interface and expose it all in the browser, I need some way of treating drivers as a generic collection, independent of their nesting during the decoding process.

Let’s call the driver modules “interfaces” for now, i.e. in essence the classes from which driver instances can be created. Then the “drivers” become instantiations of these classes, i.e. the objects which actually do the work of connecting, reading, writing, decoding, etc.

One essential difference is that the list of interfaces is flat, whereas a configured system with lots of drivers running is often a tree, to cope with the gradual decoding task described a bit above.

How do I find all the active drivers of a specific interface? Walk the driver tree? Yuck.

Given a driver object, how do I find out where it sits in the tree? Store path lists? Yuck.

Again, it may well be me, but I’m used to dealing with data structures in a non-redundant way. The more you link and cross-link stuff (let alone make copies), the more hassles you run into when adding, removing, or altering things. I’m trying to avoid “administrative code” which only keeps some redundant invariants intact – as much as possible, anyway.

Aren’t data structures supposed to be about keeping each fact in exactly one place?

PS. My terminology is still in flux (and a bit of a mess): interfaces, drivers, devices, etc…

Update – I should probably add that my troubles all seem to come from trying to maintain accurate object identities between clients and server.

An eventful year

In Musings on Dec 26, 2012 at 00:01

Maybe it’s a bit soon’ish to talk about this, but I often like to go slightly against the grain, so with everybody planning to look back at 2012 a few days from now, and coming up with interesting things to say about 2013 – heck, why not travel through time a bit early, eh?

The big events for me this year were the shop hand-over to Martyn and Rohan Judd (who continue to do a magnificent job), and a gradual but very definitive re-focusing on home energy saving and software development. Product development, i.e. physical computing hardware, is taking place in somewhat less public ways, but let me just say that it’s still as much part of what I do as ever. The collaboration with Paul Badger of Modern Device is not something you hear from me about very much, but we’re in regular and frequent discussion about what we’re both doing and where we’d like to go. For 2012, I’m very pleased with how things have worked out, and mighty proud to be part of this team.

The year 2012 was also the year which brought us large-scale online courses, such as Udacity and Coursera. I have to admit that I signed up for several of their courses, but never completed them. Did enough to learn some really useful things, but also realised that it would take probably 2 full days per week to actually complete this (assuming it wouldn’t all end up being above my head…). At the time – in the summer – I just didn’t have the peace of mind to see it through. So this is back on the TODO list for now.

My shining light is Khan Academy, an initiative which was started in 2006 by one person:

Screen Shot 2012-12-25 at 00.07.58

Here’s an important initiative from 2012 which I’d really like to single out at this point:

Khan Academy Computer Science Launch with Salman Khan and John Resig

To me, this isn’t about the Khan Academy, Salman Khan, John Resig, or JavaScript. What is happening here is that education is changing in major ways, and now the tools are changing in equally fundamental ways. This world is becoming a place for people who take their future into their own hands. And there’s nothing better than the above to illustrate what that means for a domain such as Computer Science. This isn’t about a better teacher or a better book – this is about a new way of learning. On a global scale.

The message is loud and clear: “Wanna go somewhere? Go! What’s holding you back?” – and 2012 is where it all switched into a higher gear. There are more places to go and learn than ever, and the foundations of that learning are more and more based on open source – meaning that you can dive in as deep as you like. Given the time, I’d actually love to have a good look inside Node.js one day… but nah, not quite yet :)

I’ve been rediscovering this path recently, trying to understand even the most stupid basic aspects of this new (for me) programming language called JavaScript, iterating between total despair at the complexity and the breadth of all the material on the one hand, and absolute delight and gratitude as someone answered my question and helped me reach the next level. Wow. Everything is out there. BSD/MIT-licensed. Right in front of our nose!

All we need is fascination, perseverance, and time. None of these are a given. But we must fight for them. Because they matter, and because life’s too short for anything less.

So – yes, a bit early – for 2013, I wish you lots of fascination, perseverance… and time.

The price of electrons

In Musings on Dec 15, 2012 at 00:01

Came across this site recently, thanks to a link from Ard about his page on peak shaving.

They sell electricity at an hourly rate. Here’s an example:

Screen Shot 2012-12-14 at 12.35.52

The interesting bit is the predictive aspect: you get a predicted price for the entire day ahead, which means you can plan your consumption! A win-win all around, since that sort of behavioural adjustment is probably what the energy company wants in the first place. Their concern is always (only?) the peak.

Is this our future? I’d definitely prefer it to “smart” grids taking decisions about my appliances and home. Better options, letting me decide whether to use, store, or pass along the solar energy production, for example.

Here’s another graph from that same site, showing this year’s trend in the Chicago area:

Screen Shot 2012-12-14 at 10.45.47

It’s pretty obvious that air-conditioners run on electricity, eh?

But look also at those rates… this is about an order of magnitude lower than the current rates in the Netherlands (and I suspect Western Europe).

Here are the rates I get from my provider, including huge taxes:

Screen Shot 2012-12-14 at 14.44.11

You can probably guess the Dutch in there – two tariffs: high is for weekdays during daytime, low is for weekends and at night. Hardly a difference, due to taxes :(

Here are the rates for natural gas, btw – just for completeness:

Screen Shot 2012-12-14 at 14.44.51

No wonder really, that different parts of the world, with their widely different income levels and energy prices, end up making completely different choices.

Solar panels are currently profitable after about 7..8 years in the Netherlands – which is reflected by a strong increase in adoption lately. But seeing the above graphs, I doubt that this would make much sense in any other part of the world right now!

Data storage and backups

In Musings on Dec 12, 2012 at 00:01

Having just gone through some reshuffling here, I thought it might be of interest to describe my setup, and how I got there.

Let’s start with some basics – apologies if this all sounds too trivial:

  • backups are not archives: backups are about redundancy, archives are about history
  • I don’t want backups, but the real world keeps proving that things can fail – badly!
  • archives are for old stuff I want to keep around for reference (or out of nostalgia…)

If you don’t set up a proper backup strategy, then you might as well go jump off a cliff.

If you don’t set up archives, fine: some hold onto everything, others prefer to travel light – I used to collect lots of movies and software archives. No more: there’s no end to it, and especially movies take up large amounts of space. Dropping all that gave me my life back.

We do keep all our music, and our entire photo collection (each 100+ GB). Both include digitised collections of everything before today’s bits-and-bytes era. So about 250 GB in all.

Now the deeply humbling part: everything I’ve ever written or coded in my life will easily fit on a USB stick. Let’s be generous and assume it will grow to 10 GB, tops.

What else is there? Oh yes, operating systems, installed apps, that sort of thing. Perhaps 20..50 GB per machine. The JeeLabs Server, with Mac OSX Server, four Linux VM’s, and everything else needed to keep a bunch of websites going, clocks in at just over 50 GB.

For the last few years, my main working setup has been a laptop with a 128 GB SSD, and it has been fairly easy to keep disk usage under 100 GB, even including a couple of Linux and Windows VM’s. Music and photos were stored on the server.

I’m rambling about this to explain why our entire “digital footprint” (for Liesbeth and me) is substantially under 1 TB. Some people will laugh at this, but hey – that’s where we stand.


Ah, yes, back to the topic of this post. How to manage backups of all this. But before I do, I have to mention that I used to think in terms of “master disks” and “slave disks”, i.e. data which was the real thing, and copies on other disks which existed merely for convenience, off-line / off-site security, or just “attics” with lots of unsorted old stuff.

But that has changed in the past few months.

Now, with an automatic off-site backup strategy in place, there is no longer a need to worry so much about specific disks or computers. Any one of them could break down, and yet it would be no more than the inconvenience of having to get new hardware and restore data – it’d probably take a few days.

The key to this: everything that matters, now exists in at least three places in the world.

I’m running a mostly-Mac operation here, so that evidently influences some of the choices made – but not all, and I’m sure there are equivalent solutions for Windows and Linux.

This is the setup at JeeLabs:

  • one personal computer per person
  • a central server

Sure, there are lots of other older machines around here (about half a dozen, all still working fine, and used for various things). But our digital lives don’t “reside” on those other machines. Three computers, period.

For each, there are two types of backups: system recovery, and vital data.

System recovery is about being able to get back to work quickly when a disk breaks down or some other physical mishap. For that, I use Carbon Copy Cloner, which does full disk tree copying, and is able to create bootable images. These copies include the O/S, all installed apps, everything to get back up to a running machine from scratch, but none of my personal data (unless you consider some of the configuration settings to be personal).

These copies are made once a day, a week, or a month – some of these copies are fully automatic, others require me to hook up a disk and start the process. So it’s not 100% automated, but I know for sure I can get back to a running system which is “reasonably” close to my current one. In a matter of hours.

That’s 3 computers with 2 system copies for each. One of the copies is always off-site.

Vital data is of course just that: the stuff I never want to lose. For this, I now use CrashPlan+, with an unlimited 10-computer paid plan. There are a couple of other similar services, such as BackBlaze and Carbonite. They all do the same: you keep a process running in the background, which pumps changes out over internet.

In my case, one of the copies goes to the CrashPlan “cloud” itself (in the US), the other goes to a friend who also has fast internet and a CrashPlan setup. We each bought a 2.5″ USB-powered disk with lots of storage, placed our initial backups on them, and then swapped the drives to continue further incremental backups over the net.

The result: within 15 minutes, every change on my disk ends up in two other places on this planet. And because these backups contain history, older versions continue to be available long after each change and long after any deletion, even (I limit the history to 90 days).

That’s 1 TB of data, always in good shape. Virtually no effort, other than an occasional glance on the menu bar to see that the backup is operating properly. Any failure of 3 or more days for any of these backup streams leads to a warning email in my inbox (which is at an ISP, i.e. off-site). Once a week I get a concise backup status report, again via email.

The JeeLabs server VM’s get their own daily backup to Amazon S3, which means I can re-launch them as EC2 instances in the cloud if there is a serious problem with the Mac Mini used as server here. See an older post for details.

Yes, this is all fairly obvious: get your backups right and you get to sleep well at night.

But what has changed, is that I no longer use the always-on server as “stable disk” for my laptop. I used to try putting more and more data on the central server here, since it was always on and available anyway. Which means that for really good performance you need a 1 Gbit wired ethernet connection. Trivial stuff, but not so convenient when sitting on the couch in the living room. And frankly also a bit silly, since I’m the only person using those large PDF and code collections I’m relying on more and more these days.

So now, I’ve gone back to the simplest possible setup: one laptop, everything I need on there (several hundred GB in total), and an almost empty server again. On the server, just our music collection (which is of course shared) and the really always-on stuff, i.e. the JeeLabs server VM’s. Oh, and the extra hard disk for my friend’s backups…

Using well under 1 TB for an entire household will probably seem ridiculous. But I’m really happy to have a (sort of) NAS-less, and definitely RAID-less, setup here.

Now I just need to sort out all the old disks….

Inventing on Principle

In Musings on Dec 11, 2012 at 00:01

It’s going to take almost an hour of your time to watch this presentation:

Bret Victor – Inventing on Principle from CUSEC.

Let me just say: this is sooo worth it, from the beginning all the way to the very end. No need to view it now (it’s been out for 10 months) – but when you do, you’ll enjoy it.

O n e   h o u r   o f   m i n d b l o w i n g   i n s i g h t s   . . .

Bret Victor’s site is here. My fault for having seen it, but never paying proper attention.

Stumbled onto this via a related fascinating development, called CodeBook.

Idiots and the Universe

In Musings on Dec 10, 2012 at 00:01

Check out this quote:

Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning. — Rick Cook, The Wizardry Compiled

The latest trend is to add comments to weblog posts, praising me in all sorts of truly wonderful (but totally generic) ways. The only purpose being to get that comment listed with a reference to some site peddling some stuff. Fortunately, the ploy is trivial to detect. So easy in fact, that filtering can be fully automated via the Akismet web service, plus a WordPress plug-in by that same name.

Here’s the trend on this weblog (snapshot taken about a week ago):

Screen Shot 2012 12 03 at 18 33 14

The drop comes from the fact that all posts on this weblog are automatically closed for comments after two weeks, and there were no new posts in July and August. So it’s just a bunch of, eh, slightly desperate people pounding on the door.

One of them got through in the past six months. The other 326 just wasted their time.

Something similar is happening on the discussion forum. And behind the scenes, some new work is now being done to make those constant attempts there just as futile :)

Paper-based Google

In Musings on Oct 14, 2012 at 00:01

Look what got delivered (unannounced) here at JeeLabs the other day:

DSC 4171

That’s nearly 3000 pages of product and price information, in a 2 kg package :)

Crazy? Maybe. But there’s still some value in seeing so many products at one glance:

DSC 4172

What would be even nicer, IMO, is a paper version with QR codes so you can instantly tie what you see to a web page, with more information, full quantity pricing details, and engineering specs / datasheets. Or at least a direct link between each full page and the web – that’s just 4 digits to enter, after all.

It feels a bit old-fashioned to leaf through such a catalog, but hey… when you don’t know exactly what you need (or more likely: what range of solutions is available), then that can still beat every parametric search out there.

RS Components have an excellent selection of electronic and mechanical parts, BTW.

This logo is real!

In Musings on Sep 16, 2012 at 00:01

Real as in tangible, that is! (thanks, David)

DSC 4142

That’s MDF, sprayed in black and red, along with a plexiglass inset to hold it all together.

Created with my own laser cutter (sort of). And you can make a laser too, stay tuned :)

Shop trick

In Musings on Sep 6, 2012 at 00:01

The JeeLabs Shop gained some extra functionality about a year ago: it now lets you “sign up” and add a password to simplify re-ordering later.

What I didn’t know until today (thanks, Martyn) is that there is actually a way to access the order history and to manage your shipping address(es).

The trick is to go to – which will redirect you to a login page unless your browser has already saved the relevant cookies:

Screen Shot 2012 09 05 at 20 05 43

Once logged in, you can see what you’ve ordered in the past:

Screen Shot 2012 09 05 at 20 09 18

In my case, most of these orders were of course just dummies, which I then cancelled.

Three things to note about this functionality:

  • yes, the shop will use cookies if you decide to sign up when placing an order
  • you can’t change the info on existing orders (contact order_assistance at jeelabs dot org for that)
  • I’ll update the email confirmations sent out to mention this feature

I still think that there are plenty of smaller and larger inconveniences in this shop (hosted by Shopify), none of which I have control over unfortunately, but it’s good to know that this history mechanism is there if you need it.

Fully recharged

In Musings on Sep 1, 2012 at 00:01

… and ready to go! (is this what they mean by “solar energy”?)

We had a gorgeous vacation. The best part: the car broke down at the start of our trip, forcing us to make some quick decisions. So we ended up in Retournac, a little village in the south of the Auvergne while getting that car fixed (kudos to Volkswagen for their splendid service, which included a free rental car replacement). In fact, we liked this place so much that we decided to come back to it in the second part of our vacation – this little spot was unbelievably calm, with a great little Camping Municipal on the border of the Loire, and restaurants with fantastic 4-course plat du jour meals for the price of what would get us just about one pizza back home.


See that little green tent over on the left, under the trees? No? Oh well, that’s where Liesbeth and I set up camp :)

What else to do in France in high season, apart from going on lots of hikes and lazily reading books? Well, we visited lots of smaller and larger villages for one, such as these:


… and we chased all the scents in those gorgeous little markets everywhere:


The other half of our vacation was spent visiting French & Portuguese friends in the area.

It was a truly wonderful break … and now it’s time to get back to Physical Computing!

Structured data

In Software, Musings on Jun 22, 2012 at 00:01

As hinted at yesterday, I intend to use the ZeroMQ library as foundation for building stuff on. ZeroMQ bills itself as “The Intelligent Transport Layer”, and frankly, I’m inclined to agree. Platform and vendor agnostic. Small. Fast.

So now we’ve got ourselves a pipe. What do we push through it? Water? Gas? Electrons?

Heh – none of the above. I’m going to push data / messages through it, structured data that is.

The next can of worms: how does a sender encode structured data, and how does a receiver interpret those bytes? Have a look at this Comparison of data serialization formats for a comprehensive overview (thanks, Wikipedia!).

Yikes, too many options! This is almost the dreaded language debate all over again…

Ok, I’ve travelled the world, I’ve looked around, I’ve pondered on all the options, and I’ve weighed the ins and outs of ’em all. In the name of choosing a practical and durable solution, and to create an infrastructure I can build upon. In the end, I’ve picked a serialization format which most people may have never heard of: Bencode.

Not XML, not JSON, not ASN.1, not, well… not anything “common”, “standard”, or “popular” – sorry.

Let me explain, by describing the process I went through:

  • While the JeeBus project ran, two years ago, everything was based on Tcl, which has implicit and automatic serialization built-in. So evidently, this was selected as mechanism at the time (using Tequila).

  • But that more or less constrains all inter-operability to Tcl (similar to using pickling in Python, or even – to some extent – JSON in JavaScript). All other languages would be second-rate citizens. Not good enough.

  • XML and ASN.1 were rejected outright. Way too much complexity, serving no clear purpose in this context.

  • Also on the horizon: JSON, a simple serialization format which happens to be just about the native source code format for data structures in JavaScript. It is rapidly displacing XML in various scenarios.

  • But JSON is too complex for really low-end use, and requires quite a bit of effort and memory to parse. It’s based on reserved characters and an escape character mechanism. And it doesn’t support binary data.

  • Next in the line-up: Bernstein’s netstrings. Very elegant in its simplicity, and requiring no escape convention to get arbitrary binary data across. It supports pre-allocation of memory in the receiver, so datasets of truly arbitrary size can safely be transferred.

  • But netstrings are too limited: only strings, no structure. Zed Shaw extended the concept and came up with tagged netstrings, with sufficient richness to represent a few basic datatypes, as well as lists (arrays) and dictionaries (associative arrays). Still very clean, and now also with exactly the necessary functionality.

  • (Tagged) netstrings are delightfully simple to construct and to parse. Even an ATmega could do it.

  • But netstrings suffer from memory buffering problems when used with nested data structures. Everything sent needs to be prefixed with a byte count. That means you have to either buffer or generate the resulting byte sequence twice when transmitting data. And when parsed on the receiver end, nested data structures require either a lot of temporary buffer space or a lot of cleverness in the reconstruction algorithm.

  • Which brings me to Bencode, as used in the – gasp! – Bittorrent protocol. It does not suffer from netstring’s nested size-prefix problems or nested decoding memory use. It has the interesting property that any structured data has exactly one representation in Bencode. And it’s trivially easy to generate and parse.

Bencode can easily be used with any programming language (there are lots of implementations of it, new ones are easy to add), and with any storage or communication mechanism. As for the Bittorrent tie-in… who cares?
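Since the format is so simple, here’s what a bare-bones encoder might look like in JavaScript – a sketch of my own, not taken from any specific implementation (note that a real encoder would use byte lengths for strings; this one assumes plain ASCII):

```javascript
// Minimal Bencode encoder (illustrative sketch): integers become i<n>e,
// strings are length-prefixed, lists are l...e, dictionaries are d...e.
function bencode(value) {
  if (typeof value === 'number')
    return 'i' + value + 'e';               // assumes integers only
  if (typeof value === 'string')
    return value.length + ':' + value;      // ASCII assumed, else use byte length
  if (Array.isArray(value))
    return 'l' + value.map(bencode).join('') + 'e';
  // plain object: keys are sorted, so each structure has exactly one encoding
  return 'd' + Object.keys(value).sort()
    .map(k => bencode(k) + bencode(value[k])).join('') + 'e';
}

console.log(bencode({ band: 868, group: 5, nodes: [3, 5] }));
// -> d4:bandi868e5:groupi5e5:nodesli3ei5eee
```

The sorted dictionary keys are what give Bencode its “exactly one representation” property mentioned above.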

So there you have it. I haven’t written a single line of code yet (first time ever, but it’s the truth!), and already some major choices have been set in stone. This is what I meant when I said that programming language choice needs to be put in perspective: the language is not the essence, the data is. Data is the center of our information universe – programming languages still come and go. I’ve had it with stifling programming language choices.

Does that mean everybody will have to deal with ZeroMQ and Bencode? Luckily: no. We – you, me, anyone – can create bridges and interfaces to the rest of the world in any way we like. I think HouseAgent is an interesting development (hi Maarten, hi Marco :) – and it now uses ZeroMQ, so that might be easy to tie into. Others will be using Homeseer, or XTension, or Domotiga, or MisterHouse, or even… JeeMon? But the point is, I’m not going to make a decision that way – the center of my universe will be structured data. With ZeroMQ and Bencode as glue.

And from there, anything is possible. Including all of the above. Or anything else. Freedom of choice!

Update – if the Bencode format were relaxed to allow whitespace between all elements, then it could actually be pretty-printed in an indented fashion and become very readable. Might be a useful option for debugging.

TK – Measuring distance

In Musings on Jun 14, 2012 at 00:01

Welcome to the Thursday Toolkit series, about tools for building Physical Computing projects.

Ehm, well, not quite :) … here’s how people defined and measured distances some 35 centuries ago:


It’s a stone, roughly 1 meter long, which can be found in the Istanbul Archaeology Museum. In more detail:

P5043851  Version 2

P5043851  Version 3

Not terribly convenient. I prefer something like this – have had one of them around here at JeeLabs for ages:

Screen Shot 2012 06 13 at 22 54 28

Then again, both of these measuring devices are quite a long shot (heh) from today’s laser rangefinders:

Screen Shot 2012 06 13 at 22 45 39

For about €82 at Conrad – no, I don’t have stock options, they are privately owned :) – you get these specs:

Screen Shot 2012 06 13 at 22 49 29

That’s 2 mm accuracy from 0.5 to 50 meters, i.e. one part in 25,000 (0.004%). Pretty amazing technology, considering that it’s based on measuring the time it takes a brief pulse to travel with (almost) the speed of light!

But you’ll need a 9V battery to make this thing work – everything needs electricity in today’s “modern” world.

Goodbye JeeMon

In Software, Musings on Jun 6, 2012 at 00:01

As long-time readers will know, I’ve been working on and off on a project called JeeMon, which bills itself as:

JeeMon is a portable runtime for Physical Computing and Home Automation.

This also includes a couple of related projects, called JeeRev and JeeBus.

JeeMon packs a lot of functionality: first of all a programming language (Tcl) with built-in networking, event framework, internationalization, unlimited precision arithmetic, thread support, regular expressions, state triggers, introspection, coroutines, and more. But also a full GUI (Tk) and database (Metakit). It’s cross-platform, and it requires no installation, due to the fact that it’s based on a mechanism called Starkits.

Screen Shot 2012 05 23 at 19 26 10

I’ve built several versions of this thing over the years, also for small ARM Linux boards, and due to its size, this thing really can go where most other scripting languages simply don’t fit – well under 1 Mb if you leave out Tk.

One of (many) things which never escaped into the wild, a complete Mac application which runs out of the box:


JeeMon was designed to be the substrate of a fairly generic event-based / networked “switchboard”. Middleware that sits between, well… everything really. With the platform-independent JeeRev being the collection of code to make the platform-dependent JeeMon core fly.

Many man-years have gone into this project, which included a group of students working together to create a first iteration of what is now called JeeBus 2010.

And now, I’m pulling the plug – development of JeeMon, JeeRev, and JeeBus has ended.

There are two reasons, both related to the Tcl programming language on which these projects were based:

  • Tcl is not keeping up with what’s happening in the software world
  • the general perception of what Tcl is about simply doesn’t match reality

The first issue is shared with a language such as Lisp, e.g. SBCL: brilliant concepts, implemented incredibly well, but so far ahead of the curve at the time that somehow, somewhere along the line, its curators stopped looking out the window to see the sweeping changes taking place out there. Things started off really well, at the cutting edge of what software was about – and then the center of the universe moved. To mobile and embedded systems, for one.

The second issue is that to this day, many people with programming experience have essentially no clue what Tcl is about. Some say it has no datatypes, has no standard OO system, is inefficient, is hard to read, and is not being used anymore. All of it is refutable, but it’s clearly a lost battle when the debate is about lack of drawbacks instead of advantages and trade-offs. The mix of functional programming with side-effects, automatic copy-on-write data sharing, cycle-free reference counting, implicit dual internal data representations, integrated event handling and async I/O, threads without race conditions, the Lisp’ish code-is-data equivalence… it all works together to hide a huge amount of detail from the programmer, yet I doubt that many people have ever heard about any of this. See also Paul Graham’s essay, in particular about what he calls the “Blub paradox”.

I don’t want to elaborate much further on all this, because it would frustrate me even more than it already does after my agonizing decision to move away from JeeMon. And I’d probably just step on other people’s toes anyway.

Because of all this, JeeMon never did get much traction, let alone evolve much via contributions from others.

Note that this isn’t about popularity but about momentum and relevance. And JeeMon now has neither.

If I had the time, I’d again try to design a new programming environment from scratch and have yet another go at databases. I’d really love to spend another decade on that – these topics are fascinating, and so far from “done”. Rattling the cage, combining existing ideas and adding new ones into the mix is such an addictive game to play.

But I don’t. You can’t build a Physical Computing house if you keep redesigning the hammer (or the nails!).

So, adieu JeeMon – you’ve been a fantastic learning experience (which I get to keep). I’ll fondly remember you.

Re-thinking solar options

In AVR, Hardware, Musings on May 28, 2012 at 00:01

So will it ever be possible to run a JeeNode or JeeNode Micro off solar power?

Well, that depends on many things, really. First of all, it’s good to keep in mind that all the low-power techniques being refined right now also apply to battery consumption. If a 3x AA pack ends up running 5 or even 10 years without replacement, then one could ask whether far more elaborate schemes to try and get that supercap or mini-lithium cell to work are really worth the effort.
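
The back-of-envelope reasoning behind that “5 or even 10 years” claim can be sketched in a few lines. The 2000 mAh capacity and the current figures below are illustrative assumptions, not measured values:

```python
# Rough battery-life estimate for a 3x AA pack. The cells are in series,
# so the pack capacity equals that of a single cell -- assumed 2000 mAh here.
def battery_life_years(capacity_mah, avg_current_ua):
    """Lifetime in years for a given average current draw in microamps."""
    hours = capacity_mah * 1000.0 / avg_current_ua   # mAh -> uAh, then / uA
    return hours / (24 * 365)

# A sleepy wireless node averaging ~20 uA would indeed last about a decade:
print(round(battery_life_years(2000, 20), 1))   # -> 11.4
```

In other words, once the average draw gets down into the tens of microamps, the self-discharge of the cells starts to matter as much as the circuit itself.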

One fairly practical option would be a single rechargeable EneLoop AA battery, plus a really low-power boost circuit (perhaps I need to revisit this one again). The idea would be to just focus on ultra-low power consumption, and move the task of charging to a more central place. After all, once the solar panels on the roof of JeeLabs get installed (probably this summer), I might as well plug the charger into AC mains here and recharge those EneLoop batteries that way!

Another consideration is durability: if supercaps only last a few months before their capacity starts to drop, then what’s the point? Likewise, the 3.4 mAh Lithium cell I’ve been playing with is rated at “1000 cycles, draining no more than 10% of the capacity”. With luck, that would be about three years before the unit needs to be replaced. But again – if some sort of periodic replacement is involved anyway, then why even bother generating energy at the remote node?

I’m not giving up yet. My KS300 weather station (868 MHz OOK, FS20’ish protocol) has been running for over 3 years now, and I’ve never replaced the 3x AA batteries it came with – here’s the last readout, from a few hours ago:

     :41   KS300 ookRelay2 humi             77
     :41   KS300 ookRelay2 rain             469
     :41   KS300 ookRelay2 rnow             0
     :41   KS300 ookRelay2 temp             18.2
     :41   KS300 ookRelay2 wind             0

And the original radioBlip node is also running just fine after 631 days:

    1:32   RF12-868.5.3 radioBlip age       631
    1:32   RF12-868.5.3 radioBlip ping      852330

Even the JeeNode Micro running on a CR2032 coin cell is still going strong after 4 months:

    1:42   RF12-868.5.18 radioBlip age      139
    1:42   RF12-868.5.18 radioBlip ping     188449

So ultra-low power is definitely doable, even with an Arduino-compatible design.

No worries – I’ll keep pushing this in various directions, even if just for the heck of it…

Virtuality vs Reality

In Musings on May 23, 2012 at 00:01

The worlds I dabble in at JeeLabs are twofold:

  • Software – a virtual world, artificially constructed, and limited only by imagination
  • Hardware – a real world, where electrons and atoms set the rules and the constraints

I’ve long been pondering about the difference between the two, and how I enjoy both, but in very different ways. And now I think I’ve figured out, at last, what makes each so much fun and why the mix is so interesting.

DSC 3208   DSC 3209

I’ve spent most of my professional life in the software world. This is the place which you can create and shape in whatever way you like. You set up your working environment, you pick and extend your tools, and you get to play with essentially super-natural powers where nearly everything goes.

No wonder people like me get hooked to it – this entire software world is one big addictive game!

The hardware world is very different. You don’t set the rules, you have to discover and obey them. Failure to do so leads to non-functional circuits, or even damage and disaster. You’re at the mercy of real constraints, and your powers are severely limited – by lack of knowledge, lack of instruments, lack of … control.

Getting stuff working in either world can be exhilarating and deeply satisfying. Yes! I got it right! It works!

All of this appeals to an introvert technical geek like me, and all of this requires little human interaction, with all its complex / ambiguous / emotional aspects. It’s a competition between the mind and the software / hardware. There are infinitely many paths, careers, and explorations lying ahead. This is the domain of engineers and architects. This is where puzzles meet minds. I love it.

The key difference between software and hardware, when you approach it from this angle, is how things evolve over time: with software, there is no center of gravity – everything you do can become irrelevant or obsolete later on, when a different approach or design is selected. With hardware, no matter how elaborate or ingenious your design, it will have to deal with the realities of The World Out There.

So while after decades of software we still move from concept to concept, and from programming language to programming language, the hardware side more and more becomes a stable domain with fixed rules which we understand better and better, and take more and more advantage of.

In a nutshell: software drifts, hardware solidifies.

Old software becomes useless. Old hardware becomes used less. A very subtle difference!

The software I’ve built in the past becomes irrelevant as it gets superseded by new code and things are simply no longer done the way they used to be. There’s no way to keep using it, beyond a certain point.

Hardware might become too bulky or slow or power-consuming to keep using it, or it might mechanically wear out. But I can still hook up a 40-year old scope and it’ll serve me amazingly well. Even when measuring the latest chips or MOSFETs or LCDs or any other stuff that didn’t exist at the time.

Software suffers from bit rot – this happens especially when not used much. Hardware wears out, but only when used. If you put it away, it can essentially survive for decades and still work.

In practice, this has a huge impact on how things feel when you fool around – eh, I mean experiment – to try and to learn new things.

Software needs to be accompanied by documentation about its internals and it needs to be frequently used and revisited to keep it alive. Writing software is always about adding new cards to an existing house of cards – assuming I can remember what those cards were before. It’s all virtual, and it tends to fade and become stale if not actively managed.

Hardware, on the other hand, lives in a world which exists even when you don’t explore it. Each time I sit down at my electronics bench, I think “hm, what aspect of the real world shall I dive into this time?”.

I love ’em both, even though working on software feels totally different from working on hardware.

Documentation Dilemmas

In Musings on May 11, 2012 at 00:01

Let’s face it – some parts of the JeeNode / JeePlug documentation aren’t that great. Some of it is incomplete, too hard, missing, obsolete, or in some cases even just plain wrong.

I think that the fact that things are nevertheless workable is mostly because the “plug and play” side of things still tends to work – for most people and in most cases, anyway. You assemble the kits, solder the header, hook things up, plug it into USB, get the latest code, upload an example sketch, and yippie… success!

But many things can and do go wrong – electrically (soldering / breadboarding mistakes), mechanically (bad connections), and especially on the software side of things. Software on the host, but most often the problems are about the software “sketch” running on the JeeNode. You upload and nothing happens, or weird results come out.

Ok, so it doesn’t work. Now what?

There’s a chasm, and sooner or later everyone will have to cross it. That’s when you switch from following steps described on some web page or in some PDF document, to taking charge and making things do what you want, as opposed to replicating a pre-existing system.

To be honest, following instructions is boring – unless they describe steps which are new to you. Soldering for the first time, heck even just connecting something for the first time can be an exhilarating experience. Because it lets you explore new grounds. And because it lets you grow!

As far as I’m concerned, JeeLabs is all about personal growth. Yours, mine, anyone’s, anywhere. Within a very specific domain (Physical Computing), but still as a very broad goal. The somewhat worn-out phrase applies more than ever here: it’s better to teach someone how to fish (which can feed them for a lifetime) than to give them a fish (which only feeds them for a day).

IMO, this should also drive how documentation is set up: to get you going (quick start instructions) and to keep you going, hopefully forever (reference material and pointers to other relevant information). A small part of the documentation has to be about getting a first success experience (“don’t ask why, just do it!”), but the main focus should be on opening up the doors to infinitely many options and adventures. Concise and precise knowledge. Easy to find, to the point, and up to date.

Unfortunately, that’s where things start to become complicated.

I’m a fast reader. I tend to devour books (well, “skimming” is probably a more accurate description). But I don’t really think that thick books are what we need. Sure, they are convenient for covering a large field from A to Z. But they reduce our options and discourage creative patterns – what if I try X? What if I combine Y and Z? What if I don’t want to go a certain way, or don’t have exactly the right parts for that?

This weblog, on the other hand, is mostly a stream of consciousness – describing my adventures as I hop from one topic to the next, occasionally sticking with one for a while, and at times diving in to really try and push the envelope. But while it may be entertaining to follow along, that approach has led to over 1000 articles which are quite awkward as documentation – neither “getting started” nor “finding reference details” is very convenient. Worse still, older weblog posts are bound to be obsolete or even plain wrong by now – since a weblog is not (and should not be) about going back and changing posts after publication.

I’ve been pondering for some time now about how to improve the documentation side of things. There is so much information out there, and there is so much JeeLabs-specific stuff to write about.

Write a book? Nah, too static, as I’ve tried to explain above.

Write an eBook? How would you track changes if it gets updated regularly? Re-read it all?

A website? That’s what I’ve been doing with the Café, which is really a wiki. While it has sections about software and hardware, I still find it quite tedious (and sluggish) for frequent use.

I’ve been wanting to invest a serious amount of time into a good approach, but unfortunately, that means deciding on such an approach first, and then putting in the blood, sweat, and tears.

My hunch is that a proper solution is not so far away. The weblog can remain the avant garde of what’s going on at JeeLabs, including announcing new stuff happening on the documentation side of things. Some form of web-based system may well be suited for all documentation and reference material. And the forum is excellent in its present role of asking around and being pointed to various resources.

Note that “reference material” is not just about text and images. There is so much information out there that pointers to other web pages are at least as important. Especially if the links are combined with a bit of info so you can decide whether to follow a link before being forced to surf around like a madman.

The trick is to decide on the right system for a live and growing knowledge base. The web is perfect for up-to-date info, and if there’s a way to generate decent PDFs from (parts of) it, then you can still take it off-line and read it all from A to Z on the couch. All I want is a system which is effective – over a period of several years, preferably. I’m willing to invest quite a bit of energy in this. I love writing, after all.

Suggestions would be welcome – especially with examples of how other sites are doing this successfully.

Back from Istanbul

In Musings on May 5, 2012 at 00:01

Due to the wonders of automation, yours truly was able to sneak away for a few days without missing a beat on the weblog and webshop (but away from the forum) – with Liesbeth and me ending up on the other side of Europe:


The “Blue Mosque”, and lots more fascinating / touristy things. A humbling experience for a Westerner like me.

With apologies for not responding immediately to all emails – I’ll catch up on this in the next few days.


Weblog post 1000 !

In News, Musings on Apr 17, 2012 at 00:01

Today is a huge milestone for JeeLabs. This is weblog post number:

Screen Shot 2012 04 16 at 17 15 32


It all started on October 25th in 2008, with a weblog post about – quite appropriately – the Arduino.

Then it took a few more months to evolve into a daily habit, and yet another few months to set up a shop, but apart from that it has all remained more or less the same ever since.

You might have been following this from the start, and you might even have been going through the long list of daily posts later, but there you have it – a personal account of my adventures in the world of Physical Computing. If anything, these years have been the source of immense inspiration and delight. I’ve been able to re-connect to my inner geek, or rather: my inner ever-curious and joyful child. And to so many like-minded souls – thank you.

“Standing on the shoulders of giants” is a bit over-used as a phrase, but it really does apply when it comes to technology and engineering. What we can do today is only possible because many generations of tinkerers, inventors, and researchers before us have created the foundations and the tools on which we now build. It feels silly even to try and list them – such a list would be virtually endless.

I’m not a technocrat. I think our IT world has done its share to rob people of numerous meaningful and competence-building jobs, and to introduce new mind-numbing and RSI-inducing repetitive tasks. Our (Western) societies have become de-humanized as more and more screens take over in the most unexpected workplaces, and our car trips and train rides are turning us into very selectively-social beings, reserving our emotions, and even our respect and courtesy, for our families and the people we choose as our friends. Technology’s impact on daily life is a pretty horrible mess, if you ask me.

But what drives me, are the passion and the creativity and the excitement in the field of technology. Not for the sake of technology, but because that’s one of the major domains where cognition and rationality have free rein. You can learn (and reason) all about history, medicine, psychology, or you can invent (and reason about) things which do new things, be it electrical, mechanical, biological, informational, or otherwise. Technology as a source of boundless evolution and innovation is breath-taking – we “merely” have to tap into it and put it to good use.

And what thrills me most is not what I can do in that direction, but what others have done in the past and are still doing every day. Learning about all that existing technology around us is like looking into the minds of the persons who came up with all that stuff, feeling their struggles, their puzzles, and ultimately the solutions they came up with. I’m in awe of all the cleverness that has emerged before us, and even more in awe of the thought that this will no doubt go on forever.

It’s really all about nurturing curiosity, asking questions, and solving the puzzles they bring to the surface:

I have no special talents. I am only passionately curious. — Albert Einstein

Here’s the good news: we all have that ability. We all came into the world the same way. We can all be explorers.

If you start doing this early on in life and hold onto it, you’ll never be hungry and you’ll never get bored. And if you didn’t have that opportunity back then: nothing of substance prevents you from starting today!

We live in amazing times. Ubiquitous internet and access to knowledge. Open source Physical Computing. Online communities with a common language. This weblog is simply my way of reciprocating all these incredible gifts.

Pressure cooker

In Musings on Mar 31, 2012 at 00:01

These past 36 hours have been absolutely fabulous, and exhausting…

First there was the 7th HackersNL meeting in Utrecht. The name of the event is unfortunate, IMO (this whole “hacker” moniker doesn’t sit well with normal people, i.e. 99.9% of humanity), but the presentations were both absolutely fantastic: a wide range of design topics by David Menting, including his “linear clock”, for which he designed custom hardware based on a standard tiny Linux + WiFi board, and then a talk by Jaap Vermaas and Peter Brier about turning a cheap laser cutter into a pretty amazing unit by ripping out the driver board and software, and replacing them with their own custom hardware based on an MBED module, plus software (wiki). Both cutting edge, if you’ll pardon the pun, and above all a pressure cooker where two dozen people get to talk about “stuff”, mostly related to Physical Computing. Everything is open source.

If you live in the neighborhood of Utrecht, I can highly recommend this recurring meeting, scheduled for the last Thursday of each month – so take note, hope to see you there, one day!

The other event was the Air Quality Egg Workshop, by Joe Saavedra. Basic idea: a sensor unit, to measure air quality in some way, plus an “egg” base station which can tie into Pachube (both ways), relays the sensor data, and includes an RGB color light plus push-button.

Except that it doesn’t exist yet. We built a wired prototype based on a Nanode with SparkFun protoshield, a CO sensor, an NO2 sensor, and a DHT22 temperature/humidity sensor.

Here’s my concoction (three of the sensors were mounted away from the heat generated by the Nanode):

DSC 3002

It’s now sitting next to the JeeLabs server, feeding Pachube periodically. We’ll see how it goes, since apparently these sensors need 24..48 hours to stabilize. Here are some of the readings so far:


What I took away from this, is:

  1. Whee, there sure is a lot more fun stuff waiting to be explored!
  2. When you put a fantastic bunch of creative people together, you get magic!
  3. Not enough time! Would it help to keep flying westwards to cram more hours into a day?


In Musings on Mar 3, 2012 at 00:01

My PC has been updated. I left it unattended for a month, and now I’m powering it up again. It’s got a new motherboard, a new display, and a new OS revision. It’s quiet, because it’s all-SSD now, and it’s actually a bit slower than the previous one.

The above paragraph is a mix of reality and fiction, BTW. Because I’m talking about two things at once – the Mac I work on, and… my brain. Both have changed :)

The past month has been extremely chaotic for me. I’ve been trying to figure out what I really want to do, and how to make it happen. The outcome surprised me: I absolutely want to keep doing what I’ve been doing these past few years, with JeeLabs. So the good news, if you’ve been following along, is that I will. But there will be changes, because the intensity of it all is not sustainable for me – not at the previous energy level, anyway. I will spread stories out over more weblog posts, thus also making it easier for you to keep up and follow along.

In this day and age of instant gratification, mass consumption, and immediate mail-order fulfillment, I’m going to go against the grain and buck the trend – by reducing, in the short term, the frequency of JeeLabs shop fulfillments and dealing with shop-related tasks less often. The shop will become even more of a secondary activity here, but fulfillment improvements are in the pipeline. The product range will grow further, but the pace and scale of commerce most likely will not. It gives me pleasure to send out packages and to stay in contact with the people who are going to use these products. The shop isn’t about volume and turnover, but about allowing others to reproduce and extend some of the projects I’m coming up with and working on. Because making stuff is fun.

Board part

My passion, my energy, and my time will remain focused on the weblog, or rather on the projects that drive it all. Whether the frequency can stay as is, time will tell. I hope it can – with occasional breaks in the year – because the daily cycle is great fun, keeps me focused, and is clearly being appreciated.

As Seth Godin describes in his manifesto, the schooling system has taken our dreams away. I’ve been lucky to keep (or rather, rediscover) mine, and want to help as much as I can to make sure others will be able to latch onto their dreams as well, with curiosity and creativity as the driving forces – in the context of Physical Computing, that is.

The internet, at least the part I care about, is evolving into an extraordinary global learning powerhouse. It started with Wikipedia and led to the inspiring TED presentations, MIT’s Open Courseware, and the Khan Academy (an absolutely astounding initiative which is turning the way education works on its head). There is no excuse anymore for not knowing what you’d like to know – it’s all there.

And as I’m finding out, there is no excuse anymore for not sharing what you know, either.


Watchdog kicking in …

In News, Musings on Feb 2, 2012 at 00:01

History is about to repeat itself… With this 954th post, I have an important announcement to make: I’m slamming on the brakes and taking a one-month break from this weblog.

It’s a bit radical and unexpected, but there is no way around it. This weblog is “driven by passion”, as you probably know, and the crazy bit is that there’s just too much going on here to keep things running smoothly. I’ve been running behind on shop fulfillment again, and even further behind on answering emails and helping out on the forum. The first thing I hope this break will do is let me catch up and regain my footing.


In sharp contrast to last year’s emergency stop, this time it’s not so much lack of ideas or lack of energy, but lack of clear focus and direction. The stories I would love to tell need more time – diving into various aspects of physical computing in considerably more depth and detail than what’s been happening on the weblog lately. And it’s not happening because the daily bite-sized cycle is chopping up my attention (even at times when I have enough weblog posts queued up for many days on end – go figure!). And maybe it’s also a hill climbing issue.

For an interesting insight about attention, see Paul Graham’s essay titled Maker’s Schedule, Manager’s Schedule.

I’ve updated the alphabetical and chronological indexes to all the posts on this weblog, to give you something to go through for the coming weeks. It’s a stopgap measure, but it’ll have to do – and there should be enough there to keep you interested, and hopefully excited, in the month ahead.

The difference with last year, is that I’m putting a precise cap on the duration of this “outage”: 30 days from now. That’s when this weblog will resume, probably with some announcements and adjustments to its style and format.

Talk to you one month from now!

PS. If you want to learn about electricity, then there are numerous resources on the web. Let me single out one: a 50-minute video by Walter Lewin at MIT about batteries and power (lecture 10 on this page). You can get a deep understanding of what a battery is, why its internal resistance matters, what power is, how heat comes out, what shorting a battery does, and even sparks. It’s a fantastic presentation, and the video was just picked at random!

DIY versus outsourcing

In Musings on Jan 28, 2012 at 00:01

Currently on the front page of the JeeLabs shop:

Screen Shot 2012 01 27 at 16 15 38

The benefit of doing everything yourself, is that you can make things work exactly as you want them.

The drawback of doing everything yourself, is that you have to do everything yourself…

Having become pretty independent in my work areas, my hobbies, and my income streams over the years, I know all about those trade-offs. Or at least I think I know about most aspects of this DIY-vs-outsourcing range.

It’s a bit like trying to stay on your feet with a floor covered with marbles…

Example: I used to rent a web server (a real physical one, with full root access and Linux on it). No worries about hardware outages or connectivity details. Being housed at an ISP with thousands of servers, means they’ll have round-the-clock watchdogs and support staff, and will jump into action the minute something is seriously wrong.

At the same time, I had total control over the web server software and operating system configuration. With a Linux distribution such as Debian, maintenance was delightfully simple (“apt-get update && apt-get upgrade”).

The flip side is that I had to choose and configure a web server (“lighty” / lighttpd at the time), and technologies to create dynamic database-driven websites (I built my own back then, based on Metakit – my own database).

Did it work? Sure. Did it evolve? Nope. Too busy. Didn’t want to risk breaking anything.

The only thing that setup did was track security updates (automatically). I had two break-ins over the 10 years that this went on, and learned more about rootkits than I ever cared to (they’re evolving to amazingly sophisticated levels).

Did I learn a lot? You bet. And some of that knowledge is priceless and timeless. Big, big benefit.

But I also had to learn lots of stuff I really care very little about. For me, network routing, package installation dependencies, mail server configuration, and lighttpd configuration were a waste of time. The latter because lighttpd wasn’t really kept up to date very actively by its developer(s). Other options became more practical, meaning that all that lighttpd-specific knowledge is now useless to me.

The story is repeating itself right now. Redmine, which I use, is not up to date, because I haven’t found a simple upgrade path. The difference is that it’s no longer just me not updating my own stuff – I now have the same stagnant state with stuff from others. So what’s the point of Redmine? As far as I’m concerned, it’s a dead end (luckily, everything in there is stored in Markdown format – a solid long-term standard which I also use for the forum and the weblog).

With the forum, running on Drupal, it’s different again. Module updates are automated more or less, so I tend to track them from time to time. But Drupal itself is a little harder to update. And sure enough, it’s falling behind… With Drupal, I’m also running into not being knowledgeable enough to put it to really good use.

But the reason for writing this post is a different one – see the message at the top.

For the web shop, I use the Shopify web store service. They have the servers (at Rackspace – very good ops, I’ve used them for a couple of years). And Shopify develop and run the web store software (using Ruby on Rails).

They take care of dealing with nasty things such as possible DoS attacks, heavy data security, financial gateway interfaces – lots of important issues I no longer need to worry about. So far so good.

But they have their own agenda:

  • some things don’t change, and that’s good: it works, the shop is operational
  • some things don’t change, but that’s bad: years have gone by, and they still haven’t got a clue about VAT
  • some things change, and that’s good: improvements to the service, new features for customers
  • some things change, but that’s bad: they change their API and their XML data structures

That last one is what bites me now. I created a little scripted setup whereby I always pull information about orders from their shop database, to fill my database here with all the details, so I can generate paper invoices, and do the fulfillment of orders here. Doing this here was necessary to be able to do the Value Added Tax thing properly, as required by law and as my accountant wants it, of course.
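
The VAT part of that bookkeeping boils down to splitting a VAT-inclusive order total into a net amount and the tax. Here’s a minimal sketch – the 19% figure matches the Dutch standard VAT rate of that period, and the function is my own illustration, not part of Shopify’s API:

```python
# Split a VAT-inclusive amount into (net, vat), rounded to whole cents.
# The default rate of 19% is an assumption (Dutch standard VAT at the time).
def split_vat(gross, rate=0.19):
    """Return (net, vat) for a VAT-inclusive amount."""
    net = round(gross / (1 + rate), 2)
    return net, round(gross - net, 2)

print(split_vat(23.80))   # -> (20.0, 3.8)
```

The fragile part isn’t this arithmetic, of course – it’s keeping the order-pulling scripts in sync with whatever data structures the shop service decides to change next.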

So to summarize, the choices are:

  1. do everything yourself (and pay in time)
  2. outsource everything (and pay in money)
  3. choose a mix (and deal with the interface changes)

Everything is a trade-off, of course. In my case, I’m moving more and more to #1 as far as operational choices are concerned (own server, own fiber connection), and #2 as far as day-to-day software use is concerned (solid, but actively developed open source software, and Apple hardware + Mac OSX for my main workplaces). These choices are optimal for me, in terms of cost and stability.

The choice to host my own servers was made a lot simpler because I’m running VM’s for the different sites, built from ready-to-run images from TurnKey Linux. What makes them (and others, like Bitnami) different, is that all VMs are automatically backed up to the cloud (Amazon S3 in my case). The way TKL does this is really clever, reducing the amount of data in incremental backups, even for all the records stored in MySQL. So not only are my VM’s pre-configured and run out of the box, they automatically self-update and they automatically self-backup – if anything goes completely wrong, I can switch to cloud-based instances and be up and running again in no time.

TurnKey Linux is an example of using third-party stuff to side-step (and in fact avoid) a massive amount of effort, while retaining maximum operational flexibility. My Amazon S3 bill is a whopping $1.01 per month…

But the web shop setup at Shopify is far from optimal. It was supposed to be choice #2, but ended up being #3 due to the mismatch between what I need (a European shop with correct VAT handling) and what they offer (flashy stuff, aimed at the masses). In hindsight, it was a bad choice, but I really don’t want to do this myself.

Oh well, I’ll suffer the consequences – will fix my scripts and get everything going again by next Monday!

PS. My little presentation yesterday at HackersNL #5 can be found here (PDF) – for those who read Dutch.

New payment options

In Musings on Jan 15, 2012 at 00:01

The JeeLabs Shop has gained a new payment option, as provided by the DIRECTebanking service:

Sb 200x61

This is a German site which supports direct bank-to-bank transfers. Looks like it’s working in 5 countries:

  • Austria
  • Belgium
  • Germany
  • Netherlands
  • Switzerland

I can’t find a trace of UK or Italy in the setup, even though it’s mentioned on their web site. My impression is that this service is still very young – the “Payment Network AG” company behind this was registered last October. But the good news is that their support is responsive and effective, by email as well as by phone.

One benefit for customers is speed: I get immediate notification, avoiding the usual 1..3 day delay normally involved with bank transfers. The other benefit is convenience, since you can complete the payment as part of the order, instead of having to switch to your online bank account and manually copy all the relevant info.

The benefit for me is lower cost: a third of what PayPal charges (it does add up: VAT, payment/bank/shop fees).

The thing with this sort of service, is that it’s very hard for me to get an impression of how well it works in practice. I did a “test payment” while setting things up, but that’s a weak approximation of the whole process when using it for real, and I can only do a more realistic test with my own country and my own bank account.

So if you ever feel an uncontrollable urge to order something from the JeeLabs web shop (yeah, I know, it’s unlikely) and live in one of the above-mentioned countries, then please feel free to give this a go:

Screen Shot 2012 01 14 at 12 40 45

The name is in German (Sofort Überweisung), but the page will be in English by default (all pages are available in multiple languages by clicking on the flag – top right).

Please feel free to email me with anything odd (or neat) which comes up, especially if it doesn’t work as expected of course. I can easily cancel an entire transaction if things get really out of hand.

But with a little luck, life will simply have become one notch simpler with this new option – for everyone!

The Ultimate Bookshelf

In Musings on Dec 25, 2011 at 00:01

I do a lot of reading…

Reading has changed a lot these past few decades. I used to devour books in the library and subscribe to lots of magazines. As a kid, when visiting New York one summer, I spent weeks on the floors of the New York Public Library – because they had all the back issues of Scientific American and you could read as much as you wanted!

The thing with SciAm, is that it had a column every month, called The Amateur Scientist – which, in hindsight, was really the ultimate “maker” breeding ground. I don’t think I ever built anything described in it (’cause teenagers don’t have any money), but that did not diminish the fun and learning experience one bit.

A side-effect of all this was that my environment filled itself with books, papers, magazines, and articles.

And although the human mind is incredibly good at remembering where things are – by association, by appearance, by location – there comes a point when ya’ just can’t find that one friggin’ article anymore. With computers, things quickly got (much!) worse… no more clues as to which book (file!) is large, which one looks worn-out, or what the books (files!) around it look like – and no way to leaf through it quickly to locate a section (bits!) by its visual appearance.

Besides, most magazines and books are really just meant to be read once. You digest the info, learn from it, and never look back. It seems silly to buy them in dead-tree form, and continuously add more bookshelves for them.

So I started to get more and more books, articles, and magazines in PDF form. They were easy to store, could be browsed as well as searched via keywords. I bought – and still buy – lots of books that way. My favorite PDF shop (for programming-related books) is probably the Pragmatic Programmers – nice collection, well-written and good-looking books, and you get update notifications when books get revised (a key benefit of the electronic format).

My collection of PDFs is growing fast. Purchased as well as downloaded. And now also lots of electronics datasheets.

This reached a point where I decided that I wanted to get rid of the paper stuff, at least for normal technical books to which I have no particular emotional attachment. So I got one of these a couple of years ago:

S510m header

That’s a Fujitsu ScanSnap S510M document scanner. There are newer models now, for Mac and PC. The thing about this scanner is that it’s surprisingly effective. It scans quickly, and does both sides of the page at the same time. But the real gem is the supporting software. It knows what’s color and what’s black and white, it knows what’s up and what’s down, it knows what’s portrait and what’s landscape, and it knows how to start up the software when you press the big button on the front. Best of all, it comes with OCR software which places the recognized text inside the PDF, invisibly – behind the scanned images, so to speak. That sounds crazy, but the result is that the pages you look at are complete photographic reproductions, and yet the document is fully searchable!

To be honest, the OCR process is so time-consuming that I don’t enable it for books & magazines. But for invoices and loose sheets of paper, this is incredibly useful. I do not need to organize it – text search does it all!

I’ve cut up some 10 meters of books already, and turned them into PDFs. Yeah, it hurts a little at first, but hey.

For reading PDFs, I use the Mac’s built-in Preview, which is a lot better (and faster) than Adobe’s, eh… junk.

For locating documents, by file name or by content, there is Spotlight in the Mac, which also works with a server. This search technology is fast enough to instantly locate documents in many dozens of gigabytes of data. And since it’s available to all applications, there are some great front ends for it such as Yep, Leap, and Papers. I’ve been using DEVONthink Pro Office for all my docs and notes, because of its integration with the ScanSnap.

The above is all for the Mac, but there are probably similar offerings for Windows.

But the real revolution is much more recent…

Screen Shot 2011 12 18 at 03 10 33

There’s an “app” for the iPad, called GoodReader. This little bit of software lets me put over a thousand documents on the iPad and actually be able to find stuff, read stuff, and manage stuff. About 25 GB so far. Offline.

Which means I can now manage my entire collection as a folder on the server, add books, reorganize as needed, add tags and quickly access it from multiple Macs through Yep, as well as have the entire set on an iPad.

The Ultimate Bookshelf, no less, if you ask me. Alan Kay’s DynaBook has become an affordable reality.

To put it differently: food for thought – especially slow food for slow (off-line) thought, as far as I’m concerned!

The steepness dilemma

In Musings on Dec 15, 2011 at 00:01

There have been comments occasionally about the steep learning curve involved with stuff from JeeLabs. This is very unfortunate, but perhaps not totally surprising. Nor easy to avoid, I’m afraid…

The thing is, I love to get into new adventures, and I also really want to bring across the joy and excitement of it all. But what’s new for me may not make much sense to you, and what’s new for you may not be new for me.

There is a huge variety in what you, collectively, dear readers, may or may not already know and in what interests y’all. Even if we narrow the field down to “computing stuff tied to the physical world”, as this weblog does.

My approach has been to just throw everything together and write new posts in a fairly chaotic whatever-comes-to-mind-first order. Sometimes about raw electronics or micro-controllers, sometimes about hardware or software techniques, and often simply about what catches my interest and keeps me occupied. My plat du jour, so to speak.

There’s a problem with this, and it’s perhaps gradually getting worse: it may not help you with getting started. This daily weblog has an alphabetical and chronological index, listed at the bottom of each page, and updated from time to time – but that’s a bit like trying to learn how to swim by jumping in at the deep end, isn’t it?


A few days ago, my daughter asked me how to learn programming. I was shocked – because I don’t know!

What I do know is that learning something as complex as programming really well takes years (my take on it is at least a decade, seriously!). Of course you don’t have to learn everything in depth and become a pro at it all. More often than not, we just want to make a nice sandwich, not become a master chef or start a new career.

Malcolm Gladwell has written several times about the “10,000 hours rule”, which says that to get really good at something you have to throw at least 10,000 hours at it. Learning, struggling, wrestling, pondering, agonizing, and… enjoying your progress. For at least 10,000 hours, i.e. roughly ten years at three hours a day – being obsessed helps!

Wanna learn something really well? My advice: start today. And prepare yourself for a fascinating marathon.

The trick IMO, is to define success in smaller steps than you might normally do. Got a blinking LED? Celebrate!

Here’s the secret: there’s an incredible (yet vastly under-appreciated) advantage to open source hardware and software. That advantage is that every hurdle can be overcome. You’re not fighting a closed system, nor a puzzle which only others can solve. You’re fighting the challenge of figuring it all out – with nothing but hardware and software which can be 100% inspected, and documentation which can be found. When stuck, you have access to people who know more about it and are often willing to help you solve your specific puzzle.

Let me rub it in: there are no show-stoppers in this game. The worst that can happen is that you run into real-world limitations of either atoms or bits or time, but there’s an immense body of knowledge out there. Get ready for this, because here’s a fact for you: if it can be done, then you can do it. And if it can’t, you can find out why. This is technology – it works on logic and insight, all the way down.

But there are two constraints: 1) it takes time and effort, and 2) nobody is perfect.

What this means is that sometimes it will take more time and effort to get to the bottom of a problem and solve it. And we all make mistakes, cut corners, run out of steam, or grow impatient at times. Part of the game.

I’m no different. I’m not better at figuring things out than anyone else. I stumble and fight as much as anyone, of course. But I do spend the time and try to push through – especially when I get frustrated. Because I know there’s an answer to it. Always – though sometimes unexpected or unsatisfying (“it couldn’t possibly work, because …”).

Back to the real issue: how to get started with all this stuff.

Ok, to stay close to home, let’s assume you want to learn more about “computing stuff tied to the physical world”. If you’re starting from scratch (which is a relative concept), my suggestion would be to look for example projects which interest you, and start off by trying to repeat one of them. Find a web site or a book describing a project which fascinates you, and … spend time with it, just reading. If it sounds too daunting to reproduce, then it probably is – in that case, look for a simpler project to get your feet wet and cut your teeth on. You’ll get a much bigger boost from succeeding with a simpler project first, and then tackling the bigger one.


I used to have lots of practical experience in electronics, from years of fiddling around with it as a teenager. Yet the project I picked as my first one to get back into this game was a trivial electronics kit. It was a no-brainer in terms of complexity, and there was virtually no risk that I’d fail at assembling it. Sure enough, it worked. And guess what: this little project got me excited enough again to … write over 900 weblog posts, and spend the last few years fiddling with today’s hardware.

The reason it seems to work for me, is what Steve Jobs once described as: The journey is the reward. So true.

If you can set your goals and expectations such that you get a really nice mix of learning experiences (i.e. struggles ending in new insight) and successes, then you’re in for a wonderful journey. It’ll last a lifetime, if you want it to.

I will try to help where I can, because that’s one of my central goals with this weblog, but I’m not going to turn this site into a handholding step-by-step and just-follow-me kind of place. Because the second goal of this weblog is to encourage creative work. Which is what you get to once you get past some initial hurdles, and are (at least partly) on your way to becoming a 10,000 hour master of some topic aligned with your own interests.

The “steepness” of this weblog is not there to frustrate, of course – it’s unavoidable, IMO. And I encourage you to bite the bullet with each bump you run into. It’s part of the game to be able to find your way in, and when you do you will have gained the experience that everything in this field can be explored, learned, and reasoned about.

I’m not handing out pre-packaged fish, I’m trying to show you the fun that comes from fishing!

Having said that, I do have a request for y’all, dear readers: if you’ve wrestled your way through some of these weblog posts, and came out wishing that something very specific had been presented differently, or summarized, or linked to, then please do let me know (in the comments or by email). Most people who struggle and come out on top quickly move on to the next challenge, happy they now understand something better than before. But you can do your fellow future readers and strugglers a huge favor by explaining what the difficulty was. It’s often as simple as “if you only had mentioned at the start that …” – and things can sometimes become so much clearer. I’m at the writing end of this weblog, see, and I never know where the confusion or lack of detail sets in. Please help me learn – about you, about how to reduce unnecessary steepness, and about all my mistakes, of course.

Anyway. Onwards with the adventures!

Driven by passion

In Musings on Oct 30, 2011 at 00:01

After three years, I thought that it might be interesting to “show you around” here at JeeLabs:

DSC 2638

To the right – not visible here – is the main electronics corner, which will be shown in tomorrow’s weblog post because of another little project I’ve been working on. Let me explain the various bits ‘n bobs you see here:

JeeLabs tagged

It might seem odd, but this is all there is to the JeeLabs, ahem, “empire”.

The Vancouver view always brings back lots of good memories from a year-long stay there, a couple of years ago. An amazing place, in terms of cultural diversity and with its breathtaking nature wherever you go.

I also wanted to show you this to illustrate how little is needed to sustain a small shop. It wouldn’t be sufficient if you’re chasing riches and fame, but if all you want is to have fun and keep going, then hey, it’ll do fine.

JeeLabs would have been totally unthinkable on this scale two decades ago.

It’s all about dialogue

In Musings on Sep 25, 2011 at 00:01

Don’t know about you, but I’m having a great time with this weblog!

I’d like to go on a small excursion of what it’s all about, why it matters, and unpredictable stuff, such as the future.

This weblog started roughly three years ago. I love tinkering with technology, I love learning more about it, I love making new things (even if it’s only new for myself). Especially when it’s about mixing software, hardware, and mechanical stuff. I describe myself as an architect, a hacker, and a maker, and I’m proud of it. And I decided to write about it. One day I didn’t, the next day I did – it’s really that easy. You could start doing it too, any day.

A weblog is a publishing medium. Push. From me to you – whoever you are and wherever you are. As long as I enjoy writing it, and as long as you enjoy reading it, we both win.

One crucial aspect of this process is that we need to share the same interests. If I tried to write about culture, nature, politics, or music, chances are that we’d no longer be in sync (and I might have very little interesting to report!). We all differ, we all embody our own unique mix of interests, opinions, and experiences, and there’s no reason whatsoever to assume that a shared interest in technology means we share anything else. The great thing is: it doesn’t matter. We are linked by our humanity, and our diversity is our greatest asset. Vive la différence!

So how does this weblog thing work? Well, from what I’m writing about and have written in the past, you can tell where my passion lies. And you have the simple choice of reading and following the posts on this weblog – or not. From your comments and emails, I think I get an idea who (some of) you are. We’re in sync.

This process excites me. Because it transcends culture, age, background, and all those other aspects in which we differ (and don’t even know about each other). We can share our interests, learn from each other, exchange tips and ideas, and all it takes is an internet connection and the ability to read and write in English, even if that’s not everyone’s native language.

But weblog publishing is an asymmetric process – there’s no real dialogue going on at all. I don’t really know who reads this. There might be thousands of readers coming back every day, or there might be just those who post comments – I wouldn’t know. I used to care about that, but I no longer do. I don’t collect stats and I don’t “track” visitors. It’s just another distraction and life’s too short. But more important to me is motivation: my goal is not to have an “important” blog, a big readership, or lots of fans. Nor a big shop or many customers, for that matter. My goal is to have fun with technology, learn as much as I can, invent new stuff, and share to inspire others to do the same. It took me a long summer break to figure this out.

Of course I have my preferences, and of course there are areas I know more about and areas I know less about. The field is way too large to dive into every topic, let alone build up expertise in each – although I do consider myself reasonably open-minded and knowledgeable about a decent range of technical domains. And those gaps? Well, that’s the challenge, of course: filling one little gap each day – day in, day out!

So what does this mean for the future?

I see no reason why any of this should stop. It’s proven to be sustainable for me, and there’s plenty of material to go into and talk about to last a lifetime. As you may have noticed, I’m moving away from a pure hardware focus in this weblog. The central theme will definitely remain “Physical Computing in and around the house”, but there’s more to it than the ATmega + RFM12B that form a JeeNode, and I’d like to explore a wider range of topics, including software and data processing, and probably also mechanical aspects (construction, CNC, 3DP, bots):


I do have a little request I’d like to make: whenever you read a post on this weblog and have a suggestion or insight which is relevant, please consider adding a comment. I tend to go with the flow (of ideas), and I tend to pick the easy low-hanging fruit first. Suggestions made in recent days on all this scary 220V power measurement stuff have helped me greatly to better understand what’s going on, and to come up with more experiments to try and figure it all out. I encourage you to point me in the right direction and to point out mistakes.

Who knows, it might lead to a post which is more useful to you. We’ll all benefit!


In Musings on Sep 1, 2011 at 00:01

Well, that was a pretty strange “break”, as far as summers go… mostly cloudy!

For the first half of the summer, I’ve learned to relax and not go online, even when at home. And for the second half, we’ve been going out, mostly short trips around the country – like a few days by train + bike in “Drenthe”:

And a quick trip to Paris:

Paris? Yes, that’s Paris. There’s a vineyard at the Montmartre, not far from the Sacré Coeur.

So is this:

Don’t believe me? Ok, here’s a more traditional image:

(all vacation pictures by Liesbeth, as usual)

Anyway, it’s good to be back. I’m looking forward to smelling the solder fumes, burning my fingers, and seeing chips go up in smoke again – eh, something like that … ;)

AA boost ripple

In Musings on Jun 20, 2011 at 00:01

The AA Power Board contains a switching boost converter to step the voltage from a single AA battery up to the 3.3V required by a JeeNode.

Nifty stuff. Magic almost… if you take the water analogy, then it’s similar to pushing water up a mountain! Wikipedia has a schematic with the basic idea:

Boost Circuit

Think of the coil as a rubber band (I’ll use a gravitational force analogy here): closing the switch is like stretching it from the current voltage down to ground. Opening the switch is equivalent to letting go again – the rubber band contracts, pulling the end back up and then exceeding the original height (voltage) as it overshoots. The diode then sneakily gets hold of the rubber band at its highest point. The analogy works even better if you imagine a cup of water attached to the end. Well, you get the picture…

The trick is to repeat this over and over again, with a very efficient switch and a good rubber band, eh… I mean inductor. The way these boost regulators work, you’ll see that they constantly seek the proper voltage (feeding a storage pool at the end, in the form of a capacitor).
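As a back-of-the-envelope check, the ideal (lossless) boost relation Vout = Vin / (1 − D) already tells you roughly how hard the switch has to work. Here is a tiny Python sketch of that idealized model – not a description of the actual regulator on the AA Power Board, which is far smarter than this:

```python
# Idealized boost converter math: assumes continuous conduction
# and 100% efficiency, so real regulators will deviate from this.

def boost_duty_cycle(v_in, v_out):
    """Ideal duty cycle D such that Vout = Vin / (1 - D)."""
    return 1.0 - v_in / v_out

# Stepping a 1.5V alkaline cell up to 3.3V:
d = boost_duty_cycle(1.5, 3.3)
print(f"duty cycle: {d:.0%}")  # switch closed a bit over half of each cycle
```

With a fresh 1.5V cell the switch is closed for a bit over half of each cycle; as an Eneloop sags toward 1.2V, the ideal duty cycle creeps up to roughly 64%.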

Enough talk. Let’s look at it with a scope:


What you’re seeing is not the output voltage, which is of course 3.3V, but the variation in output voltage, which is measured in millivolts. IOW, 45 times a second, the regulator is overshooting the desired output by about 20 mV, and then it falls back almost 20 mV under 3.3V, at which point the booster kicks in again.

Let’s load the circuit lightly with a 10 kΩ resistance, i.e. 330 µA current draw:


No fundamental change, but the horizontal axis is now greatly enlarged, because the discharge is more substantial, causing the boost frequency to go up to 2.2 kHz.

With a 1 kΩ load, i.e. 3.3 mA current draw, you can see the boost working a bit harder to charge up, i.e. the slope towards ≈ 20 mV above 3.3V is more gradual:


Keep in mind that the difference is also due to yet more magnification on the horizontal time axis. The boost converter is cycling at 21.1 kHz now.

With a 330 Ω load, i.e. 10 mA current draw, the waveform starts getting a few high-frequency spikes:


The total regulation is still good, though, with about 25 mV peak-to-peak ripple.

Now let’s simulate what happens with the RFM12B transmitter on, using a 100 Ω load, i.e. 33 mA current:


Looks like the regulator needs more time to charge than to discharge, at this power level. Still the characteristic “hunting” towards the proper voltage level.

With a 68 Ω / 50 mA load, the regulator decides to use more force, losing a bit of its fine touch:


The scope’s frequency measurement was off here – it probably got confused by the high-frequency components in the signal. The repetition rate appears to be ≈ 65 kHz. But now the total ripple has increased to about 70 mV.

That’s probably about as high a load as we’re going to need for a JeeNode with some low-power sensors attached. But hey, why stop here, right?

Here’s the output with a 47 Ω load, i.e. about 70 mA:


Whoa… that’s a ± 0.05 V swing, regulation is starting to suffer. I also just found out that the scope software has peak-to-peak measurement logic built in (and more). No need to estimate values from the divisions anymore.

Note that a 70 mA current at the end will translate to some 200 mA current draw on the battery – that’s the flip side of boost regulators: you only get higher voltage by drawing a hefty current from the input source as well.
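That “some 200 mA” figure follows directly from power balance: the battery has to supply the output power plus the converter’s losses. A quick sketch, assuming a ballpark 80% efficiency – a typical figure for small boost regulators, not a measured value for this board:

```python
# Power-balance estimate: P_in = P_out / efficiency, so
# I_in = (I_out * V_out) / (efficiency * V_in).
# The 80% efficiency is an assumption, not a measurement.

def battery_current(i_out, v_out=3.3, v_in=1.5, efficiency=0.8):
    """Estimated input current to deliver i_out at v_out from a v_in cell."""
    return (i_out * v_out) / (efficiency * v_in)

i_in = battery_current(0.070)  # the 70 mA load case
print(f"estimated battery current: {i_in * 1000:.0f} mA")  # ~190 mA
```

Close enough to the ~200 mA mentioned above, and it also shows why the battery drains roughly three times faster than the load current alone would suggest.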

Until this point, I used a standard 1.5V alkaline battery, but it wasn’t fresh and was showing signs of exhaustion at these power levels (the output was a bit erratic).

To push even further, I switched to a fully charged Eneloop battery, which supplies 1.2 .. 1.3V and has a much lower internal resistance. This translates to being able to supply much larger currents (over 1 A) without the output voltage dropping too much. In this case, the change didn’t have much effect on the measurements, but I was worried that continued testing would soon deplete the alkaline battery and skew the results.

To put it all in perspective, here is the output with the same 47 Ω load, but showing actual DC voltage levels:


So you see, it’s still a fairly well regulated power supply at 70 mA, though not quite up to the 3.3V it should be.

One last test, using a 33 Ω resistor, which at 3.3V means we’ll be pulling a serious 100 mA from this circuit:


Measuring these values with a multimeter gives me 3.16 V @ 89 mA, while the resistance reads as 32.7 Ω – there’s some inconsistency here, which might be caused by the high-frequency fluctuations in the output, I’m not sure.
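For what it’s worth, Ohm’s law alone predicts a somewhat higher current than the meter reports – a quick sanity check:

```python
# Cross-checking the multimeter readings against Ohm's law.
v, r = 3.16, 32.7            # measured voltage (V) and resistance (ohm)
i_expected = v / r           # current Ohm's law predicts (A)
i_measured = 0.089           # 89 mA as read on the meter

print(f"expected {i_expected * 1000:.1f} mA, measured {i_measured * 1000:.0f} mA")
# The ~8% gap is what hints at measurement error or HF ripple confusing the meter.
```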

But all in all, the AA Power Board seems to be doing what it’s supposed to do, with sufficient oomph to drive the ATmega, the RFM12B in transmit mode, and a bit of extra circuitry. A bit jittery, but no sweat!

Update – With a 10 µF capacitor plus 10 kΩ load the limits don’t change much, just the discharge shape:


The same, at higher horizontal magnification:


Note that AC coupling distorts the vertical position – it’s still ± 18 mV ripple, even if the upward peak appears higher.

Something needs to change

In Musings on Feb 27, 2011 at 00:01

The previous post was about explaining which walls I have been hitting. Many thanks for your comments on that!

The task ahead is to move on! This must not become a blog about my ability to function (or not) in the context of JeeLabs. I’ve been doing a lot of thinking, and talking to people around here.

There’s a pattern. It goes as follows: I start on a big new challenge. Push really hard, get totally in the flow, and get lots of things done. This can last for days, or in the case of JeeLabs: several years. Until, gradually, other mechanisms start taking over, governed by obligations and expectations, essentially.

The reason this has worked so well with JeeLabs, is that the weblog was simply a report of what was happening anyway. A diary. Easy to do, and useful for me as well, to go back and review what I did. The shop just gave it more direction: making stuff happen was already a given, so making stuff which others can use as well was really just “low hanging fruit”. Easy to include, and very helpful to stay focused.

I think that over these past two years, I’ve unconsciously moved deeper and deeper into this pipeline. From doing it all as challenge and exploration, came the desire to describe it all more and more on the weblog. And from there it all evolved into making sure an increasing portion of this would end up as products in the shop.

It’s not quite the Peter Principle, but in a way, I’ve gradually drifted away from what this was all about: exploration, learning, and yes, also sharing. That’s why I started JeeLabs, and that’s what I want to continue doing with JeeLabs as much as ever.

I came across some interesting articles these past few days. Seth Godin talks about business needing to be of the right size. In my case, that means: sustainable. No more. No less. I’m confident that I can figure this one out.

Paul Graham talks about Maker’s vs. Manager’s schedules. Real life has a way of interfering with makers. Tinkering requires concentration, for all but the most trivial and obvious projects. This would explain exactly what happened here – as I kept ahead of the curve with weblog posts and shop items, all was well. I was in the flow and tinkering all day in the fascinating and endless world of physical computing. The emphasis was on the right stuff, and the rest followed effortlessly. Really. The weblog was oodles of fun, even with a daily post, and so was the shop, which is filled with interesting and new experiences about the world of atoms, production, and fulfillment.

I don’t want to list the projects here which I have already started up or new ones I would love to go into. It’s all fun, except that even just thinking about listing them drives home the fact that they are all out of reach for me!

Got to track inventories, order stuff, find second sources, juggle the cash flow, get stuff assembled and tested, deal with back-orders and new orders, handle sales / tech support emails, and more. Welcome to doing business, eh?

I’ll share a secret with you: I liked so much doing the daily weblog when it went well, that I’ve been pondering for the last week about how to resume this weblog on a daily basis. Conclusion, alas: it can’t be done. I need to be on a maker’s schedule again, to use Paul Graham’s terms. And both the weblog and the shop make that impossible.

Something needs to change.

No more daily weblog. Maybe after the summer, if I can get ahead of the curve again. Instead, I’d like to do a couple of regular columns – such as the Easy Electrons series, which I really want to keep going. Maybe a second series, but no promises yet. And posts on an irregular basis, when there is something substantial to report. I’m not going to water down the posts and write about trivialities. Nor am I going to just report about what others do elsewhere. You’ve got the same access to internet as I do. The JeeLabs weblog will remain about original content. For noise and fluff, I’m sure you have plenty of choices elsewhere.

The webshop is currently not in optimal shape. Too many out-of-stock cases popping up all the time. I’m solving this by scaling up. Getting components by the thousands where needed, and getting products assembled by the hundreds where possible. I’m also going to do something painful: raise prices. I’m serious about JeeLabs. It is going to stay, and it needs to be run in a serious, sustainable manner. I can pour in my time and energy. But the figures have to add up, in a way which matches the scale at which JeeLabs operates. There are some economies of scale, but obviously not in the way DigiKey or Apple can operate :)

The shortages won’t go away overnight. I ordered 500 relays in January. Expected a first batch end of that month, only to be told a week ago that it was “pushed back” to the end of April. I came across a second source, so hopefully mid March I can provide relays anyway. ATmega shortages are over. Same for several other important items. I’ve got outstanding orders and agreements for hundreds of units for just about all items. I understand the risks and I’m learning the ropes. I just need to get better at it so it won’t take so much of my time in the long run.

Because in the end, JeeLabs is all about exploring and inventing. And, once those are back in the picture, sharing.



In Musings on Feb 9, 2011 at 00:01

Sorry, no post today. A cold and flu got the best of me.

Life resumes

In Musings on Jan 8, 2011 at 00:01

This thing came in recently – yippie!

Dsc 2410

Eh, 10,000 of them in fact – that ought to be enough for a while! ;)

In the past two days, I’ve been going through all the back-orders and sending out over 50 of them. Note that most of the remaining orders are the bigger ones, since all the small and easy ones had already been dealt with.

This also brought out a serious shortcoming in my store’s inventory tracking system. I can see how many items are left (by going through the shelves and counting stuff), but I can’t see how many of those items are “already taken”, i.e. reserved for current back-orders.

Now in general, that’s not such a big deal, because most items are supposed to be in stock and the back-order is (should be!) low on most days. It was just a matter of making a mental note and taking care of the low-stock stuff.

But with so many items sold-but-not-yet-shipped, I’ve started seeing an avalanche effect: finding out that most of the stuff on the shelves was going much faster than anticipated while taking care of back-orders, and even running into some new shortages. Yuck.

This has happened to Ether Cards (now resolved), and has just happened to Carrier Boards as well (will be resolved in 2nd half of this month). Oh, and then there’s that 2×16 LCD which I think are in customs right now.

Summary: if you’re waiting for stuff, please hang in there. There’s more coming this weekend, but I’m not going to go all out 7 days a week and risk my health again. Been there, done that – I can now confirm from experience that it’s counter-productive.

In the atoms world, some limits, constraints, and delays are hard. Can’t do much else but respect that.

The good news is that I’ve just implemented some new tracking logic in my shop database, giving me real-time insight in those “already taken” figures. It’s nice to be on the consuming end of a database system for a change (I used to be on the producing end in a previous life).
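In essence, that tracking logic boils down to subtracting reservations from physical stock. A minimal sketch of the idea – the names here are hypothetical, not the actual shop database schema:

```python
# The "already taken" bookkeeping in its simplest form: stock that
# can still be promised to new orders is what's on the shelf minus
# what's reserved for unshipped back-orders.

def available(on_hand, reserved):
    """Units that can still be sold without creating a new back-order."""
    return on_hand - reserved

# 120 units on the shelf, 90 reserved for back-orders:
print(available(120, 90))  # only 30 can actually be promised
```

A negative result would mean the shelf count can’t even cover the existing back-orders – exactly the avalanche situation described above.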

Sorry for this somewhat off-topic post, but I’m going to call this Feierabend and enjoy the rest of this Friday evening watching a TV series with a nice sip of whisky. More substantial content to resume tomorrow.

Meilleurs voeux de Paris

In Musings on Jan 1, 2011 at 00:01

Happy new year!

We’re celebrating with a brief trip to Paris, visiting friends.

We’re just south of Paris actually, in a tiny village called Vauhallan. Won’t have the opportunity to go to the “real” Paris today, so here’s a mini-impression from this delightful little village …

Regular transmissions will resume tomorrow :)


In Musings on Dec 31, 2010 at 00:01

What? Last day of the year already? Hey! Where did the rest go?


It’s been a very exciting year at Jee Labs. But I don’t really want to rehash any accomplishments here, nor make all sorts of hand-waving predictions. Life’s too short for that sort of drum beating.

For me, 2010 has been the year where JeeLabs became totally real. The stuff I want to keep doing. The place I want to be. The people I want to meet. The adventures I want to be in. The experiences I want to share. The innate curiosity I want to nurture.

In 2010, I met lots of fascinating people and made several precious new friends. Not just online and virtually – THAT was really the biggest surprise of all: there’s this mostly-virtual world, yet it allows profoundly extending and deepening everything that happens in the real world – who would have imagined this? Certainly not me.

This month’s back-order fiasco has been a wakeup call. Atoms are not bits. I’m still learning. People tell me that I’m good with bits – well, I’m determined to get good with atoms too. There’s just way too much fun ahead in this Computing stuff tied to the physical world field.

Evidently, next month will be about recovery. But merely overcoming that would still be defeat. I don’t ever want to end up in this month’s situation again. It saps all the energy and it saps all the pride. Not to mention the failure to deliver in a timely manner and to meet all reasonable expectations. Never again.

There’s a lot that needs to be improved. The shop is awkward, the wiki documentation is incomplete, plug testing needs to catch a wider range of issues, known software problems need to be fixed, potential delays need to be communicated sooner, shipping needs a tracking option. Some of this is a matter of resources (my time, mostly), some of it is stuff I will need to get better at, and some of it is simply a matter of getting the priorities right.

Time will tell.

For 2010, I’d like to thank everyone who helped JeeLabs move forward and dive into numerous exciting projects.

Haagsche Hopje

May the year 2011 bring you – and everyone close to you – lots of curiosity and creative / fulfilling activities.

Gas consumption

In Musings on Dec 24, 2010 at 00:01

The gas consumption at JeeLabs is enormous these days – some 15..20 m3 per day right now. One reason for this is that our house is well-insulated but very open. All the warm air tends to move 2..3 flights up, even though we try to keep all the doors upstairs tightly shut.

The trip to Germany a few days ago provided an interesting opportunity to get a better insight into how all this heating works.

Our thermostat is set up to heat the house from roughly 6:00 (6AM) to 23:00 (11PM). It’s a fairly advanced unit with some predictive logic to attain those settings, which is why it actually starts about an hour early:

Screen Shot 2010 12 22 at 12.53.21

The above graph shows two superimposed heating cycles, with the current one still in progress (it was around 13:00 when I took that snapshot). As you can see, the heater is running almost flat out, with some extra peaks during hot water use.

Here’s the gas consumption over the past 7 days:

Screen Shot 2010 12 22 at 12.51.45

The gray bands are sunset/sunrise, i.e. day/night periods.

What I did was turn down the heating on Friday morning, when we left for our trip. The normal setpoint is 19..20°C, but the thermostat has a “vacation mode” which changes that into a permanent 14°C.

As you can see, the house took almost a day to cool off. Not bad, knowing that it was permanently freezing outside at that time.

On Saturday, the heating starts up a bit again, and then stays on at a reduced level most of the time, i.e. day and night, until we got back late Monday evening. Which is when I restored the normal cycle.

The interesting bit is the end effect of getting back to normal. Here are the same readings, now as totals over the entire day:

Screen Shot 2010 12 22 at 13.21.47

Same pattern as before, of course: 17th and 18th almost nothing, then slightly lower consumption rates to keep the house at 14°C, and finally on the 21st a big push to get back to our normal comfy levels.

Here are the same values, numerically:

  • 15th – 15.71 m3
  • 16th – 16.83 m3
  • 17th – 3.52 m3
  • 18th – 5.79 m3
  • 19th – 12.40 m3
  • 20th – 14.02 m3
  • 21st – 28.16 m3

I should add that outside temperatures were a bit lower on the 19th and 20th, so these consumption levels cannot be compared 100% accurately.

But what stands out is that heating up the house back to 19..20° takes almost as much energy as what was saved on the days before. In other words: you can try to save all you like by turning the heater low when leaving the house – if you come back and want to get it back to the original level again, you basically have to add almost as much energy back in as if you hadn’t turned the heater down in the first place!

Heating is not a matter of “on = comfy, off = energy saving”, but one of keeping a whole pile of stones and concrete at a certain temperature. And this holds true even in very cold times. Apparently, the amount of stored energy is substantial compared to the amount of energy loss, and having a slightly cooler house doesn’t affect the rate of energy loss all that much.

This probably also explains why our gas consumption can still be 25% lower than average in this neighbourhood, despite the fact that many people are away and work elsewhere – while I keep the house heated all day long… (just a bit more sparingly than most, I guess).

Snowed in

In Musings on Dec 23, 2010 at 00:01

There are worse times to be immobilized, than in this wintery time – with large parts of Northern Europe completely snowed in.

Here’s the view from JeeLabs, i.e. Houten / Netherlands, right now:

Dsc 2404

Dsc 2401

Dsc 2403

I find it absolutely gorgeous. With all the beauty of this scenery in plain view – and the nearly 30°C temperature differential across our double paned windows as reminder of the huge comfort brought about by yet another marvel of modern engineering.

Schönen Gruß aus Braunschweig!

In Musings on Dec 20, 2010 at 00:01

Not to worry, this weblog remains “mostly” English :)

Just wanted to say hello while on a very brief weekend-trip to Germany, visiting my brother…

I’ll leave you with a couple of visual impressions from this trip, including a visit to Wolfsburg (Alberto Giacometti and the “Phaeno” Science Museum) and the “Weihnachtsmarkt” …

Regular scheduled transmissions to resume tomorrow…

What is “power” – part 2

In Musings on Dec 4, 2010 at 00:01

To continue yesterday’s post, let’s go into that last puzzle:

Why does the 1x AA Power Board run out of juice 3 times as fast as a 3x AA battery pack?

The superficial answer would be: it’s one battery instead of three, so obviously it’ll last 1/3rd as long.

But that’s not quite the whole story…

The AA Power Board contains a switching regulator called a boost converter. Switching regulators are a lot more efficient than ordinary “linear” voltage regulators. They play games with converting energy back and forth between electrical and magnetic fields. And by doing so, they sidestep the rule that current is the same everywhere in a circuit.

But let me first explain what a linear regulator does:

Screen Shot 2010 12 02 at 22.12.51

Don’t laugh – that’s the essence of a linear voltage regulator: a variable resistor!

Well, it’s far more complex than that in reality. But the machinery inside a linear voltage regulator is all about wasting energy. The goal is to waste just the right amount to get the desired 3.3V on the output pin, regardless of changes in input voltage and current draw. Functionally, all the regulator does is continuously adjust its internal resistance to get the right output.

If you think about it, there’s in fact little else you can do with resistors. They exist to drop voltage, and by doing so, they generate heat, even if the amount is minimal and usually irrelevant.

The other type of regulator is the switching regulator. Huge topic, way beyond the scope of this post (and way over my head, in fact). I’m bringing it up because the AA Power Board uses a switching regulator to “boost” the voltage from say 1.5V to 3.3V.

So how does one boost voltage?

The hydraulic analogy would be to pump water from one level to a higher level using only water power (height and flow). There’s an ingenious pump called a hydraulic ram which can do that. The one I’ve seen in action works by letting water flow and then quickly interrupting that flow. All of a sudden, the water has nowhere to go and pressure builds up. All you need is an outlet pointing up, and the water will go there as only option.

The flow will quickly stop, so the trick is to repeat this cycle, and then – in a pulsating fashion – you can actually get water to climb up. Voilà, a higher voltage!

That’s also how the AA Power Board works. It contains an efficient switch, which pulses the current flow, and (in most cases) an inductor which transforms current changes into a magnetic field, and vice versa. The inductive “kick” is what makes it possible to play games with current vs. voltage without turning it all into heat.

But you don’t get anything for free. Apart from circuit losses, the conversion ratio applies to power – drawing 10 mA @ 3.3V will require about 30 mA @ 1.1V, with circuit losses increasing that slightly further. You can’t ever get more watts out of this than you put in!

So, roughly speaking, to get 3.3V from a 1.2V AA NiMH battery, you need to draw about 3 times as much current from the battery as what will go into the target circuit.

Which is why a 2000 mAh single AA battery behaves roughly like a 650 mAh battery delivering 3.3V via the boost converter. With our circuit drawing 10 mA, that will last 650 mAh / 10 mA = 65 hours.
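The same estimate in a few lines of code – a sketch only: the 85% conversion efficiency is my assumption for illustration, not a measured figure for the AA Power Board:

```python
# Boost-converter bookkeeping: watts out = watts in x efficiency,
# so the battery-side current scales with the voltage ratio.
def input_current_ma(v_out, i_out_ma, v_in, efficiency=0.85):
    return (v_out * i_out_ma) / (v_in * efficiency)

i_batt = input_current_ma(3.3, 10, 1.2)   # ~32 mA drawn from a 1.2V cell
hours = 2000 / i_batt                     # ~62 h from a 2000 mAh AA
print(round(i_batt, 1), round(hours))
```

That agrees with the rough 65-hour figure above, once the assumed conversion losses are taken into account.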

That’s roughly a third as long as the 3x AA battery pack. QED.

There are some losses and inefficiencies. Even with a switching regulator, you should expect no more than 90..95% efficiency under good conditions. But this is nowhere near the inefficiency of a linear regulator with a high input voltage. On a 9V battery, the on-board regulator of a JeeNode will be less than 40% efficient.

Note that there are also down-converting switching regulators (called buck converters), and these do have much higher efficiency levels. Even the AA Power Board is able to handle over 5V on its input, and still deliver a 3.3V output, by going into a buck conversion mode. In which case the input current will be less than 10 mA to deliver 10 mA @ 3.3V on its output – something a linear regulator simply cannot do.

Conclusion: if you want very power-efficient solutions, look carefully at what voltages to use for supplying your circuits, and use switching regulators when the voltage differential is substantial. Linear regulators can only drop voltage, and can only do so by wasting energy.

There is another benefit to using a boost converter: it lets you suck the last breath out of batteries. The AA Power Board can be used with batteries with only 0.85V or so left in them, and if kept connected and running, it’ll work all the way down to 0.6V or so. You can be assured that by the time an AA Power Board gives up, its battery will have been completely drained!

One last note about the AA Power Board: maximum efficiency is achieved with an input voltage between 2.4V and 3.0V, so if you want to optimize, consider using either 2 AA (or AAA) cells, or a 3V battery such as the CR123A, which is half the size of an AA and a great source of energy with about 900 mAh of oomph…

But just to put all this into perspective: if all you want is a good solid source of power for a JeeNode and everything attached to it, use a 3x (or 4x if you have to) AA battery pack. Or use power from USB.

Me, I’ll stick to single rebranded Eneloops @ 1.2 .. 1.3V.

What is “power”?

In Musings on Dec 3, 2010 at 00:01

Here’s something which may be totally obvious to some, yet clear as mud to others…

Voltage, current, power – what are they? Here are some puzzles I’ll go into:

  • Why does a 4x AA battery pack run out almost as fast as a 3x AA battery pack?
  • Why does a 9V battery last about 1/4th as long as a 3x AA pack?
  • Why does the 1x AA Power Board run out of juice 3 times as fast as a 3x AA battery pack?

Let’s take it one step at a time. An often-used analogy for electricity is water (see hydraulic analogy). To simplify, let’s say that electricity flows from a high voltage to a low voltage, such as ground. Likewise, water flows from a high location to a lower location. So let’s make the analogy that high voltage equals water high above the ground.

This is what happens in a circuit where a 3.6V battery powers a JeeNode:

Screen Shot 2010 12 02 at 19.56.58

While in the battery, the voltage is “at” 3.6V. When it goes through the on-board voltage regulator, it is made to drop to 3.3V, and then that electricity flows through the ATmega, RFM12B, etc, to ground.

Let’s assume the circuit draws 10 mA. The thing about current is that it doesn’t change across a circuit, the way voltage does. Using the water analogy: current is the amount of water flowing. And no matter where it flows, the amount at the top is the same as the amount lower down. It might trickle down in different ways, but the amount into the whole circuit is the same as the amount coming out:

Screen Shot 2010 12 02 at 20.13.33

So what we have is a battery, where the electricity “starts out”, and then it traverses first the voltage regulator, then the ATmega, etc, and then it flows back into the battery at 0V, which in effect “pumps” it back up to 3.6V. And the cycle repeats.

I’m taking many liberties here. Electricity doesn’t really flow from + to -, and there’s no pumping involved either. But as a mental model, this actually works pretty well.

So what’s “power” then, eh?

Well, power is defined as “voltage times current”. I’ve added the calculations in that second diagram. As you can see, with a 10 mA current consumption, the battery generates 36 mW, of which the voltage regulator consumes (i.e. wastes) 3 mW, and the ATmega, etc, get the remaining 33 mW.

What you may not realize, is that “consuming power” is basically equivalent to “turning electricity into heat” – because that’s what happens, essentially. Think about it: the JeeNode is really just a miniature electric heater. It isn’t very much heat, and it happens over a long stretch of time. But in the end, when the battery is dead, you’ve done nothing but heat up the surroundings a teeny bit…

Well, almost: a small amount will have been emitted as radio energy when the RFM12B is transmitting.

Ok, so now let’s try to answer the above three questions.

Why does a 4x AA battery pack run out almost as fast as a 3x AA battery pack?

This is due to the voltage regulator. If you feed it, say, 4.8V instead of 3.6V, it will simply waste that extra energy: the voltage drop over the regulator will be 1.5V instead of 0.3V, so that the output of the regulator stays at 3.3V. That’s the whole purpose of the regulator after all: to deliver a constant voltage, regardless of the voltage placed on its input pin.

Here’s what would happen if you put 9V on the voltage regulator:

Screen Shot 2010 12 02 at 20.32.10

And here’s how that works out in terms of power consumption:

Screen Shot 2010 12 02 at 20.33.05

(correction: the 5.3V – bottom middle – should have been 5.7V)

In other words: you can raise the voltage all you like, it won’t have any effect on the amount of power needed or used by the ATmega, etc. They will always get 3.3V, and will continue to draw 10 mA as before.

The only thing that happens is that the voltage regulator works a little harder, and wastes a bit more power by turning it into more heat!
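The arithmetic behind both diagrams fits in a few lines. This is just a sketch of the example figures quoted above (10 mA draw, 3.3V output); the helper name is mine:

```python
# A linear regulator passes the full current, so everything above
# the 3.3V output is dropped across it and turned into heat.
def power_split_mw(v_in, v_out=3.3, i_ma=10.0):
    p_battery = v_in * i_ma               # total power drawn (mW)
    p_regulator = (v_in - v_out) * i_ma   # wasted in the regulator (mW)
    return p_battery, p_regulator, p_battery - p_regulator

print(power_split_mw(3.6))  # 3x AA: 36 mW total, ~3 mW wasted, ~33 mW used
print(power_split_mw(9.0))  # 9V: 90 mW total, ~57 mW wasted, ~33 mW used
```

Note how the circuit’s 33 mW never changes – only the regulator’s wasted share grows with the input voltage.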

Conclusion: if your circuit doesn’t need the higher voltage to work properly, power it at the lowest practical voltage. Keep in mind that the “low-drop” voltage regulator on the JeeNode likes to have at least 0.1..0.2V to do its job properly. Both 3x AA packs and LiPo batteries are just about perfect for JeeNodes.

Another very important lesson from this is that if you’re trying out stuff, and you notice that the voltage regulator is getting very hot because some part of your circuit draws a lot of current, then you should try to reduce (!) the voltage you’re feeding into it: you’ll help the regulator, by giving it less power to eat up and waste.

Why does a 9V battery last about 1/4th as long as a 3x AA pack?

Now with the above explanation, it should be clear that the 9 volts won’t give you a longer-running JeeNode. But why is it so much shorter?

The reason is that not all batteries contain the same amount of energy. The capacity of a battery is specified in milliampere-hours (mAh): an AA battery often has over 2000 mAh. This means it can supply 2000 mA for one hour. Or 1000 mA for 2 hours, 500 mA for 4, etc. And then it’s empty.

Energy is defined as Voltage x Current x Time (or equivalently: Power x Time). The unit is watt hour.

So the amount of power you get when draining an AA battery in one hour (voltage x current) is: 1.5V x 2000 mA = 3.0 watts. Consequently, the amount of energy in a 3x AA pack is 9.0 watt hours.

For a standard 9V battery, the figure is around 500 mAh. This is 9V x 500 mAh = 4.5 watt hours of energy.

Great, so a 9V battery has half as much energy as a 3x AA battery pack, and should last about half as long, right?

Wrong! – go back to that first discussion about feeding the voltage regulator with 9V instead of 3.6V: it just turns that extra voltage into heat.

The way to estimate lifetimes is to use the current draw as the starting point. We assumed in all these examples that the circuit draws a constant 10 mA.

On a 3x AA pack (or 4x AA, for that matter), this means we get 2000 mAh / 10 mA = 200 hours of run time.

But on a 9V battery, we’ll only get 500 mAh / 10 mA = 50 hours of run time!
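Both estimates follow from one rule of thumb: run time = capacity / current draw, since a linear regulator passes the same 10 mA no matter what the input voltage is. A minimal sketch (the helper names are mine, not from the post):

```python
def run_hours(capacity_mah, draw_ma):
    # Run time of a battery, ignoring self-discharge and cutoff voltage.
    return capacity_mah / draw_ma

def energy_wh(volts, capacity_mah):
    # Energy content: watt hours = volts x ampere hours.
    return volts * capacity_mah / 1000.0

print(run_hours(2000, 10))   # 3x AA pack: 200.0 hours
print(run_hours(500, 10))    # 9V block:   50.0 hours
print(energy_wh(4.5, 2000))  # 3x AA pack: 9.0 Wh
print(energy_wh(9.0, 500))   # 9V block:   4.5 Wh
```

So the 9V battery holds half the energy of a 3x AA pack, yet delivers only a quarter of the run time – the rest is burned off in the regulator.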

Conclusion: don’t use 9V battery packs for JeeNode projects. They are an expensive way to waste energy, and you’ll keep running to the shop to get new ones.

Why does the 1x AA Power Board run out of juice 3 times as fast as a 3x AA battery pack?

Before even going into that, the first puzzling fact about running a JeeNode off a single AA is really: how can a 3.3V circuit run off a 1.5V power source in the first place? Think about it. As you know, most electrical circuits don’t work at all when the supply voltage is too low.

It’s equivalent to asking: how can you get water which flows at a certain level to lift itself to a higher level?

Hint: there exists an ingenious type of water pump which can do this!

To be continued in tomorrow’s post…

Hand-made envelope

In Musings on Nov 13, 2010 at 00:01

Yesterday, I got a small cardboard box from India with some geared DC motors I’d like to experiment with. It came wrapped in this:

Dsc 2246

Quite sturdy and hard to remove, in fact. Note also the text on the declaration for customs: “Parts” – heh, yeah, that’s all they need to know :)

And it’s… hand-stitched!

Dsc 2247

Amazing. I now have this mental picture of someone with needle and thread, sitting in lotus position in the shipping department, in charge of wrapping up all the outgoing packages :)

I find these small glimpses of the cultural richness on our planet one of the most thrilling aspects of doing business these days. Long live human diversity!

Biting a mini-bullet

In Musings on Sep 28, 2010 at 00:01

After some agonizing over the infinite number of trade-offs available these days, I’ve finally made a couple of big decisions w.r.t. Internet and Jee Labs.

Until now, all the web sites for Jee Labs have been running on a rented dedicated web server located in Germany. That little setup has served me extremely well, running some 5 years with just a (precautionary) HD swap about halfway down the road. Downtime over these years has been less than 24 hours I think, in total – or as they say: “three 9’s” (99.9% uptime).

The current server machine is showing its age though, so some form of upgrade and transition is needed in the not too distant future. Preferably one which can again last 5 years or more. Content changes all the time – but a server really shouldn’t need to. It’s a commodity by now.

As it so happens, Fiber to the home (FTTH) is currently being rolled out around here at Jee Labs. Meaning: fast uploads, not just downloads. Which changes the landscape – it’s no longer necessary to rent something, a server at home will work just fine. Jee Labs is not a bank or some high-profile company setup. I don’t need a team of support personnel to get around-the-clock support. If it fails, I can fix it. And if I’m not there to fix it, then Jee Labs has a bigger problem than just its websites…

So the plan is now to move all of the Jee Labs internet “operations” to … Jee Labs in Houten, The Netherlands. Not hastily – there is no rush. But still.

Now I need a server. One which I won’t outgrow. One which won’t break – or at least if it ever does: one which is easily replaceable. I’ve traveled across the planet in my searches. I’ve seen it all. Amazingly small, amazingly cheap, amazingly powerful, amazingly robust, and amazingly simple. None of them is everything at the same time. But I did end up with a setup that fits me.

I’ve decided to base the system on the latest Mac Mini:

Server Hero 20100615

Server edition, i.e. no DVD but a second hard disk. High end stuff, more than I’ve ever plunked down for a server. Still under €0.50 a day, if it lasts and delivers over the planned 6-year lifetime. Modern in terms of capabilities, modern in terms of noise level, and modern in terms of energy consumption. I’m pleased with this decision.

Inside, it’s going to run a couple of virtual machines. That will give me a level of (manually managed) fail-over which I’ve never had before. Security is crucial, but there too, the VM’s should add some good security barriers. One of the VM’s will be used for home automation.

I’m already running parts of the setup at Jee Labs as virtual machines, so the task ahead is to continue that migration, and then bring it together on the new server.

I’ll do my best to minimize the hiccups, for this weblog and for the rest of the public-facing activities at Jee Labs and Equi 4 Software. There will no doubt be some disruptions and even foul-ups. But the hard part for me is over – the direction and the goal are clear now. The rest is … just work :)

Getting organized

In Musings on Aug 22, 2010 at 00:01

Nothing like a good vacation to forget everything…

And now, coming back, I’m faced with the fact that I don’t always remember where things are, and that way too much stuff has piled up in several places.

Time to do something about it:

Dsc 1827

Simple cardboard boxes with 5×30 and 10×30 cm footprints. Tons of ’em…

I’m already tracking most of my inventory, including pictures and locations, but the problem was that these locations were too broad. Rummaging through various piles of, ehm, junk can get boring and tedious after a while!

Now I’ve got to think about where everything should go. Shop products in one place, projects-in-progress in another, lab supplies in a third, not to mention the shipping department.

Jee Labs is about fun, not size, after all – I don’t really want to expand beyond the current 4 x 7 meter space currently in use (apart from a few large supply boxes and tools in the basement), which is going to require a bit of self-discipline. There’s still a fair bit of unused wall space, so that’s good.

Ah, the wonders of the “atoms” world… :)

What a year it’s been…

In Musings on Jul 13, 2010 at 00:01

One year ago, the first serious PCB designs were “taped out” (heh, if that isn’t an anachronism by now!) – this is when the first batch of JeeNode v3 boards was produced, with all the ports and pins that have by now become a standard around here.

One year later, there are 4 JeeNode variants and over 20 “plugs” / add-ons – all part of a happy JeeFamily :)

What’s next? Well, I don’t have a crystal ball. But I do know what’s coming next because of some recent projects behind the scenes … and I can tell you that there will be several new plugs starting mid-August.

Another announcement I’d like to make now, is that after the summer more of the production will be out-sourced (here in the Netherlands), to free my time for work on new hardware and software development.

As you probably know, Jee Labs is just me, moi, and myself – with a few great people helping out behind the scenes. The major difference with traditional companies is that I’m neither driven by a boss, nor (primarily) by revenue, but by interest. Which means that you can have a considerably larger influence on where Jee Labs is going than you might think… all you need to do is speak up, preferably in the discussion forum, and point out neat / useful / practical stuff. I won’t guarantee that I’ll follow everyone’s lead, but I’m as keen as anyone to go where the neat stuff is regarding physical computing.

Speaking of neat stuff…

Franz Achatz sent in a great email today, describing what he’s been doing, complete with pictures and screen dumps. Here’s the latest addition to his RFM12B-based WSN – a fridge sensor (posted with permission):


All in a neat little box, with the GSM-type antenna sticking out:


The sensor itself is a Dallas 1-wire device, used to track the current temperature inside the fridge.

And here’s the software side of it, all created by Franz with the current JeeMon software:

Screen Shot Small

(Click here for the full-size image)

Given how young JeeMon currently is, I’m amazed to see just how much it can already be made to do…

The story I’d like you to take home from this is not how great JeeNodes or JeeMon are (they’re not, they are still far too young and simplistic), but how much freedom you have when everything is open source, hardware as well as software.

It’s time for me to start winding down (with 30..39°C of humid heat, it’s almost a necessity, even…). There will be one or two more queued-up posts on the weblog, and then it’ll be set to read-only mode. In fact, all of internet will become read-only a few days from now, as far as I’m concerned. I’ll be away only part of this summer, but even when I’m in I won’t respond to emails – sorry.

If you ever get bored, there are now 550 posts on this weblog – feel free to browse around, and enjoy :)

TTYL, as they say!


In Musings on Jul 10, 2010 at 00:01

A couple of days ago an email titled “Au secours! LCD ne marche pas” came in, with this picture:

Foto am 29 06 2010 um 20.07 #2

Two JeeNode tinkerers at the Computational Art department of the UdK in Berlin, looking sad and disappointed. And they guessed – correctly! – that sending an email from Germany, written in English, and with a French title would draw my attention :)

The email contained enough technical details to be able to resolve everything with a quick email reply from me (it was the brightness pot, which can be finicky!).

Then this came back:

Foto am 29 06 2010 um 20.56

Yeay – they look happy again! :)

(above pictures shown with permission)

Isn’t that what support is all about? It’s easy to forget that at the end of the day, support is for (and by) people, not technology.

Thank you, Alberto and Petja, for making my day and reminding me.

Mail order

In Musings on Jul 7, 2010 at 00:01

One of the things I was totally ignorant about when starting out on the Jee Labs adventure, was the whole process of running an internet shop. The real physics and mechanics of it, not just some imagined “process”.

Of course it was clear from the outset that it would be about packaging and shipping stuff. But what does it come down to, on a day-to-day basis? Does it add much overhead? What do you need? Is it tedious / boring?

To start at the end: no, it’s actually fun! You get to give something to people. And a surprising number of names on the orders even come back once in a while, which tells me that someone, somewhere appreciated this and liked it enough to get even more JeeStuff. Which is very rewarding: I get to come up with stuff and make it, and then I get to “give” it to people all around the world (of course it’s a sale, but for me it still feels like giving).

So what happens after the obvious assembly of boards and packaging of kits, etc? Well, I pick all the required items, and put them in a padded envelope – with lots of sizes to pick from:

Dsc 1669

Little did I know about how much room all that “sealed air” takes up!

To keep shipping costs as low as possible, I try to always fit all the stuff into one envelope. In fact, I think I’ve only had to ship in a cardboard box once until now, although I do have some extra large envelopes (25x38x3 cm) for bigger workshop packages, etc. And there are limits to this type of frugal packaging, as someone pointed out:


So much for that Carrier Box. What did that postman do? Stand on it?

This is now solved by sending Carrier Boxes in slightly larger padded envelopes, btw. Apparently that gives just enough added cushioning to prevent this sort of damage.

Oh, and here’s a fun detail – check out the Jee Labs postal scale:

Dsc 1759

Seriously. That’s how each package here is weighed, to determine how much postage to apply. I can’t think of a nicer way to honor the makers of roughly a century ago. A timeless piece of engineering, with cast iron foot and all. Here’s a puzzle for you, if you haven’t seen such a postal scale before: try to figure out how it can have two ranges: 0..50 g and 0..250 g – it’s a very clever yet simple trick, as with all great inventions.

The next step is to add postage, which is now done with these state-of-the-art (ahem) digital stamps:

Dsc 1347

The convenience being that I’ve got a whole pile of them printed in advance, for each rate – instead of having to transfer up to 8 stamps.

And then it’s off to the mailbox: a 3-minute walk if I get everything done before 17:00 (5 pm), or a 10-minute walk to a more central mailbox which gets emptied at 19:00 (7 pm).

So there you have it – a peek into the Jee Labs kitchen, eh, I mean shipping department :)

There IS a reason

In Musings on May 12, 2010 at 00:01

Yesterday’s post was an attempt to explain what I’m doing, and how the bigger issues cause me to wander around a lot, working on secondary projects while trying not to stray too far from the main direction – which is to experiment with fun stuff in the home, around the topics of energy use and environmental monitoring. And a whiff of domotics… when it serves a useful purpose.

Ok, so Jee Labs is about JC’s Environmental Electronics. Doh.

Today I’d like to go into why I’m working on this stuff.

Whenever you ask people why they do what they do, the usual answers are: money, prestige, influence. But the most exciting answers in my book are from those who chase their dreams: because they can or because they want to see where it leads to. Fortunately, these answers do come up, once in a while.

Here are some “why” answers from me:

  • Why environmental? Because we’re on a dangerous course. I’m ashamed of what my species is doing, yet I share full responsibility. Unfortunately, I don’t know how to change the rest of the world. Them. Out there. But maybe I can change the small world I live in. Me, my family and friends. My living space.

  • Why electronics? Because it’s what I loved doing when I was a teenager. It was my biggest passion, before computers took over that spot. I would love nothing more than share that passion. If I can somehow reach some kid, somewhere, to discover the magic of exploration and invention, then that would be fantastic.

  • Why microcontrollers? Because they bring together everything I like: electronics, logic, code, mechanical design. And because nowadays, they are so low-cost and so darn easy to work with. Incredibly robust (hey, you can plug ’em in backwards!) yet infinitely malleable (it’s all code, just change the flash memory!).

  • Why wireless? Because wireless is as close to magic as technology will ever get. Making things happen somewhere else with invisible power, literally!

  • Why sensors? Because it’s about time our technology started paying more attention to the “real” world out there. Out with the big and noisy machines, which operate in a strictly controlled fashion. The future belongs to sentient systems, which fit in, investigate, respect, respond to, take care of, and even protect our most valued aspects of life.

  • Why networks? Because this world is about information. Data which does not reach the right places and persons has no value.

  • Why the home? Because that’s where people live. Factories, offices, and commutes are all artifacts of the industrial revolution. That was long ago. We’re living in the internet revolution now. Being in a specific place to make something happen is losing its grip on our lives.

Ok, so maybe that last one is pushing things a bit … :)

Now some more focused why’s

  • Why JeeNodes? Because Arduino’s got it almost right, but shields are simply not modular enough to encourage real mix-and-match tinkering. Single-purpose shields are made for consumption and they restrict needlessly (try stacking them, and feel the pain!).

  • Why Ports and Plugs? Same reason, really. Because I want everyone to be able to experiment with combinations of sensor and actuator functions. JeeNodes are not about consuming (“I create a neat combination kit with all sorts of choices fixed in advance and you build a copy of it”) but about de-constructing and re-constructing stuff. Analyze and synthesize. Take it apart, combine it in other ways. Go and try something new, please!

  • Why RFM12B’s? Because they are low cost and more than capable enough. Mesh, frequency hopping, TDMA, sure… if you want to dabble in complexity, go for it. Go swim in network protocol “stacks”. Add in a micro-kernel to deal with all the required parallelism. Go overboard in failure modes and recovery mechanisms. Use beefier chips. But count me out. I can live with imperfect packet delivery, and simple manual configuration of a few dozen nodes. I cheerfully pass w.r.t. all this “self-inflicted complexity”.

  • Why 3.3V? Because more and more of the new and interesting sensors operate only up to 3.6V or so. And wireless chips, and Ethernet chips. And because LiPo batteries are very good power sources: very low self-discharge, very fast recharge times, and available in a huge range of sizes and capacities.

  • Why JeeMon? Because I want the software equivalent of a breadboard to explore lots and lots of ideas, and it doesn’t exist – not equally simple and equally powerful as a breadboard, not to my knowledge anyway. I think we haven’t even scratched the surface of software design yet, and the potential for real modularity and simplification. Hardware is much further along, in that respect.

  • Why Tcl? Because there seems to be nothing quite like it, in terms of simplicity, expressive power, flexibility, robustness, portability, scalability, and deployment. I don’t mean in terms of each individual issue, but in terms of the combination of those aspects. As a package deal, Tcl embodies a surprisingly clever and effective set of trade-offs. I could probably dismiss Tcl on every single issue in isolation, and name another language which would be preferable – but no single language can go where Tcl goes.

  • Why multi-platform? Because I want to create interesting solutions on the desktop as well as on small Linux boards. I’m fascinated by the idea of moving solutions around, modularizing larger systems into loosely coupled sub-systems, and migrating some of those pieces to dedicated miniature hardware platforms.

  • Why open source? Because it simplifies my life – I can work in the open, share and discuss everything, and benefit from every mode of collaboration imaginable. And because it simplifies your life. If you don’t like what I do, you have three options, instead of just the first one: 1) ignore me, 2) take what you like and change everything else, and 3) make your case and bring up arguments to steer me in a better direction. I’m against lock-in, so if there’s anything I can do to further reduce interdependencies, let me know.

  • Why no standards, such as XML or ZigBee? Because in this context, standards make no sense. The context is an environment where you can choose every data structure and every side of the communication. In a world where everyone speaks a different language, you need dictionaries, translators, and interpreters. They are all essential, useful, and valuable. I should know, I speak 4 languages, 3 of them regularly within my own family (the fourth being English). But the compactness of well-chosen words and the intricacy of their nuances really take off when you’re totally on the same wavelength. XML has many virtues, but “impedance matching” and compactness are not amongst them. Standards stand in the way of creativity. XML and ZigBee add no value in this context, just tons of complexity, which then creates its own set of problems and distractions.
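That compactness argument is easy to make concrete. Here is a small sketch (hypothetical data and field layout, not an actual JeeLabs packet format) encoding the same sensor reading two ways: as a fixed binary struct, the way a custom RF protocol between nodes that share the same conventions might send it, and as a minimal XML document:

```python
import struct
import xml.etree.ElementTree as ET

# A hypothetical sensor reading: node id, temperature in 0.1 °C steps, humidity in %.
node, temp, humid = 7, 215, 48

# Compact binary packet: unsigned byte, signed 16-bit little-endian, unsigned byte.
# Both sides must agree on this layout in advance - that is the "same wavelength".
packet = struct.pack('<BhB', node, temp, humid)

# The same data as a minimal, self-describing XML document.
root = ET.Element('reading', node=str(node))
ET.SubElement(root, 'temp').text = str(temp)
ET.SubElement(root, 'humid').text = str(humid)
xml_bytes = ET.tostring(root)

print(len(packet), len(xml_bytes))  # 4 bytes vs. an order of magnitude more
```

The XML version carries its structure along with every message; the binary version assumes the structure is shared knowledge, which is exactly the trade-off when you control both ends of the link.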

Speaking of complexity…

This post is starting to become a little too complex as well. So let me summarize and simplify it as follows: why am I doing all this stuff at Jee Labs? To share my excitement, to convince those interested in technology that there are infinitely many fascinating adventures ahead in the land of Physical Computing, to give me an interesting and useful context to try out lots of new software ideas, and … for the sheer fun of hacking around and learning.

Oh, and because I can, and want to see where it leads to, of course :)

PS. The “normal” weblog posts will resume tomorrow, i.e. how to set up JeeMon for the reflow project.

Packaging madness

In Musings on Mar 4, 2010 at 00:01

I ran out of little zip-lock bags. The ones I put small 6-pin headers in, and such.

So I ordered a few more by direct mail, from a Dutch office supplies shop on the web.

Sure enough, one day later – a big courier delivery truck from DHL stops by and delivers a 25×35 cm box with the requested goods:


They could have dropped it in a padded envelope, added €1.60 postage, and dropped it in the mail. Which – in the Netherlands – is guaranteed to reach me in 24 hours, IOW just as quickly, along with all my other mail. Delivered to my doorstep by our friendly mailman, who services the whole neighborhood … on bike.

Instead, I’ve caused this ridiculous packaging and delivery nonsense. Yuck.

Update – I’ve re-ordered some more (slightly larger) bags from another Dutch supplier suggested by someone after reading this post. Not only did that supplier do the right thing – they even alerted me to the fact that another item would have pushed postage up to an unreasonable level, and proposed to omit that extra item. Hurray for vendors who use their common sense when serving their customers!

Open – Some notes (5/5)

In Musings on Jan 11, 2010 at 00:01

(This is the last part of a 5-part series on open source – parts 1, 2, 3, 4, 5)


Jee Labs is an open source shop – both open source hardware and open source software.

Nothing earth-shattering: I’m exploring what’s possible in the domain of physical computing, and have been focusing on simple ATmega-based wireless sensor networks for in and around the house. A simple variation on the Arduino theme, really. In fact, it’s pretty hard not to be compatible with an Arduino when your stuff is based on the same Atmel AVR chips. Because in terms of basic functions and software, an Arduino is (at least right now) essentially just an ATmega on a board. Plus a serial port boot loader.

This “computing stuff tied to the physical world” is a lot of fun. And a lot of work.

There’s a substantial amount of software required for all this, from dealing with wireless to collecting, storing, and presenting measurement data. And that’s just going one way – to also robustly (and securely) control things in the house, there’s also this whole field of domotics to go into.

I love doing software development. I spend most of my time on it.

And with Jee Labs involved in both OSH and OSS, there is finally a way to dedicate all my time to this stuff.

It’s a very simple mechanism: the hardware side funds the software side.

Does that make Jee Labs a commercial undertaking? Yep. Small shop, but it’s about as direct as it gets:

Stuff sold => food on the table => hunger gone => happy coding.

So I’m hired by the shop to do the hardware and software side of things. As long as there is funding, I get to spend all my time on this stuff. All of it in the open, described on a public weblog, and with all designs usable by anyone to do whatever they like with it. Try it out, hack it, extend it, rip it apart, get rich with it, or ignore it – it’s all up to you. There’s nothing to steal, because your gain is not my loss. On the contrary.

Quite a few people have already done things like take the RF12 driver (software) and use it with their own hardware designs. Many others are taking some sensors (hardware) and tying it into their stuff with their own software. Cool – way cool, in fact. Hardly a day goes by without something encouraging or rewarding happening on some front.

The trick is sustainable funding. Which means I need to stay on the ball, and figure out what others would like to see from me. That’s good – normal market economics, really. There’s a lot of uncharted terrain, still waiting to be discovered, explored, and turned into projects and products. Just gotta look, listen, and keep moving. Which is a lot easier when everything is out in the “open” – hardware, software, … and ideas.

This concludes my mini-series on Open Source. It’s time to return to the techy stuff again!

Open? Really? (4/5)

In Musings on Jan 10, 2010 at 00:01

(This is part 4 of a 5-part series on open source – parts 1, 2, 3, 4, 5)


For OSS (S as in software), “open” is all about collaboration. Yeah, sure, there’s a lot of re-invention and not everyone is willing to share when it matters (i.e. real innovation), but even just the collaboration of developers (producers) and users (consumers) is fruitful in the sense that bugs get exposed, investigated, and fixed. Even if asymmetric, this leads to faster evolution and better software.

And even if not always, OSS projects do tend to attract a lot of talent, and talent breeds creative solutions and excellence. Because of the low barrier to entry, the potential talent pool is huge. It doesn’t take a lot of talented / motivated people to make exceptional software. Collaboration among developers is the norm, not the exception – and the visibility helps, even if only by attributing status to the team participants.

With OSH (H as in hardware), things are a bit different. Sure, there too the exchange between producers and consumers helps create better products, no matter how asymmetric the groups are. Got a problem? Submit it to the forum, and it’ll be out for all to see so the producer(s) better get his/her/their act together.

But the numbers are different. Very different. The barriers to entry are a lot higher – designing hardware and actually building working “stuff” is hard and requires specific skills. It’s error prone, time-consuming, and above all it costs real money. You need equipment and inventory, the prototype turnaround times take ages when you’re doing this on a budget, and serious testing can be extremely difficult. Not to mention getting production QA up to scratch and the QC yield high enough.

Want some examples? Try “Chumby” or “Adafruit”, each driven by amazingly talented and dedicated people. Exercise for the reader: find out how many people are behind each of these OSH shops…

What about the Arduino boards? Well, to be honest, I have no idea who designed them, nor what discussion is going on about them for future developments. The Arduino “Mega” sort of dropped out of the sky once it was on the shelf.

Because, you see, selling hardware generates income. And any income, say N dollars, divided by anything greater than 1 is less than N. What seems to be happening right now is that OSH producers are indeed “open”, but only after the designs have been created, tested, and manufactured. All the hardware work has been done, all the choices and decisions have been made, and all the time-consuming steps have already been taken.

So OSH “shops” are in fact putting a fairly large distance between themselves and any potential contributors (and competitors). Want to see, use, or even alter what they do? Fine – but not before they’ve got their revenue streams set up.

Don’t get me wrong. This is absolutely everybody’s full right. To be an entrepreneur means you get to take the initiative and you get to make all the choices you want. That’s the whole point. It’s just not really open.

I’m not even sure it could be done any differently. At some point, as producer, you’ve got to have an advantage over others to be able to justify the time, effort, and money spent on whatever it is you do. And given that most supermarkets want money to hand over the ingredients for your meals, incentives tend to work best if they too are in terms of money…

But there’s precious little collaboration going on. Ideas aren’t flowing freely, far from it. And as with OSS, some people are waiting on the sidelines, trying to soak up as much info as they can, and keeping ideas to themselves to try and create their own revenue source when the time is ripe.

There’s nothing inherently “wrong” with that.

I’m the same – ask me about what my plans are, how I solved some tricky issues, and I might give you an evasive answer… because I’m afraid someone might “beat me to the punch”, or become a competitor.

It looks like OSH is being treated as a zero-sum game, i.e. “your gain is my loss” and vice-versa. I’m not sure it actually is, even w.r.t. money. In fact, as far as ideas go, I’m convinced that it definitely isn’t – ideas breed more ideas, infinitely so. We just need to figure out and implement the mechanisms which encourage the flow of ideas. Right now, neither OSS nor – especially – OSH are quite there yet.

In the first part of this series I started out by saying that there is hypocrisy involved. Whoa, that’s a big word. Let me explain…

First, allow me to summarize: with OSS there is usually no money involved – it all runs on voluntary contributions and team participation. With OSH, on the other hand, there is always money in the equation, because “stuff” is involved, i.e. real atoms, not just bits. And once money flows, normal / healthy economics will lead to revenues, profits, and income, no matter how modest or at what scale. Fine. Cool, even.

Except… with physical computing, there is always both hardware and software (call it “firmware” if you like). The hardware makes things possible, the software makes things happen. Either one is useless without the other. Yin and yang. Yet hardware and software are quite different beasts, and require quite distinct skill sets, so the people doing (“good at”) the hardware are often not the same as the people doing (“good at”) the software.

If I were involved in the hardware side of physical computing, and wanted to get rich (a real possibility, since there is money involved), I’d work like crazy on the hardware, and then kick it out into the world (i.e. sell it) with the minimum amount of software I can get away with. I wouldn’t put it that way of course, but I’d say “hey, I did my part, now someone else kick in to finish the other part”.

Then I’d sit back, or rather: participate just enough to keep up appearances. Build a community, take the lead, create the infrastructure for (software-only!) collaboration, and – pardon the bluntness – rake in the money…

If I were involved in the software side of physical computing, I’d design my own hardware and set up a shop to generate a revenue stream, so that I’d have at least a chance of funding some of my software efforts. Because I wouldn’t know how to sustain the necessary Open Source Software development for it in any other way.

I’ll leave it to the reader to be the judge as to what extent the above is happening today, and I urge you to look well beyond what I am doing with my little Jee Labs setup…

All I’ll add to conclude with is that Open Source Hardware is not nearly as open and collaborative on the hardware side and in the design phase as it could be, and that the Open Source Software efforts which need to take place to make such hardware tick are not funded nearly as well as they should be to become truly attractive / sustainable.

And that’s a crying shame. Because somewhere along the way, we lost the ability to freely exchange ideas and benefit as a whole community from the potential of absolutely explosive creative growth. As it stands today, OSS and OSH are fine at spreading and even democratizing technology, but in effect they stifle innovation. Open? How?

In tomorrow’s last episode – my feeble attempt to come clean, with Jee Labs. No climax. No grand finale.

Open Source Hardware (3/5)

In Musings on Jan 9, 2010 at 00:01

(This is part 3 of a 5-part series on open source – parts 1, 2, 3, 4, 5)


On the side of open source hardware (OSH), the situation is less rosy.

The tricky part with OSH is that it’s about “stuff”. And stuff costs money right from the outset – to collect, to manufacture, to move around, to keep in stock, to use, to dispose of (OSS also has costs, btw – but indirectly, in the form of invested time, effort, and expertise).

There’s absolutely nothing wrong with money. Once money enters the picture, you can do things like add a margin to cover the costs. Or add a fat margin to cover more than the costs. Or pay others to get involved and bring in specific expertise. Which is way cool. Now we’re talking economy. And sustainability. And incentive. And of course: bread-on-the-table, i.e. income.

Oh yes, this is potent stuff. Markets. Resources. Competition. Profits. Investments. Capitalism!

The reason I’m pointing it out is that it’s a fundamental difference between OSH and OSS. And it’s obvious – everyone knows that to use software, you just gotta go to the proper site and download it. And to use stuff, you gotta purchase it and wait for it to be delivered on your doorstep (or go shopping and take it home, whatever).

We’ve all just been through the Christmas season. Shop ’til you drop. Pay and get.

But wait a minute… that’s for any kind of stuff. What’s “open source” hardware got to do with this? Well, you see, that’s where it gets a bit unusual.

Open source hardware means that the producers “open up” about what they are doing. In the electronics / physical computing / kits domain, the idea is to make the designs for the products openly available, often in the form of schematics and printed circuit board designs.

What’s the point? Well, you’re not locked-in, presumably. If you don’t like the design, you could take it and modify it according to your own wishes. Amazingly, this doesn’t seem to happen a lot. I can think of three reasons: economies of scale, interface conventions, and tools/skills.

The production of printed circuits is very much driven by economies of scale. It’s relatively expensive to create all the masks to produce good 2-layer PCBs, with silk screens, vias, solder masks, etc. So it makes a lot of sense for a supplier to have a certain number of PCBs made at once, and then sell individual boards at a cost lower than one-off production could. This is the traditional manufacturer’s sweet spot – it just happens to apply to tiny shops as well now, instead of just car manufacturers.

And hey, you know what? It’s great: all these manufacturers do us a service, by producing tons of goods of which we only need one, at fantastically attractive prices.

Another reason for not just starting from what there is and inventing tons of variations, is that an important aspect of a design is its interface with the rest of the world. This is particularly obvious with physical computing. Once you change interfaces (whether electrical, mechanical, or logical), you risk reducing inter-operability. This doesn’t apply just to OSH – think of all the “after-market” accessories in numerous domains. Many product choices end up affecting a lot more than that product itself over time.

The third reason why OSH isn’t spurring as much collaboration and innovation is that there’s often a fairly steep learning curve involved. It took me quite a lot of time and effort to learn how to design printed circuits, and how to have them manufactured. In fact, I made a ridiculous number of mistakes along the way, and had to throw away lots of failed trials. That’s not just wasted time and effort – that’s wasted “stuff”, and real money.

So it’s not surprising that not everyone grabs OSH designs and starts tinkering with them and producing their own boards. The process is non-trivial, takes time, and wastes money. We’d much rather buy ready-made products, or at least partially-made (including kits), with all the errors resolved so we can avoid them.

Again, this is great. Some people out there are willing to do the hard work, and others pick the fruits of their labor and compensate them for it.

Just keep in mind that here too, OSH is totally different from OSS. In a way, OSH solved a problem which OSS never could: creating a natural mechanism for rewarding time and effort spent.

So in a way, OSH could be called a success story. OSH seems to be picking up steam like crazy these days. Need an example? Try “Arduino”.

But wait, not so fast…

To be continued tomorrow – Open? Really?

Open Source Software (2/5)

In Musings on Jan 8, 2010 at 00:01

(This is part 2 of a 5-part series on open source – parts 1, 2, 3, 4, 5)


With open source software (OSS), the immediate costs are generally very low. Get a computer and an internet connection (which you probably would anyway), and you’ve got all the expenses covered to benefit from OSS – and participate in its development. Infinite audience, negligible variable costs, very low barrier to entry. Other than your own time, skills, and effort.

The result? Explosive growth – SourceForge lists some 160,000 projects (or 380,000, depending on where you look). Its two top projects (file sharing) have been downloaded half a billion times each.

Is it good? Sure, let a thousand OSS projects bloom, or more, why not.

But there’s also a lot of wheel re-invention going on (I should know, I’m an expert on that!). Some of it, and I fear quite a bit more than most people would be inclined to acknowledge, is extremely unfortunate: “hey, feature/app X in language Y is neat, let’s redo it in language Z!”

Is OSS leading to innovation? I’m not so sure. I suspect that when people stumble upon a potentially truly great idea, they will be tempted to re-consider whether they really want to share that idea with the world, and risk diluting it – by others adding more ideas (in the best scenario), or by others adopting it and succeeding at drawing more attention and claiming credit for it.

Another major hurdle is that turning an innovative idea into an innovative solution requires a lot of hard work. For “big” innovations, you’ll need to get all the right volunteers involved and motivated, have excellent people skills, and show true leadership. It’s easy to hit a wall somewhere along the road and lose interest before the product is finished (I should know, I’ve been there far too often).

It wouldn’t surprise me if for all the people sharing their ideas and working on them in public, there were at least as many others soaking up everything they can find and thinking “hey, maybe I can do something with this and become famous” – or even rich, by switching to a closed source software model. Even the GPL can’t fully prevent that, as long as you keep your software secret (and don’t get caught by having “similar bugs”!). The mere prospect of that happening can drain all motivation from an open source developer (see also: Prisoner’s Dilemma).

So all in all, my impression is that OSS isn’t as innovative and collaborative as it’s often cracked up to be. Do you need an example? How about “Google”, which was built on the shoulders of open source. Sure, they do give back a lot, also in the form of open source. In fact, they provide what is arguably the greatest free service on the internet. But genuine collaboration w.r.t. their core innovations? No way.

There’s nothing “wrong” at all with this, btw. It’s just not really open.

To be continued tomorrow… Open Source Hardware

Open Source (1/5)

In Musings on Jan 7, 2010 at 00:01

(This is the first part of a 5-part series on open source – parts 1, 2, 3, 4, 5)

Jee Labs is an open source shop – both open source hardware and open source software.

There is money involved in open source. And there is hypocrisy. Let me explain…

The basic idea of open source, is literally to make the source of things freely available – and more or less allow reuse. With software, it usually comes down to making the source code publicly available to anyone. With hardware, the common approach is to make all design files publicly available to anyone.


Depending on the model, you’re either allowed to do anything you like with it (all the way to hiding it inside your own for-sale stuff), or you have to at least maintain the proper attribution to the original author(s), or you restrict the use to projects which accept to be bound by the same rules as yours – preventing anyone from running away (and perhaps getting rich) with the original stuff and not letting anyone know they did so.

The “do anything you like with it” approach basically says: you can’t really steal it, because the author treats his/her work more as a form of self-expression than as promotion of something which deserves to be rewarded (whether monetary or in terms of fame). The latter “only on my terms” approach says “hey, as author I’m entitled to benefit at least as much as anyone else if there is any commercial value”.

I am (thank goodness) not a lawyer, so please treat all of the above as no more than my personal attempt to summarize what this is all about.

Newsgroups have overflowed and probably even been shut down because of debates on the differences between all the “licensing” approaches – (flame) wars have been waged about all this, and probably still are. Been there, done that. Yawn.

But in my view, all the open source approaches are minor variations of the same basic concept. The term “open source” itself is actually an amazing success, in that it manages to capture the essence despite all the factions and disagreements. It’s about volunteer effort, the power of gifts, competence, creativity, and the freedom of choice and re-use this approach offers.

Above all, open source is about enabling collaboration. Because, you see, the big differentiator of open source is that all ideas can be out in the open. When there is no need to protect ideas, there is also no need to restrict the flow of information. And, well… we’ve got this thing called the internet which happens to be friggin’ good at shuttling information (and ideas) around the globe.

So far so good. But there are a couple of weird things going on…

To be continued tomorrow… Open Source Software