Computing stuff tied to the physical world

Archive for 2015

Turning the page on 2015

In Book on Dec 29, 2015 at 23:01

As the last few days of 2015 pass, I’d like to reflect on the recent past but also look forward to things to come. For one, the JeeLabs weblog is now thriving again: the new weekly post-plus-articles format has turned out to suit me well. It keeps me going, it’s oodles of fun to do, and it avoids that previous trap of getting forced into too-frequent daily commitments.

Apart from a summer break, every week in 2015 has been an opportunity to explore and experiment with physical computing topics, several ARM µCs, and various software ideas.

Here are the last two articles for 2015:

I’d like to close off 2015 with a deeply worrisome but nevertheless hopeful note. While this is a totally technology-focused weblog, it has not escaped me that we live in very troubling times. Never in history have so many people been on the run, fleeing home and country for the most basic of all human needs: a safe place to live. We’ve all seen Aylan Kurdi’s fate:

Aylan kurdi

An innocent three-year-old boy, born in the wrong place at the wrong time, trying to escape from armed conflict. He could have been me, he could have been you. His tragic fate and that of many others could have been avoided. Europe offers a peaceful and prosperous home for half a billion people – accommodating one percent more is the least we can do.

I’m proud to see countries rise to the occasion, and put humanity and the planet first. Let’s cherish our compassion as well as our passion, our understanding as well as our creativity. For 2016, I wish you and yours a very open, respectful, and empathy-rich planet.

(For comments, visit the forum area)

Tying up 2015’s loose ends

In Book on Dec 23, 2015 at 00:01

As the end of 2015 is approaching and now that the new server setup has been completed, it’s time to clean up some remaining loose ends. Spring cleaning is early, here at JeeLabs!

Next week will be a good time for reflection and my new year’s resolutions. For now, I just want to go into some topics which didn’t seem to fit anywhere else. In daily doses, as usual:

I’m pleased with the new Odroid XU4 server so far. The Mac Mini is being re-purposed as Liesbeth’s new Mac – a nice SSD-based setup with 8 GB of RAM. Its Core 2 Duo @ 2.66 GHz will be a step up from the 5-year-old 1.4 GHz MacBook Air she’s been working on.

Which frees up that 11″ MBA for my own portable use again – it’s a fantastic little laptop!

(For comments, visit the forum area)

Switching to a new server

In Book on Dec 16, 2015 at 00:01

As you may know, the various websites here at JeeLabs are served locally. Houten offers very fast Fiber-To-The-Home connections, my ISP (XS4ALL) is excellent, and I don’t see the point of putting everything in the cloud. Sure, it’s a bit more work to keep going, but this way I can delude myself into thinking that I am “the master of my own universe” …

This Mac Mini server has been doing all the work for several years now:

Screen shot 2010 06 15 at 124120

But for various reasons it’s now time to revisit that setup and simplify things further:

As you’ll see, I’m jettisoning a lot of dead weight. The resulting server is much cheaper, consumes far less energy, is more robust, has fewer moving parts, is easier to manage, and handles page requests much faster than before. What is there not to like about this, eh?

(For comments, visit the forum area)

A diversion into FPGAs

In Book on Dec 9, 2015 at 00:01

Last week’s exploration of “processing with limited computing power” a few decades ago has led me in another direction, which turned out to be mesmerising and addictive…

All due to a chip called a Field Programmable Gate Array, which usually looks like this:


That’s a lot of pins – large FPGA’s can have over 1,000 pins, in fact!

What are they? What’s the point? Why are they so hard to use? Can we play with them?

Read on to find out, as usual there will be articles coming this week to explore the “field”:

Once again, there is an awful lot of ground to cover this week, but with a bit of luck, it’ll end up being a decent bird’s eye view of what this FPGA (and CPLD) stuff is all about…

(For comments, visit the forum area)

A fingernail vs a moon lander

In Book on Dec 2, 2015 at 00:01

Small microcontroller chips, modern laptops/desktops – the range of computing power is enormous nowadays. So enormous, that it can be pretty hard to grasp the magnitude.

Screen Shot 2015 12 02 at 14 34 29

This week, I’m going to do some explorations, using a HY-Tiny board with an STM32F103 on it. Or to put it differently: a fairly low-end but popular 32-bit ARM Cortex M3 µC, running at 72 MHz, with 128 KB flash and 20 KB RAM. It draws up to 40 mA at 3.3V.

Let’s find out what this little gadget is capable of:

So there you have it: a µC the size of a fingernail, vastly outperforming the Apollo Guidance Computer used to put man on the moon, less than half a century ago. You’ve got to wonder: would we still be able to perform this feat, using just an STM32F103? – I have my doubts…
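To make that comparison a bit more concrete, here’s a rough back-of-the-envelope calculation. The AGC figures below (roughly 85,000 instructions per second, about 4 KB of erasable memory, about 72 KB of fixed memory) are approximate published estimates, not numbers from this post – treat the result as an order-of-magnitude sketch:

```python
# Rough comparison: STM32F103 vs the Apollo Guidance Computer (AGC).
# AGC figures are approximate: ~85,000 instructions/s, 2K x 15-bit words
# of erasable memory (~4 KB), 36K words of fixed memory (~72 KB).

stm32_ips = 72_000_000      # optimistic: ~1 instruction per cycle at 72 MHz
agc_ips   = 85_000

stm32_ram_kb, agc_ram_kb = 20, 4
stm32_rom_kb, agc_rom_kb = 128, 72

speedup = stm32_ips / agc_ips
print(f"speed: ~{speedup:.0f}x")                   # in the 800-900x ballpark
print(f"RAM:   {stm32_ram_kb / agc_ram_kb:.0f}x")
print(f"ROM:   {stm32_rom_kb / agc_rom_kb:.1f}x")
```

So even with generous assumptions for the AGC, the fingernail-sized chip comes out several hundred times faster, with a few times the memory.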

(For comments, visit the forum area)

Pi-based STM32F103 development

In Book on Nov 25, 2015 at 00:01

There are many ways to experiment with embedded development, which is what the Arduino IDE really is all about. But before diving more into using the Arduino IDE with STM32 µCs, I’d like to mention another option, based on a Raspberry Pi (A+, B+, or 2):

Aia side

It’s called the ARMinARM board, by a company called OnAndOffables: a board with an STM32F103RE, sitting on top of the RasPi, with lots of connections to communicate through.

I’ll briefly touch on alternatives, but I like this setup because of how these two boards work together. Unlike a Win/Mac/Lin setup, this combo has everything-in-one for development.

So here goes for this week’s episode of articles, dished out in small daily doses:

As you will see, there’s quite a bit of potential in this little 7x6x3 cm package!

(For comments, visit the forum area)

Programmer PCB Triple Play

In Book on Nov 18, 2015 at 00:01

To follow up on last week’s upload articles, I’m going to turn a couple of these boards into Black Magic Probe programmers:

DSC 5256

From left to right: an STM Nucleo F103RB (any Nucleo will do, though), a board from eBay (many vendors, search for “STM32F103C8T6 board”), and Haoyu’s HY-TinySTM103T.

And let’s do it right, with a couple of custom PCB’s – here are three versions (the first one will keep its original firmware, actually). Each has slightly different usage properties:

Each of these supports uploading, serial “pass-through” connection, and h/w debugging. Building one of these, or getting a pre-built & pre-loaded one, is a really good investment.

(For comments, visit the forum area)

Talking to an STM32

In Book on Nov 11, 2015 at 00:01

When dealing with ARM µCs and boards based on them, there’s always one big elephant in the room: how to upload software to them, and how to talk to them via serial or USB.

The available options, choices, trade-offs, and even just their properties can be staggering. This week’s article series is about what the problem is, an overview of the many different ways to address it, and how to get started with minimal fuss and/or minimal cost.

Laptop to chip

On the menu for this week are four dishes, served on a daily basis:

Fortunately, we’ll need very little to get going. We can pull ourselves up by our bootstraps!

(For comments, visit the forum area)

The world of STM32

In Book on Nov 4, 2015 at 00:01

As announced last week, I’ll be switching (pouring?) my efforts into a new series of µC’s for a while, so there’s quite a bit of work ahead to get going, and I really don’t want to dwell too much on the basics. The whole point is to get further, not redo everything and get nowhere!

So for this week, there will be a whole slew of posts. As usual, each next post will be ready on successive days, as I figure it all out and try to stay ahead of the game :)

Here is a sneak preview of the main character in this exciting new adventure:

DSC 5209

(For comments, visit the forum area)

Making a sharp turn

In Book on Oct 28, 2015 at 00:01

During my tinkering over the past two weeks, I hit a snag while trying to hook up the LPC824 µC to the Arduino IDE. Nothing spectacular or insurmountable, but still…

This week, I have two articles for you, explaining what this is all about:

Warning: there are going to be some changes w.r.t. where I’ll be going next…


(For comments, visit the forum area)

IDE w/ LPC824, part 2

In Book on Oct 21, 2015 at 00:01

Let’s get that upload going. Remember, this is about adding a “hardware platform” to the Arduino IDE so it can compile and upload files to a Tinker Pico, based on the LPC824 µC.

There are two parts to this: 1) getting all the IDE setup files right so that it knows how to compile & upload, and 2) implementing pinMode(), digitalWrite(), etc. so they can be used with LPC824 microcontrollers in the same way as with ATmega’s and other µC’s.
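To give an idea of what part 1 involves: a third-party “hardware platform” boils down to a folder with a couple of plain text files which tell the IDE what boards exist and how to compile and upload for them. The names and values below are illustrative guesses, not the actual files used here – something along these lines:

```text
# boards.txt - defines the entry shown in the IDE's board menu (illustrative)
tinkerpico.name=Tinker Pico (LPC824)
tinkerpico.build.mcu=cortex-m0plus
tinkerpico.build.f_cpu=30000000L
tinkerpico.build.core=lpc8xx
tinkerpico.upload.tool=serial-isp

# platform.txt - recipes the IDE uses to drive the cross compiler (illustrative)
compiler.path=...
recipe.c.o.pattern="{compiler.path}arm-none-eabi-gcc" -mcpu={build.mcu} ...
recipe.objcopy.bin.pattern="{compiler.path}arm-none-eabi-objcopy" -O binary ...
```

Part 2 is then a “core”: C/C++ source files implementing pinMode() and friends on top of the LPC824’s own registers.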

So let’s tackle these issues one by one, shall we?

Here’s my very first compile and upload to a Tinker Pico:


(For comments, visit the forum area)

Arduino IDE w/ LPC824

In Book on Oct 14, 2015 at 00:01

What will it take to support the LPC824 µC, i.e. the Tinker Pico, from the Arduino IDE?

Screen Shot 2015 10 08 at 20 55 09

As the Arduino IDE has been evolving and growing over the years, more and more µC platforms have been added – both officially and unofficially – beyond the initial ATmega µC’s. One of the earlier ones was Energia, created as a fork to support TI’s MSP430 chips, but as the IDE matured, it has become easier to support several other architectures in the mainstream build as a simple add-on, such as the ESP8266 and STM32.

Sooo… let’s try and find out how hard it would be to include our LPC824 µC into the mix:

Update: the following two articles are postponed to next week. Sorry for the bad planning:

  • A quick hack to make it compile – Fri
  • Uploading via the IDE – Sat

So far, this is merely an exploration, as I said. A complete port will take considerably more effort. I’m also considering moving a bit beyond the digitalRead/Write conventions…

(For comments, visit the forum area)

Meet the Tinker Pico (again)

In Book on Oct 7, 2015 at 00:01

It’s time to get some new hardware out the door, which I’ve been doodling with here at JeeLabs for quite some time, and which some of you might like to tinker with as well.

The first new board is the Tinker Pico, which was already pre-announced a while ago. Here’s the final PCB, which is being sent off to production now, soon to appear in the shop:

Screen Shot 2015 10 02 at 15 47 24

As before, each week consists of one announcement like this, and one or more articles with further details, ready for release on successive days of the week – three in this case:

This is merely a first step, running pretty basic demo code. Stay tuned for more options…

(For comments, visit the forum area)

Getting back in the groove

In Musings on Sep 30, 2015 at 00:01

This will be the last post in “summer mode”. Next week, I’ll start posting again with articles that will end up in the Jee Book, as before – i.e. trying to create a coherent story again.

The first step has just been completed: clearing up my workspace at JeeLabs. Two days ago, every flat surface in this area was covered with piles of “stuff”. Now it’s cleaned up:

IMG 0154

On the menu for the rest of this year: new products, and lots of explorations / experiments in Physical Computing, I hope. I have an idea of where to go, but no definitive plans. There is a lot going on, and there’s a lot of duplication when you surf around on the web. But this weblog will always be about trying out new things, not just repeating what others are doing.

My focus will remain aimed at “Computing stuff tied to the physical world” as the JeeLabs byline says, in essentially two ways: 1) to improve our living environment in and around the house, and 2) to have fun and tinker with low-cost hardware and open source software.

For one, I’d like to replace the wireless sensor network I’ve been running here, or at least gradually evolve all of the nodes to new ARM-based designs. Not for the sake of change but to introduce new ideas and features, get even better battery lifetimes, and help me further in my quest to reduce energy consumption. I’d also like to replace my HouseMon 0.6 setup which has been running here for years now, but with virtually no change or evolution.

An idea I’d love to work on is to sprinkle lots of new room-node-like sensors around the house, to find out where the heat is going – then correlate it to outside temperature and wind direction, for example. Is there some window we can replace, or some other measure we could take to reduce our (still substantial) gas consumption during the cold months? Perhaps the heat loss is caused by the cold rising from our garage, below the living room?

Another long-overdue topic is to start controlling some appliances over wireless, not just collecting data from what are essentially send-only nodes. Very different, since usually there is power nearby for these nodes, and they need good security against replay attacks.

I’ll want to be able to see the basic “health” indicators of the house at a glance, perhaps shown inconspicuously on a screen on the wall somewhere (as well as on a mobile device).

As always, all my work at JeeLabs will be fully open source for anyone to inspect, adopt, re-use, extend, modify, whatever. You do what you like with it. If you learn from it and enjoy, that’d be wonderful. And if you share and give back your ideas, time, or code: better still!

Stay tuned. Lots of fun with bits, electrons, and molecules ahead :)

Shedding weight

In Musings on Sep 23, 2015 at 00:01

I’ve been on a weight loss diet lately. In more ways than one…

As an old fan of the Minimal Mac weblog (now extinct), I’ve always been intrigued by simplification. Fewer applications, a less cluttered desk (oops, not there yet!), simpler tools, and leaner workflows. And with every new laptop over the years, I’ve been toning down the use of tons of apps, widgets, RSS feeds, note taking systems, and reminders.

Life is about flow and zen, not about interruptions or being busy. Not for me, anyway.

One app for all my documents (DevonThink), one app for all my quick notes (nvAlt), one programming-editor convention (vim/spacemacs), one cloud backup system (Arq), one local backup (Time Machine), one app launcher / search tool (Spotlight) … and so on.

I’ve recently gone back to doing everything on a single (high-end Mac) laptop. No more tinkering with two machines, Dropbox, syncing, etc. Everything in one place, locally, with a nice monitor plugged in when at my desk. That’s 1920×1200 pixels when on the move, and 2560×1600 otherwise, all gorgeously retina-sharp. I find it amazing how much calmer life becomes when things remain the same every time you come back to them.

I don’t have a smartphone, which probably puts me in the freaky Luddite category. So be it. I now only keep a 4 mm thin credit-card sized junk phone in my pocket for emergency use.

We’ve gone from an iPad each to a shared one for my wife Liesbeth and me. It’s mostly used for internet access and stays in the living room, like newspapers did in the old days.

I’ve gone back to using an e-paper based reader for when I want to sit on the couch or go outside and read. It’s better than an iPad because it’s smaller, lighter, and its reflective display is dramatically better in daylight than an LCD screen. At night I read less, because in the end it’s much nicer to wake up early and go enjoy daylight again. What a concept, eh?

While reading, I regularly catch myself wanting to access internet. Oops, can’t do. Great!

As for night-time habits: it’s astonishing how much better I sleep when not looking at that standard blueish LCD screen in the evening. Sure, I do still burn the midnight oil banging away on the keyboard, but thanks to a utility called f.lux the screen white balance follows the natural reddening colour shift of the sun across the day. Perfect for a healthy sleep!

Our car sits unused for weeks on end sometimes, as we take the bike and train for almost everything nowadays. It’s too big a step to get rid of it – maybe in a few years from now. So there’s no shedding weight there yet, other than in terms of reducing our CO2 footprint.

And then there’s the classical weight loss stuff. For a few months now, I’ve been following the practice of intermittent fasting, combined with picking up my old habit of going out running again, 2..3 times per week. With these two combined, losing real weight has become ridiculously easy – I’ve shed 5 kg, with 4 more to go until the end of the year.

Eat less and move more – who would have thought that it actually works, eh?

But hey, let me throw in some geek notes as well. Today, I received the Withings Pulse Ox:

DSC 5144

(showing the heart rate sensor on the back – the front has an OLED + touch display)

It does exactly what I want: tell the time, count my steps, and measure my running activity, all in a really small package which should last well over a week between charges. It sends its collected data over BLE to a mobile device (i.e. our iPad), with tons of statistics.

Time will tell, but I think this is precisely the one gadget I want to keep in my pocket at all times. And when on the move: keys, credit cards, and that tiny usually-off phone, of course.

Except for one sick detail: why does the Withings “Health Mate” app insist on sending out all my personal fitness tracking data to their website? It’s not a show-stopper, but I hate it. This means that Withings knows all about my activity, and whenever I sync: my location.

So here’s an idea for anyone looking for an interesting privacy-oriented challenge: set up a Raspberry Pi as firewall + proxy which logs all the information leaking out of the house. It won’t address mobile use, but it ought to provide some interesting data for analysis over a period of a few months. What sort of info is being “shared” by all the apps and tools we’ve come to rely on? Although unfortunately, it won’t be of much use with SSL-based sessions.

Bandwagons and islands

In Musings on Sep 16, 2015 at 00:01

I’ve always been a fan of the Arduino ecosystem, hook, line, and sinker: that little board with its AVR microcontroller, the extensibility through those headers and shields, and the multi-platform IDE with its simple runtime library and access to all the essential hardware.

So much so, that the complete range of JeeNode products has been derived from it.

But I wanted a remote node, a small size, a wireless radio, flexible sensor options, and better battery lifetimes, which is why several trade-offs came out differently: the much smaller physical dimension, the RFM radio, the JeePort headers, and the FTDI interface as alternative for a built-in USB bridge. JeeNodes owe a lot to the Arduino ecosystem.

That’s the thing with big (even at the time) “standards”: they create a common ground, around which lots of people can flock, form a community, and extend it all in often quite surprising and innovative ways. Being able to acquire and re-use knowledge is wonderful.

The Arduino “platform” has a bandwagon effect, whereby synergy and cross-pollination of ideas lead to a huge explosion of projects and add-ons, both on the hardware and the software side. Just google for “Arduino” … need I say more?

Yet sometimes, being part of the mainstream and building on what has become the “baseline” can be limiting: the 5V conventions of early Arduino’s don’t play well with most of the newer sensor chips these days, nor are they optimal for ultra low-power uses. Furthermore, the Wiring library on which the Arduino IDE’s runtime is based is not terribly modular or suitable for today’s newer µC’s. And to be honest, the Arduino IDE itself is really quite limited compared to many other editors and IDE’s. Last but definitely not least, C++ support in the IDE is severely crippled by the pre-processing applied to turn .ino files into normal .cpp files before compilation.

It’s easy to look back and claim 20-20 vision in hindsight, so in a way most of these issues are simply the result of a platform which has evolved far beyond the original designer’s wildest dreams. No one could have predicted today’s needs at that point in time.

There is also another aspect to point out: there is in fact a conflict w.r.t. what this ecosystem is for. Should it be aimed at the non-techie creative artist, who just wants to get some project going without becoming an embedded microelectronics engineer? Or is it a playground for the tech geek, exploring the world of physical computing, diving in to learn how it works, tinkering with every aspect of this playground, and tracing / extending the boundaries of the technology to expand the user’s horizon?

I have decades of software development experience under my belt (and by now probably another decade of physical computing), so for me the Arduino and JeeNode ecosystem has always been about the latter. I don’t want a setup which has been “dumbed down” to hide the details. Sure, I crave for abstraction to not always have to think about all the low-level stuff, but the fascination for me is that it’s truly open all the way down. I want to be able to understand what’s under the hood, and if necessary tinker with it.

The Arduino technology doesn’t have that many secrets any more for me, I suspect. I think I understand how the chips work, how the entire circuit works, how the IDE is set up, how the runtime library is structured, how all the interrupts work together, yada, yada, yada.

And some of it I’m no longer keen to stick to: the basic editing + compilation setup (“any editor + makefiles” would be far more flexible), the choice of µC (so many more fascinating ARM variants out there than what Atmel is offering), and in fact the whole premise of an edit-compile-upload-run cycle seems limiting (over-the-air uploads or visual system construction, anyone?).

Which is why for the past year or so, I’ve started bypassing that oh-so-comfy Arduino ecosystem for my new explorations, starting from scratch with an ARM gcc “toolchain”, simple “makefiles”, and using the command-line to drive everything.

Jettisoning everything on the software side has a number of implications. First of all, things become simpler and faster: fewer tools to use, (much) lower startup delays, and a new runtime library which is small enough to show the essence of what a runtime is. No more.

A nice benefit is that the resulting builds are considerably smaller. Which was an important issue when writing code for that lovely small LPC810 ARM chip, all in an 8-pin DIP.

Another aspect I very much liked is that this has allowed me to learn, and subsequently write about, how the inside of a runtime library really works and how you actually set up a serial port, or a timer, or a PWM output. Even just setting up an I/O pin is closer to the silicon than the digitalWrite(...) abstraction provided by the Arduino runtime.

… but that’s also the flip side of this whole coin: ya gotta dive very deep!

By starting from scratch, I’ve had to figure out all the nitty gritty details of how to control the hardware peripherals inside the µC, tweaking bit settings in some very specific way before it all started to work. Which was often quite a trial-and-error ordeal, since there is nothing you can do other than to (re-) read the datasheet and look at proven example code. Tinker till your hair falls out, and then (if you’re lucky) all of a sudden it starts to work.

The reward for me, was a better understanding, which is indeed what I was after. And for you: working examples, with minimal code, and explained in various weblog posts.

Most of all this deep-diving and tinkering can now be found in the embello repository on GitHub, and this will grow and extend further over time, as I learn more tricks.

Embello is also a bit of an island, though. It’s not used or known widely, and it’s likely to stay that way for some time to come. It’s not intended to be an alternative to the Arduino runtime, it’s not even intended to become the ARM equivalent of JeeLib – the library which makes it easy to use the ATMega-based JeeNodes with the Arduino IDE.

As I see it, Embello is a good source of fairly independent examples for the LPC8xx series of ARM µC’s, small enough to be explored in full detail when you want to understand how such things are implemented at the lowest level – and guess what: it all includes a simple Makefile-based build system, plus all the ready-to-upload firmware.bin binary images. With the weblog posts and the Jee Book as “all-in-one” PDF/ePub documentation.

Which leaves me at a bit of a bifurcation point as to where to go from here. I may have to row back from this “Embello island” approach to the “Arduino mainland” world. It’s no doubt a lot easier for others to “just fire up the Arduino IDE” and load a library for the new developments here at JeeLabs planned for later this year. Not everyone is willing to learn how to use the command line, just to be able to power up a node and send out wireless radio packets as part of a sensor network. Even if that means making the code a bit bulkier.

At the same time, I really want to work without having to use the Arduino IDE + runtime. And I suspect there are others who do too. Once you’ve developed other software for a while, you probably have adopted a certain work style and work environment which makes you productive (I know I have!). Being able to stick to it for new embedded projects as well makes it possible to retain that investment (in routine, knowledge, and muscle memory).

Which is why I’m now looking for a way to get the best of both worlds: retain my own personal development preferences (which a few of you might also prefer), while making it easy for everyone else to re-use my code and projects in that mainstream roller coaster fashion called “the Arduino ecosystem”. The good news is that the Arduino IDE has finally evolved to the point where it can actually support alternate platforms, including ARM.

We’ll see how it goes… all suggestions and pointers welcome!


In Musings on Sep 9, 2015 at 00:01

No techie post this time, just some pictures from a brief trip last week to Magdeburg:

IMG 0727

… and on the inside, even more of a little playful fantasy world:

IMG 0700

This was designed by the architect Friedensreich Hundertwasser at the turn of this century. It was the last project he worked on, and the building was in fact completed after his death.

Feels a bit like an Austrian (and more restrained) reincarnation of Antoni Gaudí to me.

A playful note added to a utilitarian construction – I like it!

Space tools

In Musings on Sep 2, 2015 at 00:01

It’s a worrisome sign when people start to talk about tools. No real work to report on?

With that out of the way, let’s talk about tools :) – programming tools.

Everyone has their favourite programmer’s editor and operating system. Mine happens to be Vim (MacVim) and Mac OSX. Yours will likely be different. Whatever works, right?

Having said that, I found myself a bit between a rock and a hard place lately, while trying out ClojureScript, that Lisp’y programming language I mentioned last week. The thing is that Lispers tend to use something called the REPL – constantly so, during editing in fact.

What’s a REPL for?

Most programming languages use a form of development based on frequent restarts: edit your code, save it, then re-run the app, re-run the test suite, or refresh the browser. Some development setups have turned this into a very streamlined and convenient fine art. This works well – after all, why else would everybody be doing things this way, right?

Edit file

But there’s a drawback: when you have to stop the world and restart it, it takes some effort to get back to the exact context you’re working on right now. Either by creating a good set of tests, with “mocks” and “spies” to isolate and analyse the context, or by repeating the steps to get to that specific state in case of interactive GUI- or browser-based apps.

Another workaround, depending on the programming language support for it, is to use a debugger, with “breakpoints” and “watchpoints” set to stop the code just where you want it.

But what if you could keep your application running – assuming it hasn’t locked up, that is? So it’s still running, but just not yet doing what it should. What if we could change a few lines of code and see if that fixes the issue? What if we could edit inside a running app?

What if we could in fact build an app from scratch this way? Take a small empty app, define a function, load it in, see if it works, perhaps call the function from a console-style session running inside the application? And then iterate, extend, tweak, fix, add code… live?

This is what people have been doing with Lisp for over half a century. With a “REPL”:

Edit repl

A similar approach has been possible for some time in a few other languages (such as Tcl). But it’s unfortunately not mainstream. It can take quite some machinery to make it work.

While a traditional edit-save-run cycle takes a few seconds, REPL-based coding is instant.

A nice example of this in action is in Tim Baldridge’s videos about Clojure. He never starts up an application in fact: he just fires up the REPL in an editor window, and then starts writing little pieces of code. To try it out, he hits a key combination which sends the parenthesised form currently under the cursor to the REPL, and that’s it. Errors in the code can be fixed and resent at will. Definitions, but also little test calls, anything.

More substantial bits of code are “require’d” in as needed. So what you end up with is keeping a REPL context running at all times, and loading stuff into it. This isn’t limited to server-side code, it also works in the browser: enter “(js/alert "Hello")” and up pops a dialog. All it takes is the REPL to be running inside the browser, and some websocket magic. In the browser, it’s a bit like typing everything into the developer console, but unlike that setup, you get to keep all the code and trials you write – in the editor, with all its conveniences.


Another recent development in ClojureScript land is Figwheel by Bruce Hauman. There’s a 6-min video showing an example of use, and a very nice 45-min video where he goes into things in a lot more detail.

In essence, Figwheel is a file-driven hot reloader: you edit some code in your editor, you save the file, and Figwheel forces the browser (or node.js) to reload the code of just that file. The implementation is very different, but the effect is similar to Dan Abramov’s React Hot Loader – which works for JavaScript in the browser, when combined with React.

There are some limitations for what you can do in both the REPL-based and the Figwheel approach, but if all else fails you can always restart things and have a clean slate again.

The impact of these two approaches on the development process is hard to overstate: it’s as if you’re inside the app, looking at things and tweaking it as it runs. App restarts are far less common, which means server-side code can just keep running as you develop pieces of it further. Likewise, browser side, you can navigate to a specific page and context, and change the code while staying on that page and in that context. Even a scroll position or the contents of an input box will stay the same as you edit and reload code.

For an example Figwheel + REPL setup running both in the browser and in node.js at the same time, see this interesting project on GitHub. It’s able to do hot reloads on the server as well as on (any number of) browsers – whenever code changes. Here’s a running setup:

Edit figwheel

And here’s what I see when typing “(fig-status)” into Figwheel’s REPL:

Figwheel System Status
Autobuilder running? : true
Focusing on build ids: app, server
Client Connections
     server: 1 connection
     app: 1 connection

This uses two processes: a Figwheel-based REPL (JVM), and a node-based server app (v8). And then of course a browser, and an editor for actual development. Both Node.js and the browser(s) connect into the Figwheel JVM, which also lets you type in ClojureScript.


So what do we need to work in this way? Well, for one, the language needs to support it and someone needs to have implemented this “hot reload” or “live code injection” mechanism.

For Figwheel, that’s about it. You need to write your code files in a certain way, allowing it to reload what matters without messing up the current state – “defonce” does most of this.

But the real gem is the REPL: having a window into a running app, and peeking and poking at its innards while in flight. If “REPL” sounds funny, then just think of it as “interactive command prompt”. Several scripting languages support this. Not C, C++, or Go, alas.

For this, the editor should offer some kind of support, so that a few keystrokes will let you push code into the app. Whether a function definition or a printf-type call, whatever.

And that’s where vim felt a bit inadequate: there are a few plugins which try to address this, but they all have to work around the limitation that vim has no built-in terminal.

In Emacs-land, there has always been “SLIME” for traditional Lisp languages, and now there is “CIDER” for Clojure (hey, I didn’t make up those names, I just report them!). In a long-ago past, I once tried to learn Emacs for a very intense month, but I gave up. The multi-key acrobatics is not for me, and I have tons of vim key shortcuts stashed into muscle memory by now. Some people even point to research to say that vim’s way works better.

For an idea of what people can do when they practically live inside their Emacs editor, see this 18-min video. Bit hard to follow, but you can see why some people call Emacs an OS…

Anyway, I’m not willing to unlearn those decades of vim conventions by now. I have used many other editors over the years (including TextMate, Sublime Text, and recently Atom), but I always end up going back. The mouse has no place in editing, and no matter how hard some editors try to offer a “vim emulation mode”, they all fail in very awkward ways.

And then I stumbled upon this thing. All I can say is: “Vim, reloaded”.

Wow – a 99% complete emulation, perhaps one or two keystrokes which work differently. And then it adds a whole new set of commands (based on the space bar, hence the name), incredibly nice pop-up help as you type the shortcuts, and… underneath, it’s… Emacs ???

Spacemacs comes with a ton of nice default configuration settings and plugins. Other than some font changes and some extra language bindings, I hardly change it. My biggest config tweak so far has been to make it start up with a fixed position and window size.

So there you have it. I’m switching my world over to ClojureScript as main programming language (which sounds more dramatic than it is, since it’s still JavaScript + browser + node.js in the end), and I’m switching my main development tool to Emacs (but that too is less invasive than it sounds, since it’s Vim-like and I can keep using vim on remote boxes).

Clojure and ClojureScript

In Musings on Aug 26, 2015 at 00:01

I’m in awe. There’s a (family of) programming languages which solves everything. Really.

  • it works on the JVM, V8, and CLR, and it interoperates with what already exists
  • it’s efficient, it’s dynamic, and it has parallelism built in (threaded or cooperative)
  • it’s so malleable, that any sort of DSL can trivially be created on top of it

As this fella says at this very point in his video: “State. You’re doing it wrong.”

I’ve been going about programming in the wrong way for decades (as a side note: the Tcl language did get it right, up to a point, despite some other troublesome shortcomings).

The language I’m talking about re-uses the best of what’s out there, and even embraces it. All the existing libraries in JavaScript can be used when running in the browser or in Node.js, and similarly for Java or C# when running in those contexts. The VMs, as I already mentioned, also get reused, which means that decades of research and optimisation are taken advantage of.

There’s even an experimental version of this (family of) programming languages for Go, so there again, it becomes possible to add this approach to whatever already exists out there, or is being introduced now or in the future.

Due to the universal reach of JavaScript these days, on browsers, servers, and even on some embedded platforms, that really has most interest to me, so what I’ve been putting my teeth into recently is “ClojureScript”, which specifically targets JavaScript.

Let me point out that ClojureScript is not another “pre-processor” like CoffeeScript.

“State. You’re doing it wrong.”

As Rich Hickey, who spoke those words in the above video quickly adds: “which is ok, because I was doing it wrong too”. We all took a wrong turn a few decades ago.

The functional programming (FP) people got it right… Haskell, ML, that sort of thing.

Or rather: they saw the risks and went to a place where few people could follow (monads?).

FP is for geniuses

What Clojure and ClojureScript do, is to bring a sane level of FP into the mix, with “immutable persistent datastructures”, which makes it all very practical and far easier to build with and reason about. Code is a transformation: take stuff, do things with it, and return derived / modified / updated / whatever results. But don’t change the input data.

Why does this matter?

Let’s look at a recent project taking the world by storm: React, yet another library for building user interfaces (in the browser and on mobile). The difference with AngularJS is the conceptual simplicity. To borrow another image from a similar approach in CycleJS:

Screen Shot 2015 08 16 at 16 08 35

Things happen in a loop: the computer shows stuff on the screen, the user responds, and the computer updates its state. In a talk by CycleJS author Andre Staltz, he actually goes so far as to treat the user as a function: screen in, key+mouse actions out. Interesting concept!

Think about it:

  • facts are stored on the disk, somewhere on a network, etc
  • a program is launched which presents (some of it) on the screen
  • the user interface leads us, the monkeys, to respond and type and click
  • the program interprets these as intentions to store / change something
  • it sends out stuff to the network, writes changes to disk (perhaps via a database)
  • these changes lead to changes to what’s shown on-screen, and the cycle repeats

Even something as trivial as scrolling down is a change to a scroll position, which translates to a different part of a list or page being shown on the screen. We’ve been mixing up the view side of things (what gets shown) with the state (some would say “model”) side, which in this case is the scroll position – a simple number. The moment you take them apart, the view becomes nothing more than a function of that value. New value -> new view. Simple.
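The scroll example can be captured in a few lines of Python (a deliberately trivial sketch, nothing to do with React’s actual API): the “view” is a pure function of the scroll position and the data.

```python
# The view is a pure function of the state: same state in, same view out.
# Here the state is just a scroll position plus a list of items.

def render(items, scroll_pos, page_size=3):
    """Return the lines currently 'on screen' - nothing more than a slice."""
    visible = items[scroll_pos:scroll_pos + page_size]
    return '\n'.join(visible)

items = ['line %d' % i for i in range(10)]

print(render(items, 0))   # top of the list
print(render(items, 4))   # after "scrolling" down: new value -> new view
```

No clicks, no events, no updates anywhere in sight – scrolling is merely feeding a different number into the same function.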

Nowhere in this story is there a requirement to tie state into the logic. It didn’t really help that object orientation (OO) taught us to always combine and even hide state inside logic.

Yet I (we?) have been programming with variables which remember / change and loops which iterate and increment, all my life. Because that’s how programming works, right?

Wrong. This model leads to madness. Untraceable, undebuggable, untestable, unverifiable.

In a way, Test-Driven Development (TDD) shows us just how messy it got: we need to explicitly compare what a known input leads to with the expected outcome. Which is great, but writing code which is testable becomes a nightmare when there is state everywhere. So we invented “mocks” and “spies” and what-have-you-not, to be able to isolate that state again.

What if everything we implemented in code were easily reducible to small steps which cleanly compose into larger units? Each step being a function which takes one or more values as state and produces results as new values? Without side-effects or state variables?

Then again, purely functional programming with no side-effects at all is silly in a way: if there are zero side-effects, then the screen wouldn’t change, and the whole computation would be useless. We do need side-effects, because they lead to a screen display, physical-computing stuff such as motion & sound, saved results, messages going somewhere, etc.

What we don’t need, is state sprinkled across just about every single line of our code…

To get back to React: that’s exactly where it revolutionises the world of user interfaces. There’s a central repository of “the truth”, which is in fact usually nothing more than a deeply nested JavaScript data structure, from which everything shown on the web page is derived. No more messing with the DOM, putting all sorts of state into it, having to update stuff everywhere (and all the time!) for dynamic real-time apps.

React (a.k.a. ReactJS) treats an app as a pipeline: state => view => DOM => screen. The programmer designs and writes the first two, React takes care of the DOM and screen.

I’ll get back to ClojureScript, please hang in there…

What’s missing in the above, is user interaction. We’re used to the following:

    mouse/keyboard => DOM => controller => state

That’s the Model-View-Controller (MVC) approach, as pioneered by Smalltalk in the 80’s. In other words: user interaction goes in the opposite direction, traversing all those steps we already have in reverse, so that we end up with modified state all the way back to the disk.

This is where AngularJS took off. It was founded on the concept of bi-directional bindings, i.e. creating an illusion that variable changes end up on the screen, and screen interactions end up back in those same variables – automatically (i.e. all taken care of by Angular).

But there is another way.

Enter “reactive programming” (RP) and “functional reactive programming” (FRP). The idea is that user interaction still needs to be interpreted and processed, but that the outcome of such processing completely bypasses all the above steps. Instead of bubbling back up the chain, we take the user interaction, define what effect it has on the original central-repository-of-the-truth, period. No figuring out what our view code needs to do.

So how do we update what’s on screen? Easy: re-create the entire view from the new state.

That might seem ridiculously inefficient: recreating a complete screen / web-page layout from scratch, as if the app was just started, right? But the brilliance of React (and several designs before it, to be fair) is that it actually manages to do this really efficiently.

Amazingly so in fact. React is faster than Angular.

Let’s step back for a second. We have code which takes input (the state) and generates output (some representation of the screen, DOM, etc). It’s a pure function, i.e. it has no side effects. We can write that code as if there is no user interaction whatsoever.

Think – just think – how much simpler code is if it only needs to deal with the one-way task of rendering: what goes where, how to visualise it – no clicks, no events, no updates!

Now we need just two more bits of logic and code:

  1. we tell React which parts respond to events (not what they do, just that they do)

  2. separately, we implement the code which gets called whenever these events fire, grab all relevant context, and report what we need to change in the global state

That’s it. The concepts are so incredibly transparent, and the resulting code so unbelievably clean, that React and its very elegant API is literally taking the Web-UI world by storm.

Back to ClojureScript

So where does ClojureScript fit in, then? Well, to be honest: it doesn’t. Most people seem to be happy just learning “The React Way” in normal main-stream JavaScript. Which is fine.

There are some very interesting projects on top of React, such as Redux and React Hot Loader. This “hot loading” is something you have to see to believe: editing code, saving the file, and picking up the changes in a running browser session without losing context. The effect is like editing in a running app: no compile-run-debug cycle, instant tinkering!

Interestingly, Tcl also supported hot-loading. Not sure why the rest of the world didn’t.

Two weeks ago I stumbled upon ClojureScript. Sure enough, they are going wild over React as well (with Om and Reagent as the main wrappers right now). And with good reason: it looks like Om (built on top of React) is actually faster than React used from JavaScript.

The reason for this is their use of immutable data structures, which forces you to not make changes to variables, arrays, lists, maps, etc. but to return updated copies (which are very efficient through a mechanism called “structural sharing”). As it so happens, this fits the circular FRP / React model like a glove. Shared trees are ridiculously easy to diff, which is the essence of why and how React achieves its good performance. And undo/redo is trivial.
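The diffing trick can be illustrated in Python, even though it lacks Clojure’s persistent data structures: if an updated copy shares its unchanged subtrees, a diff can dismiss whole branches with a single identity check. A toy sketch, not how Clojure actually implements it:

```python
# Immutable-style update: return a new top-level dict, re-using (sharing)
# every subtree which did not change, instead of deep-copying everything.

def assoc(tree, key, value):
    new = dict(tree)   # shallow copy: the children are shared, not copied
    new[key] = value
    return new

old = {'header': {'title': 'JeeLabs'}, 'body': {'count': 1}}
new = assoc(old, 'body', {'count': 2})

# A diff only needs to descend into subtrees which are not identical:
print(new['header'] is old['header'])  # True  - shared, skip this branch
print(new['body'] is old['body'])      # False - changed, look inside
```

With structural sharing, “did anything change here?” costs one pointer comparison per subtree – which is the essence of how these tree diffs stay cheap.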

Hot-loading is normal in the Clojure & ClojureScript world. Which means that editing in a running app is not a novelty at all, it’s business as usual. As with any Lisp with a REPL.

Ah, yes. You see, Clojure and ClojureScript are Lisp-like in their notation. The joke used to be that LISP stands for: “Lots of Irritating Little Parentheses”. When you get down to it, it turns out that there really aren’t many more of those than there are parens and braces in JavaScript.

But notation is not what this is all about. It’s the concepts and the design which matter.

Clojure (and ClojureScript) seem to be born out of necessity. It’s fully open source, driven by a small group of people, and evolving in a very nice way. The best introduction I’ve found is in the first 21 minutes of the same video linked to at the start of this post.

And if you want to learn more: just keep watching that same video, 2:30 hours of goodness. Better still: this 1 hour video, which I think summarises the key design choices really well.

No static typing as in Go, but I found myself often fighting it (and type hints can be added back in where needed). No callback hell as in JavaScript & Node.js, because Clojure has implemented Go’s CSP, with channels and go-routines as a library. Which means that even in the browser, you can write code as if there were multiple processes, communicating via channels in either synchronous or asynchronous fashion. And yes, it really works.

All the libraries from the browser + Node.js world can be used in ClojureScript without special tricks or wrappers, because – as I said – CLJ & CLJS embrace their host platforms.

The big negative is that CLJ/CLJS are different and not main-stream. But frankly, I don’t care at this point. Their conceptual power is that of Lisp and functional programming combined, and this simply can’t be retrofitted into the popular languages out there.

A language that doesn’t affect the way you think about programming, is not worth knowing — Alan J. Perlis

I’ve been watching many 15-minute videos on Clojure by Tim Baldridge (it costs $4 to get access to all of them), and this really feels like it’s lightyears ahead of everything else. The amazing bit is that a lot of that (such as “core.async”) catapults into plain JavaScript.

As you can probably tell, I’m sold. I’m willing to invest a lot of my time in this. I’ve been doing things all wrong for a couple of decades (CLJ only dates from 2007), and now I hope to get a shot at mending my ways. I’ll report my progress here in a couple of months.

It’s not for the faint of heart. It’s not even easy (but it is simple!). Life’s too short to keep programming without the kind of abstractions CLJ & CLJS offer. Eh… In My Opinion.

A feel for numbers

In Musings on Aug 19, 2015 at 00:01

It’s often really hard to get a meaningful sense what numbers mean – especially huge ones.

What is a terabyte? A billion euro? A megawatt? Or a thousand people, even?

I recently got our yearly gas bill, and saw that our consumption was about 1600 m3 – roughly the same as last year. We’ve insulated the house, we keep the thermostat set fairly low (19°C), and there is little more we can do – at least in terms of low-hanging fruit. Since the house has an open stairway to the top floors, it’s not easy to keep the heat localised.

But what does such a gas consumption figure mean?

For one, those 1600 m3/y add up to roughly 30,000 m3 over the next twenty years, which comes to about €20,000, assuming Dutch gas prices will stay the same (a big “if”, obviously).

That 30,000 m3 sounds like a huge amount of gas, for just two people to be burning up.

Then again, a volume of 31 x 31 x 31 m sounds a lot less ridiculous, doesn’t it?
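Both figures are easy to sanity-check. The gas price used below (€0.65/m³) is my assumption of a roughly-2015 Dutch rate, not a quoted figure:

```python
usage_per_year = 1600             # m3, from the gas bill
volume = 20 * usage_per_year      # 32,000 m3 - rounded to ~30,000 above
cost = volume * 0.65              # assumed price of EUR 0.65 per m3
side = 30000 ** (1 / 3.0)         # edge of a cube holding ~30,000 m3

print(volume)          # 32000
print(int(round(cost)))    # 20800 - i.e. about EUR 20,000
print(int(round(side)))    # 31 - the "31 x 31 x 31 m" cube
```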

Now let’s tackle it from another angle, using the Wolfram Alpha “computational knowledge engine”, which is a really astonishing free service on the internet, as you’ll see.

How much gas is estimated to be left on this planet? Wolfram Alpha has the answer:

Screen Shot 2015 08 18 at 11 36 14

How many people are there in the world?

Screen Shot 2015 08 18 at 11 39 09

Ok, let’s assume we give everyone today an equal amount of those gas reserves:

Screen Shot 2015 08 18 at 11 44 02

Which means that we will reach our “allowance” (for 2) 30 years from now. Now that is a number I can grasp. It does mean that in 30 years or so it’ll all be gone. Totally. Gone.

I don’t think our children and all future generations will be very pleased with this…

Oh, and for the geeks in us: note how incredibly easy it is to get at some numerical facts, and how accurately and easily Wolfram Alpha handles all the unit conversions. We now live in a world where the well-off western part of the internet-connected crowd has instant and free access to all the knowledge we’ve amassed (Wikipedia + Google + Wolfram Alpha).

Facts are no longer something you have to learn – just pick up your phone / tablet / laptop!

But let’s not stop at this gloomy result. Here’s another, more satisfying, calculation using figures from an interesting UK site, called Electropedia (thanks, Ard!):

[…] the total Sun’s power intercepted by the Earth is 1.740×10^17 Watts

When accounting for the earth’s rotation, seasonal and climatic effects, this boils down to:

[…] the actual power reaching the ground generally averages less than 200 Watts per square meter

Aha, that’s a figure I can relate to again, unlike the “10^17” metric in the total above.

Let’s google for “heat energy radiated by one person”, which leads to this page, and on it:

As I recall, a typical healthy adult human generates in the neighborhood of 90 watts.

Interesting. Now an average adult’s calorie intake of 2400 kcal/day translates to 2.8 kWh. Note how this nicely matches up (at least roughly): 2.8 kWh/day is 116 watt, continuously. So yes, since we humans just burn stuff, it’s bound to end up as mostly heat, right?
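The conversion is easy to verify:

```python
# 2400 kcal/day, converted to energy and then to continuous power
kcal = 2400
joules = kcal * 4184              # 1 kcal = 4184 J
kwh = joules / 3.6e6              # 1 kWh = 3.6 million joules
watts = joules / 86400.0          # spread over the 86,400 s in a day

print(round(kwh, 1))      # 2.8 kWh per day
print(int(round(watts)))  # 116 W, continuously
```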

But there is more to be said about the total solar energy reaching our little blue planet:

Integrating this power over the whole year the total solar energy received by the earth will be: 25,400 TW X 24 X 365 = 222,504,000 TeraWatthours (TWh)

Yuck, those incomprehensible units again. Luckily, Electropedia continues, and says:

[…] the available solar energy is over 10,056 times the world’s consumption. The solar energy must of course be converted into electrical energy, but even with a low conversion efficiency of only 10% the available energy will be 22,250,400 TWh or over a thousand times the consumption.

That sounds promising: we “just” need to harvest it, and end all fossil fuel consumption.

And to finish it off, here’s a simple calculation which also very much surprised me:

  • take a world population of 7.13 billion people (2013 figures, but good enough)
  • place each person on his/her own square meter
  • put everyone together in one spot (tight, but hey, the subway is a lot tighter!)
  • what you end up with, of course, is 7.13 billion square meters, i.e. 7,130,000,000 m²
  • sounds like a lot? how about an area of 70 by 100 km? (1/6th of the Netherlands)

Then, googling again, I found out that 71% of the surface of our planet is water.

And with a little more help from Wolfram Alpha, I get this result:

Screen Shot 2015 08 18 at 14 18 41

That’s 144 x 144 meters per person, for everyone on this planet. Although not every spot is inhabitable, of course. But at least these are figures I can fit into my head and grasp!
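That last figure can be reproduced with a few lines of Python, assuming a total surface area of about 510 million km² and the 71% water figure from above:

```python
import math

earth_surface = 510.1e6 * 1e6   # m2 - total surface, ~510 million km2
people = 7.13e9

total_side = math.sqrt(earth_surface / people)          # land + water
land_side = math.sqrt(0.29 * earth_surface / people)    # dry land only

print(int(total_side))   # ~267 m per person, counting the oceans
print(int(land_side))    # ~144 m - the figure from Wolfram Alpha
```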

Now if only I could understand why we can’t solve this human tragedy. Maths won’t help.

Lessons from history

In Musings on Aug 12, 2015 at 00:01

(No, not the kind of history lessons we all got treated to in school…)

What I’d like to talk about, is how to deal with sensor readings over time. As described in last week’s post, there’s the “raw” data:

raw/rf12/868-5/3 "0000000000038d09090082666a"
raw/rf12/868-5/3 "0000000000038e09090082666a"
raw/rf12/868-5/3 "0000000000038f090900826666"

… and there’s the decoded data, i.e. in this case:

sensor/BATT-2 {"node":"rf12/868-5/3","ping":592269,
sensor/BATT-2 {"node":"rf12/868-5/3","ping":592270,
sensor/BATT-2 {"node":"rf12/868-5/3","ping":592271,

In both cases, we’re in fact dealing with a series of readings over time. This aspect tends to get lost a bit when using MQTT, since each new reading is sent to the same topic, replacing the previous data. MQTT is (and should be) 100% real-time, but blissfully unaware of time.

The raw data is valuable information, because everything else derives from it. This is why in HouseMon I stored each entry as timestamped text in a logfile. With proper care, the raw data can be an excellent way to “replay” all received data, whether after a major database or other system failure, or to import all the data into a new software application.

So much for the raw data, and keeping a historical archive of it all – which is good practice, IMO. I’ve been saving raw data for some 8 years now. It requires relatively little storage when saved as daily text files and gzip-compressed: about 180 MB/year nowadays.

Now let’s look a bit more at that decoded sensor data…

When working on HouseMon, I noticed that it’s incredibly useful to have access to both the latest value and the previous value. In the case of these “BATT-*” nodes, for example, having both allows us to determine the elapsed time since the previous reception (using the “time” field), or to check whether any packets have been missed (using the “ping” counter).

With readings of cumulative or aggregating values, the previous reading is in fact essential to be able to calculate an instantaneous rate (think: gas and electricity meters).
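In code, that rate calculation is a one-liner once the previous reading is at hand – a sketch:

```python
def rate(prev, curr):
    """Consumption rate between two cumulative meter readings.
    Each reading is a (timestamp-in-seconds, meter-value) pair."""
    (t0, v0), (t1, v1) = prev, curr
    return (v1 - v0) / float(t1 - t0)

# e.g. an electricity meter counting Wh, sampled 60 seconds apart:
prev = (1000, 123450.0)
curr = (1060, 123460.0)
print(rate(prev, curr) * 3600)   # 600.0 Wh/h, i.e. a 600 W load
```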

In the past, I implemented this by having each entry store a previous and a latest value (and time stamp), but with MQTT we could actually simplify this considerably.

The trick is to use MQTT’s brilliant “RETAIN” flag:

  • in each published sensor message, we set the RETAIN flag to true
  • this causes the MQTT broker (server) to permanently store that message
  • when a new client connects, it will get all saved messages re-sent to it the moment it subscribes to a corresponding topic (or wildcard topic)
  • such re-sent messages are flagged, and can be recognised as such by the client, to distinguish them from genuinely new real-time messages
  • in a way, retained message handling is a bit like a store-and-forward mechanism
  • … but do keep in mind that only the last message for each topic is retained

What’s the point? Ah, glad you asked :)

In MQTT, a RETAINed message is one which can very gracefully deal with client connects and disconnects: a client need not be connected or subscribed at the time such a message is published. With RETAIN, the client will receive the message the moment it connects and subscribes, even if this is after the time of publication.

In other words: RETAIN flags a message as representing the latest state for that topic.

The best example is perhaps a switch which can be either ON or OFF: whenever the switch is flipped we publish either “ON” or “OFF” to topic “my/switch”. What if the user interface app is not running at the time? When it comes online, it would be very useful to know the last published value, and by setting the RETAIN flag we make sure it’ll be sent right away.
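RETAIN semantics are easy to model: the broker keeps at most one message per topic and replays it on subscribe. A toy in-memory sketch (illustration only – this is not a real broker, nor the paho API):

```python
class ToyBroker(object):
    """Minimal model of MQTT's RETAIN behaviour: one saved message per topic."""
    def __init__(self):
        self.retained = {}      # topic -> last retained payload
        self.subscribers = {}   # topic -> list of callbacks

    def publish(self, topic, payload, retain=False):
        if retain:
            self.retained[topic] = payload   # only the LAST message is kept
        for cb in self.subscribers.get(topic, []):
            cb(topic, payload, False)

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)
        if topic in self.retained:           # replay the retained message
            callback(topic, self.retained[topic], True)

def on_message(topic, payload, was_retained):
    print(topic, payload, was_retained)

broker = ToyBroker()
broker.publish('my/switch', 'ON', retain=True)
broker.publish('my/switch', 'OFF', retain=True)

# a client connecting later still learns the last-known state:
broker.subscribe('my/switch', on_message)   # -> my/switch OFF True
```

Note how the late subscriber receives only “OFF”, flagged as retained – exactly the last-known-state behaviour described above.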

The collection of RETAINed messages can also be viewed as a simple key-value database.

For an excellent series of posts about MQTT, see this index page from HiveMQ.

But I digress – back to the history aspect of all this…

If every “sensor/…” topic has its RETAIN flag set, then we’ll receive all the last-known states the moment we connect and subscribe as MQTT client. We can then immediately save these in memory, as “previous” values.

Now, whenever a new value comes in:

  • we have the previous value available
  • we can do whatever we need to do in our application
  • when done, we overwrite the saved previous value with the new one

So in memory, our applications will have access to the previous data, but we don’t have to deal with this aspect in the MQTT broker – it remains totally ignorant of this mechanism. It simply collects messages, and pushes them to apps interested in them: pure pub-sub!
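The client side of this scheme boils down to a dictionary of last-seen values (a sketch, with a hypothetical handle function standing in for the real message callback):

```python
# Keep the last-seen value per topic; each new message then arrives
# together with its predecessor - the broker knows nothing about this.

previous = {}

def handle(topic, payload):
    prev = previous.get(topic)      # None on the very first message
    if prev is not None:
        pass  # e.g. compare "ping" counters, compute rates, detect gaps
    previous[topic] = payload       # the new value becomes the previous one
    return prev

print(handle('sensor/BATT-2', {'ping': 592269}))  # None - nothing seen yet
print(handle('sensor/BATT-2', {'ping': 592270}))  # {'ping': 592269}
```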

Doodling with decoders

In Musings on Aug 5, 2015 at 00:01

With plenty of sensor nodes here at JeeLabs, I’ve been exploring and doodling a bit, to see how MQTT could fit into this. As expected, it’s all very simple and easy to do.

The first task at hand is to take all those “OK …” lines coming out of a JeeLink running RF12demo, and push them into MQTT. Here’s a quick solution, using Python for a change:

import serial
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    print("Connected with result code "+str(rc))

def on_message(client, userdata, msg):
    # TODO pick up outgoing commands and send them to serial
    print(msg.topic+" "+str(msg.payload))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message

client.connect("localhost", 1883, 60) # TODO reconnect as needed
client.loop_start() # handle MQTT network traffic in a background thread

ser = serial.Serial('/dev/ttyUSB1', 57600)

while True:
    # read incoming lines and split on whitespace
    items = ser.readline().split()
    # only process lines starting with "OK"
    if len(items) > 1 and items[0] == 'OK':
        # convert each item string to an int
        bytes = [int(i) for i in items[1:]]
        # construct the MQTT topic to publish to
        topic = 'raw/rf12/868-5/' + str(bytes[0])
        # convert incoming bytes to a single hex string
        hexStr = ''.join(format(i, '02x') for i in bytes)
        # the payload has 4 extra prefix bytes and is a JSON string
        payload = '"00000010' + hexStr + '"'
        # publish the incoming message
        client.publish(topic, payload) #, retain=True)
        # debugging
        print(topic + ' = ' + hexStr)

Trivial stuff, once you install this MQTT library. Here is a selection of the messages getting published to MQTT – these are for a bunch of nodes running radioBlip and radioBlip2:

raw/rf12/868-5/3 "0000000000038d09090082666a"
raw/rf12/868-5/3 "0000000000038e09090082666a"
raw/rf12/868-5/3 "0000000000038f090900826666"

What needs to be done next, is to decode these to more meaningful results.

Due to the way MQTT works, we can perform this task in a separate process – so here’s a second Python script to do just that. Note that it subscribes and publishes to MQTT:

import binascii, json, struct, time
import paho.mqtt.client as mqtt

# raw/rf12/868-5/3 "0000000000030f230400"
# raw/rf12/868-5/3 "0000000000033c09090082666a"

# avoid having to use "obj['blah']", can use "obj.blah" instead
# see end of
C = type('type_C', (object,), {})

client = mqtt.Client()

def millis():
    return int(time.time() * 1000)

def on_connect(client, userdata, flags, rc):
    print("Connected with result code "+str(rc))
    client.subscribe("raw/#")

def batt_decoder(o, raw):
    o.tag = 'BATT-0'
    if len(raw) >= 10:
        o.ping = struct.unpack('<I', raw[6:10])[0]
        if len(raw) >= 13:
            o.tag = 'BATT-%d' % (ord(raw[10]) & 0x7F)
            o.vpre = 50 + ord(raw[11])
            if ord(raw[10]) >= 0x80:
                o.vbatt = o.vpre * ord(raw[12]) / 255
            elif ord(raw[12]) != 0:
                o.vpost = 50 + ord(raw[12])
        return True

def on_message(client, userdata, msg):
    o = C();
    o.time = millis()
    o.node = msg.topic[4:]
    raw = binascii.unhexlify(msg.payload[1:-1])
    if msg.topic == "raw/rf12/868-5/3" and batt_decoder(o, raw):
        #print o.__dict__
        out = json.dumps(o.__dict__, separators=(',',':'))
        client.publish('sensor/' + o.tag, out) #, retain=True)

client.on_connect = on_connect
client.on_message = on_message

client.connect("localhost", 1883, 60)

client.loop_forever()

Here is what gets published, as a result of the above three “raw/…” messages:

sensor/BATT-2 {"node":"rf12/868-5/3","ping":592269,
sensor/BATT-2 {"node":"rf12/868-5/3","ping":592270,
sensor/BATT-2 {"node":"rf12/868-5/3","ping":592271,

So now, the incoming data has been turned into meaningful readings: it’s a node called “BATT-2”, the readings come in roughly every 64 seconds (as expected), and the received counter value is indeed incrementing with each new packet.

Using a dynamic scripting language such as Python (or Lua, or JavaScript) has the advantage that it will remain very simple to extend this decoding logic at any time.

But don’t get me wrong: this is just an exploration – it won’t scale well as it is. We really should deal with decoding logic as data, i.e. manage the set of decoders and their use by different nodes in a database. Perhaps even tie each node to a decoder pulled from GitHub?
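As a first step in that direction, the set of decoders could itself become data – a table mapping topics to decoder functions (a hypothetical sketch, with stand-in decoders):

```python
# Map each raw topic to the decoder responsible for it - supporting a new
# node type then becomes a data change, not new dispatch logic.

def batt_decoder(raw):
    return {'type': 'BATT', 'raw': raw}   # stand-in for the real decoding

def room_decoder(raw):
    return {'type': 'ROOM', 'raw': raw}   # hypothetical second node type

decoders = {
    'raw/rf12/868-5/3': batt_decoder,
    'raw/rf12/868-5/4': room_decoder,
}

def decode(topic, raw):
    fn = decoders.get(topic)
    return fn(raw) if fn else None        # unknown nodes are simply skipped

print(decode('raw/rf12/868-5/3', b'\x03\x8d')['type'])   # BATT
```

From here it would be a small step to load that table from a database – or, indeed, to pull the decoder functions themselves from somewhere else.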

Could a coin cell be enough?

In Musings on Jul 29, 2015 at 00:01

To state the obvious: small wireless sensor nodes should be small and wireless. Doh.

That means battery-powered. But batteries run out. So we also want these nodes to last a while. How long? Well, if every node lasts a year, and there are a bunch of them around the house, we’ll need to replace (or recharge) some battery somewhere several times a year.

Not good.

The easy way out is a fat battery: either a decent-capacity LiPo battery pack or say three AA cells in series to provide us with a 3.6 .. 4.5V supply (depending on battery type).

But large batteries can be ugly and distracting – even a single AA battery is large when placed in plain sight on a wall in the living room, for example.

So… how far could we go on a coin cell?

Let’s define the arena a bit first: there are many types of coin cells. The smallest ones of a few mm diameter for hearing aids have only a few dozen mAh of energy at most, which is not enough as you will see shortly. Here are some coin cell examples, from Wikipedia:

Coin cells

The most common coin cell is the CR2032 – 20 mm diameter, 3.2 mm thick. It is listed here as having a capacity of about 200 mAh.

A really fat one is the CR2477 – 24 mm diameter, 7.7 mm thick – and has a whopping 1000 mAh of capacity. It’s far less common than the CR2032, though.

These coin cells supply about 3.0V, but that voltage varies: it can be up to 3.6V unloaded (i.e. when the µC is asleep), down to 2.0V when nearly discharged. This is usually fine with today’s µCs, but we need to be careful with all the other components, and if we’re doing analog stuff then these variations can in some cases really throw a wrench into our project.

Then there are the AAA and AA batteries of 1.2 .. 1.5V each, so we’ll need at least two and sometimes even three of them to make our circuits work across their lifetimes. An AAA cell of 10.5×44.5 mm has about 800..1200 mAh, whereas an AA cell of 14.5×50.5 mm has 1800..2700 mAh of energy. Note that this value doesn’t increase when placed in series!


Let’s see how far we could get with a CR2032 coin cell powering a µC + radio + sensors:

  • one year is 365 x 24 = 8,760 hours
  • one CR2032 coin cell can supply 200 mAh of energy
  • it will last one year if we draw under 23 µA on average
  • it will last two years if we draw under 11 µA on average
  • it will last four years if we draw under 5 µA on average
  • it will last ten years if we draw under 2 µA on average
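These budgets follow from a simple capacity-divided-by-time calculation. Here is a quick sanity check in Python, using the 200 mAh CR2032 figure (the list above rounds the results down, keeping a bit of safety margin):

```python
# Average current budget for a given battery capacity and target lifetime.
def budget_ua(capacity_mah, years):
    hours = years * 365 * 24            # 8,760 hours per year
    return capacity_mah * 1000 / hours  # mAh -> µAh, spread over the lifetime

for years in (1, 2, 4, 10):
    print(f"{years:2d} year(s): {budget_ua(200, years):4.1f} µA average")
```

The exact four-year figure is 5.7 µA; quoting 5 µA leaves some headroom for battery self-discharge and aging.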

An LPC8xx in deep sleep mode with its low-power wake-up timer kept running will draw about 1.1 µA when properly set up. The RFM69 draws 0.1 µA in sleep mode. That leaves us roughly a 10 µA margin for all attached sensors if we want to achieve a 2-year battery life.

This is doable. Many simple sensors for temperature, humidity, and pressure can be made to consume no more than a few µA in sleep mode. Or if they consume too much, we could tie their power supply pin to an output pin on the µC and completely remove power from them. This requires an extra I/O pin, and we’ll probably need to wait a bit longer for the chip to be ready if we have to power it up every time. No big deal – usually.

A motion sensor based on passive infrared detection (PIR) draws 60..300 µA however, so that would severely reduce the battery lifetime. Turning it off is not an option, since these sensors need about a minute to stabilise before they can be used.

Note that even a 1 MΩ resistor across a 3V supply has a non-negligible 3 µA of constant current consumption. With ultra low-power sensor nodes, every part of the circuit needs to be carefully designed! Sometimes, unexpected factors can have a substantial impact on battery life, such as grease, dust, or dirt accumulating on an openly exposed PCB over the years…

Door switch

What about sensing the closure of a mechanical switch?

In that case, we can in fact put the µC into deep power down without running the wake-up timer, and let the wake-up pin bring it back to life. Now, power consumption will drop to a fraction of a microamp, and battery life of the coin cell can be increased to over a decade.

Alternatively, we could use a contact-less solution, in the form of a Hall effect sensor and a small magnet. No wear, and probably easier to install and hide out of sight somewhere.

The Seiko S-5712 series, for example, draws 1..4 µA when operated at low duty cycle (measuring 5 times per second should be more than enough for a door/window sensor). Its output could be used to wake up the µC, just as with a mechanical switch. Now we’re in the 5 µA ballpark, i.e. about 4 years on a CR2032 coin cell. Quite usable!
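That “about 4 years” is capacity divided by average draw. A minimal sketch of the arithmetic, using the 200 mAh CR2032 figure:

```python
# Battery lifetime at a given average current draw.
def lifetime_years(capacity_mah, avg_ua):
    hours = capacity_mah * 1000 / avg_ua  # how long the charge lasts, in hours
    return hours / (365 * 24)

# ~5 µA total: Hall sensor (1..4 µA) plus the sleeping µC (~1.1 µA)
print(f"{lifetime_years(200, 5):.1f} years")
```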

It can pay off to carefully review all possible options – for example, if we were to instead use a reed switch as the door sensor, we might well end up with the best of both worlds: total shut-off via mechanical switching, yet reliable contact-less activation via a small magnet.

What about the radio

The RFM69 draws from 15 to 45 mA when transmitting a packet. Yet I’m not including this in the above calculations, for good reason:

  • it’s only transmitting for a few milliseconds
  • … and probably less than once every few minutes, on average
  • this means its duty cycle can stay well under 0.001%
  • which translates to less than 0.5 µA – again: on average

Transmitting a short packet only every so often is virtually free in terms of energy requirements. It’s a hefty burst, but it simply doesn’t amount to much – literally!
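To see why the bursts hardly matter, here is the duty-cycle arithmetic in Python. The 3 ms burst length and 5-minute interval are illustrative values, consistent with the “few milliseconds / few minutes” estimates above:

```python
# Average current contribution of brief, infrequent transmit bursts.
def avg_tx_ua(tx_ma, burst_ms, interval_s):
    duty = (burst_ms / 1000) / interval_s   # fraction of time spent transmitting
    return tx_ma * 1000 * duty              # mA -> µA, scaled by the duty cycle

# worst case: 45 mA bursts, 3 ms long, one packet every 5 minutes
print(f"duty cycle:   {(3 / 1000) / 300:.4%}")
print(f"average draw: {avg_tx_ua(45, 3, 300):.2f} µA")
```

Even at the radio’s maximum of 45 mA, the average stays within the 0.5 µA quoted above.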


If we aim for wireless sensor nodes which never need to listen for incoming RF packets, and only send out brief ones very rarely, then a coin cell such as the common CR2032 can support a node for several years – assuming the design of both hardware and software is done properly, of course.

And if the CR2032 doesn’t cut it – there’s always the CR2477 option to help us further.

Forth on a DIP

In Musings on Jul 22, 2015 at 00:01

In a recent article, I mentioned the Forth language and the Mecrisp implementation, which includes a series of builds for ARM chips. As it turns out, the mecrisp-stellaris-... archive on the download page includes a ready-to-run build for the 28-pin DIP LPC1114 µC, which I happened to have lying around:

DSC 5132

It doesn’t take much to get this chip powered and connected through a modified BUB (set to 3.3V!) so it can be flashed with the Mecrisp firmware. Once that is done, you end up with a pretty impressive Forth implementation, with half of flash memory free for user code.

First thing I tried was to connect to it and list out all the commands it knows – known as “words” in Forth parlance, and listed by entering “words” + return:

$ lpc21isp -termonly -control x /dev/tty.usbserial-AH01A0EG 115200 0
lpc21isp version 1.97
Terminal started (press Escape to abort)

Mecrisp-Stellaris 2.1.3 with M0 core for LPC1114FN28 by Matthias Koch
words words
--- Mecrisp-Stellaris Core ---

2dup 2drop 2swap 2nip 2over 2tuck 2rot 2-rot 2>r 2r> 2r@ 2rdrop d2/ d2*
dshr dshl dabs dnegate d- d+ s>d um* m* ud* udm* */ */mod u*/ u*/mod um/mod
m/mod ud/mod d/mod d/ f* f/ 2!  2@ du< du> d< d> d0< d0= d<> d= sp@ sp!
rp@ rp!  dup drop ?dup swap nip over tuck rot -rot pick depth rdepth >r r>
r@ rdrop rpick true false and bic or xor not clz shr shl ror rol rshift
arshift lshift 0= 0<> 0< >= <= < > u>= u<= u< u> <> = min max umax umin
move fill @ !  +!  h@ h!  h+!  c@ c!  c+!  bis!  bic!  xor!  bit@ hbis!
hbic!  hxor!  hbit@ cbis!  cbic!  cxor!  cbit@ cell+ cells flash-khz
16flash!  eraseflash initflash hflash!  flushflash + - 1- 1+ 2- 2+ negate
abs u/mod /mod mod / * 2* 2/ even base binary decimal hex hook-emit
hook-key hook-emit?  hook-key?  hook-pause emit key emit?  key?  pause
serial-emit serial-key serial-emit?  serial-key?  cexpect accept tib >in
current-source setsource source query compare cr bl space spaces [char]
char ( \ ." c" s" count ctype type hex.  h.s u.s .s words registerliteral,
call, literal, create does> <builds ['] ' postpone inline, ret, exit
recurse state ] [ : ; execute immediate inline compileonly 0-foldable
1-foldable 2-foldable 3-foldable 4-foldable 5-foldable 6-foldable
7-foldable constant 2constant smudge setflags align aligned align4,
align16, h, , ><, string, allot compiletoram?  compiletoram compiletoflash
(create) variable 2variable nvariable buffer: dictionarystart
dictionarynext skipstring find cjump, jump, here flashvar-here then else if
repeat while until again begin k j i leave unloop +loop loop do ?do case
?of of endof endcase token parse digit number .digit hold hold< sign #> f#S
f# #S # <# f.  f.n ud.  d.  u.  .  evaluate interpret hook-quit quit eint?
eint dint ipsr nop unhandled reset irq-systick irq-fault irq-collection
irq-adc irq-i2c irq-uart

--- Flash Dictionary --- ok.

That’s over 300 standard Forth words, including all the usual suspects (I’ve shortened the above to only show their names, as Mecrisp actually lists these words one per line).

Here’s a simple way of making it do something – adding 1 and 2 and printing the result:

  • type “1 2 + .” plus return, and it types ” 3 ok.” back at you

Let’s define a new “hello” word:

: hello ." Hello world!" ;  ok.

We’ve extended the system! We can now type hello, and guess what comes out:

hello Hello world! ok.
----- + <CR>

Note the confusing output: we typed “hello” + a carriage return, and the system executed our definition of hello and printed the greeting right after it. Forth is highly interactive!

Here’s another definition, of a new word called “count-up”:

: count-up 0 do i . loop ;  ok.

It takes one argument on the stack, so we can call it as follows:

5 count-up 0 1 2 3 4  ok.

Again, keep in mind that the ” 0 1 2 3 4 ok.” was printed out, not typed in. We’ve defined a loop which prints increasing numbers. But what if we forget to provide an argument?

count-up 0 1 2 [...] Stack underflow

Whoops. Not so good: stack underflow was properly detected, but not before the loop actually ran and printed out a bunch of numbers (how many depends on what value happened to be in memory). Luckily, a µC is easily reset!

Permanent code

This post isn’t meant to be an introduction to Mecrisp (or Forth), you’ll have to read other documentation for that. But one feature is worth exploring: the ability to interactively store code in flash memory and set up the system so it runs that code on power up. Here’s how:

compiletoflash  ok.
: count-up 0 do i . loop ;  ok.
: init 10 count-up ;  ok.

In a nutshell: 1) we instruct the system to permanently add new definitions to its own flash memory from now on, 2) we define the count-up word as before, and 3) we (re-)define the special init word which Mecrisp Forth will automatically run for us when it starts up.

Let’s try it, we’ll reset the µC and see what it types out:

$ lpc21isp -termonly -control x /dev/tty.usbserial-AH01A0EG 115200 0
lpc21isp version 1.97
Terminal started (press Escape to abort)

Mecrisp-Stellaris 2.1.3 with M0 core for LPC1114FN28 by Matthias Koch
0 1 2 3 4 5 6 7 8 9 

Bingo! Our new code has been saved in flash memory, and starts running the moment the LPC1114 chip comes out of reset. Note that we can get rid of it again with “eraseflash”.

As you can see, it would be possible to write a full-blown application in Mecrisp Forth and end up with a standalone µC chip which then works as instructed every time it powers up.


Forth code runs surprisingly fast. Here is a delay loop which does nothing:

: delay 0 do loop ;  ok.

And this code:

10000000 delay  ok.

… takes about 3.5 seconds before printing out the final “ok.” prompt. That’s some 3 million iterations per second. Not too shabby, if you consider that the LPC1114 runs at 12 MHz!
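As a back-of-the-envelope check on those numbers (in Python, using the figures quoted above):

```python
# Throughput of the empty Forth delay loop: 10 million iterations in ~3.5 s.
iterations = 10_000_000
seconds = 3.5

per_second = iterations / seconds
cycles_per_iter = 12_000_000 / per_second  # at the LPC1114's 12 MHz clock

print(f"{per_second / 1e6:.2f} million iterations/s")
print(f"~{cycles_per_iter:.1f} CPU cycles per loop iteration")
```

Roughly four clock cycles per `do ... loop` round trip – a rough but telling indication of how little overhead Mecrisp’s compiled inner loop carries.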

RFM69s, OOK, and antennas

In Musings on Jul 15, 2015 at 00:01

Recently, Frank @ SevenWatt has been doing a lot of very interesting work on getting the most out of the RFM69 wireless radio modules.

His main interest is in figuring out how to receive weak OOK signals from a variety of sensors in and around the house. So first, you’ll need to extract the OOK information – it turns out that there are several ways to do this, and when you get it right, the bit patterns that come out snap into very clear-cut 0/1 groups – which can then be decoded:

FS20 histo 32768bps

Another interesting bit of research went into comparing different boards and builds to see how the setups affect reception. The good news is that the RFM69 is fairly consistent (no extreme variations between different modules).

Then, with plenty of data collection skills and tools at hand, Frank has been investigating the effect of different antennas on reception quality – which is a combination of getting the strongest signal and the lowest “noise floor”, i.e. the level of background noise that every receiver has to deal with. Here are the different antenna setups being evaluated:

RFM69 three antennas 750x410

Last but not least, is an article about decoding packets from the ELV Cost Control with an RFM69 and some clever tricks. These units report power consumption every 5 seconds:


Each of these articles is worth a good read, and yes… the choice of antenna geometry, its build accuracy, the quality of cabling, and the distance to the µC … they all do matter!

Greasing the “make” cycle on Mac

In Musings on Jul 8, 2015 at 00:01

I’m regularly on the lookout for ways to optimise my software development workflow. Anything related to editing, building, testing, uploading – if I can shave off a little time, or better still, find a way to automate more and get tasks into my muscle memory, I’m game.

It may not be everyone’s favourite, but I keep coming back to vim as my editor of choice, or more specifically the GUI-aware MacVim (when I’m not working on some Linux system).

And some tasks need to be really fast. Simply Because I Do Them All The Time.

Such as running “make”.

So I have a keyboard shortcut in vim which saves all the changes and runs make in a shell window. For quite some time, I’ve used the Vim Tmux Navigator for this. But that’s not optimal: you need to have tmux running locally, you need to type in the session, window, and pane to send the make command to (once after every vim restart), and things … break at times (occasional long delays, wrong tmux pane, etc). Unreliable automation is awful.

Time to look for a better solution. Using as few “moving parts” as possible, because the more components take part in these custom automation tricks, the sooner they’ll break.

The following is what I came up with, and it works really well for me:

  • hitting “,m” (i.e. “<leader>m“) initiates a make, without leaving my vim context
  • there needs to be a “Terminal” app running, with a window named “⌘1” open
  • it will receive this command line:  clear; date; make $MAKING

So all I have to do is leave that terminal window open – set to the proper directory. I can move around at will in vim or MacVim, run any number of them, and “,m” will run “make”.

By temporarily setting a value in the “MAKING” shell variable, I can alter the make target. This can be changed as needed, and I can also change the make directory as needed.

The magic incantation for vim is this one line, added to the ~/.vimrc config file:

    nnoremap <leader>m :wa<cr>:silent !makeit<cr>

In my ~/bin/ directory, I have a shell script called “makeit” with the following contents:

    exec osascript >/dev/null <<EOF
        tell application "Terminal"
            repeat with w in windows
                if name of w ends with "⌘1" then
                    do script "clear; date; make $MAKING" in w
                end if
            end repeat
        end tell
    EOF

The looping is needed to always find the proper window. Note that the Terminal app must be configured to include the “⌘«N»” command shortcut in each window title.

This all works out of the box with no dependency on any app, tool, or utility – other than what is present in a vanilla Mac OSX installation. Should be easy to adapt to other editors.

It can also be used from the command line: just type “makeit”.

That’s all there is to it. A very simple and clean convention to remember and get used to!

Low-power mode :)

In Musings on Jul 1, 2015 at 00:01

First, an announcement:

Starting now, the JeeLabs Weblog is entering “low-power mode” for the summer period. What this means: one weblog post every Wednesday, but no additional articles.

While in low-power mode, I’ll continue to write about fun stuff, things I care about, bump into, and come up with – and maybe even some progress reports about some projects I’m working on. But no daily article episodes, and hence no new material for the Jee Book:

Jeebook cover

Speaking of which: The Jee Book has been updated with all the articles from the weblog for this year so far and has tripled in size as a result. Only very light editing has been applied at this point – but in case you want it all in a single convenient e-book, there ya go!

Have a great, inspiring, and relaxing summer! (or winter, if you’re down under)

(For comments, visit the forum area)

FTDI over WiFi: esp-bridge

In Book on Jun 24, 2015 at 00:01

Time to try something different – this week, the JeeLabs blog and articles will be written by Thorsten von Eicken, as guest author. I’m very excited by the project he has created and will gladly step aside to let him introduce it all -jcw

Has it ever happened to you that you put together a nice JeeNode or Arduino project, test it all out on your bench, then mount it “in the field” and … it doesn’t quite work right? Or the next day it malfunctions? Well, to me it happens all the time. Or I want to add another feature to my greenhouse controller or my weather station but I don’t want to bring it back to my bench for a few days for a full rework and re-test. Instead I’d love to troubleshoot or tune the code remotely, sitting comfortably at home in the evening while my project is mounted somewhere out there.

My first attempt at solving this situation was to deploy a BeagleBone Black with a WiFi dongle and a serial cable (a RaspberryPi or Odroid could work just as well):

20150622 112023  1

Essentially I turned a $55 BBB plus a little $10 WiFi dongle into a remote FTDI widget. In the first moment I was in heaven: it’s an inexpensive solution in the grand scheme of things and it worked great. But then I became addicted and one BBB was insufficient: I wanted to hook more than one remote JeeNode up! Well, suddenly the prospect of buying 3-4 such set-ups didn’t look so inexpensive anymore!

That’s when the esp8266 WiFi modules caught my eye at a cost under $5. Here is a WiFi module with a processor and a UART and, most importantly, an SDK (software development kit). The SDK suggested that I could implement the key functions to remotely watch debug output of a JeeNode as well as reprogram it, just like I was doing with the BBB. My mind started racing: what could I make it do? What would a complete hardware solution cost? How much power would it use? What is its WiFi range?

DSC 5121

At that point I chatted with JC about my little project, which I called “esp-bridge”, and got a cautiously interested response with ingredients ranging from “sounds very cool!” to “it’s gonna use way too much power”. Not to be easily deterred I thought it would take me about a week to write some code. That was 4 months ago, and while I have a day job I have spent a lot of time on the esp-bridge project since. In the end, I hope you will benefit from all this because through this week’s episode of the JeeLabs blog you will indeed be able to complete the journey in one week! The plan for the week is as follows:

As the week develops (and beyond) I would love to hear your feedback and questions. The last post will be written just-in-time so I can try to answer anything that comes up before then. For general discussion, please use the esp-link thread on the site or open a new thread in the same forum. If you download the software from GitHub to put your own esp-bridge together and run into bugs or mis-features please post to the GitHub issues tracker.

Quick clarifications about names: esp-bridge refers to the hardware & software gadget described in this blog series while esp-link is the name of the software itself.

(For comments, visit the forum area)

Moving up a (few) level(s)

In Book on Jun 17, 2015 at 00:01

All the posts and articles so far have been about staying “close to the silicon” on ARM µCs. Direct access to I/O registers to control and read out pins, and to activate the built-in hardware interfaces. It gets tedious after a while, and worst of all: very repetitive.

I’ve been coding for the LPC8xx chips in C/C++ with virtually no runtime library support. The reason was to expose all the raw stuff that happens as a µC starts up, very dumb, and needs to be told how to get a notion of time (SysTick), how to read/set/clear/toggle pins, how to sleep in various low-power modes, how to talk to the serial interface, yada, yada…

Figuring out how things work at the lowest level is a fascinating adventure. It’s no more than an indirect jump on reset, a run-time stack, and processing machine code – forever. But let’s not deal with these details all the time. Let’s think about sensors, conditions, decisions, actions – let’s define the behaviour of a system, not just deal with technicalities.

There are numerous ways to move up in abstraction level on embedded µCs. This week’s episode is about the bigger picture and the three broad categories of these approaches:

Warning: there are no conclusions at the end. I’m still exploring and evaluating. I’ve been on many long and winding roads already. I’m not too impressed by most of them. Treating a µC (any board) as the centre of a project feels wrong, particularly in the context of multi-node home monitoring and automation – which is still, after all, the focus of this weblog.

We really need to look at the bigger picture. How to evolve and manage the designs we create over a longer period of time. Old stuff needs to fit in, but it shouldn’t prevent us from finding a broader view. Adding and managing single nodes should not be the main focus.

(For comments, visit the forum area)

Code for the LPC8xx

In Book on Jun 10, 2015 at 00:01

Putting a chip on a board, as with the Tinker Pico, is one thing. Getting it to do something is quite another matter. We’ll want to develop tons of software for it and upload the code.

DSC 5118

The uploading bit has already been solved by using a modified FTDI board, with some exciting new options coming up in the next few months. How’s that for a teaser, eh?

This week’s episode is about the software side of things. Toolchains, runtimes, build tools, IDE’s, that sort of thing. Brought to you – as usual – in a series of daily bite-sized articles:

I’m having a hard time making up my mind on which path to choose as future code base. The curse of having too many choices, each with their own trade-offs and compromises.

(For comments, visit the forum area)

Introducing the LPC824

In Book on Jun 3, 2015 at 00:01

There are more chips in NXP’s LPC8xx ARM µC family. We’ve seen the delightful 8-DIP LPC810, which packs a lot of power and is an interesting way to get started with ARM, and we’ve seen the LPC812 which is available in a tiny TSSOP-16 package, yet has 4 times the flash and RAM memory of the LPC810.

But there’s one more interesting member in this family: the LPC824, which can easily compare (and exceed) the specification of that workhorse of the Arduino world, the venerable ATmega328. This week’s episode is about getting familiar with the LPC824:

The LPC824 could make a really nice foundation for remote sensor nodes: loads of modern hardware peripherals, ultra-low sleep mode power consumption, and plenty of I/O pins.

Screen Shot 2015 06 01 at 15 06 59

I’m really excited about this chip. As with all the LPC8xx chips, there’s a “switch matrix” to connect (nearly) any h/w function to any pin. This means that you don’t really have to care much about pinouts and figuring out up front which pins to use for which tasks. This offers a lot of flexibility when designing general-purpose boards with a bunch of headers on them.

(For comments, visit the forum area)

RFM69 on ATmega

In Book on May 27, 2015 at 00:01

Now that we have the RFM69 working on Raspberry Pi and Odroid C1, we’ve got all the pieces to create a Wireless Sensor Network for home monitoring, automation, IoT, etc.

But I absolutely don’t want to leave the current range of JeeNodes behind. Moving to newer hardware should not be about making existing hardware obsolete, in my book!


The JeeNode v6 with its on-board RFM12 wireless radio module, Arduino and IDE compatibility, JeePorts, and ultra-low power consumption has been serving me well for many years, and continues to do so – with some two dozen nodes installed here at JeeLabs, each monitoring power consumption, house temperatures, room occupancy, and more. It has spawned numerous other products and DIY installations, and the open-source JeeLib library code has opened up the world of low-cost wireless signalling for many years. There are many thousands of JeeNodes out there by now.

There’s no point breaking what works. The world wastes enough technology as it is.

Which is why, long ago a special RF69-based “compatibility mode” driver was added to JeeLib, allowing the newer RFM69 modules to interoperate with the older RFM12B’s. All you have to do, is to add the following line of code to your Arduino Sketches:

#define RF69_COMPAT 1

… and the RFM69 will automagically behave like a (less featureful) RFM12.

This week is about doing the same, but in reverse: adapting JeeLib’s existing RF12 driver, which uses a specific packet format, to make an RFM12 work as if it were an RFM69:

As I’ve said, I really don’t like to break what works well. These articles will show you that there is no need. You can continue to use the RFM12 modules, and you can mix them with RFM69 modules. You can continue to use and add Arduino-compatible JeeNodes, etc. in your setup, without limiting your options to explore some of the new ARM-based designs.

Let me be clear: there are incompatibilities, and they do matter at times. Some flashy new features will not be available on older hardware. I don’t plan to implement everything on every combination; in fact, I’ve been focusing more and more on ARM µC’s with RFM69 wireless, and will most likely continue to do so, simply to manage my limited time.

Long live forward compatibility, i.e. letting old hardware inter-operate with the new…

RFM69 on Raspberry Pi

In Book on May 20, 2015 at 00:01

With the Micro Power Snitch sending out packets, it’d be nice to be able to receive them!

This week is about turning a Raspberry Pi, or a similar board such as an Odroid-C1, into a Linux-based central node for what could become a home-based Wireless Sensor Network.

All it takes is an RFM69 radio module and a little soldering:

DSC 5086

On the menu for this week’s episode:

And before you know it, you’ll be smack in the middle of this century’s IoT craze…

(For comments, visit the forum area)

Micro Power Snitch, success!

In Book on May 13, 2015 at 00:01

We’ve come to the eighth and final episode of the Micro Power Snitch story: it’s working! The circuit is transmitting wireless packets through the RFM69 radio module running on nothing but harvested electromagnetic energy. Install once, run forever!

But there are still several little details, optimisations, and edge cases we need to take care of – which is what this week’s articles are all about. As always, one article per day:

The MPS triggers on any appliance drawing ≥ 500W (on 230 VAC, or 250W for 115 VAC):

DSC 5089

As always on the JeeLabs weblog: everything is open source – you are welcome to build and adapt this circuit for your own purposes. If you do, please consider sharing your suggestions/findings/improvements on the forum, for others to learn and benefit as well.

(For comments, visit the forum area)

Micro Power Snitch, part 7

In Book on May 6, 2015 at 00:01

The two problems with projects powered by harvested energy are: 1) running out of juice, and 2) falling into a state of limbo and not (or not consistently) getting out of it again.

Here is an example of both happening – a successful µC startup and then a radio failure:


In the case of the Micro Power Snitch, the energy coming in may vary greatly, since it will be proportional to the current drawn by the appliance being monitored by our MPS.

We’re going to have to tread carefully in this week’s episode:

As usual, each of the above articles will be ready on successive days.

Planned for next week: a few more loose ends, and then my closing notes.

(For comments, visit the forum area)

Micro Power Snitch, part 6

In Book on Apr 29, 2015 at 00:01

And you thought the MPS was finished, or perhaps abandoned, eh?

This week resumes the development of a little LPC810-/RFM69-powered setup which runs on parasitic power, harvested from either phase of a current-carrying AC mains wire.

To summarise, the last step in the project I went through a few weeks ago, was to create a PCB for it. Here were some builds, with all the hacks and hooks added to try things out:

DSC 5041

As it turns out, I had made so many mistakes, one after the other, that it really set me back, irritated me to no end, and kept me busy with complete silliness… but all is well again now.

The articles in this week’s “back in business” MPS episode are:

And with a bit of luck, by next week I’ll have figured out how to send out brief RF packets on a ridiculously minimal energy budget. Because that’s what “The Snitch” is all about!

(For comments, visit the forum area)

We interrupt this program

In Book on Apr 22, 2015 at 00:01

… for an important announcement from our sponsor:


Just kidding, of course!

But it got your attention, right? Good. That’s what this week’s episode is about: taking care of something unexpected, i.e. processing tasks without planning them ahead all the time.

On the menu for the coming days:

Interrupts – when handled properly – are extremely powerful and can deal with “stuff” in the background. But there are a lot of tricky cases and hard-to-debug failure modes. It’s worth getting to grips with them really well if you want to avoid – unexpected – failures.

(For comments, visit the forum area)

Dataflash via SPI

In Book on Apr 15, 2015 at 00:01

One of the things I’m going to need at some point is additional flash memory. Since the simplest way these days to add more memory is probably via SPI, that’s what I’ll use.

This week’s episode is about connecting an SPI chip, implementing a simple driver, code re-use, the hidden dangers of solderless breadboards, and point-to-point soldering:

DSC 5020

The end result is several megabytes of extra storage, using only 4 I/O pins. Data logger? Serial port audit? Storage for audio, video, images? Your next novel? It’s all up to you!

(For comments, visit the forum area)

Analog on the cheap

In Book on Apr 8, 2015 at 00:01

Digital chips can’t do analog directly – you need an A/D converter for that, right?

Not so fast. Just as pulse-width modulation (PWM) can be used to turn a purely digital signal into a varying analog voltage, there are tricks to measure analog values as well.

All you need is the proper software, a simple analog comparator, and these components:

DSC 5014  Version 2

This week’s episode examines delta-sigma modulation and shows you how it’s done:

There’s in fact a little gem hidden inside those LPC8xx chips which allows implementing this without bogging down the µC. Just one interrupt whenever a measurement is ready. But the basic idea is applicable to any microcontroller with a built-in analog comparator.

(For comments, visit the forum area)

Emulating EEPROM

In Book on Apr 1, 2015 at 00:01

The LPC8xx series has no EEPROM memory. This type of Electrically Erasable memory is very useful to store configuration settings which need to be retained across power cycles:

DSC 5005

But we can emulate it with flash memory, despite the somewhat different properties.

Here’s the why’s, what’s, and how’s of this software trick – in this week’s article series:

There’s quite a bit of detail involved to get this right, as you will see.

(For comments, visit the forum area)

From LPC810 to LPC812

In Book on Mar 25, 2015 at 00:01

No, I haven’t abandoned the Micro Power Snitch!

But some projects are trickier than others, and this one didn’t want to play along when I tried it again on my new PCB design. I fixed one glaring wiring mistake so far, but there’s more going on. Since Mr. Murphy seems to be enjoying himself again, I’m going to let him enjoy his little “victory” for a while and come back to the MPS at a (slightly) later date.

Instead, let’s move up the ladder a little and experiment with another ARM µC:

DSC 4977

On the right: four times as much memory (both flash and RAM) and twice as many pins.

As you can see, packaging is everything – and bigger is not always more…

(For comments, visit the forum area)

Micro Power Snitch, last part

In Book on Mar 18, 2015 at 00:01

In this concluding part (for now) about the Micro Power Snitch, which feeds off the magnetic field around an AC mains power cable when in use, I’ll look into how the whole circuit behaves when it comes to actually sending out wireless radio packets.

The daily articles in this week’s final MPS episode are:

Update – With apologies (again!), I’m going to postpone the following posts:

It’s a huge challenge to manage the incoming energy so that everything keeps going!

(For comments, visit the forum area)

Micro Power Snitch, part 4

In Book on Mar 11, 2015 at 00:01

At the end of the day, the Micro Power Snitch (MPS) is really about powering up and down robustly under all circumstances. There will be times when there is no energy coming from the Current Transformer, and there will be times when energy levels hover around the go/no-go point of the circuit. That’s the hard part: being (very) decisive at all times!

This week’s episode looks at an improved circuit with a dozen individual components or so. As you will see, it has excellent snap-action behaviour, and does the right thing on the “up” ramp as well as on the “down” ramp, regardless of how slowly those voltage levels change.
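In software terms, that snap-action is hysteresis: the turn-on threshold sits well above the turn-off threshold, so a slowly drifting supply can never make the output chatter. Here’s a toy model of the behaviour – the voltage levels are invented for illustration, not taken from the actual MPS circuit:

```python
class PowerGate:
    """Hysteresis ("snap-action") switch: turns on only above V_ON,
    and turns off again only below the lower V_OFF threshold."""

    V_ON = 3.9   # illustrative volts, not the real MPS levels
    V_OFF = 2.2

    def __init__(self):
        self.on = False

    def update(self, volts):
        if not self.on and volts >= self.V_ON:
            self.on = True          # decisive turn-on
        elif self.on and volts <= self.V_OFF:
            self.on = False         # decisive turn-off
        return self.on              # anywhere in between: no change
```

Feed it a voltage ramping up and down ever so slowly, and the output still flips exactly once in each direction – that’s the snap.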

[photo DSC 4958: the improved MPS circuit]

Here is the daily sequence for this week:

With a complete MPS circuit design worked out, we’re finally getting somewhere.

(For comments, visit the forum area)

Micro Power Snitch, part 3

In Book on Mar 4, 2015 at 00:01

There’s trouble ahead for the MPS: it’s not reliable enough yet, and can enter a “zombie mode” in which the µC doesn’t have enough voltage to start up, yet draws so much current (relatively speaking) that the energy source can’t raise the voltage any further.

Leading to some beautiful pictures, but nevertheless totally undesired behaviour:


The daily articles in this week’s episode are:

So far, it looks like this second MPS design solves the problems of the first. Progress!

(For comments, visit the forum area)

Micro Power Snitch, part 2

In Book on Feb 25, 2015 at 00:01

So far, there’s an idea, a circuit to collect energy, and sample code to make an LPC810 come alive every so often, briefly. It’s time to tie things together and see what happens!

As usual, one article per day in this week’s episode:

As you will see in Saturday’s article, this first MPS design is indeed able to power an LPC810 µC after some modifications, and keep it alive “under certain conditions”.

[photo DSC 4953: the first MPS design, powering an LPC810]

But there’s a very nasty skeleton in the closet – stay tuned!

(For comments, visit the forum area)

Micro Power Snitch

In Book on Feb 18, 2015 at 00:01

It’s time to tackle a fairly ambitious challenge: let’s try to make an LPC810 run off “harvested” energy and use it to periodically send out a wireless packet.

This week will be a short intro into the matter, with more to follow later:

To lift the veil a bit, here’s the energy I’m going to try to harvest:

[image: output voltage waveform of an unloaded current transformer]

This is the voltage from a Current Transformer (CT), when unloaded. Such a CT needs to be “clamped” around one of the wires in an AC mains cable, and will then generate a voltage which is proportional to the amount of current flowing in that wire.
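To get a rough feel for the numbers involved – the turns ratio and burden value below are made up, not those of the CT shown – an ideal CT divides the primary current by its turns ratio, and a burden resistor then converts that secondary current into a measurable voltage:

```python
def ct_output_voltage(primary_amps, turns_ratio, burden_ohms):
    # ideal CT: secondary current is the primary current divided
    # by the turns ratio; the burden resistor converts it to volts
    secondary_amps = primary_amps / turns_ratio
    return secondary_amps * burden_ohms

# e.g. a hypothetical 1000:1 CT clamped around a wire carrying 2 A,
# loaded with a 100 ohm burden: 2 mA secondary, i.e. 0.2 V
```

An unloaded CT, as in the scope capture above, behaves very differently from this idealised formula – which is part of what makes harvesting from one so interesting.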

Well, maybe, sort of…

(For comments, visit the forum area)

Uploading over serial

In Book on Feb 11, 2015 at 00:01

This week is about uploading firmware over a serial communication link, and interacting with the uploaded firmware.

[photo DSC 4935]

First a quick recap on how it all works, then a little diagnostic tool, and then a little utility with some new powers. Your Odroids and Raspberries will never be the same again!
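One detail worth knowing up front (illustrative background, not necessarily what this week’s utility does): firmware images are commonly shipped over serial links as Intel HEX records, and each record ends in a checksum byte chosen so that all of its bytes sum to zero modulo 256. That makes a basic sanity check almost a one-liner:

```python
def ihex_record_ok(line):
    """Check one Intel HEX record, e.g. ':00000001FF' (the standard
    end-of-file record). The final byte is a two's-complement checksum,
    making the whole record sum to 0 mod 256."""
    if not line.startswith(':'):
        return False
    try:
        data = bytes.fromhex(line[1:])
    except ValueError:
        return False        # odd length or non-hex characters
    return len(data) > 0 and sum(data) % 256 == 0
```

Verifying every record before anything gets flashed is cheap insurance against a noisy serial line.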

The article schedule for the coming days is as follows:

The goal: freedom from having lots of USB cables all over the desk – at last…

(For comments, visit the forum area)

Bits, pointers, and interrupts

In Book on Feb 4, 2015 at 00:01

By popular request…

Several people have mentioned these topics to me recently as something they’d like to know more about. All of it is available from various textbooks, of course, but it’s often buried deep inside more elaborate chapters. It seems like a good idea to single these out, especially if you’re fluent in a higher-level language other than C/C++.

Interestingly, all these topics turn out to be related in some way to memory maps, such as this somewhat arbitrary example from Wikipedia:

[image: bank-switched memory map diagram, from Wikipedia]

Low level programming is all about wrestling with such details, very “close to the silicon”.

The other topics which tend to trip up people coming from JavaScript, Python, PHP, Java, etc. are how bits, bytes, and words can be manipulated, how to deal with memory areas for arrays and buffers, how types work in C/C++, and how a CPU copes with urgency:

I’ll keep the articles concise and only touch on the aspects relevant to embedded programs.

… but feel free to take a week off from reading this weblog if you already know all this!

(For comments, visit the forum area)

LPC810 meets RFM69, part 3

In Book on Jan 28, 2015 at 00:01

Let’s revisit the LPC810 and the RFM69. Things are starting to come together at last.

I’ll pick up where I left off three weeks ago, making two LPC810’s talk to each other via RFM69’s and then picking up the results on a Raspberry Pi over I2C. Hang in there!

You know the drill by now: one article per day, as things become ready for publication.

[photo DSC 4924]

(For comments, visit the forum area)

Volta makes the world go round

In Book on Jan 21, 2015 at 00:01

There is a lot more to go into w.r.t. the LPC810 µC and the RasPi/Odroid Linux boards, but since surprisingly many design decisions revolve around that main driving force of electricity called “voltage”, this is a good opportunity to first cover the basics in a bit more detail.

Here is the list of upcoming articles, one per day:

Here’s a nice diagram from Wikipedia with all the rechargeable battery technologies:

[image: energy density of rechargeable battery technologies, from Wikipedia]

Quite a variety, and all of them with different trade-offs. Read the articles to find out why it matters, when to stack ’em up, how to avoid problems, and what buck & boost is about.
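One punchline can already be given in formula form: an ideal buck converter steps voltage down in proportion to its switching duty cycle, while a boost converter steps it up – which is what lets a single 1.2 V cell feed a 3.3 V circuit. Idealised, lossless math, for illustration only:

```python
def buck_vout(vin, duty):
    # ideal step-down (buck) converter: Vout = D * Vin, with 0 <= D <= 1
    return vin * duty

def boost_vout(vin, duty):
    # ideal step-up (boost) converter: Vout = Vin / (1 - D)
    return vin / (1.0 - duty)

# e.g. 5 V bucked at 66% duty gives 3.3 V, and a 1.2 V cell
# boosted at 60% duty reaches 3.0 V
```

Real converters lose a few percent to switching and conduction losses, so the actual duty cycle ends up slightly different – but the proportionality is the key intuition.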

(For comments, visit the forum area)

Embedded Linux

In Book on Jan 14, 2015 at 00:01

The “LPC810 meets RFM69” series is being postponed a bit longer than anticipated: the relevant pieces are simply not ready and stable enough yet to present as working code. With my apologies for this change of plans – I’ll definitely get back to it, count on it!

For now, let’s start exploring another piece of the puzzle when it comes to setting up a wireless network – in particular, a wireless sensor network (WSN).

As usual, each of the above articles will be ready on subsequent days.

[photo DSC 4918]

If you’ve always wanted to try out Linux without messing with your computer – here’s a gentle introduction to the world of physical computing, from a Linux board’s perspective!

(For comments, visit the forum area)

LPC810 meets RFM69, part 2

In Book on Jan 7, 2015 at 00:01

The code presented last week – or should I say last year? – was unfinished. In fact, it wasn’t even tested, just designed and written in a way which “should” work, eventually…

The thing with communication is that you’ve got two pieces which need to work together, but they are running on different bits of hardware, which makes it harder to see the big picture and reason about what’s going on. Getting to that first data exchange can be tricky.

In this case, we’ve also got an I2C link in there, so there are in fact three systems involved, and in addition, we’re going to need a way to re-program those two LPC810 µC’s:

[diagram: Raspberry Pi linked over I2C to two RFM69-equipped LPC810 nodes]

Let’s get on with it and figure out how to make this stuff work:

Update – With apologies, I’m going to postpone the following three posts a bit longer. Too many distractions here, keeping me from working on this with proper concentration:

Lots of pesky little details to deal with and quite a few bugs to squash, as you’ll see…

(For comments, visit the forum area)