So many abstractions to choose from

You are probably familiar with the “Arduino” series of open-source microcontroller boards, and with the associated IDE for writing software, uploading it to the board, and running it. A bit more recently, the “Raspberry Pi” entered the scene, with a totally different purpose: a small yet full-blown computer, capable of doing all the things you’d expect from a laptop.

These two devices differ in complexity, but on a hardware level it’s merely a sliding scale: more CPU power, more RAM, more storage, and more capable hardware interfaces – that’s all it takes to go from an Arduino to a Raspberry Pi. They’re both “just” computers!

But on the software side, they couldn’t be more different:

  • an Arduino has no user interface, no screen, no keyboard, no mouse
  • code for the Arduino must be developed and built elsewhere, and then “uploaded”
  • when turned on, the Arduino starts doing exactly the same thing it did last time
  • none of the laptop/desktop tools you may be familiar with work on an Arduino
  • you can’t tweak the Arduino’s code and restart it – you need the original source code

You can substitute any embedded board name for “Arduino”, as this applies to all of them.

The key difference is in the resource constraints: a Raspberry Pi has ample resources for an operating system, large-scale storage, a full CLI & GUI, compilers, debuggers, and many other tools, as well as more space than we’ll ever need to store all our ideas and projects.

On an embedded µC, nothing fits other than the compiled machine-code version of the one application it’s intended to run. That is still plenty for doing real stuff, even in the tiniest chips, and it can run at extraordinarily low power levels, i.e. forever and totally stand-alone.

For a chip, an embedded µC is a marvel of powerful capabilities. But for everything leading up to it – the design process, the coding, the connectivity – it’s totally dependent on us maintaining a development and interface environment outside that embedded µC!

Approach #1 – same as the big ones

The most common approach today is to treat the embedded chip as more of the same: a “real” computer (laptop, desktop, mainframe), just smaller and more feature-limited:

  • real computers use a compiler: let’s adapt it to cross-compile instead
  • real computers have a user interface: let’s set up a serial terminal session
  • real computers have debuggers: let’s solve that with remote debugging

The next article, titled “A µC is just a small computer”, expands on this approach. It leads to what the Arduino IDE does, and what we get when using an embedded RTOS or elaborate libraries. It can be characterised as: edit on a “host” system, then upload to the “target”.
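
To make this concrete, here is a minimal blink sketch of the kind this workflow revolves around – assuming an Arduino-compatible board with a built-in LED. It gets edited and compiled on the host, then uploaded to the target as machine code:

    // Minimal blink sketch, assuming an Arduino-compatible board with a
    // built-in LED: edited and compiled on the host, then uploaded to the
    // target, where it runs on its own.

    void setup () {
        pinMode(LED_BUILTIN, OUTPUT);       // configure the LED pin as output
    }

    void loop () {
        digitalWrite(LED_BUILTIN, HIGH);    // LED on
        delay(500);                         // wait half a second
        digitalWrite(LED_BUILTIN, LOW);     // LED off
        delay(500);
    }

Even the smallest change to this code means going through the whole edit / compile / upload cycle again, on the host.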

Approach #2 – focus on the runtime

What if we could avoid this split, and edit/extend on the µC itself?

This is possible for all but the smallest µCs, by creating a language environment on the chip itself. There are dozens of examples of this – either using a subset of an existing “big” language (Java, Lua, JavaScript, Python, Ruby, etc), or designing a language to fit optimally on the resource-constrained µC.
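
To give a flavour of the idea – as a rough sketch in Arduino-style C++, not any specific runtime – the essence is a small interpreter loop on the µC itself, accepting commands over the serial port, so behaviour can be changed without recompiling or re-uploading anything:

    // Rough sketch of the idea, not any specific runtime: a tiny command
    // loop which reads lines from the serial port and acts on them, so the
    // node can be driven interactively while it is running.

    void setup () {
        Serial.begin(115200);
        pinMode(LED_BUILTIN, OUTPUT);
    }

    void loop () {
        if (Serial.available()) {
            String cmd = Serial.readStringUntil('\n');
            cmd.trim();
            if (cmd == "on")
                digitalWrite(LED_BUILTIN, HIGH);
            else if (cmd == "off")
                digitalWrite(LED_BUILTIN, LOW);
            else {
                Serial.print("unknown command: ");
                Serial.println(cmd);
            }
        }
    }

A real on-chip language runtime takes this much further, with variables, control flow, and the ability to define new code on the fly.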

Examples of this will be presented in the article titled “No wait, it’s a language runtime”. This approach can offer great “tinkerability”, but may make it harder to manage multiple nodes, and to track (and back up!) different versions of the code as it evolves over time.

We gain the immediacy of tinkering “on-chip”, but lose the safety net of revision control.

Approach #3 – a data perspective

This is actually a mix of the other two approaches: instead of creating a full programming environment in what is by definition a very limited µC context, we can turn the remote “target” system into an engine which obeys rules. It’s an interpreter of sorts, but the rules can be set up in a far more declarative way, rather than as code with “execution flow” and calls.

One way to think of this is as having a toolbox with all the possible activities pre-compiled and pre-loaded on the µC, and then driving the actual task with a rule-like config / dataset.
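
As a minimal sketch of what such an engine might look like – with hypothetical pins and thresholds, written in Arduino-style C++ – all the activities are compiled in once, and a small table of rules decides what actually happens:

    // Minimal sketch of a rule-driven engine, with hypothetical pins and
    // thresholds: the code below is fixed, only the "rules" table changes –
    // and that table could just as well arrive as data from elsewhere.

    struct Rule {
        uint8_t sensorPin;   // analog input to watch
        int     threshold;   // readings above this switch the output on
        uint8_t outputPin;   // digital output to drive
    };

    Rule rules[] = {
        { A0, 600, 5 },      // e.g. a light sensor on A0 drives pin 5
        { A1, 300, 6 },      // e.g. a moisture sensor on A1 drives pin 6
    };

    void setup () {
        for (auto& r : rules)
            pinMode(r.outputPin, OUTPUT);
    }

    void loop () {
        for (auto& r : rules)
            digitalWrite(r.outputPin,
                analogRead(r.sensorPin) > r.threshold ? HIGH : LOW);
        delay(100);
    }

The engine itself is still developed and uploaded as in approach #1, but from then on, changing the rule table is all it takes to change what the node does.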

This is further explored in the article titled “Or is it a data-driven engine, perhaps?” – it could be characterised as doing feature-specific code development using approach #1, while “driving” the resulting engine “on-chip”.

Final notes

In each of these approaches, we’ll need to deal with the physical separation between the “real” computer we’re sitting at and the embedded µC(s) – during development, but also later, when our projects are ready and remain more or less stable.

There is no magic – we’re going to need to get code and data across that barrier, as well as the operational data, i.e. the communication we’ll need for sensor readings and commands.

But the first concern needs to be about the bigger picture: where do we want our code and logic to be developed and kept? Which environment do we want to set up, learn, and spend all our time in? How do we fit in technological advances? What do we do with older pieces in the project? And perhaps above all: where would we like to end up a few years from now?

The next three articles will attempt to outline a few different paths.

[Back to article index]