Posts Tagged ‘JavaScript’

JavaScript semantics

In Software on Jan 24, 2013 at 00:01

Some things are quite surprising in JavaScript / CoffeeScript:

    $ coffee
    > '1'+2
    '12'
    > '1'-2
    -1
    > 1 < null
    false
    > 1 > null
    true
    > 1 < undefined
    false
    > 1 > undefined
    false
    > a = [1,2,3]
    [ 1, 2, 3 ]
    > a.b = 4
    4
    > (x for x in a)
    [ 1, 2, 3 ]
    > (x for x of a)
    [ '0', '1', '2', 'b' ]
    > (x for x in 'abc')
    [ 'a', 'b', 'c' ]
    > (x for x of 'abc')
    [ '0', '1', '2' ]
    > 

It makes sense once you know it … but that’s the whole thing with being a newbie, eh?

That array-with-properties behaviour is actually very useful, because it lets you create collections which can be looped over, while still offering members in an object-like manner. Very Lua-ish. The same can be done with Object.defineProperty, but that’s more involved.

For the full story on “array-like” objects, see this detailed page. It gets real messy inside – but then again, so does any other language. As long as it’s easy to find answers, I’m happy. And with JavaScript / CoffeeScript, finding answers and suggestions on the web is trivial.

On another note: Redis is working well for storing data, but there is a slight impedance mismatch with JavaScript. Storing nested objects is trivial by using JSON, but then you lose the ability to let Redis do a lot of nifty sorting and extraction. For now, I’m getting good mileage with two hashes per collection: one for key-to-id lookup, one for id-based storage of each object in JSON format. But I’m really using only a tiny part of Redis this way.
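
For the record, the gist of that two-hash trick looks something like this – just a sketch using the node_redis client, with the collection and field names made up for illustration:

    redis = require 'redis'
    db = redis.createClient()

    # store one object: remember its key-to-id mapping plus its JSON blob
    saveReading = (key, id, obj) ->
      db.hset 'readings:ids', key, id
      db.hset 'readings:data', id, JSON.stringify(obj)

    # look an object up again by key
    loadReading = (key, cb) ->
      db.hget 'readings:ids', key, (err, id) ->
        db.hget 'readings:data', id, (err, json) ->
          cb JSON.parse(json)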

Still crude, but oh so cool…

In Software on Jan 18, 2013 at 00:01

Good progress on the HouseMon front. I’m still waggling from concept to concept like a drunken sailor, but every day it sinks in just a little bit more. Adding a page to the app is now a matter of adding a CoffeeScript and a Jade file, and then adding an entry to the “routes” list to make a new “Readings” page appear in the main menu:

Screen Shot 2013-01-17 at 01.01.00

The only “HTML” I wrote for this page is readings.jade, using Foundation classes:

Screen Shot 2013-01-17 at 01.03.14

And the only “JavaScript” I wrote, is in readings.coffee to embellish the output slightly:

Screen Shot 2013-01-17 at 00.18.51

The rest is a “readings” collection, which is updated automatically on the server, with all changes propagated to the browser(s). There’s already quite a bit of architecture in place, including a generic RF12demo driver which connects to a serial port, and a decoder for some of the many packets flying around the house here at JeeLabs.

But the exciting bit is that the server setup is managed from the browser. There’s an Admin page, just enough to install and remove components (“briqlets” in HouseMon-speak):

Screen Shot 2013-01-17 at 00.14.09

In this example, I installed a fake packet generator (which simply replays one of my logfiles in real time), as well as a small set of decoders for things like room node sketches.

So, just to stress what’s already there: a basic way of managing server components via the browser, and a basic way of seeing the live data updates, again in the browser. Oh, and all sorts of RF12 capturing and decoding… even support for the Announcer idea, described recently.

Did I mention that the data shown is LIVE? Some of this stuff sure feels like magic… fun!

Test-driven design

In Software on Jan 10, 2013 at 00:01

Drat, I can’t stop – this Node.js and AngularJS stuff sure is addictive! …

I’m a big fan of test-driven development, an approach where test code is often treated as more important than the code you’re writing to actually do the work. It may sound completely nuts – the thought of writing tons of tests which will only get in the way when you later want to change your newly-written beautiful code, right?

In fact, TDD goes even further: the idea is to write the test code as soon as you decide to add a new feature or write a new bug, and only then start writing the real stuff. Weird!

But having used TDD in a couple of projects in the past, I can only confirm that it’s in fact a fantastic way to “grow” software. Because adding new tests, and then seeing them pass, constantly increases the confidence in the whole project. Ever had the feeling that you don’t want to mess with a certain part of your code, because you’re afraid it might break in some subtle way? Yeah, well… I know the feeling. It terrifies me too :)

With test scaffolding around your code, something very surprising happens: you can tear down the guts and bring them back up in a different shape, because the test code will help drive that process! And re-writing / re-factoring usually leads to a better architecture.

There’s a very definite hurdle in each new project to start using TDD (or BDD). It’s a painful struggle to spend time thinking about tests, when all you want to do is write the darn code and start using it! Especially at the start of a project.

I started coding HouseMon without TDD/BDD on my plate, because I first wanted to expose myself to coding in CoffeeScript. But I’m already checking out the field in the world of Node.js and JavaScript. Fell off my chair once again… so many good things going on!

Here’s an example, using the Mocha and should packages with CoffeeScript:

Screen Shot 2013-01-09 at 22.57.08
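
In case the screenshot is hard to read, a test in this style boils down to something like the following – with a trivial made-up decode function standing in for the real code under test:

    require 'should'

    # a stand-in for the real code under test
    decode = (raw) ->
      throw new Error 'empty packet' unless raw.length
      { genw: raw[0] }

    describe 'decoder', ->
      it 'turns a raw packet into an object with fields', ->
        decode([6]).genw.should.equal 6
      it 'throws on an empty packet', ->
        (-> decode []).should.throw()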

And here’s some sample output, in one of many possible output formats:

Screen Shot 2013-01-09 at 22.59.55

I’m using kind of a mix of TDD’s assertion-style testing, and BDD’s requirement-style approach. The T + F tricks suit me better, for their brevity, but the “should.throw” stuff is useful to handle exceptions when they are intentional (there’s also “should.not.throw”).

But that’s just the tip of the iceberg. Once you start thinking in terms of testing the code you write, it becomes a challenge to make sure that every line of that code has been verified with tests, i.e. code coverage testing. And there too, Mocha makes things simple. I haven’t tried it, but here is some sample output from its docs:

Screen Shot 2012-02-23 at 8.37.13 PM

On the right, a summary of the test coverage of each of the source files, and in the main window an example of a few lines which haven’t been traversed by any of the tests.

Tools like these sure make it tempting to start writing tests!

Ok, ok, I’ll stop.

Technology decisions

In Software on Jan 7, 2013 at 00:01

Phew! You don’t want to know how many examples, frameworks, and packages I’ve been downloading, trying out, and browsing lately… all to find a good starting point for further software development here at JeeLabs.

Driving this is my desire to pick modern tools, actively being worked on, with an ecosystem which allows me to get up to speed, leverage all the amazing stuff people keep coming up with, and yet stay firmly on the simple side of things. Because good stuff is what you end up with when you weed out the bad, and the result is purity and elegance … as I’ve seen confirmed over and over again.

A new insight I didn’t start out from, is that server-side web page generation is no longer needed. In fact, with clients being more performant than servers (i.e. laptops, tablets, and mobile phones served from a small Linux system), it makes more and more sense to serve only static files (HTML, CSS, JS) and generic data (JSON, usually). The server becomes a file system + a database + a relatively low-end rule engine for stuff that needs to run at all times… even when no client is connected.

Think about the change in mindset: no server-side templating… n o n e !

Anyway – here are all the pieces I intend to use:

Node.js – JavaScript on the server, based on the V8 engine which is available for Intel and ARM architectures. A high-performance standards-compliant JavaScript engine.

The npm package manager comes with Node.js and is a phenomenal workhorse when it comes to getting lots of different packages to work together. Used from the command line, I found npm help to be a great starting point for figuring out its potential.

SocketStream is a framework for building real-time apps. It wraps things like socket.io (now being replaced by the simpler engine.io) for bi-directional use of WebSockets between the clients and the server. Comes with lots of very convenient features for development.

AngularJS is a great tool to create very dynamic and responsive client-side applications. This graphic from a post on bennadel.com says it all. The angularjs.org site has good docs and tutorials – this matters, because there’s quite a bit of – fantastic! – stuff to learn.

Connect is what makes the HTTP webserver built into Node.js incredibly flexible and powerful. The coin dropped once I read an excellent introduction on project70.com. If you’ve ever tried to write your own HTTP webpage server, then you’ll appreciate the elegance of this timeout module example on senchalabs.org.

Redis is a memory-based key-value store, which also saves its data to file, so that a restart can resume where the last run left off. An excellent way to cache file contents and “live” data which represents the state of a running system. Keeping data in a separate process is great for development, especially with automatic server restarts when any source file changes. IOW, the basic idea is to keep Redis running at all times, so that everything else can be restarted and reloaded at will during development.

Foundation is a set of CSS/HTML choices to support uniform good-looking web pages.

CoffeeScript is a “JavaScript dialect” which gets transformed into pure JavaScript on the fly. I’m growing quite used to it already, and enjoy its clean syntax and conciseness.

Jade is a shorthand notation which greatly reduces the amount of text (code?) needed to define HTML page structures (i.e. elements, tags, attributes). Like CoffeeScript, it’s just a dialect – the output is standard HTML.

Stylus is again just a dialect, for CSS this time. Not such a big deal, but I like the way all these notations let me see the underlying structure and nesting at a glance.

That’s ten choices, for Ten Terrific Technologies. Here are the links again:

(S = server-side, C = client-side, S+C = both, T = translated on the server)

All as non-restrictive open source, and with all development taking place on GitHub.

It’s been an intense two weeks, trying to understand enough of all this to be able to make practical decisions. And now I can hardly wait to find out where this will take me!

Update – Bootstrap has been replaced by Foundation – mostly as a matter of taste.

Node.js on Raspberry Pi

In Software on Jan 6, 2013 at 00:01

After all this fiddling with Node.js on my Mac, it’s time to see how it works out on the Raspberry Pi. This is a running summary of how to get a fresh setup with Node.js going.

Linux

I’m using the Raspbian build from Dec 16, called “2012-12-16-wheezy-raspbian”. See the download page and directory for the ZIP file I used.

The next step is to get this image onto an SD memory card. I used a 4 GB card, of which over half will be available once everything has been installed. Plenty!

Good instructions for this can be found at elinux.org – in my case the section titled Copying an image to the SD card in Mac OS X (command line). With a minor tweak on Mac OSX 10.8.2, as I had to use the following command in step 10:

sudo dd bs=1m if=~/Downloads/2012-12-16-wheezy-raspbian.img of=/dev/rdisk1

First boot

Next: insert the SD card and power up the RPi! Luckily, the setup comes with SSH enabled out of the box, so only an Ethernet cable and 5V power with USB-micro plug are needed.

When launched and logged in (user “pi”, password “raspberry” – change it!), it will say:

Please run 'sudo raspi-config'

So I went through all the settings one by one: overclocking set to “medium”, only 16 MB assigned to video, and booting without graphical display. Don’t forget to exit the utility via the “Finish” button, and then restart using:

$ sudo shutdown -r now

Now is a good time to perform all pending updates and clean up:

$ sudo -i
# apt-get update
# apt-get upgrade
# apt-get clean
# reboot

The reboot at the end helps make sure that everything really works as expected.

Up and running

That’s it, the system is now ready for use. Some info about my 256 MB RPi:

$ uname -a
Linux raspberrypi 3.2.27+ #250 PREEMPT \
                    Thu Oct 18 19:03:02 BST 2012 armv6l GNU/Linux
$ free
             total       used       free     shared    buffers     cached
Mem:        237868      51784     186084          0       8904      25972
-/+ buffers/cache:      16908     220960
Swap:       102396          0     102396
$ df -H
Filesystem      Size  Used Avail Use% Mounted on
rootfs          3.9G  1.6G  2.2G  42% /
/dev/root       3.9G  1.6G  2.2G  42% /
devtmpfs        122M     0  122M   0% /dev
tmpfs            25M  213k   25M   1% /run
tmpfs           5.3M     0  5.3M   0% /run/lock
tmpfs            49M     0   49M   0% /run/shm
/dev/mmcblk0p1   59M   18M   42M  30% /boot
$ cat /proc/cpuinfo 
Processor       : ARMv6-compatible processor rev 7 (v6l)
BogoMIPS        : 697.95
Features        : swp half thumb fastmult vfp edsp java tls 
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xb76
CPU revision    : 7

Hardware        : BCM2708
Revision        : 0004
Serial          : 00000000596372ab
$

So far, it’s just a standard boilerplate setup. Yawn…

Node.js

On to Node.js! Unfortunately, the build included in Debian/Raspbian is 0.6.19, which is a bit old. I’d rather get started with the 0.8.x series, so here’s how to build it from source.

But first, let’s use this simple trick to get write permission in /usr/local as non-root:

$ sudo usermod -aG staff pi

Note: you have to log out and back in, or reboot, to get the new permissions.

With that out of the way, code can be built and installed as user “pi” – no need for sudo:

$ curl http://nodejs.org/dist/v0.8.16/node-v0.8.16.tar.gz | tar xz 
$ cd node-v0.8.16
$ ./configure
$ make
(... two hours of build output gibberish ...)
$ make install

That’s it. A quick check that everything is in working order:

$ node -v
v0.8.16
$ npm -v
1.1.69
$ which node
/usr/local/bin/node
$ ldd `which node`
  /usr/lib/arm-linux-gnueabihf/libcofi_rpi.so (0x40236000)
  libdl.so.2 => /lib/arm-linux-gnueabihf/libdl.so.2 (0x40074000)
  librt.so.1 => /lib/arm-linux-gnueabihf/librt.so.1 (0x40036000)
  libstdc++.so.6 => /usr/lib/arm-linux-gnueabihf/libstdc++.so.6 (0x40107000)
  libm.so.6 => /lib/arm-linux-gnueabihf/libm.so.6 (0x4023f000)
  libgcc_s.so.1 => /lib/arm-linux-gnueabihf/libgcc_s.so.1 (0x4007f000)
  libpthread.so.0 => /lib/arm-linux-gnueabihf/libpthread.so.0 (0x400a7000)
  libc.so.6 => /lib/arm-linux-gnueabihf/libc.so.6 (0x402b0000)
  /lib/ld-linux-armhf.so.3 (0x400d2000)
$

Oh, one more thing – the RPi doesn’t come with “git” installed, so let’s fix that right now:

$ sudo apt-get install git

There. Now you’re ready to start cookin’ with node and npm on the RPi. Onwards!

PS. If you don’t want to wait two hours, you can download my build, unpack with “tar xfz node-v0.8.16-rpi.tgz”, and do a “cd node-v0.8.16 && make install” to copy the build results into /usr/local/ (probably only works if you have exactly the same Raspbian image).

Plotting again, at last

In Software on Jan 4, 2013 at 00:01

The new code is progressing nicely. First step was to get a test table, updating in real time:

Screen Shot 2013-01-03 at 11.29.07

(etc…)

It was a big relief to figure out how to produce graphs again – e.g. power consumption:

Screen Shot 2013-01-03 at 10.01.55

The measurement resolution from the 2000 pulse/kWh counters is excellent. Here is an excerpt of power consumption vs solar production on a cloudy and wet winter morning:

Screen Shot 2013-01-03 at 12.00.02

There is a fascinating little pattern in there, which I suspect comes from the central heating – perhaps from the boiler and pump, switching on and off in a peculiar 9-minute cycle?

Here are a bunch of temperature sensors (plus the central heating set-point, in brown):

Screen Shot 2013-01-03 at 10.02.14

There is no data storage yet (I just left the browser running on the laptop collecting data overnight), nor proper scaling, nor any form of configurability – everything was hard-coded, just to check that the basics work. It’s a far cry from being able to define and configure such graphs in the browser, but hey – one baby step at a time…

Although the D3 and NVD3 SVG-based graphing packages look stunning, they are a bit overkill for simple graphing, and mean more code for the browser to load and run. Maybe some other time – for now I’m using the Flot package for these graphs, again.
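
To give an idea of why: a basic Flot chart is a single call, assuming jQuery and the jquery.flot.js plugin have been loaded – the element id and data points below are invented:

    # five fake [timestamp-in-ms, watts] pairs
    series = ([1357200000000 + i * 10000, 200 + i] for i in [0...5])

    options = xaxis: { mode: 'time' }
    $.plot $('#power'), [ { label: 'power (W)', data: series } ], options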

Will share the code as soon as I can make up my mind about the basic app structure!

Processing P1 data

In Software on Jan 3, 2013 at 00:01

Last post in a series of three (previous posts here and here).

The decoder for this data, in CoffeeScript, is as follows:

Screen Shot 2012-12-31 at 15.28.22

Note that the API of these decoders is still changing. They are now completely independent little snippets of code which do only one thing – no assumptions on where the data comes from, or what is to be done with the decoded results. Each decoder takes the data, creates an object with decoded fields, and finishes by calling the supplied “cb” callback function.
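
Stripped to its bare essence, that calling convention looks like this – with the byte offsets and field names invented for the example:

    # each decoder is a little module: raw data in, decoded object out via "cb"
    module.exports = (raw, cb) ->
      cb
        usew: raw[1]    # watts drawn from the grid
        genw: raw[2]    # watts fed into the grid, in units of 10 W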

Here is some sample output, including a bit of debugging:

Screen Shot 2012-12-31 at 13.43.10

As you can see, this example packet used 19 bytes to encode 10 values plus a format code.

Explanation of the values shown:

  • usew is 0: no power is being drawn from the grid
  • genw is 6: power fed into the grid is 10 x 6, i.e. ≈ 60W
  • p1 + p3 is current total consumption: 8 + 191, i.e. 199W
  • p2 is current solar power output: 258W

With a rate of about 50 kbit/s using the standard RF12 driver settings (i.e. 20 µs per bit), and with some 10 bytes of packet header + footer overhead, this translates to (19 + 10) * 8 * 20 = 4,640 µs of “on-air” time once every 10 seconds, i.e. still under 0.05 % bandwidth utilisation. Fine.

This is weblog post #1200 – onwards! :)

Playing back logfiles

In Software on Dec 31, 2012 at 00:01

After yesterday’s reading and decoding exploration, here’s some code which will happily play back my daily log files, of which I now have over 4 years’ worth:

Screen Shot 2012-12-29 at 15.55.11

Sample output:

Screen Shot 2012-12-29 at 15.59.52

As you can see, this supports scanning entire log files, both plain text and gzipped. In fact, JeeMonLogParser@parseStream should also work fine with sockets and pipes:

Screen Shot 2012-12-29 at 15.55.46

The beauty – again – is total modularity: both the real serial interface module and this log-replay module generate the same events, and can therefore be used interchangeably. As the decoders work independently of either one, there is no dependency (“coupling”) whatsoever between these modules.

Not to worry: from now on I won’t bore you with every new JavaScript / CoffeeScript snippet I come up with – just wanted to illustrate how asynchronous I/O and events are making this code extremely easy to develop and try out in small bites.

I wish you a Guten Rutsch ins neue Jahr and a safe, healthy, and joyful 2013!

Decoding RF12demo with Node.js

In Software on Dec 30, 2012 at 00:01

I’m starting to understand how things work in Node.js. Just wrote a little module to take serial output from the RF12demo sketch and decode its “OK …” output lines:

Screen Shot 2012-12-29 at 01.04.48

Sample output:

Screen Shot 2012-12-29 at 00.44.21

This quick demo only has decoders for nodes 3 and 9 so far, but it shows the basic idea.

This relies on the EventEmitter class, which offers a very lightweight mechanism of passing around objects on channels, and adding listeners to get called when such “events” happen. A very efficient in-process pub-sub mechanism, in effect!
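
Here’s the pattern in a nutshell – the channel name and payload below are invented, but this really is all there is to it:

    {EventEmitter} = require 'events'

    rf12 = new EventEmitter

    # a listener, called for each packet published on the "node9" channel
    rf12.on 'node9', (packet) ->
      console.log 'room node says:', packet

    # the serial decoder side would emit one of these per incoming line
    rf12.emit 'node9', { temp: 215, humi: 46 }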

Here is the “serial-rf12demo” module which does the rest of the magic:

Screen Shot 2012-12-29 at 00.46.13

And that’s really all there is to it – very modular!

Data wants to be dynamic

In Software on Dec 25, 2012 at 00:01

It’s all about dynamics, really. When software becomes so dynamic that you see the data, then all that complex code will vanish into the background:

Screen Shot 2012-12-21 at 23.34.20   DSC_4327

This is the transformation we saw a long time ago when going from teletype-based interaction to direct manipulation with the mouse, and the same is happening here: if the link between physical devices and the page shown on the web browser is immediate, then the checkboxes and indicators on the web page become essentially the same as the buttons and the LED’s. The software becomes invisible – as it should be!

That demo from a few days back really has that effect. And of course then networking kicks in to make this work anywhere, including tablets and mobile phones.

But why stop there? With Tcl, I have always enjoyed the fact that I can develop inside a running process, i.e. modify code on a live system – by simply reloading source files.

With JavaScript, although the mechanism works very differently, you can get similar benefits. When launching the Node.js based server, I use this command:

    nodemon app.coffee

This not only launches a web server on port 3000 for use with a browser, it also starts watching the files in the current directory for changes. In combination with the logic of SocketStream, this leads to the following behavior during development:

  • when I change a file such as app.coffee or any file inside the server/ directory, nodemon will stop and relaunch the server app, thus picking up all the changes – and SocketStream is smart enough to make all clients re-connect automatically
  • when changing a file anywhere inside the clients/ area, the server sends a special request via WebSockets for the clients, i.e. the web browser(s), to refresh themselves – again, this causes all client-side changes to be picked up
  • when changing CSS files (or rather, the Stylus files that generate it), the same happens, but in this case the browser state does not get lost – so you can instantly view the effects of twiddling with CSS

Let me stress that: the browser updates on each save, even if it’s not the front window!

The benefits for the development workflow are hard to overstate – it means that you can really build a full client-server application in small steps and immediately see what’s going on. If there is a problem, just insert some “console.log()” calls and watch the server-side (stdout) or client-side (browser console window).

There is one issue, in that browser state gets lost with client-side code changes (current contents of input boxes, button state, etc), but this can be overcome by moving more of this state into Redis, since the Redis “store” can just stay running in the background.

All in all, I’m totally blown away by what’s possible and within reach today, and by the way this type of software development can be done. Anywhere and by anyone.

Onwards!

Setting up a SocketStream app

In Software on Dec 24, 2012 at 00:01

As shown yesterday, the SocketStream framework takes care of a lot of things for real-time web-based apps. It’s at version 0.3 (0.4 coming up), but already pretty effective.

Here’s what I did to get the “ss-blink” demo app off the ground on my Mac notebook:

  • no wallet needed: everything is either free (Xcode) or open source (the rest)
  • make sure the Xcode command-line dev tools have been installed (gcc, make, etc)
  • install the Homebrew package manager using this scary-looking one-liner:
    ruby -e "$(curl -fsSkL raw.github.com/mxcl/homebrew/go)"
  • using Homebrew, install Node.js – brew install node
  • that happens to include NPM, the Node Package Manager; all I had to do was add the NPM bin dir to my PATH (in .bash_profile, for example), so that globally installed commands will be found – PATH=/usr/local/share/npm/bin:$PATH

We’re not there yet, but I wanted to point out that this is the base: Xcode plus Homebrew (on the Mac – other platforms have their own variants), with Node.js and NPM as the foundation for everything else. Once you have those installed and working smoothly, everything else is a matter of obtaining packages through NPM as needed and running them with Node.js – a truly amazing software combo. NPM can also handle uninstalls & cleanup.

Let’s move on, shall we?

  • install SocketStream globally – npm install -g socketstream
    (the “-g” is why PATH needs to be properly set after this point)
  • install the nodemon utility – npm install -g nodemon
    (this makes development a breeze, by reloading the server whenever files change)
  • create a fresh app, using – socketstream new ss-blink
  • this creates a dir called ss-blink, so first I switched to it – cd ss-blink
  • use npm to fetch and build all the dependencies in ss-blink – npm install
  • that’s it, start it up – nodemon app.js (or node app.js if you insist)
  • navigate to http://localhost:3000 and you should see a boilerplate chat app
  • open a second browser window on the same URL, and marvel at how a chat works :)

So there’s some setup involved, and it’s bound to be a bit different on Windows and Linux, but still… it’s not that painful. There’s a lot hidden behind the scenes of these installation tools. In particular npm is incredibly easy to use, and the workhorse for getting tons and tons of packages from GitHub or elsewhere into your project.

The way this works is that you add one line per package you want to the “package.json” file inside the project directory, and then simply re-run “npm install”. I did exactly that – adding “serialport” as a dependency, which caused npm to go out, fetch, and compile all the necessary bits and pieces.
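
To give an idea, the relevant part of package.json ends up looking something like this – the version strings here are made up:

    {
      "name": "ss-blink",
      "dependencies": {
        "socketstream": "0.3.x",
        "serialport": "*"
      }
    }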

Note that none of the above require “root” privileges: no superuser == better security.

For yesterday’s demo, the above was my starting point. However, I did want to switch to CoffeeScript and Jade instead of JavaScript and HTML, respectively – which is very easy to do with the js2coffee and html2jade tools.

These were installed using – npm install -g js2coffee html2jade

And then hours of head-scratching, reading, browsing the web, watching videos, etc.

But hey, it was a pretty smooth JavaScript newbie start as far as I’m concerned!

Connecting a Blink Plug to a web browser

In Hardware, Software on Dec 23, 2012 at 00:01

Here’s a fun experiment – using Node.js with SocketStream as web server to directly control the LEDs on a Blink Plug and read out the button states via a JeeNode USB:

JC's Grid, page 51

This is the web interface I hacked together:

Screen Shot 2012-12-21 at 23.34.20

The red background comes from pressing button #2, and LED 1 is currently on – so this is bi-directional & real-time communication. There’s no polling: signalling is instant in both directions, due to the magic of WebSockets (this page lists supported browsers).

I’m running blink_serial.ino on the JeeNode, which does nothing more than pass some short messages back and forth over the USB serial connection.

The rest is a matter of getting all the pieces in the right place in the SocketStream framework. There’s no AngularJS in here yet, so getting data in and out of the actual web page is a bit clumsy. The total code is under 100 lines of CoffeeScript – the entire application can be downloaded as ZIP archive.

Here’s the main client-side code from the client/code/app/app.coffee source file:

Screen Shot 2012-12-22 at 00.48.12

(some old stuff and weird coding in there… hey, it’s just an experiment, ok?)

The client side, i.e. the browser, can receive “blink:button” events via WebSockets (these are set up and fully managed by SocketStream, including reconnects), as well as the usual DOM events such as changing the state of a checkbox element on the page.
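
In outline – a hedged sketch, not the actual file, with the element id and command format invented – the client side amounts to:

    # react to server-pushed events arriving over WebSockets
    ss.event.on 'blink:button', (state) ->
      $('body').css 'background-color', if state then 'red' else 'white'

    # and forward UI changes to the server via RPC
    $('#led1').change ->
      ss.rpc 'serial.sendCommand', "l1 #{+@checked}"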

And this is the main server-side logic, contained in the server/rpc/serial.coffee file:

Screen Shot 2012-12-22 at 00.54.07

The server uses the node-serialport module to gain access to serial ports on the server, where the JeeNode USB is plugged in. And it defines a “sendCommand” which can be called via RPC by each connected web browser.
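
Its shape is roughly as follows – again a sketch rather than the real file, with the port name invented:

    serialport = require 'serialport'

    # open the serial port once, when the module is first loaded
    port = new serialport.SerialPort '/dev/tty.usbserial', baudrate: 57600

    exports.actions = (req, res, ss) ->

      sendCommand: (cmd) ->
        port.write cmd + '\n'  # pass the command on to the JeeNode
        res true               # acknowledge the RPC call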

Most of the work is really figuring out where things go and how to get at the different bits of data and code. It’s all in JavaScript – well, CoffeeScript – on both client and server, but you still need to know all the concepts to get to grips with it – there is no magic pill!

Tomorrow, I’ll describe how I created this app, and how to run it.

Update – The code is now on GitHub.

Dynamic web pages

In Software on Dec 22, 2012 at 00:01

There are tons of ways to make web pages dynamic, i.e. have them update in real-time. For many years, constant automatic full-page refreshes were the only game in town.

But that’s more or less ignoring the web evolution of the past decade. With JavaScript in the browser, you can manipulate the DOM (i.e. the structure underlying each web page) directly. This has led to an explosion of JavaScript libraries in recent years, of which the most widespread one by now is probably jQuery.

In jQuery, you can easily make changes to the DOM – here is a tiny example:

Screen Shot 2012-12-20 at 23.40.23

And sure enough, the result comes out as:

Screen Shot 2012-12-20 at 23.40.32

But there is a major problem – with anything non-trivial, this style quickly ends up becoming a huge mess. Everything gets mixed up – even if you try to separate the JavaScript code into its own files, you still need to deal with things like loops inside the HTML code (to create a repeated list, depending on how many data items there are).

And there’s no automation – the more little bits of dynamic info you have spread around the page, the more code you need to write to keep all of them in sync. Both ways: setting items to display as well as picking up info entered via the keyboard and mouse.

There are a number of ways to get around this nowadays – with a very nice overview of seven mainstream solutions by Steven Sanderson.

I used Knockout for the RFM12B configuration generator to explore its dynamics. And while it does what it says, and leads to delightfully dynamically-updating web pages, I still found myself mixing up logic and presentation and having to think about template expansion more than I wanted to.

Then I discovered AngularJS. At first glance, it looks like just another JavaScript all-in-the-browser library, with all the usual expansion and looping mechanisms. But there’s a difference: AngularJS doesn’t mix concepts, it embeds all the information it needs in HTML elements and attributes.

AngularJS manipulates the DOM structure (better than XSLT did with XML, I think).

Here’s the same example as above, in Angular (with apologies for abusing ng-init a bit):

Screen Shot 2012-12-20 at 23.40.53

The “ng-app” attribute is the key. It tells AngularJS to go through the element tree and do its magic. It might sound like a detail, but as a result, this page remains 100% HTML – it can still be created by a graphics designer using standard HTML editing tools.
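
Since only a screenshot remains here, a reconstruction of that sort of page (from the general Angular idiom, not the actual example) might be:

    <!doctype html>
    <html ng-app>
      <head>
        <script src="angular.min.js"></script>
      </head>
      <body ng-init="names = ['foo', 'bar', 'baz']">
        <p ng-repeat="name in names">Hello, {{name}}!</p>
      </body>
    </html>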

More importantly, this sort of coding can grow without ever becoming a mix of concepts and languages. I’ve seen my share of JavaScript / HTML mashups and templating attempts, and it has always kept me from using JavaScript in the browser. Until now.

Here’s a better example (live demo):

Screen Shot 2012-12-20 at 23.35.12

Another little demo I just wrote can be seen here. More physical-computing related. As with any web app, you can check the page source to see how it’s done.

For an excellent introduction about how this works, see John Lindquist’s 15-minute video on YouTube. There will be a lot of new stuff here if you haven’t seen AngularJS before, but it shows how to progressively create a non-trivial app (using WebStorm).

If you’re interested in this, and willing to invest some hours, there is a fantastic tutorial on the AngularJS site. As far as I’m concerned (which doesn’t mean much) this is just about the best there is today. I don’t care too much about syntax (or even languages), but AngularJS absolutely hits the sweet spot in the browser, on a conceptual level.

AngularJS is from Google, with MIT-licensed source on GitHub, and documented here.

And to top it all off, there is now also a GitHub demo project which combines AngularJS on the client with SocketStream on the server. Lots of reading and exploring to do!

JavaScript reading list

In Software on Dec 21, 2012 at 00:01

As I dive into JavaScript, and prompted by a recent comment on the weblog, it occurred to me that it might be useful to create a small list of books and other resources, for those of you interested in going down the same rabbit hole and starting out along a similar path.

Grab some nice food and drinks, you’re gonna need ’em!

First off, I’m assuming you have a good basis in some common programming language, such as C, C++, or Java, and preferably also one of the scripting languages, such as Lua, Perl, Python, Ruby, or Tcl. This isn’t a list about learning to program, but a list to help you dive into JavaScript, and all the tools, frameworks, and libraries that come with it.

Because JavaScript is just the enabler, really. My new-found fascination with it is not the syntax or the semantics, but the fast-paced ecosystem that is evolving around JS.

One more note before I take off: this is just my list. If you don’t agree, or don’t like it, just ignore it. If there are any important pointers missing (of course there are!), feel free to add tips and suggestions in the comments.

JavaScript

There’s JavaScript (the language), and there are the JavaScript environments (in the browser: the DOM, and on the server: Node). You’ll want to learn about them all.

  • JavaScript: The Good Parts by Douglas Crockford
    2008, 1st ed, 176 pages, ISBN 0596517742
  • JavaScript: The Definitive Guide by David Flanagan
    2011, 6th ed, 1100 pages, ISBN 0596805527

Videos: again by Douglas Crockford, there’s an excellent list at Stack Overflow. Going through them will take many hours, but they are really excellent. I watched all of these.

Don’t skim over prototypes, “==” vs “===”, and how “this” gets passed to functions.

Being able to understand every single notation in JavaScript is essential. Come back here if you can’t. Or google for stuff. Just don’t cut corners – it’s bound to bite you.

If you want to dive in really deep, check out this page about JavaScript and Scheme.

In the browser

Next on the menu: the DOM, HTML, and CSS. This is the essence of what happens inside a browser. Can be consumed in small doses, as the need arises. Simply start with the just-mentioned Wikipedia links.

Not quite sure what to recommend here – I’ve picked this up over the years. Perhaps w3schools (this or this). Focus on HTML5 and CSS3, as these are the newest standards.

On the server

There are different implementations of JavaScript, but on the server, by far the most common implementation seems to be Node.js. This is a lot more than “just some JS implementation”. It comes with a standard API, full of useful functions and objects.

Node.js is geared towards asynchronous & event-driven operation. Nothing blocks, not even a read from a local disk – because in CPU terms, blocking takes too long. This means that you tend to call a “read” function and give it a “callback” function which gets called once the read completes. Very very different frame of mind. Deeply frustrating at times, but essential for any non-trivial app which needs to deal with networking, disks, and other “slow” peripherals. Including us mortals.
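
Here’s that callback style in its smallest form – the file name is arbitrary, the point is the inverted flow of control:

    fs = require 'fs'

    # nothing blocks: the callback fires once the read has completed
    fs.readFile 'somefile.txt', 'utf8', (err, text) ->
      throw err if err
      console.log "got #{text.length} characters"

    console.log 'this line prints first!'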

  • Learning Node by Shelley Powers
    2012, 1st ed, 396 pages, ISBN 1449323073

See also this great (but fairly long) list of tutorials, videos, and books at Stack Overflow.

SPA and MVC

Note that JavaScript on the server replaces all sorts of widespread approaches: PHP, ASP, and such. Even advanced web frameworks such as Rails and Django don’t play a role here. The server no longer acts as a templating system generating dynamic web pages – instead it just serves static HTML, CSS, JavaScript, and image files, and responds to requests via Ajax or WebSockets (often using JSON in both directions).

The term for this is Single-page web application, even though it’s not about staying on a single page (i.e. URL) at all costs. See this website for more background – also as PDF.

The other concepts bound to come up are MVC and MVVM. There’s an article about MVC at A List Apart. And here’s an online book with probably more than you want to know about this topic and about JavaScript design patterns in general.

In a nutshell: the model is the data in your app, the view is its presentation (i.e. while browsing), and the controller is the logic which makes changes to the model. Very (VERY!) loosely speaking, the model sits in the server, the view is the browser, and the controller is what jumps into action on the server when someone clicks, drags, or types something. This simplification completely falls apart in more advanced uses of JS.

Dialects

I am already starting to become quite a fan of CoffeeScript, Jade, and Stylus. These are pre-processors for JavaScript, HTML, and CSS, respectively. Totally optional.

Here are some CoffeeScript tutorials and a cookbook with recipes.

CoffeeScript is still JavaScript, so a good grasp of the underlying semantics is important.

It’s fairly easy to read these notations with only minimal exposure to the underlying language dialects, in my (still limited) experience. No need to use them yourself, but if you do, the above links are excellent starting points.

Just the start…

The above are really just pre-requisites to getting started. More on this topic soon, but let me just stress that good foundational understanding of JavaScript is essential. There are crazy warts in the language (which Douglas Crockford frequently points out and explains), but they’re a fact of life that we’ll just have to live with. This is what you get with a language which has now become part of every major web browser in the world.

Graphics, oh là là!

In Software on Dec 20, 2012 at 00:01

Graphs used to be made with gnuplot or RRDtool – both generated on the server, and then presented as images in the browser. This used to be called state of the art!

But that’s sooo last-century …

Then came JavaScript libraries such as Flot, which uses the HTML5 Canvas, allowing you to draw the graph in the browser. The key benefit is that these graphs can be made dynamic (updating through real-time data feeds) and interactive (so you can zoom in and show details).

But that’s sooo last-decade …

Now there is this, using the latest HTML5 capabilities and resolution-independent SVG:

Screen Shot 2012-12-18 at 22.15.15

See http://selection.datavisualization.ch/ (click through on each one to get details).

That picture doesn’t really do justice to the way some of these tools adjust dynamically and animate on change. All in the web browser. Stunning – in features and in variety!

I’ve been zooming in a bit (heh) on tools such as Rickshaw and NVD3 – both with lots of fascinating examples. Some parts are just window dressing, but the dynamics and real-time behaviour will definitely help gain more insight into the underlying datasets. Which is what all the visualisation should be about, of course.

For an interesting project using SocketStream, Flot, and DataTables, see DaisyCentral. There’s a great write-up on the Architectural Overview, and another page on graphically setting up automation rules:

Screen Shot 2012-12-18 at 22.48.27

This editor is based on jsPlumb for drawing.

Another interesting project is Dashku, based on SocketStream and Raphaël. It’s a way to build a live dashboard – the essence only became clear to me after seeing this YouTube video. As you build and adjust it in edit mode, you can keep a second view open which shows the final result. Things automatically get synced, due to SocketStream.

Now, if only I knew how to build up my fu-level and find a way into all this magic…

Getting my feet wet

In Software on Dec 19, 2012 at 00:01

It all starts with baby steps. Let me just say that it feels very awkward and humbling to stumble around in a new programming language without knowing how things should be done. Here’s the sort of gibberish I’m currently writing:

Screen Shot 2012-12-18 at 17.49.18

This must be the ugliest code I’ve ever written. Not because the language is bad, but because I’m trying to convert existing code in a hurry, without knowing how to do things properly in JavaScript / CoffeeScript. Of course it’s unreadable, but all I care about right now is getting a real-time data source up and running to develop the rest with.

I’m posting this so that one day I can look back and laugh at all this clumsiness :)

The output appears in the browser, even though all this is running on the server:

Screen Shot 2012-12-18 at 17.11.04

Ok, so now there’s a “feed” with readings coming in. But that’s just the tip of the iceberg:

  • What should the pubsub naming structure be, i.e. what are the keys / topic names?
  • Should readings be managed per value (temperature), or per device (room node)?
  • What format should this data have, since inserting a decimal point is locale-specific?
  • How to manage new values, given that previous ones can be useful to have around?
  • Are there easy choices to make w.r.t. how to store the history of all this data?
  • How to aggregate values, but more importantly perhaps: when to do this?

And that’s just incoming data. There will also need to be rules for automation and outgoing control data. Not to mention configuration settings, admin front-ends, live development, per-user settings, access rights, etc, etc, etc.

I’m not too interested yet in implementing things for real. Would rather first spend more time understanding the trade-offs – and learning JavaScript. By doodling as I’m doing now and by examining a lot of code written by others.

If you have any suggestions on what I should be looking into, let me know!

“Experience is what you get while looking for something else.” – Federico Fellini

Real-time out of the box

In Software on Dec 18, 2012 at 00:01

I recently came across SocketStream, which describes itself as “A fast, modular Node.js web framework dedicated to building single-page realtime apps”.

And indeed, it took virtually no effort to get this self-updating page in a web browser:

Screen Shot 2012-12-17 at 00.45.50

The input comes from the serial port – I just added this code:

Screen Shot 2012-12-17 at 15.48.50

That’s not JavaScript, but CoffeeScript – a dialect with a concise functional notation (and significant white-space indentation), which gets turned into JavaScript on the fly.

The above does a lot more than collect serial data: the “try” block converts the text to a binary buffer in the form of a JavaScript DataView, ready for decoding, and then publishes each packet on its corresponding channel. Just to try out some ideas…
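
That text-to-binary step is perhaps the least obvious bit – here’s a rough approximation of the idea (not the actual code shown above):

    # turn a line such as "OK 17 5 222 1" into a DataView, ready for decoding
    toDataView = (line) ->
      bytes = (parseInt(x, 10) for x in line.split(' ')[1..])
      buf = new ArrayBuffer bytes.length
      view = new DataView buf
      view.setUint8 i, b for b, i in bytes
      view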

I’m also using Jade here, a notation which gets transformed into HTML – on the fly:

Screen Shot 2012-12-17 at 15.53.56

And this is Stylus, a shorthand notation which generates CSS (yep, again on the fly):

Screen Shot 2012-12-17 at 15.54.25

All of these are completely gone once development is over: with one command, you generate a complete app which contains only pure JavaScript, HTML, and CSS files.

I’m slowly falling in love with all these notations – yeah, I know, very unprofessional!

Apart from installing SocketStream using “npm install -g socketstream”, adding the SerialPort module to the dependencies, and scratching my head for a few hours to figure out how all this machinery works, that is virtually all I had to do.

Development is blindingly fast when it comes to client side editing: just save any file and the browser(s) will automatically reload. With a text editor that saves changes on focus loss, the process becomes instant: edit and switch to the browser. Boom – updated!

Server-side changes require nodemon (as described in the faq). After that, server reload becomes equally automatic during development. Pretty amazing stuff.

The trade-off here is learning to understand these libraries and tools and playing by their rules, versus having to write a lot more yourself. But from what I’ve seen so far, SocketStream with Express, CoffeeScript, Jade, Stylus, SocketIO, Node.js, SerialPort, Redis, etc. take a staggering amount of work off my shoulders – all 100% open source.

There’s a pubsub-over-WebSockets mechanism inside SocketStream. Using Redis.

Wow. Creating responsive real-time apps hasn’t been this much fun in a long time!

It’s all about MOM

In Software on Dec 17, 2012 at 00:01

Home monitoring and home automation have some very obvious properties:

  • a bunch of sensors around the house are sending out readings
  • with actuators to control lights and appliances, driven by secure commands
  • all of this within and around the home, i.e. in a fairly confined space
  • we’d like to see past history, usually in the form of graphs
  • we want to be able to control the actuators remotely, through control panels
  • and lastly, we’d like to automate things a bit, using configurable rules

In information processing terms, this stuff is real-time, but only barely so: it’s enough if things happen within say a tenth of a second. The amount of information we have to deal with is also quite low: the entire state of a home at any point in time is probably no more than perhaps a kilobyte (although collected history will end up being a lot more).

The challenge is not the processing side of things, but the architecture: centralised or distributed, network topology for these readings and commands, how to deal with a plethora of physical interfaces and devices, and how to specify and manage the automation rules. Oh, and the user interface. The setup should also be organic, in that it allows us to grow and evolve all the aspects of our system over time.

It’s all about state and messages: the state of the home, current, sensed, and desired, and the events which change that state, in the form of incoming and outgoing messages.

What we need is MOM, i.e. Message-oriented middleware: a core which represents that state and interfaces through messages – both incoming and generated. One very clean model is to have a core process which allows some processes to “publish” messages to it and others to “subscribe” to specific changes. This mechanism is called pubsub.

Ideally, the core process should be launched once and then kept running forever, with all the features and functions added (at least initially) as separate processes, so that we can develop, add, fix, refine, and even tear down the different functions as needed without literally “bringing down the house” at every turn.

There are a couple of ways to do this, and as you may recall, I’ve been exploring the option of using ZeroMQ as the core foundation for all message exchanges. ZeroMQ bills itself as “the intelligent transport layer” and it supports pubsub as well as several other application interconnect topologies. Now, half a year later, I’m not so sure it’s really what I want. While ZeroMQ is definitely more than flexible and scalable enough, it also is fairly low-level in many ways. A lot will need to be built on top, even just to create that central core process.

Another contender which seems to be getting a lot of traction in home automation these days is MQTT, with an open source implementation of the central core called Mosquitto. In MOM terms, this is called a “broker”: a process which manages incoming message traffic from publishers by re-routing it to the proper subscribers. The model is very clean and simple: there are “channels” with hierarchical names such as perhaps “/kitchen/roomnode/temperature” to which a sensor publishes its temperature readings, and then others can subscribe to say “/+/+/temperature” to get notified of each temperature report around the house, the moment it comes in.

MQTT adds a lot of useful functionality, and optionally supports a quality-of-service (QoS) level as a way to handle messages that need reliable delivery (QoS level 0 messages use best-effort delivery, but may occasionally get dropped). The “retain” feature can hold on to the last message sent on each channel, so that when the system shuts down and comes back up or when a connection has been interrupted, a subscriber immediately learns about the last value. The “last will and testament” lets a publisher prepare a message to be sent out to a channel (not necessarily the same one) when it drops out for any reason.

All very useful, but I’m not convinced this is a good fit. In my perception, state is more central than messages in this context. State is what we model with a home monitoring and automation system, whereas messages come and go in various ways. When I look at the system, I’m first of all interested in the state of the house, and only in the second place interested in how things have changed until now or will change in the future. I’d much rather have a database as the centre of this universe. With excellent support for messages and pubsub, of course.

I’ve been looking at Redis lately, a “key-value store” which is not only small and efficient, but which also has explicit support for pubsub built in. So the model remains the same: publishers and subscribers can find each other through Redis, with wildcards to support the same concept of channels as in MQTT. But the key difference is that the central setup is now based on state: even without any publishers active, I can inspect the current temperature, switch setting, etc. – just like MQTT’s “retain”.
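
To make that concrete, here’s a minimal pubsub sketch using the node_redis client – the channel naming scheme is just something I made up:

    redis = require 'redis'

    # a client in subscribe mode can't issue other commands, so use two
    sub = redis.createClient()
    pub = redis.createClient()

    sub.psubscribe 'home.*.temperature'
    sub.on 'pmessage', (pattern, channel, message) ->
      console.log "#{channel} is now #{message}"

    # publish a new reading, and also retain it as state in the store
    pub.publish 'home.kitchen.temperature', '213'
    pub.set     'home.kitchen.temperature', '213'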

Furthermore, with a database-centric core, we automatically also have a place to store configuration settings and even logic, in the form of scripts, if needed. This approach can greatly simplify publishers and subscribers, as they no longer need local storage for configuration. Not a big deal when everything lives on a single machine, but with a central general-purpose store that is no longer a necessity. Logic can run anywhere, yet operate off the same central configuration.

The good news is that with any of the above three options, programming language choice is irrelevant: they all have numerous bindings and interfaces. In fact, because interconnections take place via sockets, there is not even a need to use C-based interface code: even the language itself can be used to handle properly-formatted packets.

I’ve set up a basic installation on the Mac, using Homebrew. The following steps are not 100% precise, but this is more or less all that’s needed on Windows, MacOSX, or Linux:

    brew install node redis
    npm install -g express
    redis-server               (starts the database server)
    express                    (creates a demo app)
    npm install connect-redis
    node app.js                (starts the demo app on port 3000)

There are many examples around the net on how to get started, such as this one, which is already a bit dated.

Let’s see where this leads to…

Ahavascript

In Software on Dec 16, 2012 at 00:01

I learned to program in C a long time ago, on a PDP11 running Unix (one of the first installations in the Netherlands). That’s over 30 years ago and guess what… that knowledge is still applicable. Back in full force on all of today’s embedded µC’s, in fact.

I’ll spare you the list of languages I learned before and after that time, but C has become what is probably the most widespread programming language ever. Today, it is the #1 implementation language, in fact. It powers the gcc toolchain, the Linux operating system, most servers and browsers, and … well, just about everything we use today.

It’s pretty useful to learn stuff which lasts… but also pretty hard to predict, alas!

Not just because switching means you have to start all over again, but because you can become really productive at programming when spending years and years (or perhaps just 10,000 hours) learning the ins and outs, learning from others, and getting really familiar with all the programming language’s idioms, quirks, tricks, and smells.

C (and in its wake C++ and Objective-C) has become irreplaceable and timeless.

Fast-forward to today and the scenery sure has changed: there are now hundreds of programming languages, and so many people programming, that lots and lots of them can thrive alongside each other within their own communities.

While researching a bit how to move forward with a couple of larger projects here at JeeLabs, I’ve spent a lot of time looking around recently, to decide on where to go next.

The web and dynamic languages are here to stay, and that inevitably leads to JavaScript. When you look at GitHub, the most used programming language is JavaScript. This may be skewed by the fact that the JavaScript community prefers GitHub, or that people make more and smaller projects, but there is no denying that it’s a very active trend:

Screen Shot 2012-12-14 at 15.53.14

In a way, JavaScript went where Java once tried to go: becoming the de-facto standard language inside the browser, i.e. on the client side of the web. But there’s something else going on: not only is it taking over the client side of things, it’s also making inroads on the server end. If you look at the most active projects, again on GitHub, you get this list:

Screen Shot 2012-12-14 at 15.57.09

There’s something called Node.js in each of these top-5 charts. That’s JavaScript on the server side. Node.js has an event-based asynchronous processing model and is based on Google’s V8 engine. It’s also phenomenally fast, due to its just-in-time compilation for x86 and ARM architectures.

And then the Aha-Erlebnis set in: JavaScript is the next C !

Think about it: it’s on all web browsers on all platforms, it’s complemented by a DOM, HTML, and CSS which bring it into an ever-richer visual world, and it’s slowly getting more and more traction on the server side of the web.

Just as with C at the time, I don’t expect the world to become mono-lingual, but I think that it is inevitable that we will see more and more developments on top of JavaScript.

With JavaScript comes a free text-based “data interchange protocol”. This is where XML tried to go, but failed – and where JSON is now taking over.

My conclusion (and prediction) is: like it or not, client-side JavaScript + JSON + server-side JavaScript is here to stay, and portable / efficient / readable enough to become acceptable for an ever-growing group of programmers. Just like C.

Node.js is implemented in C++ and can be extended in C++, which means that even special-purpose C libraries can be brought into the mix. So one way of looking at JavaScript, is as a dynamic language on top of C/C++.

I have to admit that it’s quite tempting to consider building everything in JavaScript from now on – because having the same language on all sides of a network configuration will probably make things a lot simpler. Actually, I’m also tempted to use pre-processors such as CoffeeScript, Jade, and Stylus, but these are really just optional conveniences (or gimmicks?) around the basic JavaScript, HTML, and CSS trio, respectively.

It’s easy to dismiss JavaScript as yet another fad. But doing so by ignorance would be a mistake – see the Blub Paradox by Paul Graham. Features such as list comprehensions are neat tricks, but easily worked around. Prototypal inheritance and lexical closures on the other hand, are profound concepts. Closures in combination with asynchronous processing (and a form of coding called CPS) are fairly complex, but the fact that some really smart guys can create libraries using these techniques and hide it from us mere mortals means you get a lot more than a new notation and some hyped-up libraries.

I’m not trying to scare you or show off. Nor am I cherry-picking features to bring out arguments in favour of JavaScript. Several languages offer similar – and sometimes even more powerful – features. Based on conceptual power alone, I’d prefer Common Lisp or Scheme, in fact. But JavaScript is dramatically more widespread, and very active / vibrant w.r.t. what is currently being developed in it and for it.

For more about JavaScript’s strengths and weaknesses, see Douglas Crockford’s page.

So where does this leave me? Easy: a JS novice, tempted to start learning from scratch!

Tomorrow, some new considerations for middleware…