Computing stuff tied to the physical world

Gearing up for the new web

In Software on Sep 13, 2013 at 00:01

Well… new for me, anyway. What a fascinating world in motion we live in:

  • first there was the “pre-web”, with email, BBS’es, and very clunky modem links
  • then came Netscape Navigator – and the world would never be the same again
  • hey, we can make the results dynamic, let’s generate pages through CGI
  • and not just outgoing, we can have forms and page refreshes to interact!
  • we need lots of ways to generate page variations, let’s add modules to the server
  • nah, we can do better than that: let’s add PHP / Python / Ruby inside the server!
  • and so the first major web explosion of the internet was born…

It took a decade, and another decade to make fast internet access mainstream, which is where we are today. There are lots and lots and LOTS of “web frameworks” in existence now, for any programming language you care about.

The second major wave of evolution came with a whole bunch of new acronyms, such as RIA and SPA. This is very much Google’s turf – it’s the technology that powers Gmail, for example. Rich and responsive interactivity. By now, it’s going everywhere.

It was made possible by the super-trio of HTML, CSS, and JS (JavaScript). They’ve all evolved to define the structure, style, and logic of everything you see in the browser.

It’s hard to keep up, but as you know, I’ve picked two very active technologies for doing all my new software development: AngularJS and Node.js. AngularJS is by Google, and Node.js is built on Google’s V8 engine, by the way – but both are also freely available as wide-open source (here and here). They’ve changed my life :)

It’s been a steep learning curve. I don’t mean just getting something going. I mean getting comfortable with it all. Understanding the idioms and quirks (there always are), and figuring out how to debug stuff (async is nasty, especially when not using promises). Can’t say I’m in the clear yet, but the fog is lifting and the fun level is rising!

I started coding HouseMon many months ago and it’s been running here for a long time now, giving me a chance to let it all sink in. The reality is: I’ve been doing it all wrong.

That’s not necessarily a bad thing, by the way: ya gotta learn stuff by doin’, right?

Ok, so I’ve been using Events as the main decoupling mechanism in HouseMon, i.e. publish and subscribe inside the app. The beauty of this is that it lets you keep the functionality of the application highly modular. Each subsystem subscribes to what it is interested in, and publishes results when they are available. No need to consider who needs this, or even how many subscribers there will be for what gets published.

It sounds great, and in a way it is, but it’s also a bit of a loose cannon: events fire all over the place, with little insight into what is going on and when. For some apps, this is probably fine, but with fairly stable continuous processing flows, as in a home monitoring setup, it’s more chaotic than it needs to be. It all works fine “in the small”, i.e. when writing little modules (“Briqs” in HouseMon-speak) and adding minor drivers / decoders, but I constantly felt lost with respect to the big picture. And then there’s this nasty back pressure dilemma.

But now the pieces are starting to fall into place. I’ve figured out the “big” structure of the HouseMon application as well as the “little” inner organisation of it all.

It’s all about flow. Four stories coming up – stay tuned…

  1. I think I understand where this is going… Coincidentally, I read the ‘Control Flow’ article mentioned in the ‘back pressure’ comments earlier today (because of some flow-related issues of my own), so I think I know what you’re saying.

    But there’s one thing I don’t understand in this post. A few days ago you wrote that back pressure could be ignored, but now it’s back again as a nasty dilemma. So now I am wondering: what has happened in those few days that made you change your mind about back pressure? Either it’s a problem that needs to be solved, or it isn’t…

    • It’s not an issue for normal events around the house. But I also want to be able to replay log files for re-processing, and then it does need to be addressed. And for some (vague) future plans, I’d like to make sure much faster real-time events won’t break the system, even with a (s)low-power embedded Linux of some sort.

  2. I enjoy your posts. Thank you for taking the time and sharing your ideas.

    Seems for your HouseMon needs you don’t need a cloud service. I wish to get feeds from many different homes then make those available through browser and mobile. I used to run a business where we wrote our server infrastructure for eCommerce, DRM…blah, blah…the experience taught me to avoid writing server side mostly because of optimizing uptime and maintenance.

    Have you looked into any cloud based services – Xively, AWS?

    • I’m using S3 for VM backups, because that’s what TurnkeyLinux does. Have also been using a bit lately, very nice setup. But yeah, most of my needs are met with (stable) in-house hardware. Clouds come and go and can change in miraculous ways – and not just the condensed-water variety…

  3. Aha, the Asynchronous Events versus the Simple Synchronous Flow shoot-out ;-)

    In my opinion, the flow approach works best for most applications, since in many cases packets or data are always processed in a given order.

    I did build some software based on OSC (Open Sound Control) which is completely based on data flowing through inputs/outputs/transformations etc., with a generic API allowing each building block’s output to be connected to another block’s input; “the Flow”.

    The nice thing about the flow approach is that you can still implement asynchronous behaviour by adding an internal queue: if the queue is not full, the caller continues immediately (the message is processed asynchronously), but if the queue is full, the caller is blocked – automatic back pressure – until the queue is processed again.

    Now if the blocked caller has its own queue (which is no longer processed while the caller is blocked), upstream processes are automatically blocked / slowed down, giving a very robust, balanced data handling system that can absorb any burst of information. The time needed to process it only depends on the CPU power!

    These kinds of systems are easy to debug (compared to event-based systems), easy to tune (via the size of the internal buffers), and still support loosely coupled subsystems through a generic MessageIn API & internal buffers.
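    A minimal sketch of that bounded-queue idea in Node-style JavaScript (all names invented for illustration): pushing into a full queue returns a promise that only resolves once the consumer frees a slot, so a fast producer is automatically throttled to the consumer’s pace.

```javascript
// A bounded queue: push() awaits when full, giving automatic
// back pressure from a slow consumer to a fast producer.
class BoundedQueue {
  constructor(limit) {
    this.limit = limit;
    this.items = [];
    this.waiters = [];   // producers waiting for free space
  }
  async push(item) {
    while (this.items.length >= this.limit) {
      // block (asynchronously) until shift() signals a free slot
      await new Promise((resolve) => this.waiters.push(resolve));
    }
    this.items.push(item);
  }
  shift() {
    const item = this.items.shift();
    const waiter = this.waiters.shift();
    if (waiter) waiter();   // wake one blocked producer
    return item;
  }
}
```

    Chain one of these between each pair of processing stages and a burst at the input simply fills the buffers and stalls the source, exactly as described in the comment above.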

  4. I’ve used Angular in the past, and am trying React on a new app to see how it works out. The video in the post below is good. Perhaps the most interesting part is the way they get good performance – a similar concept to the EFL libs, I think…

Comments are closed.