This is the third of a four-part series on designing big apps (“big” as in not embedded, not necessarily many lines of code – on the contrary, in fact).
Because I couldn’t fit it in three parts after all.
Let’s talk about the big picture, in terms of technologies. What goes where, and such.
A decade or so ago, this would have been a decent model of how web apps work, I think:
All the pages, styles, images, and some code are fetched via HTTP requests and rendered on the server, which sits between the persistent state (files and databases) and the network-facing end.
The assumption here is that servers are big, fast, and reliable, whereas clients vary in capabilities, versions, and performance – and that network connection stability should stay out of the “essence” of the application logic.
In a way, it works well. This is the context in which database-backed websites became all the rage: not just a few files and templating to tweak the website, but the essence of the application is stored in a database, with “widgets”, “blocks”, “groups”, and “panes” sewn together by an ever-more elaborate server-side framework – Ruby on Rails, Django, Drupal, etc. WordPress and Redmine too, which I gratefully rely on for the JeeLabs sites.
But there’s something odd going on: a few days ago, the WordPress server VM that runs this daily weblog here at JeeLabs crashed with an out-of-memory error. I used to reserve 512 MB of RAM for it, but had scaled it back to 256 MB before the summer break. So apparently 256 MB is not enough to present a couple of weblog pages and images, and handle some text searches and a simple commenting system.
(We just passed 6000 comments the other day. It’s great to see y’all involved – thanks!)
Ok, so what’s a measly 512 MB, eh?
Well, to me it just doesn’t add up. Ok, so there are a few hundred MB of images by now. Total. The server is running off an SSD, so we should easily be able to serve images with, say, 25 MB of RAM. But why does WP “need” hundreds of MB of RAM to serve a few dozen MB of daily weblog posts? Total.
It doesn’t add up. Self-inflicted overhead, for what is 95% a trivial task: serving the same texts and images over and over again to visitors of this website (and automated spiders).
The Redmine project setup is even weirder: it currently consumes nearly 700 MB of RAM, yet this ought to be a trivial task that could probably be handled entirely in, say, 50 MB of RAM. A more than tenfold difference in resource consumption.
In the context of the home monitoring and automation I’d like to take a bit further, this sort of resource waste is no longer a luxury I can afford, since my aim is to run HouseMon on a very low-end Linux board (because of its low power consumption, and because it really doesn’t do much in terms of computation). Ok, so maybe we need a bit more than 640 KB, but hey… three orders of magnitude?
In this context, I think we can do better. Instead of a large server-side byte-shuffling engine, we can now do this – courtesy of modern browsers, Node.js, and AngularJS:
Let me stress this point: there is no dynamic “templating” or “page generation” on the server side. This takes place in the browser – abstracted away by AngularJS and the DOM.
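To make that concrete, here’s a minimal sketch of what such a purely static server could look like in Node.js. I’m assuming Express here just for brevity (the actual setup could use anything that serves files):

```js
// Minimal static file server: no templating, no page generation.
// Everything the browser needs (HTML, CSS, JS, images) is served as-is.
var express = require('express');   // assumed here, just for convenience
var app = express();

// serve the client-side app and all other assets straight from ./public
app.use(express.static(__dirname + '/public'));

app.listen(3000, function () {
  console.log('static assets served on http://localhost:3000');
});
```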
In parallel, there’s a “Real-Time Server” running, which handles the application logic and everything that needs to be dynamic: configuration! status! measurements! commands! I’m using Node.js, and I’ve started using the fascinating LevelDB database system for all data persistence. The clever bit of LevelDB is that it doesn’t just fetch and store data – it actually lets you keep flows running, with changes streamed in and out of it (see the level-livefeed and multilevel projects).
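For illustration, here’s roughly what storing and streaming readings could look like with the level package in Node.js. The key names and the ./data location are made up for this sketch:

```js
var level = require('level');

// one on-disk key/value store for all persistent state
var db = level('./data', { valueEncoding: 'json' });

// store a measurement under a (hypothetical) time-based key
db.put('reading:1377000000:outside', { temp: 21.5 }, function (err) {
  if (err) console.error('put failed', err);
});

// stream all readings back out, in key order
db.createReadStream({ gte: 'reading:', lte: 'reading:\xff' })
  .on('data', function (item) {
    console.log(item.key, '->', item.value);
  });
```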
So instead of copying data in and out of a persistent data store, this also becomes a way of staying up to date with all changes on that data. The fusion of a database and pubsub.
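Here’s a toy version of that idea, hand-rolled with an EventEmitter just to show the concept (the real level-livefeed and multilevel packages do this far more thoroughly):

```js
var EventEmitter = require('events').EventEmitter;

// wrap a LevelDB instance so every write is also broadcast to listeners
function liveStore (db) {
  var emitter = new EventEmitter();
  return {
    on: emitter.on.bind(emitter),
    put: function (key, value, cb) {
      db.put(key, value, function (err) {
        if (!err) emitter.emit('change', { key: key, value: value });
        if (cb) cb(err);
      });
    }
  };
}

// any part of the app can now stay up to date as changes happen
var store = liveStore(db);   // db from the previous sketch
store.on('change', function (change) {
  console.log('changed:', change.key, change.value);
});
store.put('reading:1377000042:indoor', { temp: 19.8 });
```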
Lots of bits that need to work together. So far, I’m really delighted by the flexibility this offers, and the fact that the server side is getting simpler: just the autonomous data acquisition, a scalable database, a responsive WebSocket server to make the clients (i.e. browsers) come alive, and everything else served as static data. State is confined to the essence of the app. The rest doesn’t change (other than during development of course), needs no big backup solution, and the whole kaboodle should easily fit on a Raspberry Pi – maybe something even leaner than that, one day.
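As for making the browsers come alive: a bare-bones WebSocket push could look something like this, using the ws package and building on the liveStore sketch above (just one option among several; the message format is invented for this example):

```js
var WebSocket = require('ws');

// push every database change to all connected browsers
var wss = new WebSocket.Server({ port: 3333 });

wss.on('connection', function (client) {
  // a new browser connected: it can now receive live updates
  client.send(JSON.stringify({ type: 'hello', msg: 'live feed ready' }));
});

// hypothetical glue: forward changes from the liveStore to every client
store.on('change', function (change) {
  var msg = JSON.stringify({ type: 'change', data: change });
  wss.clients.forEach(function (client) {
    if (client.readyState === WebSocket.OPEN) client.send(msg);
  });
});
```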
The last instalment tomorrow will be about how HouseMon is structured internally.
PS. I’m not saying this is the only way to go. Just glad to have found a working concoction.