Computing stuff tied to the physical world

TwitLEDs robot, part 2

In AVR, Hardware, Software on Jun 23, 2010 at 00:01

Yesterday’s post introduced the robot Myra and I have been working on. Here’s the first part we built:

[photo: DSC 1725]

It’s basically a backplane for the LED blinker component. Or to put it differently: a simple persistence-of-vision (POV) unit, using a JeeNode and an Output Plug to drive a few LEDs. Only the Output Plug was soldered in permanently. The removable JeeNode makes it easy to program and re-use, and the removable LEDs allow trying out different units. This turned out to be important, because I only had a few green LEDs when starting out, and no idea yet as to what sort of LEDs would give the best POV results later on.

Myra did all the soldering. Here are the two LED mounts we ended up with:

[photo: DSC 1745]

The one on the left is the super-duper LED concoction we built as the final version. The one on the right was great for initial testing.

Everything is held together with rubber bands, zip ties, tape, and ample amounts of hot glue (applied once things were verified to work!) – hacking at its best, clearly:

[photo: DSC 1742]

Here’s the LED blinker with the final LED strip, side view:

[photo: DSC 1743]

Side view close-up – with the foam board cover:

[photo: DSC 1746]

Bottom view:

[photo: DSC 1744]

Seven blue LEDs, ready to shine very brightly, all controlled by the JeeNode.

The software started out very simple, of course. Things like this, just to make sure it all works:

[screen shot: simple LED test sketch]
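
In spirit, it was something like this minimal blink test (assuming the Output Plug on port 1, with its MCP23008 expander at I2C address 0x26 – the actual address depends on the plug’s solder jumpers):

    // Minimal "does it blink?" test for the Output Plug LEDs.
    // Assumed: plug on port 1, MCP23008 expander at I2C address 0x26.
    #include <Ports.h>
    #include <RF12.h> // needed to avoid a linker error :(

    PortI2C myBus (1);
    DeviceI2C expander (myBus, 0x26);

    // write one register of the MCP23008 port expander
    static void exp_write (byte reg, byte value) {
        expander.send();
        expander.write(reg);
        expander.write(value);
        expander.stop();
    }

    void setup () {
        exp_write(0x00, 0);    // IODIR register: all 8 pins are outputs
    }

    void loop () {
        exp_write(0x09, 0xFF); // GPIO register: all LEDs on
        delay(500);
        exp_write(0x09, 0x00); // GPIO register: all LEDs off
        delay(500);
    }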

This is the main part of what is more or less the final twitLEDs.pde sketch:

[screen shot: main part of the twitLEDs.pde sketch]
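
In outline – a sketch of the idea, not the verbatim code – it waits for a packet from the RF12 wireless driver and then paints each ASCII character as five font columns plus a blank column on the LEDs. The node ID, frequency band, and net group shown here are arbitrary:

    // Outline of the POV logic: receive a text packet via the RF12
    // driver, then flash each character as 5 font columns on the LEDs.
    #include <Ports.h>
    #include <RF12.h>
    #include <avr/pgmspace.h>

    extern const byte font5x7[][5] PROGMEM; // ASCII-to-columns table, excerpted below

    PortI2C myBus (1);
    DeviceI2C expander (myBus, 0x26);

    // write one register of the MCP23008 port expander
    static void exp_write (byte reg, byte value) {
        expander.send();
        expander.write(reg);
        expander.write(value);
        expander.stop();
    }

    static void showColumn (byte bits) {
        exp_write(0x09, bits); // put one column pattern on the LEDs
        delay(2);              // column time, tuned to the driving speed
    }

    static void showChar (char c) {
        if (c < ' ' || c > '~')
            c = ' '; // anything outside the font table becomes a space
        for (byte i = 0; i < 5; ++i)
            showColumn(pgm_read_byte(&font5x7[c - ' '][i]));
        showColumn(0); // one blank column between characters
    }

    void setup () {
        rf12_initialize(17, RF12_868MHZ, 5); // node ID, band, net group
        exp_write(0x00, 0); // all expander pins set up as outputs
    }

    void loop () {
        if (rf12_recvDone() && rf12_crc == 0) {
            for (byte i = 0; i < rf12_len; ++i)
                showChar((char) rf12_data[i]);
            exp_write(0x09, 0); // LEDs off until the next message
        }
    }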

I found a suitable font table by googling around a bit. This is needed to go from ASCII characters to dots-in-a-readable-pattern. No room for Unicode (don’t laugh: some tweets are in Japanese and Chinese, and they won’t show properly).
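
The format of such a table is simple: five bytes per printable ASCII character, one byte per column, with bit 0 driving the top LED. Here’s an excerpt to show the layout – the glyph values are from a common public-domain 5×7 font, not necessarily the exact ones I ended up with:

    // 5x7 font excerpt: five column bytes per character, bit 0 = top LED.
    const byte font5x7[][5] PROGMEM = {
        { 0x00, 0x00, 0x00, 0x00, 0x00 }, // ' '
        // ... entries for '!' through '@' ...
        { 0x7E, 0x11, 0x11, 0x11, 0x7E }, // 'A'
        { 0x7F, 0x49, 0x49, 0x49, 0x36 }, // 'B'
        // ... and so on, up to '~'
    };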

The amazing bit is that everything worked essentially on the first go. It blinked! But did it blink in the proper pattern? Our first test consisted of Myra taking a long-exposure shot while I waved this thing around in the air – with the lights off. Liesbeth tracked progress through all the shrieks and laughs… but from a safe distance :)

[photo: DSC 1793]

Yippie. It really works!

Tomorrow: driving around without bumping into things.

  1. Would it be an option to do the rendering on the PC (which has enough space and computing power for a Unicode font) and send image data down to the JeeNode instead of ASCII?

    • Yes, that would solve it. Right now, it would be too limiting though, since I’ve simplified the code to only send one packet of up to 60 bytes per message.

  2. I’m wondering the opposite… Would it be possible to remove the PC/Mac completely? Use a JeeNode with an Ethernet board to retrieve the tweets and send them over to the POV buggy.

    Obviously nobody will have a Twitter API for use with an ATmega, but it shouldn’t be too hard to reverse-engineer what you need and scrape the text from an HTTP request…

    I’ll leave the comment about using 8 LEDs and getting proper descending characters for another day ;-)

    • A JN + Ether Card can be used, but keep in mind that I haven’t worked out all the details for client-side use of the Ether Card yet: MAC/IP address resolution, gateways, and DNS …

  3. Your construction inspired me to dig out an old thermoprinter mechanism that has no ‘intelligence’, only 8 ‘hotspot’ wires and a ‘step head’ wire. My question is: could you link to the ASCII to ‘dot column’ table you used?
