Move over, John von Neumann

Computers do their work one step at a time. Over, and over, and over, and over again. The breakthrough came when not only the data they manipulate but also the instructions that drive their actions were stored in memory. You take data from memory, you “process” them in some way, and at times you also put something back. Most of it used to be about doing arithmetic – nowadays it’s more about pushing bits around in interesting ways than true “computation”. This design is called the Von Neumann architecture:

[Diagram: the Von Neumann architecture]

Programming is just another word for creating a set of instructions to drive the above stuff. Sure, we have threading and multi-core CPUs now. But it’s still a sequential process.
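To make that sequential nature concrete, here is a minimal Python sketch of the fetch-and-execute loop such a machine runs – the three-instruction “machine” below is invented purely for illustration and matches no real CPU:

```python
# Minimal sketch of a Von Neumann-style machine: instructions and data share
# one memory, and a single loop fetches and executes them one at a time.

memory = [
    ("LOAD", 10),        # acc = memory[10]
    ("ADD", 11),         # acc += memory[11]
    ("STORE", 12),       # memory[12] = acc
    ("HALT", 0),
] + [0] * 6 + [3, 4, 0]  # addresses 10, 11, 12 hold the data

pc, acc = 0, 0
while True:
    op, addr = memory[pc]      # fetch the next instruction
    pc += 1
    if op == "LOAD":           # decode + execute, one step at a time
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[12])              # -> 7
```

Everything a µC does, no matter how sophisticated, ultimately goes through a loop of this shape.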

Inside, it’s a whole different story: logic gates and flip-flops are what make these machines tick. And they in turn are built from transistors – lots of ’em. Millions, billions, even. But that’s still all they are: a lot of electronic components, wired together in just the right way to make them perform the tasks in the diagram above. All combined into a single chip.
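To give a taste of how little it takes at the bottom, here is a tiny simulated-in-Python sketch (not real hardware, just logic) of a NAND gate, and of two NAND gates cross-coupled into an SR latch – about the simplest flip-flop there is:

```python
# A NAND gate is a universal building block: every other gate, and hence the
# whole machine, can be built from enough of them.

def nand(a, b):
    return 0 if (a and b) else 1

def sr_latch(s, r, q, nq):
    """Two cross-coupled NANDs; inputs are active-low. Iterate a few times
    so the simulated feedback loop settles, as real gates would."""
    for _ in range(4):
        q, nq = nand(s, nq), nand(r, q)
    return q, nq

q, nq = sr_latch(0, 1, 0, 1)   # pulse "set" low -> output goes to 1
print(q, nq)                   # 1 0
q, nq = sr_latch(1, 1, q, nq)  # both inputs high -> the latch remembers
print(q, nq)                   # still 1 0
```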

In the 1980s, before today’s levels of integration, the heart of a computer looked like this:

[Photo: a Z80 CPU chip]

Now, it’s all one chip the size of a fingernail – an STM32F103 µC is probably two orders of magnitude faster, more powerful, and more complex. But the architecture is the same.

There is no inherent reason why logic gates have to be built from transistors. Vacuum tubes (“valves”) and even relays were originally the components used to implement logic. There are also implementations with fluids, and a very illustrative one built with marbles – see this 5-min DigiComp II video from Evil Mad Scientist for a huge replica. It can add, subtract, multiply, and divide, albeit only with very small numbers.

The “Field-Programmable Gate Array” (FPGA) is just what the name says: an array of lots of gates, plus a mechanism to wire and re-wire them at will, in the field, i.e. any time.

Think of an FPGA as a room full of simple (SSI) logic chips, with the wiring between them defined by sending the chip a bunch of bits. Basically a playground for tinkerers – no soldering iron required!

One way to visualise this is to think of the FPGA as having two layers: the lower layer is chock-full of logic gates (organised at a slightly higher level as small but very generic “Logic Cells”), while the upper layer is one big chunk of static RAM – with each bit controlling the presence or absence of a wire in a gigantic set of interconnects. To avoid a combinatorial explosion, everything is organised in specific ways, with buses and matrix connections.
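The details differ per vendor and family, but those generic “Logic Cells” usually boil down to small look-up tables (LUTs): a handful of the configuration bits form a truth table, and the cell’s inputs merely select one entry of it. Here is a rough Python sketch of the idea, using a made-up 4-input cell:

```python
# One "Logic Cell" modelled as a 4-input look-up table (LUT): 16 configuration
# bits (one SRAM cell each) hold a truth table, and the four inputs select
# which of those bits appears on the output.

def lut4(config_bits, a, b, c, d):
    index = (d << 3) | (c << 2) | (b << 1) | a
    return config_bits[index]

# Loading a different 16-bit pattern "rewires" the cell into another function.
xor_ab   = [(i & 1) ^ ((i >> 1) & 1) for i in range(16)]   # out = a xor b
and_abcd = [1 if i == 0b1111 else 0 for i in range(16)]    # out = a & b & c & d

print(lut4(xor_ab,   1, 0, 0, 0))   # 1
print(lut4(and_abcd, 1, 1, 1, 1))   # 1
print(lut4(and_abcd, 1, 1, 0, 1))   # 0
```

The remaining configuration bits do the same sort of thing for the routing: each one enables or disables a connection between cells.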

A more detailed description can be found at ca.olin.edu – here’s a diagram from that page:

[Diagram: FPGA internal structure, from the ca.olin.edu page]

Conceptually, you can think of an FPGA as being able to construct just about anything you like from a huge set of simple logic elements: a mind-boggling supply of (electronic) Lego.

As you can imagine, FPGAs can be used to construct a computer. Not just the CPU, but all its interconnect logic, and to a certain extent even its RAM and ROM memory. For more substantial memory needs, we can hook up one or more dedicated memory chips, and drive them from the FPGA. This is where all those pins come in: think of them as a huge set of freely-usable I/O pins for whatever you like, then use the FPGA’s internal wiring to “hook them up”.

Implementing a CPU inside an FPGA as an explicit logic design is called a soft processor or “soft core”. There are lots of them – proprietary and closely-guarded ones, but also quite a few available as open source (see the “Processor” tab on the opencores.org website, for example).

A soft core is considerably less efficient in terms of silicon use (and power consumption) than a dedicated chip with all its connections hard-wired, and no extra components for all sorts of unused signal routing alternatives. But keep in mind that a soft core will run as fast as any µC – it is a logic design; it does not add an extra layer of interpretation.

So what’s the difference? Why go through all that trouble?

The key difference is parallelism. With a Von Neumann architecture, everything has to be processed in sequence. Doing more work means either slowing down, or switching to a faster (more expensive, more power-hungry) µC. With an FPGA, parallelism is free.

This has profound implications. Where a µC can easily get overloaded, and has to have a large set of built-in hardware peripherals to keep serial I/O, I2C, SPI, PWM, etc. going, the FPGA can do all that on the side, because everything happens “on the side”, so to speak.
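To put that contrast into (necessarily sequential) code: the inner loop below is precisely the part that does not exist inside an FPGA, where each block is its own patch of silicon and advances on every clock edge no matter what the rest is doing. This is only a conceptual sketch, with made-up Counter blocks standing in for peripherals:

```python
# Simulating several independent logic blocks. In software they have to be
# serviced one after another; in an FPGA they would all switch simultaneously.

class Counter:
    """Stand-in for some independent block, e.g. a baud-rate divider."""
    def __init__(self, limit):
        self.limit, self.value, self.ticks = limit, 0, 0

    def clock(self):                  # what happens on one clock edge
        self.value = (self.value + 1) % self.limit
        self.ticks += 1

blocks = [Counter(3), Counter(5), Counter(7)]   # "UART", "PWM", "SPI", ...

for _ in range(1000):          # 1000 clock cycles
    for b in blocks:           # <- this loop is the µC's burden: adding a
        b.clock()              #    block makes every cycle take longer
print([b.ticks for b in blocks])    # [1000, 1000, 1000]
```

In hardware, adding another block costs logic cells, not time.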

You can think of an FPGA as “raw material”. Consider that room full of simple logic again: many of the chips will have been used up to build a soft core, perhaps. Say you want to add a VGA video controller to your setup. You come up with (or find) a design, collect some of the remaining chips, build the circuit, and hook it up to that already-working soft core. With an FPGA, it’s the same: the unused logic can be turned into the VGA circuit.

This does not in any way slow down the original circuit, since you’ve conceptually added more chips to your design. Except that all of them were already inside the FPGA, of course.

With an FPGA, the number of logic elements is the limiting factor. Large FPGAs can easily implement multiple soft cores, if that’s what you want. Or N serial interfaces, or video in/out of various kinds, or audio, or USB, or Ethernet. Or all of that at the same time (once you have conditioned the I/O signals into logic levels, since most FPGAs are digital-only).

Software has proven to be immensely flexible. But its complexity is sequential – more work takes more time, since it all has to be processed step by step. The race has always been for faster clocks.

Hardware is about throwing enough silicon at a problem. Which is exactly what FPGAs do – they simply do it in such a way that all the “raw” logic + interconnects sit inside a single chip.