Let’s try it with the same test data, i.e. the following text (all on one line):
Here is the output:
You can see the pieces being separated, and all the different parts of the message being decoded properly. One convenient feature is that the asString() and asNumber() calls can be mixed at will. So strings that come in but are actually numeric can be decoded as numbers, and numbers can be extracted as strings. For binary data, you can get at the exact string length, even when there are 0-bytes in the data.
The library takes care of all string length manipulations, zero-byte termination (which is not in the encoded data!), and buffer management. The incoming data is in fact not copied literally to the buffer, but stored in a convenient way for subsequent use. This is done in such a way that the amount of buffer space needed never exceeds the size of the incoming data.
The general usage guideline for this decoder is:
- while data comes in, pass it to the decoder
- when the decoder says it’s ready, switch to getting the data out:
  - call nextToken() to find out the type of the next data item
  - if it’s a string or a number, pull out that value as needed
  - the last token in the buffer will always be T_END
- reset the decoder to prepare for the next incoming data
Note that dict and list nesting is decoded, but there is no processing of their contents – you merely get told where each dict or list starts and where it ends, and this can happen in a nested fashion.
Is it primitive? Yes, quite… but it works!
But the RAM overhead is just the buffer plus a few extra bytes (8 right now), and the code weighs in at no more than 2 to 3 KB. So this should still fit fairly comfortably in an 8-bit ATmega (or perhaps even an ATtiny).
Note that the maximum buffer size (and hence packet size) is about 250 bytes. But that’s just this implementation – the Bencode format itself places no limits on string length or numeric precision. That’s right: none; you could use Bencode for terabytes if you wanted to.
PS – It’s worth pointing out that this decoder is driven as a push process, but that the actual extraction of the different items is a pull mechanism: once a packet has been received, you can call nextToken() as needed, without blocking. Hence the for loop.