More Ramblings from a Los Angeles Programmer

August 14, 2021

How I built some magic wands

Filed under: coding, daily life, technology, Uncategorized — Josh DeWald @ 2:21 pm

Note: The bulk of this was written about two years ago. I realized I had never really posted it anywhere, so I have updated it in case others are interested in building something similar. I provide a link to the code I had on disk, but it may not completely work as-is. In particular, I’m pretty sure that the Wiimote software isn’t able to work on the most recent versions of MacOS.

A little bit of glue goes a long way.

A little less than four years ago my wife approached me about possibly building “working” Harry Potter-style magic wands for my daughter’s eleventh birthday party. I no longer remember exactly how I responded, but it was probably something along the lines of: “Uhhh… maybe?”

It turns out I did manage to make something that I thought was pretty neat. More importantly, my daughter and her friends seemed to enjoy it. Most importantly, my wife liked it.

This is the journey, as I recall it, to make magic wands that actually work on the cheap. Feel free to jump straight to the end for directions, source code, and anything else I can think of to hopefully help you make your own.

This project was really broken into four sub-projects:

  1. The “wands” themselves
  2. A mechanism to recognize “spell” gestures
  3. A mechanism to perform actions based on recognized “spells”
  4. Doing stuff with the wands

Phase 1a – Prototype wands

The sub-project of the wand itself had two mini-tasks within it:

  1. What would be the primary technology for the wands?
  2. What would be the method to “see” the wands?

Is it possible? What would a wand be?

At the time that I was asked to do this, the Harry Potter world was already popular and had working magic wands. So it was certainly possible. Now to figure out a way to somewhat reproduce that. 

An early thought I had was that I already had something in the house which I could wave around and have its actions reflected on screen: a WiiMote! I figured that was likely sending out some IR signals to the sensor bar which was feeding into the Wii. 

But I was wondering to myself how I would reproduce the sensor bar. Turns out, I didn’t need to! The “sensor bar” of a Wii is nothing more than a couple of spaced-out IR LEDs which the WiiMote detects.

Some people have even replaced their sensor bars with candles, which strikes me as somewhat of a fire hazard. 

This meant that the WiiMote actually had the smarts to detect the IR light and ship that information over to the Wii to act on. Wikipedia of course has a great summary of the capabilities of the Wiimote.

If the WiiMote were made stationary and the source of IR moving, we would have, at least in principle, the very beginnings of the capability to “wave a thing around and have it do stuff.”

I did some searching around to see if this was crazy and found out that people had made sweet IR + WiiMote digital whiteboards.

I had no desire to spend a bunch of money trying to figure out what I could make, so I thought to myself: Do I have a means of generating IR signals with something that I can move around? Yes I did. Like most people, I had a few TV remotes around. 

Now that I had an idea for what the wand would be, I needed a way to implement something with the computers and hardware I had available. The only development machine I had access to was a Macbook, so I began hunting for how to attach a WiiMote to a Mac via Bluetooth to receive those sweet, sweet coordinates. 

I landed on WJoy, which was a set of drivers and an application for connecting the WiiMote for use as a gamepad. More importantly, it also included source code for an application framework that can be integrated into your own application.

At the time of this writing, the binary for WJoy is no longer downloadable as the developers have removed it due to security constraints for low level driver access in new versions of MacOS. I believe that in order to run it in anything past Sierra, it is necessary to remove some security restrictions to enable the drivers to work. I would never recommend you do something to place your machine at risk of vulnerabilities. I’ve since come across DarwiinRemote, but have not tried it. It may have the same driver signing issues. 

I was a bit rusty, but I managed to get a simple Cocoa application going in Objective-C (it turns out now Swift is the thing…) which could pair with a WiiMote, collect x,y positions, and draw those on the screen. The “resolution” of the points is 1024×768, and in theory four wands could be detected at once. I was pretty much done!
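For a sense of the data involved, here is a hypothetical Swift sketch (WJoy’s actual API is Objective-C and looks nothing like this): each report carries up to four tracked IR points in a 1024×768 space, which get mirrored and scaled into the view. You’ll see the same (1024 - x) mirroring in the rendering code later.

import CoreGraphics

// Hypothetical sketch; WJoy's real API is Objective-C and differs.
// The WiiMote's IR camera reports up to four tracked points, each
// within a 1024×768 coordinate space.
struct IRReport {
    let points: [CGPoint] // at most 4 entries; x in 0..<1024, y in 0..<768
}

// Map a raw camera point into a view of the given size. The x axis is
// mirrored because the camera faces the wand, so it sees the motion
// flipped left-to-right.
func viewPoint(for raw: CGPoint, in viewSize: CGSize) -> CGPoint {
    let xFact = viewSize.width / 1024
    let yFact = viewSize.height / 768
    return CGPoint(x: (1024 - raw.x) * xFact, y: raw.y * yFact)
}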

Phase 2a – “Spell” recognition

The second major block of work was being able to recognize “spells”, which presented major questions:

  1. How would I “recognize” spells? 
  2. How do I convert recognized spells into a desired action in the real world?
  3. How do I convert desired actions into actual actions?

Having a wand and a means of seeing it move, I needed to move on to figuring out how to translate wand motions into a recognized “spell.”

Idea 1 – Handwriting?

The first thought I had was that perhaps spells could be treated as if they were letters or handwriting. That made me think of computer vision or OCR. I spent a couple of days reading the documentation for the OpenCV project. This was a dead end for me, as the learning curve seemed high and it was significantly more powerful than what I thought I actually needed. Had I gone with using a camera to record the wand moving, I believe that would have been a more appropriate solution.

Idea 2 – “Mouse” gestures

My next idea was to think of the points being received from the WiiMote as if they were mouse gestures. In hindsight (as always) this was the more obvious solution. I was thinking perhaps I could implement (or find) a custom gesture recognizer for Cocoa. I hunted around for built-in Cocoa frameworks or Objective-C libraries that would take arbitrary points and convert them to a recognized “gesture” which could then be sent along to the next phase in the pipeline.

I eventually found the PennyPincher algorithm by Eugene Taranta and Joseph LaViola, which was designed for very fast recognition of gestures against user-defined “templates”. Remarkably, the algorithm operates on a very small number of re-sampled points from the templates (and user input). Even better, there existed an MIT-licensed Swift-based UIGestureRecognizer implementation of PennyPincher. It was for iOS rather than MacOS, but I could work with that. The implementation was even submitted to Hacker News, but it doesn’t appear to have made it to the front page.

I downloaded the framework and sort of shoved it into my application. I opted not to use the GestureRecognizer portion of the code and instead integrated directly with the implementation of the raw PennyPincher template recognition. A simple mode was added where I could draw something on the screen with my “wand” (still a TV remote) and then give that a name (e.g. “alohomora”). This tuple of (name, points) is passed to the PennyPincher library to create a “template” which it hands back.

Recognizing a spell is just taking the points received, passing the list of “templates” to the PennyPincher library, and asking it to hand back the name of the template which was the best match. The important bit here is that there can be multiple templates with the same name, but I have found that there is often only a need to “train” a couple of variants of each spell.
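In sketch form (these names are mine, not the library’s actual API), the train/recognize flow looks something like this. The scoring function here is a stand-in for PennyPincher’s similarity measure, which I dig into in the appendix at the end:

import CoreGraphics

// Sketch of the train/recognize flow. These names are mine, not the
// library's API; several templates may share one spell name.
struct SpellTemplate {
    let name: String
    let points: [CGPoint]
}

var templates: [SpellTemplate] = []

// "Training" mode: draw a shape on screen, give it a name, keep it.
func train(name: String, points: [CGPoint]) {
    templates.append(SpellTemplate(name: name, points: points))
}

// Recognition: the highest-scoring template's name wins. The score
// function is a stand-in for PennyPincher's similarity measure (see
// the appendix at the end for what that actually computes).
func recognize(points: [CGPoint], score: ([CGPoint], SpellTemplate) -> Double) -> String? {
    templates.max { score(points, $0) < score(points, $1) }?.name
}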

And so in April of 2018 I had a basic prototype going where I could point the remote control at the WiiMote, wave it around in some appropriate shapes, click a button, and have the program display the name of the recognized spell. And now we’re pretty much done!

Apparently I actually believed that too, as it was more than a year before I picked back up where I left off, with six weeks until my deadline. 

Phase 1b – More better wands

As my brain got back into the game of building the wands for the party, it became clear that having kids wave around a TV remote would be less-than-impressive and there was no way it would pass the Wife Acceptance Test. 

My wife and her friend had plans for what the wands would look like, with the major sources of ideas being some existing LED-based wands from Vintage Kitty and an Instructable by “mostlyglue”. The directions for either of those could likely be followed, just replacing the colored LED with an IR LED.

NOTE: I am more of a software than hardware guy, so I am merely presenting what I ended up building. There are possibly all sorts of things I did wrong here, but it did end up working.

But for reference, the actual wands we created were based on (these are not affiliate links):

  1. A skinny dowel (we grabbed some packages of 10 from Michael’s)
  2. 5mm IR LED (I used these “Super-bright” ones from Adafruit)
  3. A 1.5V “hearing aid” battery (I purchased a 24 pack of LR44 style from Amazon)
  4. A small tactile switch (12mm square from Adafruit)
  5. Wires (I bought this set of 22 AWG spools from Adafruit)
  6. Solder – I’m going to admit I was pretty half-assed here and only soldered 30% of the wands. I had not soldered previously, so was not very confident in doing it right.
  7. Electrical tape – Wrapped around the wood as a base, and also around all of the electronic bits
  8. Hot glue – This came later, when some post-clay wands stopped working, which I believe was due to moisture creating electrical issues. One of the links above suggested using hot glue at the place where the connections were made to protect them. Appeared to solve the problem.

I originally didn’t want to even have a switch so it would seem more magical, but having 17 constantly-on IR LEDs waving around would have created less-than-ideal tracking conditions.

The circuit is dead simple (no resistor was needed as the battery I used essentially matched the voltage of the LED). 
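(A back-of-the-envelope check, using typical numbers rather than anything I measured: a 5mm IR LED usually has a forward voltage somewhere around 1.2–1.6V, so a 1.5V coin cell sits essentially at the LED’s forward voltage, and the cell’s own internal resistance limits the current. With a higher-voltage supply you would size a series resistor roughly as R = (V_supply − V_forward) / I_LED.)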

Here are some before and after images of the wands:

One thing that should be clear from the images is that these wands did not have replaceable batteries and were really “one-time” use during the party. The intent was that the wands themselves became take-home party favors, just non-functional once the batteries died. Some of the above links use replaceable batteries. 

Phase 3 – Connecting to the Real World

Drawing some pictures by waving around a stick is kind of neat, but the actual ask from my lovely party planner was for the wands to actually do something. From the beginning I assumed I would use either a Raspberry Pi or an Arduino. The full extent of my knowledge was that the Raspberry Pi was a very small form-factor computer and the Arduino was a programmable processor that you could attach sensors to. 

I thought about it a bit and figured my essential requirements were something that I could send a signal to in some form which would then translate that signal to powering on one or more devices. This seemed more appropriate for the Arduino to handle. 

Arduino

I went to my local Fry’s a couple of times and ended up getting the TinyDuino Arduino-compatible board. Specifically, the coin-cell Starter Kit (for what I built, the $30 Basic would have sufficed). At 3V it runs at 4MHz, but powered over USB it runs at 8MHz. Clearly not doing any major processing, but enough for my needs (I hoped)!

The incredibly tiny form factor of the TinyDuino was enticing in case I wanted to attempt to embed the device directly in something that I wanted to control. 

I initially assumed that the Arduino part would be standalone, but for what was built for the party it was always attached over USB to the computer, so I drew power from the laptop rather than a battery.

As part of my ongoing quest in this project to glue together as much stuff as possible that Just Worked, I needed to find the simplest way possible to tell the Arduino to Do Something once a gesture was recognized. I briefly explored using the Wifi or Bluetooth module (both were more expensive than I wanted to spend when I wasn’t certain of the approach) but ended up using the USB connection that is normally used for flashing the Arduino with new firmware. That turns out to be a serial connection to the Arduino and can be used for communication (and power!).

Firmata

The question then was: How do I send signals over serial to the Arduino and have it respond? 

Someone else had the answer in the form of the Firmata protocol. It is literally described as “a protocol for communicating with microcontrollers from software on a computer”. Wow, that sounded exactly like what I needed! The protocol is based on the MIDI message format (often used for communicating with music keyboards).
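To give a flavor of what goes over the wire (this is the standard Firmata digital-write message as I understand it, sketched in Swift; nothing here is specific to my project), turning a pin on is just a few MIDI-style bytes:

// The standard Firmata digital-write exchange, sketched as raw bytes.
// Pin 13 lives in "port" 1 (pins 8–15) and is bit 5 of that port's mask.
let SET_PIN_MODE: UInt8 = 0xF4
let DIGITAL_MESSAGE: UInt8 = 0x90
let OUTPUT_MODE: UInt8 = 0x01

// Set pin 13 to OUTPUT.
let setMode: [UInt8] = [SET_PIN_MODE, 13, OUTPUT_MODE]

// Write port 1 with bit 5 (pin 13) high. MIDI-style, the 8-bit mask is
// split across two 7-bit data bytes.
let mask: UInt8 = 1 << 5
let digitalWrite: [UInt8] = [DIGITAL_MESSAGE | 0x01, mask & 0x7F, (mask >> 7) & 0x7F]

// These arrays are what would be written to the serial port.
print(setMode, digitalWrite)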

The Firmata firmware is included with the Arduino IDE, and I simply had to flash it from there. Almost comically easy. 

Next up: Is there a Swift or Objective-C library that can speak Firmata? 

I wasn’t able to find anything that met my needs, but I came across a NodeJS (Javascript) library called Johnny Five, which is a general-purpose robotics library that uses Firmata to communicate. It is also possible to use the lower-level libraries which Johnny Five itself depends on. Copying from their sample, you can see how simple the library is to use:

var five = require("johnny-five");
var board = new five.Board();

board.on("ready", function() {
  // Create an Led on pin 13
  var led = new five.Led(13);

  // Blink every half second
  led.blink(500);
});

Wowsa. 

Spells-over-http

I embedded code very close to that into a NodeJS express app. This was complete overkill, but time was of the essence and I did not want to devote any more time than I needed to the infrastructure.

So the app itself is just:

const app = require('express')();
const port = 3000;

var five = require("johnny-five"),
    board = new five.Board();

var led = null;
var light = null;

board.on("ready", function() {
  led = new five.Led(12);
  light = new five.Led(4);
});

app.get('/lumos', (request, response) => {
  console.log("LUMOS");
  if (light) {
    light.on();
    response.send("On!");
  } else {
    response.send("Not yet initialized");
  }
});

app.get('/alohomora', (request, response) => {
  console.log("ALOHOMORA");
  if (led) {
    led.on();
    response.send("On!");
  } else {
    response.send("Not yet initialized");
  }
});

// … and so forth for each spell

app.listen(port, 'localhost', (err) => {
  if (err) {
    return console.log('something bad happened', err)
  }
  console.log(`server is listening on ${port}`)
});

Isn’t it amazing the world we live in right now? With that dirt-simple code and easy-to-install firmware, I can use HTTP to tell the Arduino to toggle some pins.
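On the Mac side, once a spell is recognized, all the app has to do is hit the matching endpoint. The endpoint names below match the express app above, but the snippet itself is just a minimal Swift sketch, not the literal code from my app:

import Foundation

// Fire the HTTP request for a recognized spell, e.g. "lumos" or
// "alohomora". The express app above does the actual pin toggling.
func cast(spell: String) {
    guard let url = URL(string: "http://localhost:3000/\(spell)") else { return }
    URLSession.shared.dataTask(with: url) { data, _, error in
        if let error = error {
            print("spell fizzled: \(error)")
        } else if let data = data, let body = String(data: data, encoding: .utf8) {
            print("server said: \(body)") // "On!" or "Not yet initialized"
        }
    }.resume()
}

cast(spell: "lumos")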

Phase 4a – Making things happen

Alrighty! We are so close!

We’ve got wands.

We’ve got a means of detecting the wands.

We’ve got a means of recognizing wand gestures.

We’ve got a means of translating wand gestures into a desire to do something. 

Now we just need… the ability to actually do something!

This is where you can really get as creative as you want. My wife and I thought a lot about what we wanted to do and ended up landing on two interactions:

  1. Turning on and off a light 
  2. Unlocking and locking a box

I also looked into doing something with a fan and a feather but it was comically loud and I was not happy at all. 

Treasure box

In my constant effort to ride on other people’s coattails, I looked for projects where a box was opened via an Arduino. There are actually quite a few projects for it, and you could likely choose any of them. The project that I got the most final inspiration from was an RFID Lockbox on Instructables. I was already purchasing items before I realized that the actual kit was discontinued. However, it provided me enough info that I was able to sort out something using a box that my wife had already purchased at Michael’s.

Using the ideas I found from the RFID lockbox and random places on the Internet, I managed to get something working that I was reasonably pleased with, given the limited time my massive procrastination had left me. What follows is by no means a well-polished box, and again I want to make clear that this is merely one way to accomplish the task.

I landed on the following parts list (these are not affiliate links, I am only linking to show literally what I purchased):

  1. Small 12VDC push-pull solenoid – the movement on this is around ¼”. This is the “lock” and will rest right inside the strike plate, preventing upward movement until the solenoid is turned on, which pulls it back. 
  2. A strike plate for a door
  3. 12V DC power adapter – This powers the solenoid. I just picked up a random multi-voltage one from Best Buy, which I think I then lost; I happened to find something else around the house that worked.
  4. A 5V opto-isolated relay – This is necessary so that you can switch the 12V (or higher) power while triggering it from the low-voltage (and low-current) Arduino. I think there are additional modules for Arduino which might make this easier, but I wasn’t quite sure what to look for and this worked. I actually got a pack of 2, which came in handy since the light project also used a relay.
  5. A pack of “pigtail” cables which make it easier to connect the DC adapter
  6. Hook-up wires to connect everything. I mostly used the same wiring I used for the wands.
  7. Random bits of wood to hold the pieces inside the box (I had some 0.5”x2” around)
  8. Electrical tape
  9. Duct tape – Because of course

Here’s a hopefully reasonable circuit diagram (the “S” is the solenoid). The biggest trial-and-error was getting the positive and negative correct, as it wasn’t always all that intuitive to me. 

https://crcit.net/c/b0cd82ed5fa84d249596b07da4e330f3

I happened to have some small screws and plastic washers around, which I used to mount the solenoid to the board and also to separate it from the wood. The solenoid would sometimes get quite hot (I don’t know if this was due to miswiring on my side or not). The relay also had a couple layers of electrical tape underneath and was then taped to the board. This is probably not electrically sound (again, a reminder that this is just what I managed to get working, definitely not the best way).

I had some servos and a remote control for an airplane I bought — but never flew — 20 years ago. I had planned on making use of the servos but never was able to directly. Instead I cut the wiring harnesses to re-use them for my own purposes so that I could easily connect and disconnect the box. So in a way… the servos totally got used.

Lamp

It goes without saying that one of the real-world items to control would be a lamp. I knew this would most likely be the same as the treasure box, so one obvious means of turning a lamp on and off would be to splice the relay inline with the wiring of an existing lamp. However, that would have been pretty destructive to the lamp and also would not support controlling other things. Helpfully, there are many tutorials on the Internet for how to create an Arduino-powered power box. I simply followed the “Turn Any Appliance Into a Smart Device with an Arduino Controlled Power Outlet” tutorial provided by Circuit Basics (not sure who the actual author is). The only thing I will add is that I think the directions make it sound like you would use the hook-up wire on the high-voltage side, but the pictures show using wires from the surge protector cable. So follow what you see in the pictures there.

Phase 2c – Software Improvements

As is usually the case when your software meets the loving eyes of a significant other, there were some light observations and friendly suggestions for improvement. Sample dialogue after I proudly demonstrated waving the wand around and having the software successfully recognize the spell: 

Me: Voila! (paraphrasing)

SO: But I saw you hit a button

Me: That was just me telling it to figure out what I did

SO: But I saw you hit a button

Me: I’ll fix that right away

Another briefer dialogue (monologue?):

SO: Shouldn’t the spells glow or something?

Yes, they should. 

And so the software became infinitely cooler when I tweaked it slightly to be essentially edge-triggered in its spell recognition. When the app sees points coming in, it starts collecting them. When the points appear to have stopped (I think I used a 250ms delay), the software assumes the spell is finished and attempts to recognize the gesture. Voila! No more button, and actually much more in line with how real touch gestures work.
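A minimal sketch of that debounce in Swift (my actual code is Objective-C and messier, and the 250ms figure is from memory):

import Foundation
import CoreGraphics

final class SpellCollector {
    private var points: [CGPoint] = []
    private var quietTimer: Timer?

    // Called for every point streaming in from the WiiMote.
    func add(_ point: CGPoint) {
        points.append(point)
        // Restart the quiet timer; if no new point arrives within
        // 250ms, assume the spell is finished and try to recognize it.
        quietTimer?.invalidate()
        quietTimer = Timer.scheduledTimer(withTimeInterval: 0.25, repeats: false) { [weak self] _ in
            self?.finishSpell()
        }
    }

    private func finishSpell() {
        defer { points.removeAll() }
        guard points.count > 1 else { return }
        // Hand the collected points to the recognizer (not shown here).
        print("attempting to recognize a \(points.count)-point gesture")
    }
}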

For doing the more better rendering, I just made use of Bezier curves between each point, messed around with the layer compositing options available on CALayer (via the Xcode UI), and enabled a shadow setting that created a sort of “glowing” effect.

That sounds way more advanced than it really is. Honestly, I spent the most time trying to figure out how the graphics context worked. There is always a current implied context being used, but I was trying to figure out how to associate the Path with a context. The code for rendering the spell (in Objective-C, which I know makes me a dinosaur):

CGContextRef myContext = [[NSGraphicsContext currentContext] graphicsPort];

NSBezierPath *path = [[NSBezierPath alloc] init];
[path setLineWidth:3.0];
[[NSColor colorWithRed:0 green:0 blue:205 alpha:1] set];

[[self pixels] enumerateObjectsUsingBlock:^(id _Nonnull obj, NSUInteger idx, BOOL * _Nonnull stop) {
    NSPoint point = ((NSValue*)obj).pointValue;

    // (xFact, yFact) is just to scale the point into the viewport.
    // You are probably supposed to do affine transforms directly on
    // the path or something.
    NSPoint modified = NSMakePoint((1024 - point.x) * xFact, point.y * yFact);

    if (idx == 0) {
        [path moveToPoint:modified];
    } else {
        [path lineToPoint:modified];
    }
    [path moveToPoint:modified];

    // This part is just to show the individual points it's receiving,
    // which is not necessary to show the path itself.
    [path appendBezierPath:[NSBezierPath bezierPathWithRoundedRect:CGRectMake((1024 - point.x) * xFact, point.y * yFact, 5, 5) xRadius:5 yRadius:5]];
}];

// This is the bit that threw me for a while trying to determine
// *where* it was actually stroking.
[path stroke];

It works!

With about 3 days remaining we came up with a (not necessarily original) warrant for the whole wand “experience”: Charms Class. 

The “students” would come into one of the rooms, which had the software presented on a flat screen hanging on the wall. The Wiimote was semi-hidden by a stuffed owl and some other remote controls. They each had a printed spell book that my wife made (which was quite awesome). Each student would choose a couple of spells to try from the spell book, and would see it presented on screen and recognized. The “test” mode was then initiated, which would ask for a specific spell to be attempted and would let them know when they got it before presenting another spell.

Here is a video of things basically working: the software asks the student to cast wingardium leviosa, the spell is cast, and it is marked correct.

The “final” test/reward would be them going into the “room under the stairs” (a literal closet that happens to be under the stairs), which was dark. They would need to perform a spell to turn on the light so that they could see. From there they would see the (locked) box and would need to unlock it with the correct spell. Once opened, the box was filled with some red “sorcerer’s stones” which they could then add to their goody bag. A friend of ours happened to have a string of red LEDs which was super impressive when placed under the translucent red stones inside the box, creating a really cool glow.

The Code

Doing this write-up made me quite nervous about putting the code I wrote out there, as it is/was quite a mess and very much about Just Making It Work. My last experience with anything related to Mac development was writing Objective-C for iOS, so that is primarily what I used. But it appears that the world has shifted to Swift (which I thought was still just the Hot New Thing, but seems in fact to just be the standard). So the code is a curious mish-mosh of Swift and Objective-C and shoving libraries in to fit what I needed them to do.

I made an effort to rework things for this write-up so that it would be easier for others to use, modify, and rewrite as necessary.

It’s possible the above is no longer true (I wrote it about two years ago). I have put the code up on github, but it likely needs some re-work to Just Work. If there is interest, I would be happy to see what I can do to get it into a usable state if it’s unusable now. But the important bits are present.

Anyhow, here’s the code.

Closing and Future Thoughts

To support a more self-contained system, I think the aforementioned Raspberry Pi would be a good option: it could accept input over Bluetooth from the Wiimote and perform the simple operation of translating it to a gesture (PennyPincher is *intended* for fast CPU-limited calculations). In line with this, you can purchase 4-LED position sensors which could be attached to the Raspberry Pi or Arduino (however I cannot find the link for one now), which would eliminate the need for the WiiMote. There are also folks who have extracted the IR camera from the WiiMote and interfaced it with the Arduino. I am not sure what clock speed you would need to be able to run the gesture recognition, but my gut says the current Arduino devices (at least the one I bought) would not be up to it. Would love to be wrong about that! Attaching it to the Raspberry Pi would likely be feasible as well.

Appendix-of-fun A: PennyPincher

While doing this write-up I realized that I actually had no real understanding of the PennyPincher algorithm (I just knew that it translated a set of points into a template and could match an input against a list of templates, which was my only requirement). Reading up a bit on it now, it clicked why adding load/save functionality caused spells to stop working: I had thought the algorithm just stored normalized equidistant points, when in fact it stores the vectors between those points!

Sample template for “incendio” (read each line as [delta-x, delta-y]):

-0.22553337249078115,0.9742354427410935
-0.20349388626843995,0.9790762167734274
-0.28365229341319453,0.9589272008038122
-0.6666202338688895,-0.7453975206536355
-0.40049162128800736,-0.9163004208653968
-0.24587721760380032,-0.9693009820811147
-0.34067260668429067,-0.9401819903906533
0.9190902460140412,0.394047103379595
0.941793902415372,0.33619078716292156

This can be converted to gnuplot-compatible vector data by assuming the first point is (0,0) and applying the math for each line (the negation of the “x” is because of how the data is received from the Wiimote):

awk -F',' 'BEGIN { OFS=","; x=0; y=1; }\
 {print x,y,-$1,$2; x -= $1; y+= $2}' |\
 gnuplot -p -e "set datafile separator ',';\
 set terminal svg dynamic;\
 plot '-' using 1:2:3:4 with vectors filled head lw 3"

If you look at the gnuplot images below, I’ve plotted a sample of some of the trained “templates” emitted by the algorithm.

As you can see, the distance between each point is equal, which is one of the simplifying assumptions of the algorithm. And since we are just using vectors (think: SVG), the algorithm is insensitive to both translation and scale! It only cares about the “error” in angles between subsequent points. Some clever folks there.  
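To make that concrete, here is my own minimal Swift sketch of the idea (not the library’s actual code): a template stores the normalized between-point vectors of a resampled stroke, and matching just sums dot products of paired vectors.

import CoreGraphics

// Resample a stroke into n points spaced equally along its arc length
// (the same simplifying trick the $-family recognizers use).
func resample(_ points: [CGPoint], n: Int) -> [CGPoint] {
    guard points.count > 1, n > 1 else { return points }
    var total: CGFloat = 0
    for i in 1..<points.count {
        total += hypot(points[i].x - points[i - 1].x, points[i].y - points[i - 1].y)
    }
    let step = total / CGFloat(n - 1)
    var result = [points[0]]
    var accumulated: CGFloat = 0
    var prev = points[0]
    for i in 1..<points.count {
        let current = points[i]
        var d = hypot(current.x - prev.x, current.y - prev.y)
        while accumulated + d >= step && d > 0 {
            // Emit a point exactly `step` of arc length past the last one.
            let t = (step - accumulated) / d
            let q = CGPoint(x: prev.x + t * (current.x - prev.x),
                            y: prev.y + t * (current.y - prev.y))
            result.append(q)
            d -= step - accumulated
            accumulated = 0
            prev = q
        }
        accumulated += d
        prev = current
    }
    // Guard against floating-point shortfall on the final point.
    if result.count < n { result.append(points[points.count - 1]) }
    return result
}

// Reduce a resampled stroke to the normalized vectors between
// consecutive points; this is what a template stores (and why my
// load/save bug happened: these are deltas, not points).
func vectors(from points: [CGPoint], n: Int = 16) -> [CGVector] {
    let r = resample(points, n: n)
    var result: [CGVector] = []
    for i in 1..<r.count {
        let dx = r[i].x - r[i - 1].x
        let dy = r[i].y - r[i - 1].y
        let len = max(hypot(dx, dy), .leastNonzeroMagnitude)
        result.append(CGVector(dx: dx / len, dy: dy / len))
    }
    return result
}

// Similarity is the sum of dot products of paired vectors: insensitive
// to translation and scale, sensitive only to the angles between steps.
func similarity(_ a: [CGVector], _ b: [CGVector]) -> CGFloat {
    var total: CGFloat = 0
    for i in 0..<min(a.count, b.count) {
        total += a[i].dx * b[i].dx + a[i].dy * b[i].dy
    }
    return total
}

// Matching: the highest-similarity template across all spells wins.
func bestSpell(for stroke: [CGPoint], templates: [(name: String, vectors: [CGVector])]) -> String? {
    let input = vectors(from: stroke)
    return templates.max { similarity(input, $0.vectors) < similarity(input, $1.vectors) }?.name
}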

However, one quirk of the implementation that I have noticed: since it effectively pairs up points between the template and the current input, any templates with a small number of points that happen to look quite similar to the “beginning” of the input may get improperly matched if there is more variance in later points.

For example, the “lumos” spell is just straight lines going up and down, and so doesn’t have very many points compared to other, longer gestures. So there was a tendency in some situations to recognize spells as “lumos”. The resampling will only insert a single extra point between “widely spaced” points; a potential fix is to ensure that multiple in-between points get inserted during resampling so that all templates and inputs have the same number of points.
