More Ramblings from a Los Angeles Programmer

August 14, 2021

How I built some magic wands

Filed under: coding, daily life, technology, Uncategorized — Josh DeWald @ 2:21 pm

Note: The bulk of this was written about two years ago. I realized I had never really posted it anywhere, so I have updated it in case others are interested in building something similar. I provide a link to the code I had on disk, but it may not completely work as-is. In particular, I’m pretty sure that the Wiimote software no longer works on the most recent versions of MacOS.

A little bit of glue goes a long way.

A little less than four years ago my wife approached me about possibly building “working” Harry Potter-style magic wands for my daughter’s eleventh birthday party. I no longer remember exactly how I responded, but it was probably something along the lines of: “Uhhh… maybe?”

It turns out I did manage to make something that I thought was pretty neat. More importantly, my daughter and her friends seemed to enjoy it. Most importantly, my wife liked it.

This is the journey, as I recall it, to make magic wands that actually work on the cheap. Feel free to jump straight to the end for directions, source code, and anything else I can think of to hopefully help you make your own.

This project was really broken into four sub-projects:

  1. The “wands” themselves
  2. A mechanism to recognize “spell” gestures
  3. A mechanism to perform actions based on recognized “spells”
  4. Doing stuff with the wands

Phase 1a – Prototype wands

The sub-project of the wand itself had two mini-tasks within it:

  1. What would be the primary technology for the wands?
  2. What would be the method to “see” the wands?

Is it possible? What would a wand be?

At the time I was asked to do this, the Harry Potter theme parks were already popular and had working magic wands. So it was certainly possible. Now to figure out a way to somewhat reproduce that.

An early thought I had was that I already had something in the house which I could wave around and have its actions reflected on screen: a WiiMote! I figured that was likely sending out some IR signals to the sensor bar which was feeding into the Wii. 

But I was wondering to myself how I would reproduce the sensor bar. Turns out, I didn’t need to! The “sensor bar” of a Wii is nothing more than a couple of spaced-out IR LEDs which the WiiMote detects.

Some people have even replaced their sensor bars with candles, which strikes me as somewhat of a fire hazard. 

This meant that the WiiMote actually had the smarts to detect the IR light and ship that information over to the Wii to act on. Wikipedia of course has a great summary of the capabilities of the Wiimote.

If the WiiMote were made stationary and the source of IR moved around, we would at least in principle have the very beginnings of the capability to “wave a thing around and have it do stuff.”

I did some searching around to see if this was crazy and found out that people had made sweet IR + WiiMote digital whiteboards.

I had no desire to spend a bunch of money trying to figure out what I could make, so I thought to myself: Do I have a means of generating IR signals with something that I can move around? Yes I did. Like most people, I had a few TV remotes around. 

Now that I had an idea for what the wand would be, I needed a way to implement something with the computers and hardware I had available. The only development machine I had access to was a Macbook, so I began hunting for how to attach a WiiMote to a Mac via Bluetooth to receive those sweet, sweet coordinates. 

I landed on WJoy, a set of drivers and an application for connecting the WiiMote for use as a gamepad. More importantly, it also included source code for an application framework that could be integrated into your own application.

At the time of this writing, the binary for WJoy is no longer downloadable, as the developers have removed it due to security constraints around low-level driver access in new versions of MacOS. I believe that in order to run it on anything past Sierra, it is necessary to remove some security restrictions to enable the drivers to work. I would never recommend you do something to place your machine at risk of vulnerabilities. I’ve since come across DarwiinRemote, but have not tried it. It may have the same driver-signing issues.

I was a bit rusty, but I managed to get a simple Cocoa application going in Objective-C (it turns out Swift is the thing now…) which could pair with a WiiMote, collect x,y positions, and draw them on the screen. The “resolution” of the points is 1024×768, and in theory four wands can be detected at once. I was pretty much done!

Phase 2a – “Spell” recognition

The second major block of work was being able to recognize “spells”, which presented some major questions:

  1. How would I “recognize” spells? 
  2. How do I convert recognized spells into a desired action in the real world?
  3. How do I convert desired actions into actual actions?

Having a wand and a means of seeing it move, I needed to move onto figuring out how to translate wand motions into a recognized “spell.” 

Idea 1 – Handwriting?

The first thought I had was that perhaps spells could be treated as if they were letters or handwriting. That made me think of computer vision and OCR. I spent a couple of days reading the documentation for the OpenCV project. This was a dead end for me, as the learning curve seemed high and it was significantly more powerful than what I thought I actually needed. Had I gone with using a camera to record the wand moving, I believe it would have been a more appropriate solution.

Idea 2 – “Mouse” gestures

My next idea was to think of the points being received from the WiiMote as if they were mouse gestures. In hindsight (as always), this was the more obvious solution. I thought perhaps I could implement (or find) a custom gesture recognizer for Cocoa, so I hunted around for built-in Cocoa frameworks or Objective-C libraries that would take arbitrary points and convert them into a recognized “gesture” which could then be sent along to the next phase in the pipeline.

I eventually found the PennyPincher algorithm by Eugene Taranta and Joseph LaViola, which was designed for very fast recognition of gestures against user-defined “templates”. Remarkably, the algorithm operates on a very small number of re-sampled points from the templates (and user input). Even better, there existed an MIT-licensed Swift implementation of PennyPincher as a UIGestureRecognizer. It was for iOS rather than MacOS, but I could work with that. The implementation was even submitted to Hacker News, but it doesn’t appear to have made it to the front page.

I downloaded the framework and sort of shoved it into my application. I opted not to make use of the GestureRecognizer portion of the code and instead integrated directly with the implementation of the raw PennyPincher template recognition. I added a simple mode where I could draw something on the screen with my “wand” (still a TV remote) and then give that a name (e.g. “alohomora”). This tuple of (name, points) is passed to the PennyPincher library to create a “template”, which it hands back.

Recognizing a spell is then just a matter of taking the points received, passing them along with the list of “templates” to the PennyPincher library, and asking it to hand back the name of the template which was the best match. The important bit here is that there can be multiple templates with the same name, though I have found there is often only a need to “train” a couple of variants of each spell.
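Conceptually, the matching step works something like the following sketch (a JavaScript simplification of the PennyPincher idea, not the library’s actual code; all function names here are mine). Each template stores the normalized direction vectors between its resampled points, and recognition scores each template by summing the dot products of corresponding vectors:

```javascript
// Sketch of PennyPincher-style matching: templates and inputs are reduced to
// unit direction vectors between resampled points, and the best-scoring
// template wins. This is an illustration, not the library's implementation.

function toVectors(points) {
  // Convert a resampled point list into unit direction vectors.
  const vectors = [];
  for (let i = 1; i < points.length; i++) {
    const dx = points[i][0] - points[i - 1][0];
    const dy = points[i][1] - points[i - 1][1];
    const len = Math.hypot(dx, dy) || 1;
    vectors.push([dx / len, dy / len]);
  }
  return vectors;
}

function recognize(points, templates) {
  // templates: [{ name, vectors }]; points are assumed already resampled.
  const input = toVectors(points);
  let best = { name: null, score: -Infinity };
  for (const t of templates) {
    let score = 0;
    for (let i = 0; i < Math.min(input.length, t.vectors.length); i++) {
      // The dot product rewards vectors pointing in the same direction.
      score += input[i][0] * t.vectors[i][0] + input[i][1] * t.vectors[i][1];
    }
    if (score > best.score) best = { name: t.name, score };
  }
  return best.name;
}
```

Because only direction vectors are compared, the match is insensitive to where on screen the spell was drawn and how big it was, which is exactly what you want from a wand-waving kid.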

And so in April of 2018 I had a basic prototype going where I could point the remote control at the WiiMote, wave it around in some appropriate shapes, click a button, and have the program display the name of the recognized spell. And now we’re pretty much done!

Apparently I actually believed that too, as it was more than a year before I picked back up where I left off, with six weeks until my deadline. 

Phase 1b – More better wands

As my brain got back into the game of building the wands for the party, it became clear that having kids wave around a TV remote would be less-than-impressive and there was no way it would pass the Wife Acceptance Test. 

My wife and her friend had plans for what the wands would look like, with the major sources of ideas being some existing LED-based wands from Vintage Kitty and an Instructable by “mostlyglue”. The directions for either of those could likely be followed, just replacing the colored LED with an IR LED.

NOTE: I am more of a software than a hardware guy, so I am merely presenting what I ended up building. There are possibly all sorts of things I did wrong here, but it did end up working.

But for reference, the actual wands we created were based on (these are not affiliate links):

  1. A skinny dowel (we grabbed some packages of 10 from Michael’s)
  2. 5mm IR LED (I used these “Super-bright” ones from Adafruit)
  3. A 1.5V “hearing aid” battery (I purchased a 24 pack of LR44 style from Amazon)
  4. A small tactile switch (12mm square from Adafruit)
  5. Wires (I bought this set of 22 AWG spools from Adafruit)
  6. Solder – I’m going to admit I was pretty half-assed here and only soldered 30% of the wands. I had not soldered previously, so I was not very confident in doing it right.
  7. Electrical tape – Wrapped around the wood as a base, and also around all of the electronic bits
  8. Hot glue – This came later, when some post-clay wands stopped working, which I believe was due to moisture creating electrical issues. One of the links above suggested using hot glue at the place where the connections were made to protect them. It appeared to solve the problem.

I originally didn’t want to have a switch at all, so it would seem more magical, but having 17 constantly-on IR LEDs waving around would have created less-than-ideal tracking conditions.

The circuit is dead simple (no resistor was needed as the battery I used essentially matched the voltage of the LED). 
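For anyone wondering why no resistor was needed: a current-limiting resistor only has to drop whatever voltage the LED doesn’t. The numbers below are typical datasheet values (assumptions, not measurements of my parts): an LR44 cell is nominally 1.5V, and IR LEDs commonly drop around 1.2–1.5V. The Ohm’s-law arithmetic can be sketched as:

```javascript
// Ohm's-law sanity check for the "no resistor" claim. R = (Vsupply - Vf) / I,
// clamped at zero when the supply can't exceed the LED's forward voltage.
// The example values are typical datasheet figures, not measured ones.
function currentLimitingResistor(vSupply, vForward, currentAmps) {
  return Math.max(0, (vSupply - vForward) / currentAmps);
}
```

At 1.5V supply and a 1.4V forward drop with 20mA of current, that works out to only about 5 ohms, and at a 1.5V forward drop it is 0, which is why the bare battery worked here; with a higher-voltage supply you would absolutely need the resistor.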

Here are some before and after images of the wands.

One thing that should be clear from the images is that these wands did not have replaceable batteries and were really “one-time” use during the party. The intent was that the wands themselves became take-home party favors, just non-functional once the batteries died. Some of the above links use replaceable batteries. 

Phase 3 – Connecting to the Real World

Drawing some pictures by waving around a stick is kind of neat, but the actual ask from my lovely party planner was for the wands to actually do something. From the beginning I assumed I would use either a Raspberry Pi or an Arduino. The full extent of my knowledge was that the Raspberry Pi was a very small form-factor computer and the Arduino was a programmable processor that you could attach sensors to. 

I thought about it a bit and figured my essential requirements were something that I could send a signal to in some form which would then translate that signal to powering on one or more devices. This seemed more appropriate for the Arduino to handle. 


I went to my local Fry’s a couple of times and ended up getting the TinyDuino Arduino-compatible board. Specifically, the coin-cell Starter Kit (for what I built, the $30 Basic would have sufficed). At 3V it runs at 4MHz, but with USB power it ends up at 8MHz. Clearly not doing any major processing, but enough for my needs (I hoped)!

The incredibly tiny form factor of the TinyDuino was enticing in case I wanted to attempt to embed the device directly in something that I wanted to control. 

I initially assumed that the Arduino part would be standalone, but for what was built for the party it was always attached over USB to the computer, so it drew power from the laptop rather than a battery.

As part of my ongoing quest in this project to glue together as much stuff as possible that Just Worked, I needed to find the simplest way to tell the Arduino to Do Something once a gesture was recognized. I briefly explored using a Wifi or Bluetooth module (both were more expensive than I wanted to spend when I wasn’t certain of the approach), but ended up using the USB connection that is normally used for flashing the Arduino with new firmware. That turns out to be a serial connection to the Arduino and can be used for communication (and power!).


The question then was: How do I send signals over serial to the Arduino and have it respond? 

Someone else had the answer in the form of the Firmata protocol. It is literally described as “a protocol for communicating with microcontrollers from software on a computer”. Wow, that sounded exactly like what I needed! The protocol is based on the MIDI message format (often used for communicating with music keyboards).

The Firmata firmware is included with the Arduino IDE, and I simply had to flash it from there. Almost comically easy. 

Next up: Is there a Swift or Objective-C library that can speak Firmata? 

I wasn’t able to find anything that met my needs, but I came across a NodeJS (Javascript) library called Johnny Five, a general-purpose robotics library that uses Firmata to communicate. It is also possible to use the lower-level libraries which Johnny Five itself depends on.

Copying from their sample, you can see how simple the library is to use:

var five = require("johnny-five");

var board = new five.Board();

board.on("ready", function() {
  // Create an Led on pin 13
  var led = new five.Led(13);

  // Blink every half second
  led.blink(500);
});

I embedded code very close to that into a NodeJS express app. This was completely overkill but time was of the essence and I did not want to devote any more time than I needed on the infrastructure. 

So the app itself is just:

const app = require('express')();

const port = 3000;

var five = require("johnny-five"),
    board = new five.Board();

var led = null;
var light = null;

board.on("ready", function() {
  led = new five.Led(12);
  light = new five.Led(4);
});

app.get('/lumos', (request, response) => {
  if (light) {
    light.toggle();
    response.send("lumos!");
  } else {
    response.send("Not yet initialized");
  }
});

app.get('/alohomora', (request, response) => {
  if (led) {
    led.toggle();
    response.send("alohomora!");
  } else {
    response.send("Not yet initialized");
  }
});

// … and so forth for each spell

app.listen(port, 'localhost', (err) => {
  if (err) {
    return console.log('something bad happened', err)
  }

  console.log(`server is listening on ${port}`)
});

Isn’t it amazing the world we live in right now? With that dirt simple code and easy-to-install firmware, I can use HTTP to tell the Arduino to toggle some pins. 

Phase 4a – Making things happen

Alrighty! We are so close!

We’ve got wands.

We’ve got a means of detecting the wands.

We’ve got a means of recognizing wand gestures.

We’ve got a means of translating wand gestures into a desire to do something. 

Now we just need… the ability to actually do something!

This is where you can really get as creative as you want. My wife and I thought a lot about what we wanted to do and ended up landing on two interactions:

  1. Turning on and off a light 
  2. Unlocking and locking a box

I also looked into doing something with a fan and a feather but it was comically loud and I was not happy at all. 

Treasure box

In my constant effort to ride on other people’s coattails, I looked for projects where a box was opened via an Arduino. There are actually quite a few projects for this, and you could likely choose any of them. The project I drew the most inspiration from in the end was an RFID Lockbox on Instructables. I was already purchasing items before I realized that the actual kit was discontinued. However, it provided enough info that I was able to sort out something using a box that my wife had already purchased at Michael’s.

Using the ideas I found from the RFID lockbox and random places on the Internet, I managed to get something working that I was reasonably pleased with, given the limited time my massive procrastination had left me. What follows is by no means a well-polished box, and again I want to make clear that this is merely one way to accomplish the task.

I landed on the following parts list (these are not affiliate links, I am only linking to show literally what I purchased):

  1. Small 12VDC push-pull solenoid – the movement on this is around ¼”. This is the “lock” and will rest right inside the strike plate, preventing upward movement until the solenoid is turned on, which pulls it back. 
  2. A strike plate for a door
  3. 12V DC power adapter – This powers the solenoid. I just picked up a random multi-voltage one from Best Buy, which I think I then lost; I happened to find something else around the house that worked.
  4. A 5v opto-isolated relay – This is necessary so that you can take the 12V (or higher) power but have it triggered by the low voltage (and low amp) Arduino. I think there are additional modules for Arduino which might make this easier, but I wasn’t quite sure what to look for and this worked. I actually got a pack of 2, which came in handy since the light project also used a relay.
  5. A pack of “pigtail” cables which make it easier to connect the DC adapter
  6. Hook-up wires to connect everything. I mostly used the same wiring I used for the wands.
  7. Random bits of wood to hold the pieces inside the box (I had some 0.5”x2” around)
  8. Electrical tape
  9. Duct tape – Because of course

Here’s a hopefully reasonable circuit diagram (the “S” is the solenoid). The biggest trial-and-error was getting the positive and negative correct, as it wasn’t always all that intuitive to me.

I happened to have some small screws and plastic washers around, which I used to mount the solenoid to the board, and also to separate it from the wood. The solenoid would sometimes get quite hot (I don’t know if this was due to miswiring on my side or not). The relay also had a couple layers of electrical tape underneath and then was taped to the board. This is probably not electrically sound (again a reminder that this is just what I managed to get working, definitely not the best way). 

I had some servos and remote control for an airplane I bought — but never flew — 20 years ago. I had planned on making use of the servos but never was able to directly. Instead I cut the wiring harnesses to re-use them for my own purposes so that I could easily connect and disconnect the box. So in a way… the servos totally got used. 


It goes without saying that one of the real-world items to control would be a lamp. I knew this would most likely work the same as the treasure box, so one obvious means of turning a lamp on and off would be to splice the relay inline with the wiring of an existing lamp. However, that would have been pretty destructive to the lamp and also would not support controlling other things. Helpfully, there are many tutorials on the Internet for how to create an Arduino-powered power box. I simply followed “Turn Any Appliance Into a Smart Device with an Arduino Controlled Power Outlet” from Circuit Basics (not sure who the actual author is). The only thing I will add is that I think the directions make it sound like you would use the hook-up wire on the high-voltage side, but the pictures show using wires from the surge protector cable. So follow what you see in the pictures there.

Phase 2c – Software Improvements

As is usually the case when your software meets the loving eyes of a significant other, there were some light observations and friendly suggestions for improvement. Sample dialogue after I proudly demonstrated waving the wand around and having the software successfully recognize the spell: 

Me: Voila! (paraphrasing)

SO: But I saw you hit a button

Me: That was just me telling it to figure out what I did

SO: But I saw you hit a button

Me: I’ll fix that right away

Another briefer dialogue (monologue?):

SO: Shouldn’t the spells glow or something?

Yes, they should. 

And so the software became infinitely cooler when I tweaked it slightly to be essentially edge-triggered in its spell recognition. When the app sees points coming in, it will start collecting them. When it appears that the points have stopped (I think I used a 250ms delay), the software assumes the spell has stopped and it attempts to recognize the gesture. Voila! No more button and actually much more in line with how real touch gestures work. 
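The edge-triggered collection can be sketched like this (JavaScript for illustration; the real app does the equivalent inside the Cocoa event loop, and the 250ms figure is from my memory above). Points are buffered as they arrive, and once none show up for the idle window, the buffer is handed off as a finished gesture:

```javascript
// Sketch of idle-timeout gesture segmentation: every new point resets a
// timer; when the timer finally fires, the buffered points are treated as
// one complete gesture. Names here are mine, not the app's.
function makeGestureCollector(onGesture, idleMs = 250) {
  let points = [];
  let timer = null;
  return function addPoint(x, y) {
    points.push([x, y]);
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => {
      const gesture = points;
      points = [];          // reset for the next spell
      onGesture(gesture);   // e.g. hand off to the PennyPincher recognizer
    }, idleMs);
  };
}
```

The nice property is that the recognizer only ever sees complete gestures, so nothing downstream needed to change when the button went away.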

For doing the more better rendering, I just made use of Bezier curves between each point, messed around with the layer compositing options available on the view’s layer (via the Xcode UI), and enabled a shadow setting that created a sort of “glowing” effect.

That sounds way more advanced than it really is. Honestly, I spent the most time trying to figure out how the graphics context worked. There is always a current implied context being used, but I was trying to figure out how to associate the Path with a context. The code for rendering the spell (in Objective-C, which I know makes me a dinosaur):

CGContextRef myContext = [[NSGraphicsContext currentContext] graphicsPort];
NSBezierPath *path = [[NSBezierPath alloc] init];
[path setLineWidth:3.0];
// Color components are 0.0-1.0, hence the division
[[NSColor colorWithRed:0 green:0 blue:205/255.0 alpha:1] set];
[[self pixels] enumerateObjectsUsingBlock:^(id _Nonnull obj, NSUInteger idx, BOOL * _Nonnull stop) {
    NSPoint point = ((NSValue *)obj).pointValue;
    // (xFact, yFact) is just to scale the point into the viewport.
    // You are probably supposed to do affine transforms directly on
    // the path or something
    NSPoint modified = NSMakePoint((1024 - point.x) * xFact, point.y * yFact);
    if (idx == 0) {
        [path moveToPoint:modified];
    } else {
        [path lineToPoint:modified];
    }
    [path moveToPoint:modified];
    // This part is just to show the individual points it's receiving, which
    // is not necessary to show the path itself
    [path appendBezierPath:[NSBezierPath bezierPathWithRoundedRect:CGRectMake((1024 - point.x) * xFact, point.y * yFact, 5, 5) xRadius:5 yRadius:5]];
}];
// This is the bit that threw me for a while trying to determine
// *where* it was actually stroking
[path stroke];

It works!

With about 3 days remaining we came up with a (not necessarily original) warrant for the whole wand “experience”: Charms Class. 

The “students” would come into one of the rooms which had the software presented on a flat screen hanging up on the wall. The Wiimote was semi-hidden by a stuffed owl and some other remote controls. They each had a printed spell book that my wife made (which was quite awesome). Each student would choose a couple of spells to try from the spell book, and would see it presented on screen and recognized. The “test” mode was then initiated which would ask for a specific spell to be attempted and would let them know when they got it before presenting another spell.

Here is a video of things basically working: the software asks the student to cast wingardium leviosa, and the cast is recognized.

The “final” test/reward would be them going into the “room under the stairs” (a literal closet that happens to be under the stairs), which was dark. They would need to perform a spell to turn on the light so that they could see. From there they would see the (locked) box and would need to unlock it with the correct spell. Once opened, the box was filled with some red “sorcerer’s stones” which they could then add to their goody bag. A friend of ours happened to have a string of red LEDs, which was super impressive when placed under the translucent red stones inside the box, creating a really cool glow.

The Code

Doing this write-up made me quite nervous about putting the code I wrote out there, as it is/was quite a mess and very much about Just Making It Work. My last experience with anything related to Mac development was writing Objective-C for iOS, so that is primarily what I used. But it appears that the world has shifted to Swift (which I thought was still just the Hot New Thing, but seems in fact to be the standard). So the code is a curious mish-mosh of Swift and Objective-C and shoving libraries in to fit what I needed them to do.

I made an effort to rework things for this so that it would be easier for others to use, modify and rewrite as necessary. 

It’s possible the above is no longer true (I wrote it about two years ago). I have put the code up on github, but it likely needs some re-work to Just Work. If there is interest, I would be happy to see what I can do to get it into a usable state if it’s unusable now. But the important bits are present.

Anyhow, the code

Closing and Future Thoughts

To support a more self-contained system I think the aforementioned Raspberry Pi would be a good option to be able to accept input over Bluetooth from the Wiimote and perform the simple operation of translating to a gesture (PennyPincher is *intended* for fast CPU-limited calculations). In line with this, you can purchase 4-LED position sensors which could be attached to the Raspberry Pi or Arduino (however I cannot find the link for one now), which would eliminate the need for the WiiMote. There are also folks who have extracted the IR camera from the WiiMote and interfaced it with the Arduino. I am not sure what clock speed you would need to be able to run the gesture recognition, but my gut says the current Arduino devices (at least the one I bought) would not be up to it. Would love to be wrong about that! Attaching to the Raspberry Pi would likely be feasible as well. 

Appendix-of-fun A: PennyPincher

While doing this write-up I realized that I actually had no real understanding of the PennyPincher algorithm (I just knew that it translated a set of points into a template and could match an input with a list of templates, which was my only requirement). And reading up a bit on it now it clicked why when I added load/save functionality it caused spells to stop working: I had thought the algorithm just stored normalized equidistant points rather than the vectors between those points!

Sample template for “incendio” (read this as [delta-x,delta-y])










This can be converted to GnuPlot-compatible vector data by assuming the first point is (0,0) and applying the math for each delta (the subtraction on the “x” is because of how the data is received from the Wiimote):

awk -F',' 'BEGIN { OFS=","; x=0; y=0; }\
 {print x,y,-$1,$2; x -= $1; y += $2}' |\
 gnuplot -p -e "set datafile separator ',';\
 set terminal svg dynamic;\
 plot '-' using 1:2:3:4 with vectors filled head lw 3"
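The awk pipeline is just doing a cumulative sum over the deltas, with x negated; the same reconstruction can be sketched as a small JavaScript function (the function name is mine, for illustration):

```javascript
// Rebuild absolute points from a list of [dx, dy] deltas, starting at (0,0).
// x is subtracted rather than added, matching how the Wiimote reports
// coordinates (and matching the awk script's `x -= $1`).
function deltasToPoints(deltas) {
  const points = [[0, 0]];
  let x = 0, y = 0;
  for (const [dx, dy] of deltas) {
    x -= dx;
    y += dy;
    points.push([x, y]);
  }
  return points;
}
```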

If you look at the GNUPlot images below, I’ve plotted a sample of some of the trained “templates” emitted by the algorithm. 

As you can see, the distance between each point is equal, which is one of the simplifying assumptions of the algorithm. And since we are just using vectors (think: SVG), the algorithm is insensitive to both translation and scale! It only cares about the “error” in angles between subsequent points. Some clever folks there.  

However, one quirk of the implementation I have noticed is that it effectively pairs up points between the template and the current input, so any template with a small number of points that happens to look quite similar to the “beginning” of the input may get improperly matched if there is more variance in the later points.

For example, the “lumos” spell is just straight lines going up and down, and so doesn’t have very many points compared to other, longer gestures. So there was a tendency in some situations to recognize spells as “lumos”. The implementation will only insert a single extra point between “widely spaced” points, however. A potential fix for this is to ensure that multiple in-between points get inserted during resampling, so all templates and inputs end up with the same number of points.
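That potential fix amounts to a standard equidistant resampling pass, in the style of the $-family of gesture recognizers. This sketch is my own illustration, not code from the project:

```javascript
// Resample a path to exactly n equidistant points by walking the path and
// emitting a point every (total length / (n - 1)) units, interpolating
// within segments as needed.
function pathLength(points) {
  let d = 0;
  for (let i = 1; i < points.length; i++) {
    d += Math.hypot(points[i][0] - points[i - 1][0],
                    points[i][1] - points[i - 1][1]);
  }
  return d;
}

function resample(points, n) {
  const interval = pathLength(points) / (n - 1);
  const out = [points[0].slice()];
  const pts = points.map((p) => p.slice());
  let accumulated = 0;
  for (let i = 1; i < pts.length; i++) {
    const d = Math.hypot(pts[i][0] - pts[i - 1][0],
                         pts[i][1] - pts[i - 1][1]);
    if (accumulated + d >= interval) {
      // Interpolate a new point at the exact interval distance.
      const t = (interval - accumulated) / d;
      const q = [pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1])];
      out.push(q);
      pts.splice(i, 0, q); // continue walking from the inserted point
      accumulated = 0;
    } else {
      accumulated += d;
    }
  }
  // Floating-point error can drop the final point; pad if needed.
  while (out.length < n) out.push(pts[pts.length - 1].slice());
  return out;
}
```

With every gesture resampled to the same count, a short “lumos” can no longer win simply by having fewer points to accumulate error over.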

August 14, 2011

More hot emulator action

Filed under: coding, java — Josh DeWald @ 1:36 pm

After seeing a random Facebook comment from a friend, I spent quite a few hours this weekend working on my NES emulator to get the game “Low G Man” working. It turns out the issue was related to the fact that an NMI (non-maskable interrupt) can occur mid-instruction, and that particular game waits around for the VBLANK (which is what generates the NMI) to occur (other games do this as well). Unfortunately, because my emulator doesn’t really support intra-instruction events (the emulator is single-threaded), the loop waiting for the VBLANK would never see it.

Essentially what I think was happening was

; wait for VBLANK
LDA $2002
BNE ...
(let PPU run... VBLANK occurs... causing NMI)
(NMI interrupt handler runs... which reads $2002, clearing it)
LDA $2002; this now doesn't see the VBLANK because the NMI handler cleared it
BNE ...
LDA $2002;
BNE ...

I added in a modification so that the PPU (or anyone) could tell the CPU “hey, this NMI actually occurred a little later than you think”, allowing one extra instruction to execute. Which, in this case, is enough to get it out of the infinite loop.
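The modification can be sketched like this (my own structure, in JavaScript purely for illustration; the actual emulator is Java and organized differently). When the NMI is flagged as “late”, the CPU defers servicing it by exactly one instruction:

```javascript
// Sketch of the "late NMI" fix: if the interrupt arrived mid-instruction,
// let one more instruction run before jumping through the NMI vector, so a
// $2002-polling loop gets a chance to observe the VBLANK flag.
function step(cpu) {
  cpu.execute();                  // run one instruction
  if (cpu.pendingNMI) {
    if (cpu.nmiWasLate) {
      cpu.nmiWasLate = false;     // defer servicing by one instruction
    } else {
      cpu.pendingNMI = false;
      cpu.serviceNMI();           // now jump to the NMI handler
    }
  }
}
```

The step after a “late” NMI only clears the deferral flag, so the polling LDA gets one more chance to read $2002 before the handler clears it, matching the second trace above.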

So, effectively it now does

LDA $2002
BNE ...
(notify PPU to do work, generating VBLANK + NMI)
LDA $2002; see the VBLANK
(NMI handler runs, but the LDA has already had a chance to run)
BNE ...
; yay, outside loop!

I had previously done a hack of setting the “sign” flag (which is what the BNE is effectively looking for, because it is the highest bit) in the CPU NMI callback, based on a recommendation of something I found. But this felt like a horrible hack. I’m not certain that the new way isn’t a hack as well, but I think it is at least more “realistic”.

On another note, I’ve put the emulator into github:

I couldn’t think of a name (JNES is already taken), so I just did Quay’s Java NES emulator (qjnes). Note that this is still technically both a C64 as well as an NES emulator.

If you do happen to download it, you can fire it up with ant:
ant nes -Drom=

While you can play a lot of games with it (Super Mario Brothers 3 plays well, Kirby’s Dreamland played well last time I checked)… there is no sound and many glitches (it is remarkable how ‘perfect’ some games require the system to be). This is not the emulator you want if you actually want to play for “fun” (Nestopia and the like are for that). It was purely so I could “see if I could”.

It doesn’t do anything cool like bytecode manipulation and JIT compiling… because it doesn’t need to on a modern computer. I finally added code to sleep, because on my MacBook Pro it runs well over full speed, so I now try to keep it around 60fps.

November 5, 2008

Blu Ray DRM “cracked”

Filed under: coding, technology — Josh DeWald @ 8:32 pm

I absolutely love to hear about supposedly “unbreakable” DRM mechanisms being cracked (well, circumvented in this case).

To Content Producers: Most people pay for their content. Those who don’t pay never will. You are not protecting anything or making any more money by implementing these insane schemes to stop people from copying. All this does is make engineers that much more persistent in figuring it out. For every great engineer at your company, there are 100 out in the wild who are going to crack your software. Again, I will happily pay for a BluRay disc, but I want to play it on the player of my choice (such as on a MythTV installation). But when you have your stupid restrictions, you have lost a sale.

So I am hoping that I can in fact get a MythTV box that can play Blu-Ray “out of the box” with a drive.

October 25, 2008

Interesting bits about the JVM

Filed under: coding, technology — Josh DeWald @ 11:53 pm

Here are some random things about the JVM that you might not know (or at least I didn’t):

  • boolean is represented as a 4-byte integer internally and treated as such in all bytecode-level operations (method parameters can be specified as being booleans)
  • Often when a NullPointerException is thrown, the JVM actually has access to the method that was being called, but the NPE is generally thrown in the calling method rather than the called one.
  • Reflection is really inherent to how the JVM is specified and operates; all methods are located dynamically by {class, method name, method parameters}
  • The JVM truly has no concept of the Java language; String and Object are for the most part the only classes treated specially
  • The JVM is stack-based rather than register-based
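The stack-based point is easy to see with a toy interpreter for a couple of pretend bytecodes. This is a sketch of the idea, not real JVM semantics:

```python
def execute(bytecode, constants):
    """Toy stack machine: PUSH loads a constant, IADD pops two ints and adds."""
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(constants[arg])
        elif op == "IADD":                    # like the JVM's iadd: no registers,
            b, a = stack.pop(), stack.pop()   # operands come off the operand stack
            stack.append(a + b)
        else:
            raise ValueError("unknown opcode: %s" % op)
    return stack.pop()


# 2 + 3 compiles to: push both constants, then add what's on the stack
program = [("PUSH", 0), ("PUSH", 1), ("IADD", None)]
```

There are no registers anywhere in the model; every operation reads and writes the operand stack, which is what makes the naive implementation so mechanical.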

Some cool things about C# (.NET as a whole?):

  • C# has the notion of Properties, which let clients of your class access members “directly” while at the same time allowing for change to underlying implementation (as they are actually getter/setter methods)
  • C# allows for the notion of first-class methods, which it calls delegates. This lets you avoid defining a whole interface to get access to a single generic method.
  • C# has first-class support for firing events, which makes use of the delegates (not sure if it has to or not, I’m still learning this stuff)
  • C# requires you to use the override modifier on a method that overrides a parent method, which makes it clear to readers of the class that it is in fact overriding something.
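As it happens, Python has close analogues of the first two of these, which makes the ideas easy to demonstrate. This is a rough illustration of the concepts, not C# itself:

```python
class Account:
    """Property-style access: reads look like a field, writes go through a setter."""

    def __init__(self):
        self._balance = 0

    @property
    def balance(self):
        return self._balance

    @balance.setter
    def balance(self, value):
        # The underlying implementation can change without breaking clients.
        if value < 0:
            raise ValueError("balance cannot be negative")
        self._balance = value


# A "delegate" is just a reference to a callable; no one-method interface needed.
def audit(amount):
    return "balance is now %d" % amount

notify = audit  # pass it around, store it, invoke it later
```

The point in both cases is the same: clients write `account.balance = 10` or `notify(10)` without caring what machinery sits behind the name.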

I learned these little tidbits while learning C# by writing a JVM in that language. I am fascinated by emulators and VMs because they are software that represents something well-defined. So in a sense the (naive) implementation is fairly straightforward, if tedious. (I also learned that you can actually write a non-trivial C# application by effectively writing Java, renaming it with .cs, and doing a few basic replacements… or just about.)

This project would not have been at all possible without the GNU Classpath project, which has worked tirelessly to implement all of the standard Java classes as well as reference implementations of the classes needed to tie into an actual Virtual Machine implementation. I have not gotten JNI to work yet, so for the time being I am implementing the native stuff directly in C# (which makes sense, actually, as that is my JVM implementation language and sits “below” the JVM).

Both Classpath and the official Sun JRE implementation (which is now open source as OpenJDK) provide real-world implementations of that stuff you have not looked at since university (hash tables, linked lists, etc.) in a fairly readable format. And because they are real-world, they offer glimpses into the optimizations and workarounds that have to be done to make these data structures work in practice.

There is also a project called IKVM, which is a very complete .NET-based implementation of the JVM as well as the class libraries, and which allows .NET applications to actually execute Java classes. I think it includes a combination of GNU Classpath and OpenJDK classes, including managed .NET implementations of the native methods. If I continue this project (not terribly likely; I don’t call it “ToyVM” for nothing) I will probably migrate to using that so I can focus on the internals of the JVM itself. When I started I just wanted to get going, and I had issues with the version of Visual Studio .NET that I had and then could not get GNU Classpath or OpenJDK to build or work in Cygwin. I actually used MonoDevelop to do the C# development (so I wrote a JVM in C# using a Linux-based .NET implementation running on an x86 VM on top of Windows XP).

A couple of nights ago, I finally got the “Hello, World!” application to work after about 2 months of development on and off; I’m not sure what the actual man-hours were.

I used C#’s event handling/delegate setup to gather some runtime statistics for the basic “Hello World!” application, and apparently it loaded 142 classes before it finally did the output. Most of these were obviously not used directly, but are part of the environment that is statically loaded by key classes (like Charsets).

Next steps:

  • Make it look more like C# (Cxx languages tend to use GetBlah() rather than getBlah(), and C# supports Properties which I would like to make use of)
  • Implement Garbage Collection (pluggable perhaps). I am curious about the various methods that are used and which are better in various situations
  • Do some additional refactoring into additional namespaces
  • Optimize the most heavily used bytecodes if possible
  • See what breaks when I run things other than HelloWorld.class 🙂 I am very much a just-in-time developer, so I only got the stuff working that was absolutely required to get Hello World to work. 84 bytecodes have been implemented (out of the 107 that were encountered, though some, like xload, xstore, if, and if_icmp, get reused with different initialization parameters)
  • As mentioned before, look into integrating with the IKVM libraries so I can worry less about the native aspects of it
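That “implement opcodes only when they are hit” workflow amounts to a dispatch table with a loud fallback. Something like this sketch (hypothetical structure, not the actual ToyVM code):

```python
class NotYetImplemented(Exception):
    """Raised when an opcode with no handler is executed."""


class Frame:
    """Minimal operand-stack frame."""

    def __init__(self):
        self.stack = []

    def push(self, value):
        self.stack.append(value)

    def pop(self):
        return self.stack.pop()


OPCODES = {}


def opcode(code):
    """Decorator registering a handler for one bytecode value."""
    def register(fn):
        OPCODES[code] = fn
        return fn
    return register


@opcode(0x60)  # iadd
def iadd(frame):
    b, a = frame.pop(), frame.pop()
    frame.push(a + b)


def dispatch(code, frame):
    if code not in OPCODES:
        # Fail loudly: run a real class, then implement whatever this raises on.
        raise NotYetImplemented("opcode 0x%02x" % code)
    OPCODES[code](frame)
```

Running real classes against this and implementing whatever the exception names is exactly the “84 out of 107” loop described above.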

I have put the code into a local git repository, which is another tool that I have been wanting to play around with. I am happy to push that out somewhere, with the usual caveat that this is your typical homebrew code that is not as well commented as it should be (but is hopefully well structured enough to make sense).

Happy Coding.

January 27, 2008

Schools and that jazz

Filed under: coding, technology — Tags: , , , , , , — Josh DeWald @ 7:31 pm

There has been a lot of furor (at least from all the links popping up on proggit) around the “worth” of CS degrees and how bad the programs are.

My personal take is that people are expecting the wrong thing out of it. There is certainly a mechanical/trade aspect to programming. That’s the part that they can teach quite well: syntax, basic algorithms, etc. Strangely, this is the part they only teach in Software “Engineering” courses versus pure “Computer Science.” Most people go into these programs expecting to be able to walk into a typical business programming job and get to work. They really do not want to learn about Big O, Finite State Machines, or Data Structure implementation. Who wants to know about all that damn math!?

The bit that they do not teach well is the part that actually makes you good at “programming”: critical thinking and problem-solving skills. Much of your time is spent figuring out how to go from problem to solution and, after doing that, why the apparent solution does not actually solve the problem correctly. You will spend a lot of time debugging software and fixing bugs. That’s just the way it is. Yes, each language makes some aspect of expression easier, but at the end of the day the actual algorithm is exactly the same. There are really only two ways a piece of a program can be wrong:

  1. The algorithm is incorrect
  2. The expression of the algorithm is incorrect

I would argue that the analysis of each of these problems requires slightly different skills. One of them is the heart of “computer science”: the research into creating algorithms that solve problems in faster and more innovative ways. The majority of us will never come up with a truly new algorithm; rather, we will solve a problem that is just being defined in terms of different nouns. So a key skill of any programmer (during the design phase of construction, however short that may be) is recognizing how the problem can be re-phrased in another light to use a known algorithm. The site TopCoder is an excellent way to practice this.

Assuming that the proper algorithm has been chosen, the next step is to actually implement it. Theoretically this is the “easy” part, but it is also where the majority of effort is placed in the real world. An absolutely necessary skill of a software engineer is to be able to follow the logic of code (usually people speak of reading code, but I really think you follow the logic of code instead; while I have seen some poetic code before, it really isn’t literary in nature) and trace what it is doing with a particular input. It is this skill (or the lack of it) which is why, I believe, people complain about bad Computer Science education. You can whine and moan about Java or C++ being used (instead of “pure” languages like Haskell), but frankly that is a bunch of hogwash. If a person is getting the right education, or has the right innate talent, then they will be able to solve problems in any language given to them.

I have always said to people that one of the most useful classes I ever had in college was my Physics class. The professor was smart and did not allow calculators on the exams. You see, it is not the answer that matters, but how you get there. Your algorithm. The most important lessons in Computer Science (and medicine, and law, and….) are those that teach critical thinking and being methodical about solving a problem.

Ruby will not magically make you a better programmer. Java does not turn you into some brainless idiot. Perl will not turn you into a person incapable of writing clear code. Using RAD tools will not prevent you from learning how your code actually works. It is the person behind the code that matters.

Update: I found this response to the debate by Brian Hurt at Enfranchised Mind to be very good (and much better written than mine) in the sense of mentioning that, effectively, you want a “Developer that knows Java” rather than a “Java Developer”. The reality, though, even if we don’t want to admit it, is that companies want Java developers. What do they care if the person will be useless 10 or 20 years from now? They’ll just get developers trained up on New Fangled Language X.

January 9, 2008

Holidays and a New Year

Filed under: coding, daily life, technology, uk life — Tags: , , , , — Josh DeWald @ 5:52 pm

It’s been a while…

I’m ending a nice long 3-or-so week extended Christmas (just me, the wife, and pictures of gifts from family), New Year’s (very cool dinner on a ship permanently docked on the Thames. Those who know me won’t be surprised to know that I spilled red wine all over the table within about 2 minutes of sitting down. The 9 others at the table were quite nice about it) and 5-day Moroccan holiday in Marrakesh (مراكش). The last was quite cool (finally something different from Europe, you can only handle so many castles) but hectic and wearing at times (I can only handle so much haggling.. even though it’s satisfying to only pay 50% of the original price, I know I’m still paying way more than the price a local would pay). Again those who know me will not be surprised to know that I dropped (and shattered) the large tagine that we had purchased… was about 10 feet from our boarding gate at the airport.

And to really put an end to the holiday, my wife is now on a plane back to the States to get us going for our repatriation there. I will be following 3 weeks later, as my year-long stint here in the UK is ending. I have had an awesome time here, both at work and out and about. Met some great people who I will definitely miss.

And now for something completely different..

To bring things back around to geeky stuff (I tend to skim over other people’s personal stuff, so I understand if you, Reader, have done the same) I have finally started working on my Super Nintendo (SNES) emulator. It is still using the same basic core as the C64 and NES emulators. The main difference is that the SNES uses a 65816, a successor to the 6502, which can address 16MB of memory (24 bits), implements all 256 op codes, and adds some more addressing modes. When it initially starts up, it is actually in 6502 emulation mode (with the bugs of the original 6502 fixed, which I’m sure provided frustration to many developers who depended on undocumented instructions and bugs). I have gotten some basic graphics to work in the ‘test.smc’ demo included in Y0shi’s documentation, but it is nowhere near even able to produce screenshots of real games; hopefully it is only a week or so away (I’ve spent a feverish 3 or 4 days dedicated to SNES stuff, but probably spent another couple of weeks previously working toward getting an Apple IIGS emulator working, which uses the same processor).

I have started adding some JUnit tests of the actual instruction implementations, as even minor bugs can truly spiral out of control in the running of the system.
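The instruction tests look roughly like this in spirit (shown in Python rather than JUnit, and against a made-up accumulator stub rather than the real 65816 core):

```python
class CPU:
    """Stub of an 8-bit accumulator with a carry flag (not the real 65816)."""

    def __init__(self):
        self.a = 0
        self.carry = 0

    def adc(self, operand):
        """Add with carry: result wraps to 8 bits, carry set on overflow."""
        total = self.a + operand + self.carry
        self.carry = 1 if total > 0xFF else 0
        self.a = total & 0xFF


def test_adc_simple():
    cpu = CPU()
    cpu.a = 0x10
    cpu.adc(0x05)
    assert cpu.a == 0x15 and cpu.carry == 0


def test_adc_carry_out():
    cpu = CPU()
    cpu.a = 0xFF
    cpu.adc(0x01)
    assert cpu.a == 0x00 and cpu.carry == 1
```

Pinning down each instruction’s flag behavior in isolation like this is what keeps the small bugs from spiraling once the whole system is running.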

As usual, Zophar’s Domain has proved invaluable for obtaining documentation, but I have also used Wikipedia for system overviews (memory sizes and the like) and another site I stumbled on just called “Emu-Docs”.

I will make the code available via CVS or Subversion once it is in a usable state. Apparently my wife never really played the SNES, so we shall see if I can find anything to drive me like getting SMB3 working on the NES did.  I would love to get Super Mario Kart working.

I have been using ZSNES as my “reference” for what a working system should look like (I don’t know if it’s open source or not, but I am only using the executable in any case).

Shoutout goes to the many people who dedicated hours and hours to dissecting and analyzing the SNES and writing emulators in C and Assembly which ran on old Pentiums. My Java emulator may one day run at full speed on a 2 Gig machine 🙂

November 27, 2007

Architecture and Coding

Filed under: coding, technology — Tags: , , , — Josh DeWald @ 8:49 pm

I have the word “architect” in my title at work, which is cool because it apparently means I know what I am doing, but bad because it means people assume that I always know what I am doing.

I consider myself a pretty good software engineer, in that I can generally produce working software against the requirements given to me. I actually do not tend to be up on the latest trends in WS-* or frameworks, etc., so most of my code is pretty straightforward (I hope). Thankfully others I work with (both at work and on hobby stuff) keep up with things like Hibernate, Spring, and Quartz, so it allows me to keep track of what people are actually using versus what the hype is. I will shamelessly take the skeletons of how those systems are used by co-workers and friends. However, I always try to figure out how to make using them easier if I find bits that are in any way non-intuitive.

I tend to write in an “agile” way. What this really means is that I think about the problem and do some drawings, pseudocode, sketches, pictures, whatever I need to get something up and running that solves an immediate problem. Then I move on to other components. Realize that I missed a detail. Rinse. Repeat. As I move on to other requirements I integrate them with my solution and always try to have a working system that does something.

Back to the point of my story… this is a horrible way to work if you are supposed to “architect” (in the sense of “here is the design, build it”) a system based on a set of requirements. It is very easy to draw some lines and boxes with cool names and leave the “details as an exercise for the programmer.” Bad idea. By the time it has made it to the programmer (unless they work intimately with you on your own team) many of the requirements have been lost, and they are left only with the design. So they implement, or try to implement, what has been given to them. And surprise, surprise, it meets the requirements but does not actually work.

This is when I realize that what I really gave was a design for a “first pass” that would really be a prototype of the system. If I were implementing, it would be easy to turn around and enhance it to go to the next level. But in this case, it has already gone down through two other levels of tech lead and off-shore architect. So it is not very easy to say “ok, so we aren’t quite there yet; we actually have a million other details to work out”.

So lesson learned there is that however long you would normally give yourself to “design” the system, multiply it by 10 so that you can basically look at every single API and external system being called, and run through every use case you can think of to see if the design does in fact meet the requirements and produce a working system. It really is not enough to see that API X has a method that looks like Y and takes parameters A and B and probably does Z. What happens if you have 1 million sets of {A,B}?

My real point, I suppose, is that being fairly good at designing and implementing your own systems does not make you an architect. And if you are in that position, do not let someone tell you that you only need “maybe a day or 2” to design something if it is of any real complexity (where a good way to measure complexity is how many interfaces exist between various systems). Give yourself at least a couple of days per non-trivial interface.

I hate the feeling of thinking that if only I had had more time my architecture would have been perfect after the first implementation. I hate even more the feeling that it honestly probably would not have been. It is never right the first time.

Or maybe I just really suck as an architect.

In any case, I am currently getting to do the implementation on a quick-turnaround (aren’t they always in the corporate world?) implementation and I am absolutely loving it. Sure, the deadline is quickly approaching. But you know what, it is my project to succeed or fail at. I am using parts of the system that I “architected” and seeing parts where I totally did not think about the problem in low-level enough detail, and trying to fix those oversights. I am getting to play with systems that I have not worked on previously. And most importantly, I am getting to code! Maybe it is the design aspects of the coding I enjoy, but I think not. What I really, truly, absolutely, love is seeing something grow before my eyes into a working system. I love the list of “//TODO” items (and love even more removing them as they get done) and bits of stub code. I love when I get surprised that it works even better than I expected. I love making a database query 10,20,30 times faster through simple application of a decent index. I love keeping 10 things in my head as I move from file to file and implement my ever-growing APIs.

I have no doubt that my methods would scare many of the purists out there. At heart, I am a “just-in-time” developer. For my emulator, I did not implement an instruction until I ran into my “not yet implemented!” exception. The same goes often for other things that I implement. It is a bit of an extreme example of YAGNI, I suppose.

Update (11/30/2007): Added some words I have left out.

November 17, 2007

iPhone Native Look and Feel Wikipedia

Filed under: coding, daily life, iphone, technology — Tags: , , , , — Josh DeWald @ 4:28 pm

As I mentioned before, I used the “schools edition” of Wikipedia and stored it on my iPod Touch (will work on the iPhone as well) for some hot off-line Wikipedia action. What quickly became apparent is that the set of subject index pages that come with it are actually kind of hard to navigate (small text, etc). I figured that somebody had made some set of style sheets that made web apps look like native iPhone apps, and of course someone did.

So what I did was make a basic Perl script that would crawl through the subject index pages (looking for “categoryname” div tags) and generate an HTML file that had the same look and feel. Currently all of the index goes into a single page, which gets you the “wipe” effect, but it could just as easily be put into different files. I had to modify the JavaScript slightly to allow it to link to the actual content pages (the default JavaScript assumes that all the links are internal to the file… either that or I just couldn’t figure it out). Basically, if an “a” tag has a class of “external” then it’s… external.

So if you grab iphonenav.js from the link above, and modify the click eventListener like so:

addEventListener("click", function(event) {
    var link = event.target;

    if (link.className.indexOf("external") == -1) {
        while (link && link.localName.toLowerCase() != "a")
            link = link.parentNode;

        if (link && link.hash) {
            var page = document.getElementById(link.hash.substr(1));
            // ... rest of the original click handler unchanged ...
        }
    }
}, true);

You can download the Perl script from my site. Just run it against “wikipedia/wp/index/subject.html”. Save the script with a .pl extension; my host seems to be trying to interpret the script even when I give it the extension “.pl.txt”.

You can see what the end result looks like here. By the way… you should be able to just take that html file and drop it next to the main index page for the Wikipedia schools edition.

As usual with this type of thing, I’m sure people will look at the script and think “what are you doing!?” But it seems to do the job.

October 3, 2007

Design Smart, Code Stupid

Filed under: coding, technology — Tags: , — Josh DeWald @ 11:42 am

“Design Smart, Code Stupid” is just another way of saying “Don’t be clever” at the code level. Remember that as soon as you write some code, you are its primary maintainer. If the code base is a hobby project, you may be the only maintainer, so you owe it to yourself to make your code easy to read. If it’s not a hobby project, it becomes even more important (if only to save face!); the last thing you want is other developers looking at your code and thinking “what does this line do?!” Instead, another developer should look at your code and think, “this makes total sense; anybody could write this!” A little self-test is to revisit your code a week or two after you wrote it. If it confuses you and produces a lot of “wtf?”, then you need to simplify it.

Do you really need to use a ternary operator? Is it honestly easier to do a bit mask and a bit shift in the same statement? You may want to reconsider that nested a.getB().getC().getD().doSomething() you have going there.
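As a concrete (made-up) example, both functions below compute the same thing; the second is the “stupid” one your future self will thank you for:

```python
def parse_rgb_clever(color):
    # "Clever": masks, shifts, and a conditional expression crammed into one line.
    return ((color >> 16) & 0xFF, (color >> 8) & 0xFF, color & 0xFF) if color >= 0 else None


def parse_rgb_plain(color):
    """Split a packed 0xRRGGBB integer into its three channels."""
    if color < 0:
        return None
    red = (color >> 16) & 0xFF
    green = (color >> 8) & 0xFF
    blue = color & 0xFF
    return (red, green, blue)
```

Same behavior, but only one of them can be skimmed without mentally re-parsing operator precedence.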

So that’s the “code stupid” part… kinda starting backwards here.

So every line should be obvious, but each one should also go toward solving a problem (why else is it there anyhow). As you take the view higher and higher through the system, it should continue to make just as much sense. Each method should be well-named so that a person does not even need to look at it to know what the purpose is: the calling of a method should be as obvious as a built-in language construct. At the class or module level, it should be obvious what role it serves in the big picture. Multiple modules and classes should be “packaged” together as a cohesive unit that solves some problem.

Essentially, at whatever level a person looks, it should be obvious what the system is doing, and why it is being done in the way chosen (even if there are alternative methods).

And you know what, another developer will look at the design and think “Man, that’s smart!” Because it is not easy to do for any non-trivial problem.

Because you know what, if you don’t have a good view of the big-picture design of your system, then neither will anybody else. Instead, you will have a hodgepodge of classes and methods that sort of work together. You need to know what it is you are building before you build it. Always think about the Use Cases!

I have no idea if Steve McConnell uses the phrase in “Code Complete.” I would not be surprised, as the entire book is really devoted to the idea of building a system that makes sense at every level. So I know that I am having a completely un-original thought, and I’ll freely admit it. I have no doubt read it so many times that it has worked its way into me as some sort of meme. Nonetheless, I just thought I’d talk about it for a bit. I try to live it, and know that I screw it up all the time.

Update: Fixed the sentence on Code Complete, realized that I had left out like 4 words 😉

September 19, 2007

This is how you methodically troubleshoot

Filed under: coding, daily life, technology — Josh DeWald @ 12:09 pm

I don’t think I could possibly give a better example of how to methodically diagnose (and fix!) an issue.

Scott Hanselman had an issue firing off Microsoft Live Meeting and documents how he found the root cause. Absolutely brilliant.

He really has it all:

  1. Reproduces the issue
  2. Isolates the issue
  3. Gathers data
  4. Makes a hypothesis
  5. Tests the hypothesis
  6. Forms a conclusion
  7. Has a working system

In case it’s not obvious, yes, I think troubleshooting/debugging should be performed using the “Scientific Method” as the pattern. Perhaps you don’t always specifically spell out your “hypothesis,” but you need to have one. Don’t spend too much time randomly poking around… methodically poke around. Each time you try something, you need to know what the result of that test will mean for you and how it gets you further along.

Update: Modified link to be direct, rather than via FeedBurner
