Thalmic Myo: first impressions

After months of indecision, I finally preordered the Thalmic Myo. It arrived last week and I've had a little time to play around with it, so I thought I'd put together some notes on my impressions so far.

First off, the device itself is fairly well constructed. I think this is worth mentioning because Myos aren't mass produced in the conventional sense: Thalmic manufacture each unit on their own production line in Canada. I really like the idea that research, development and production are all closely connected and, in theory, able to prototype and iterate more quickly and efficiently. I'm not sure what impact this way of working has on the final cost of the device, but it must have been a brave decision to take, and one which I have a lot of respect for. I remember reading about the challenges faced by the Raspberry Pi Foundation and how, eventually, they were able to bring manufacturing into the UK. Again, I think this sort of approach is a good thing that opens up a wider range of opportunities, so hats off to those tech start-ups who are nurturing their own talent and keeping things local.

One article about the Myo that caught my attention over the summer was an interview on TechCrunch with Thalmic CEO and co-founder Stephen Lake. The impression I got was that one of the main aims with the Myo was to create a device that lets people dive straight in, without any lengthy setup or calibration each time they use it. Although this is an ambitious, even idealistic, approach, I think that, in usability terms, it's a good target to aim for: if you want a broad audience to adopt a new technology, the chances of success are higher when there are fewer barriers to entry. There's a lot of cool tech out and about at the moment, but I think the more ground-breaking devices (the Leap, the Myo and Glass in particular) run the risk of alienating broader audiences if they aren't accessible and easy to hit the ground running with.

In my experience so far, the Myo seems fairly robust. It starts detecting movement and gestures soon after you put the device on, and the motion data (X, Y and Z rotation and acceleration) is all fairly reliable. The accuracy of the gesture detection, however, is still a little flaky. At the time of writing there seem to be a fair few false positives, particularly when the Myo is first put on and misinterprets a gesture; from that point on it seems unable to reassess the misinterpreted gesture, and the only fix is to remove and re-sync the armband. As such, I've found it's sometimes necessary to put the armband on, make a few adjustments to ensure good contact, and then check through each gesture before you get going. Nonetheless, once you're up and running, it feels pretty reliable.

When I first found out about the Myo, a few features and use cases got me really excited, in particular the idea of a gesture device that can operate independently of a computer, wires, or the need to face a camera or any detection hardware other than the armband itself. The idea of a hands-free, lock/unlock, gesture-based input also appealed, so naturally I thought I'd put a few of the scenarios featured in the promo video to the test. It turns out that even though there is a lock/unlock gesture to start and stop the Myo listening for gestures, attempting to do the washing up while wearing the armband to control music playing in the room resulted in a few deafening moments when the Myo was unintentionally activated and the volume turned up. It's early days though, and I'm sure the mass of user data sent back to Thalmic through the Myo Connect app will prove invaluable for improving future gesture detection algorithms.

 

My first Myo app:

Inspired by the Moff smart wristband, I put together a quick demo app, a light sabre simulator, to try out some of the API features. Processing is my go-to prototyping platform, and I found a really handy Processing library which seems to provide access to most, if not all, of the functionality that I was interested in playing with.
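Rather than reproducing the library's API from memory, here's a minimal, self-contained sketch of the basic idea with the mouse standing in for the armband: in the real version, the roll and pitch would come from the Myo's orientation data, and the "swing" would be triggered by a spike in its accelerometer readings.

```processing
// Minimal sketch of the light sabre idea.
// The mouse stands in for the armband here: in the real app the
// roll/pitch values would come from the Myo library's orientation
// callback, and "swinging" from a spike in its accelerometer data.
float roll, pitch;         // blade orientation, radians
boolean swinging = false;

void setup() {
  size(800, 600, P3D);
}

void draw() {
  background(0);

  // Stand-in for the Myo orientation data:
  // map mouse position to roll/pitch of the forearm.
  roll  = map(mouseX, 0, width, -HALF_PI, HALF_PI);
  pitch = map(mouseY, 0, height, -QUARTER_PI, QUARTER_PI);

  // Stand-in for an acceleration spike: hold the mouse button to "swing".
  swinging = mousePressed;

  // Draw the blade from the bottom-centre of the screen.
  translate(width / 2, height, 0);
  rotateZ(roll);
  rotateX(pitch);
  stroke(swinging ? color(255, 80, 80) : color(80, 160, 255));
  strokeWeight(12);
  line(0, 0, 0, 0, -350, 0);
}
```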

 

So far it's been fun playing with the Myo. I'm really excited about the potential it offers, and I'm looking forward to the improvements in forthcoming APIs. I think it'll really come into its own as a peripheral controller, used in conjunction with other devices like smartphones and tablets. Much like Google Glass, the Myo would work well as a way to extend the functionality and features of other devices. Imagine, for example, combining the Myo with a smartphone app and some WiFi-connected bulbs to allow gesture control of lighting within the home. Directional and gesture data from the Myo could be interpreted by the app to determine, firstly, which bulb the user is pointing at, relative to the magnetometer in the phone, and secondly, the adjustment the user wishes to make to that bulb, based on the gesture being performed. Scenarios like this demonstrate one of the most exciting things about the Myo: it's more platform agnostic than other gesture devices like the Leap and the Kinect, and that's where I think it gains the advantage.
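Just to make that idea concrete, here's a little Processing sketch of how the decision logic might look. Everything in it is hypothetical: the bulb names, headings and gesture labels are made up, and in a real app the pointing direction would come from combining the Myo's orientation with the phone's magnetometer, and the gesture from the Myo's pose events.

```processing
// Hypothetical lighting-control logic: pick a bulb by compass heading,
// then apply an adjustment based on the detected gesture.
// Headings, bulb names and gesture labels are all made up for illustration.

String[] bulbNames    = { "lamp", "ceiling", "desk" };
float[]  bulbHeadings = { 40, 180, 300 };   // degrees, recorded during setup

// Return the index of the bulb whose stored heading is closest
// to the direction the user is pointing.
int nearestBulb(float pointingHeading) {
  int best = 0;
  float bestDiff = 360;
  for (int i = 0; i < bulbHeadings.length; i++) {
    float diff = abs(((pointingHeading - bulbHeadings[i]) + 540) % 360 - 180);
    if (diff < bestDiff) {
      bestDiff = diff;
      best = i;
    }
  }
  return best;
}

// Map a gesture label to a brightness change for the chosen bulb.
// In a real app this would be driven by the Myo's pose events.
int adjustBrightness(int brightness, String gesture) {
  if (gesture.equals("WAVE_UP"))   return min(brightness + 20, 100);
  if (gesture.equals("WAVE_DOWN")) return max(brightness - 20, 0);
  if (gesture.equals("FIST"))      return 0;   // lights off
  return brightness;                           // unknown gesture: no change
}

void setup() {
  // Pretend the user is pointing roughly north-east and waves upwards.
  int bulb = nearestBulb(35);
  int newLevel = adjustBrightness(60, "WAVE_UP");
  println("Adjusting '" + bulbNames[bulb] + "' to " + newLevel + "%");
}
```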

In summary, the Myo is an exciting bit of kit and I'm really glad I got one; as ever, I just need to find some spare time to delve a bit deeper and start playing around with some ideas. It has lots of potential as a wearable, provided the distinction between intended and unintended gestures can be better defined. Unfortunately, that is, once again, the classic problem with gesture-based input devices: when is a gesture not an intended gesture?

 

UPDATE: 20th December 2014

As hoped, Thalmic have been making a lot of great improvements to the API. In particular, it turns out that a lot of people were having problems with the lock/unlock gesture, which they've now changed to a double tap between the thumb and forefinger. So far, this seems more reliable than the former thumb-to-pinky lock/unlock gesture.

They've also opened up access to the raw EMG (muscle) data. Again, a brave move, and I expect a lot of developers, myself included, will be grateful for the chance to experiment with it.
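I haven't dug into the raw data properly yet, but as a rough sketch of the sort of thing I have in mind, here's a self-contained Processing sketch that scrolls eight EMG-like channels across the screen (the Myo reports eight channels of muscle activity). The signal here is just simulated noise; in the real thing each channel would be fed by the library's EMG callback.

```processing
// Rough sketch of a raw EMG visualiser: eight scrolling traces.
// The values here are simulated with noise(); in a real app each
// channel would be updated from the Myo library's EMG data.

int channels = 8;
int history  = 400;                    // samples kept per channel
float[][] samples = new float[channels][history];
int index = 0;

void setup() {
  size(800, 600);
}

void draw() {
  background(30);

  // Fake a new sample per channel (stand-in for real EMG data).
  for (int c = 0; c < channels; c++) {
    samples[c][index] = noise(frameCount * 0.05, c) * 2 - 1;  // roughly -1..1
  }
  index = (index + 1) % history;

  // Draw each channel as its own scrolling trace.
  float rowHeight = height / float(channels);
  stroke(80, 200, 120);
  noFill();
  for (int c = 0; c < channels; c++) {
    float midY = rowHeight * c + rowHeight / 2;
    beginShape();
    for (int i = 0; i < history; i++) {
      int s = (index + i) % history;                // oldest to newest
      float x = map(i, 0, history - 1, 0, width);
      vertex(x, midY + samples[c][s] * rowHeight * 0.4);
    }
    endShape();
  }
}
```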
