Updated: Specialmoves shows gesture-based interactive design experiments with Leap Motion, Kinect & more

Specialmoves used Intel's Gesture Camera to create a game where you could draw in the air then pose with your creation, which was then posted to Facebook

We speak to the London-based agency about the work it's building for itself and clients, which pushes the boundaries of how users interact with projects.

Last week, we attended an open evening at Specialmoves where it showed off some of the personal creative tech projects its staff have been working on, ranging from an iPhone-controlled tracked robot to gesture-based games using some of the hottest gestural control tech, including Microsoft Kinect, Intel's Gesture Camera (above) and the Leap Motion controller.

Rather than just being a bit of fun, the agency sees such projects as a way to R&D new ideas and concepts that may make their way into client projects. The agency has created interactive projects for the likes of Intel, Smirnoff and Tate Britain – and is best known for a 'design your own album cover' project for the Kaiser Chiefs.

We caught up with Specialmoves' head of interactive Gavin Clark, head of business development and marketing Hannah Locke, and senior developer Stephen Chan.

DA: Why is creating personal 'play' projects so important to Specialmoves?

SC: "Technology is advancing quickly and we are in a growing industry, therefore we need to be 'playing' or running R&D projects constantly to stay on top of new technology. We also use the opportunity to learn, not just about the technology itself but also development techniques, new languages, new structures, new architectures and rapid development."

HL: "The R&D process and testing of new technologies is a key part of Specialmoves culture. We are fundamentally passionate and inquisitive people, with many of our developers and designers regularly attending hackathons and presenting the results both internally and at industry events. Our founder and technical director Pascal Auberson leads our in-house R&D function, which reinvests 15% of revenue into taking new and emerging technologies and developing them internally to assess potential real-world applications.

"Not only does it make us happier and more productive, but also benefits our clients by bringing the latest and most appropriate technologies to each project; tested, developed and with a range of use cases and scenarios ready to pitch to brands, as well as the confidence that we can build it. Because of this clients often come to us with projects which are challenging - often because they are trying to do something that hasn't been done before, or find a way to create a unique or novel user interaction."

DA: Could you give us an example of a previous 'play' project that has led to a commercial project?

HL: "One of our previous R&D projects DIY City enabled us to investigate live interactive mass-participation projects and complex projection technologies. Within a two-week period we designed and built a mass user mobile application, with live interactive projections culminating in an outdoor event at our studios. Running an agile process, we were able to rapidly cycle through development and testing within a tight timescale gathering learning around both technologies and the challenges of live events. Following this event we were asked to deliver a live drawing event at Tate Britain, where we were able to further refine the technologies used in DIY City, delivering a unique live experience for the audience at Tate.

"Having already developed, tested and refined the technologies, we were able to deliver an exciting, seamless digital experience with minimal risk. It was also a lot of fun to see our work come out of R&D and be used in the real world."

DA: Tell us about the limitations of designing gestural interfaces for the Leap Motion controller. How does this affect the interfaces you design?

Specialmoves has created a game that uses the Leap Motion controller (the small box in front of the screen). Moving your hand in front of the controller lets you pick up boxes on screen, and move and throw them around.

SC: "The whole idea of gestural interfaces is still in its early stages, with technologies like the Kinect, Leap Motion and the Intel Camera which are all great concepts to help create and control interfaces with our human body without the use of a controller or remote.

"Each device has its different uses. The Kinect can detect the human skeleton and use the whole body to interact with applications, but the Leap gives us the power to engage apps with our hands and fingers. The thing to remember is that there are always flaws and limitations to the devices.

"The limitations we found with the Leap concern the boundaries and distance of hand position before the device can pick up your hand and fingers. The size of different peoples' hands can cause issues, so we had to find a balance with maximums and minimums. The rotation of the hand can also confuse the device, as fingers start disappearing. Also in testing, we found that device itself is pretty unstable in different lighting environments.

"Knowing these limitations with the Leap, we do have to work around and adapt to these issues in our designs, but we will be creative and create any application in the best way to show off the best bits about the device."

DA: Tell us about Intel's Gesture Camera

Examples of Specialmoves' Intel Gesture Camera game, where you could draw in the air then pose with your creation

GC: "[The SDK for] Intel’s gesture camera is still very young. We found that quite a few of the gestures we wanted haven’t been ported to the C# library – which would have been useful as we did a lot of the work in Unity – so we had to hunt through the original C libraries to find the same results in C#.

"A few of the facial gestures didn’t work too well – smiles were often picked up while talking and winks while blinking, all in rapid succession. We found that the hand gestures were much easier to work with for example, so we ended up using the peace sign and then a thumbs up and thumbs down to confirm or cancel an action.

"We had to handle rapid reporting of the gestures to make sure they weren’t handled multiple times by having the user hold the position for a half a second or so to confirm that it was a gesture and not someone walking past in the background."

DA: Do you find these limitations frustratingly restrictive or do they inspire what you create?

SC: "Limitations and restrictions are not frustrating – even with limitations the R&D must still go on. The point in R&D is to find the good and bad things about the technology. Limitations don’t hold us back, but rather give us the opportunity to explore and become innovative with the technology we have. The technology itself is the inspiration.

"We are not the only ones that are doing R&D and the creators of the Kinect, Leap and Intel Camera are still working on these devices to fix these limitations, but they won’t know about these limitations without developers like us being creative and exploring these devices fully. We know we won't be able to create the whole 'Minority Report experience' right away but we can create small interactive systems that can really show off the technology and also our creativity."

GC: ""For example with the Intel camera, the restrictions did affect the result because ultimately we used hand gestures rather than the face detection we originally wanted to work with, but as we went in to the project with an R&D mindset this wasn’t frustrating."

One project allowed people to stand in front of a greenscreen and interact in real time with a virtual character controlled live by a Specialmoves team member in front of a Kinect.

One rather creepy project used the Intel Gesture Camera's facial tracking system to create a portrait whose eyes followed you around the room.

DA: Why do you think there's such interest currently in 'hacking' projects that bring together interactive design and real-world experiential design?

GC: "Hacking has become popular because it really helps creativity. An idea on a Friday night can lead to a finished product by Sunday afternoon. The rapid iterations mean everyone has to work closely together and you often get a result that all are happy with. There are no big surprises at the end of the project because everyone has been so involved all the way through.

"Working with real-world technology isn’t as mature. You’re often working with really new bits of kit that have quite a few restrictions and hidden gotchas. So it pays to ‘hack’ it as things will change so quickly. The API you wanted to use doesn’t actually work? Move quickly and find another – or create your own."

HL: "In the commercial arena, digital out-of-doors (DooH) – user interaction within digital/physical environment rather than simple digital billboards – is a nascent area. Because of this, there will be necessary experimentation with technologies around these kinds of projects. Because we are active in both hacking and DooH fields, we have found that the agile processes used in both hackathons and in the development of new technologies in our R&D practice suit the needs of this new breed of project.

"This approach requires a close relationship with our agency partners and a brand or marketeer who is open to working in this way. It is not as straightforward as building a simple website or application; many iterations of design and build will take place within the project lifecycle and the whole team must be flexible enough to accept work-arounds as the ultimate user experience can be influenced by what technology is capable of delivering.

"The expectation that things will go wrong when hacking really lends itself well to real-world experiential design, whilst developing in advance in an R&D scenario helps uncover and resolve technical issues quickly, long before a project begins."

Updated 28/5/13: Videos provided by Specialmoves showing the projects in action have been added to this story.
