25 July 2012
Hyped in the blogosphere, gesture control has been around for some time, with companies such as German startup gestigon now pulling back the curtain after years of hard development and experimentation. VentureVillage spoke to CSO Moritz von Grotthuss about gestigon's latest developments and his forecasts for a gesture-controlled world.
It’s the stuff that sci-fi geeks used to dream about. Now it’s creeping into reality. Tom Cruise’s gesture-controlled computer from Minority Report is technology that industry experts say will usher in the next era for tech after the touchscreen. To give you a glimpse of its potential, gesture-control software company Leap Motion recently launched a Minority Report-style computing system in which powerful sensors pick up mid-air gestures and translate them into precise computer commands. It's three-dimensional computing without actually touching anything. The possibilities for this upcoming technology seem endless...
Moritz, why is Gesture Control the next “big thing”?
Years ago, when the mouse was introduced, it was a big thing. People questioned why it would be beneficial when you already had a keyboard. Then, a few years later, came touchscreens. I had an Ericsson mobile with a touchscreen in 2001, and the technology only really entered the market ten years later, with the iPad and iPhone.
But every couple of years we take a big step in how we communicate with our environment: computers, mobiles and so on. Currently you could say voice control is really entering the market. Now, I would say, is the time for controlling your environment with gestures, without needing to touch anything.
Five years ago, none of us were eager to use an iPad. No one needed a touchscreen and we got on without it. I had one of the first touchscreens on the market back in 2002. It was fun, but there was no use for it: mobile internet didn’t exist yet, and the device was far too slow to do anything with. It was just cool to have.
Now, it’s so natural for our kids that they can’t imagine not using a touchscreen device. Gesture control will come soon too, and one big challenge will be finding gestures that are relevant and easy to use.
There’s a huge market for gesture control. But it won’t eliminate other forms of device control. It won’t eliminate the touchpad or the mouse; it will simply add an extra layer, something new.
Tell us about gestigon and where it's up to...
We've developed gesture recognition software for finger, hand and body gestures, making use of 3D image data that can be analysed in real time. Any gesture that triggers an action can be configured individually, depending on the application, and the software can be easily adapted by gaming programmers, for example. We’re currently VC-funded, looking for a Series A investment, and still in the early stages of technical development.
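To make the idea of individually configurable triggers concrete, here is a minimal sketch of what a gesture-to-action mapping could look like from an application programmer's point of view. All names here are hypothetical illustrations, not gestigon's actual API.

```python
# Hypothetical sketch of a configurable gesture-to-action mapping.
# None of these names come from gestigon's actual API.

from typing import Callable, Dict


class GestureDispatcher:
    """Maps named gestures to application-defined callbacks."""

    def __init__(self) -> None:
        self._actions: Dict[str, Callable[[], None]] = {}

    def bind(self, gesture: str, action: Callable[[], None]) -> None:
        # Each application configures its own triggers individually.
        self._actions[gesture] = action

    def on_gesture(self, gesture: str) -> None:
        # Called by the recognition engine whenever a gesture is detected.
        action = self._actions.get(gesture)
        if action is not None:
            action()


# Example: a game binds a "swipe_left" gesture to its own handler.
events = []
dispatcher = GestureDispatcher()
dispatcher.bind("swipe_left", lambda: events.append("previous_weapon"))
dispatcher.on_gesture("swipe_left")
dispatcher.on_gesture("unknown")  # unbound gestures are simply ignored
print(events)
```

The point of this design is that the recognition engine stays generic while each application decides what its gestures mean, which matches the "configured individually, depending on the application" idea above.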
At the moment, we’re working on optimising our software solution for a globally leading chip OEM and designing a use case for the automotive industry as the starting point of a gesture-recognition concept.
Who are your biggest competitors and what sets you apart from the pack?
Omek from Israel, Belgium's Softkinetic and Microsoft are our biggest competitors. Possibly also Leap Motion, but they haven’t yet shown their solution (sensor plus software) to the public.
Our solution is well suited for devices such as mobile phones, tablets, remote controls and navigation systems, and for non-stationary or decentralised uses like laptops and MedTech. Unlike most competitors on the market, we don't work with a comparative database, trained by analysing more than 1,000 videos of every gesture and forming a median.
That approach makes the technology slow and demands computing power only available on a well-equipped PC. gestigon’s solution is based on true skeleton recognition of the body, hand and fingers: we detect the relevant body part in the 3D data, project a 3D network of skeleton-like nodes onto it, and then any gesture can be defined as a trigger. It's a much more advanced, accurate and power-saving option.
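The three steps described above, segmenting the relevant body part from 3D data, fitting skeleton-like nodes, and defining gestures as triggers on node geometry, could be sketched as follows. This is an illustrative toy under assumed thresholds and data shapes, not gestigon's actual implementation.

```python
# Illustrative sketch of a skeleton-based recognition pipeline on 3D
# point-cloud input; this is NOT gestigon's actual implementation.

from typing import List, Tuple

Point3D = Tuple[float, float, float]  # (x, y, z) in metres, z = depth


def detect_hand(points: List[Point3D], max_depth: float = 0.6) -> List[Point3D]:
    """Step 1: segment the relevant body part from the 3D data.
    Here we crudely keep only points near the sensor."""
    return [p for p in points if p[2] < max_depth]


def fit_skeleton(hand: List[Point3D]) -> List[Point3D]:
    """Step 2: project a network of skeleton-like nodes onto the part.
    A real fit would be model-based; here we take the closest point,
    the centroid and the farthest point as three stand-in nodes."""
    if not hand:
        return []
    cx = sum(p[0] for p in hand) / len(hand)
    cy = sum(p[1] for p in hand) / len(hand)
    cz = sum(p[2] for p in hand) / len(hand)
    closest = min(hand, key=lambda p: p[2])
    farthest = max(hand, key=lambda p: p[2])
    return [closest, (cx, cy, cz), farthest]


def classify(skeleton: List[Point3D]) -> str:
    """Step 3: any gesture is defined as a trigger on node geometry.
    Toy rule: a fingertip well in front of the centroid = 'point'."""
    if len(skeleton) < 3:
        return "none"
    tip, centroid, _ = skeleton
    return "point" if centroid[2] - tip[2] > 0.1 else "open_hand"


# A toy frame: a hand at ~0.3 m with one point extended toward the
# sensor, plus one background point that segmentation discards.
frame = [(0.0, 0.0, 0.15), (0.1, 0.0, 0.30), (0.0, 0.1, 0.32),
         (0.1, 0.1, 0.31), (2.0, 1.0, 2.5)]
gesture = classify(fit_skeleton(detect_hand(frame)))
print(gesture)
```

Because each stage works on a handful of fitted nodes rather than comparing whole frames against a trained database, a pipeline shaped like this can plausibly stay cheap enough for embedded hardware, which is the advantage claimed above.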
gestigon's solution runs on all standard platforms (Windows 7, Linux, OS X and PandaBoard) and integrates flexibly into our customers' existing or preferred hardware and software environments.
What might 2050 look like in a gesture-controlled world?
I believe in a totally connected environment where technology makes our lives easier and more convenient. It’s a struggle to look to the future, but I don’t believe in these highly commercialised visions where, in 2050, apartments are made of shiny black leather and steel and are totally cleaned up. I think we’ll be living in as much of a mess as we do today.
But there will be much more interaction between people and devices. Many more devices will understand what we want them to do. With the heating on, perhaps the system will recognise that I’m sitting here in a T-shirt with the window open, measure that the outside temperature has warmed up, and at some point close my windows automatically or turn the heating down to a cooler temperature.
There may be more recognition within our environment beyond gesture recognition. Already today, there are brain-computer interfaces where people wear a device and type on a keyboard just by eye movements. These things are happening, and I’m not sure whether in the future brains will communicate through some non-touch device, but I could imagine that the headrest of a car seat might have two electrodes that read your mind while you’re sitting there driving. Perhaps it could tell you if you've had too much to drink, or, if you're in a bad mood, the car stereo could start playing soothing music.