
December 04, 2011

Near-term computer interfaces (Kinect 2, Senseg, Siri) are moving toward ultimate interfaces

Eurogamer reports that Microsoft's next-generation Kinect 2 will be so powerful it will enable games to lip-read, detect when players are angry, and determine which direction they are facing.

When Kinect launched in November 2010, the depth sensor was capped at 30 frames per second and a resolution of 320x240. The limit stems from the USB controller interface, which is capable of around 35 MB/s but uses only around 15-16 MB/s. This artificial cap is in place because multiple USB devices can be used at once on an Xbox 360.
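Those numbers make the cap easy to sanity-check. A back-of-envelope sketch (the bytes-per-pixel figures below are assumptions for illustration, not Microsoft's specification) shows how raw stream bandwidth scales with resolution:

```python
def stream_mb_per_s(width, height, bytes_per_pixel, fps):
    """Raw (uncompressed) bandwidth of a video stream in MB/s (10^6 bytes/s)."""
    return width * height * bytes_per_pixel * fps / 1e6

# Assumed pixel sizes: 16-bit depth samples, 24-bit colour samples.
depth_320 = stream_mb_per_s(320, 240, 2, 30)   # the capped depth stream
depth_640 = stream_mb_per_s(640, 480, 2, 30)   # an uncapped depth stream
color_640 = stream_mb_per_s(640, 480, 3, 30)   # a 640x480 colour stream

print(f"320x240 depth: {depth_320:.1f} MB/s")               # ~4.6 MB/s
print(f"640x480 depth: {depth_640:.1f} MB/s")               # ~18.4 MB/s
print(f"640x480 depth + colour: {depth_640 + color_640:.1f} MB/s")
```

Under these assumptions, even an uncapped 640x480 depth stream alone would exceed the roughly 15-16 MB/s the console actually allots to the sensor, which is consistent with the 320x240 cap.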

Lip reading in the movie 2001: A Space Odyssey. In 2012 or 2013, the lip reading will come from a new Xbox rather than a HAL 9000.

Kinect 2 is also expected to track the pitch and volume of players' voices, and their facial characteristics, to better measure their emotions. The current Kinect unit has been updated several times to improve its camera tracking. Recently Microsoft launched its Avatar Kinect technology, which allows the sensor to track mouth and eyebrow movements. There have been a number of rumours suggesting the company is also working to build finger tracking into the next generation of Kinect.

Digital Foundry reports that Microsoft plans to launch two very different versions of the next Xbox in late 2012 or in 2013:

1. A "pared down" future Xbox that will be like a set-top box and act as a Kinect-themed gaming portal.

2. A "more fully-featured machine" with an optical drive, hard disk and backwards compatibility, aimed at hardcore gamers and released at a higher price point.




Haptic interfaces for any screen

Senseg's patented solution creates a sophisticated sensation of touch using the Coulomb force, the principle of attraction between electrical charges. By passing an ultra-low electrical current into the insulated electrode (Senseg's Tixel™), the proprietary charge driver can create a small attractive force on the skin of the finger. By modulating this attractive force, a variety of sensations can be generated, from textured surfaces and edges to vibrations and more.
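As a rough illustration of the physics, the finger and the insulated electrode behave approximately like a parallel-plate capacitor, so the attractive force follows F = eps0 * eps_r * A * V^2 / (2 * d^2). The numbers below (contact area, insulator thickness, permittivity, drive voltage) are invented for the sketch and are not Senseg's actual parameters:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def electrostatic_force(voltage, area, thickness, eps_r):
    """Parallel-plate approximation of the attractive force (in newtons)
    between a finger and an insulated electrode."""
    return EPS0 * eps_r * area * voltage**2 / (2 * thickness**2)

# Illustrative numbers only: 1 cm^2 contact patch, 10 um insulator,
# relative permittivity 3, 50 V drive signal.
f = electrostatic_force(voltage=50, area=1e-4, thickness=10e-6, eps_r=3)
print(f"{f * 1000:.1f} mN")  # a few tens of millinewtons
```

Because the force scales with the square of the voltage, modulating the drive signal modulates the apparent friction under a sliding finger, which is how textures and edges can be rendered on a flat screen.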

Natural Language Understanding

When you ask or instruct Siri to do something, it first sends a little audio file of what you said over the air to some Apple servers, which use a voice recognition system from a company called Nuance to turn the speech – in a number of languages and dialects – into text. A huge set of Siri servers then processes that to try to work out what your words actually mean. That's the crucial NLU part, which nobody else yet does on a phone.

Then an instruction goes back to the phone, telling it to play a song, or do a search (using the data search engine Wolfram Alpha, rather than Google), or compose an email, or a text, or set a reminder (possibly linked to geography – the instruction, "Remind me to call mum when I get home," will work), or – boring! – call a number.
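The round trip described above can be sketched as a three-stage pipeline. Every function name here is hypothetical and illustrative; none of them come from Apple's actual implementation, and the keyword-matching "NLU" is a toy stand-in for the real meaning-extraction step:

```python
# Hypothetical sketch of the Siri round trip: audio -> text -> intent -> action.

def speech_to_text(audio_bytes):
    # Stand-in for the server-side recogniser (Nuance, per the article);
    # a real system returns a transcript with confidence scores.
    return "remind me to call mum when I get home"

def parse_intent(text):
    # Toy keyword-based NLU; the real step must resolve context
    # (who "mum" is, where "home" is) as well as the words themselves.
    if text.startswith("remind me to "):
        task = text[len("remind me to "):]
        trigger = None
        marker = " when i get home"
        if marker in task.lower():
            task = task[: task.lower().index(marker)]
            trigger = "arrive:home"  # a geofenced reminder
        return {"action": "set_reminder", "task": task, "trigger": trigger}
    return {"action": "web_search", "query": text}

def handle_utterance(audio_bytes):
    text = speech_to_text(audio_bytes)   # recognition on remote servers
    intent = parse_intent(text)          # the crucial NLU step: words -> meaning
    return intent                        # instruction sent back to the phone

print(handle_utterance(b"..."))
```

The design point the article is making is that only the first stage is "mere" transcription; the second stage, turning the transcript into an actionable, context-aware intent, is the part nobody else was doing on a phone.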

NLU has been one of the big unsolved computing problems (along with image recognition and "intelligent" machines) for years now, but we're finally reaching a point where machines are powerful enough to understand what we're telling them. The challenge with NLU is that, first, speech-to-text transcription can be tricky (did he just say "This computer can wreck a nice beach" or "This computer can recognise speech"?); and second, acting on what has been said demands understanding of both the context and the wider meaning.
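The "wreck a nice beach" example shows why recognisers lean on a statistical language model: both candidate transcriptions fit the same sounds, so the system picks the more probable word sequence. A toy version of that scoring, with bigram probabilities invented purely for the example, looks like this:

```python
from math import log

# Invented bigram probabilities; a real recogniser estimates these
# from very large text corpora.
bigram_prob = {
    ("can", "recognise"): 0.20, ("recognise", "speech"): 0.30,
    ("can", "wreck"): 0.001, ("wreck", "a"): 0.01,
    ("a", "nice"): 0.05, ("nice", "beach"): 0.002,
}
UNSEEN = 0.01  # floor probability for word pairs not in the table

def score(sentence):
    """Log-probability of a sentence under the toy bigram model."""
    words = sentence.split()
    return sum(log(bigram_prob.get(pair, UNSEEN))
               for pair in zip(words, words[1:]))

candidates = ["this computer can recognise speech",
              "this computer can wreck a nice beach"]
best = max(candidates, key=score)
print(best)
```

With these made-up numbers the model prefers "recognise speech", because common word pairs outscore the rare ones; real acoustic ambiguity is resolved the same way, just with far richer models.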

Combining Kinect and 3D realtime holograms

Using a single Xbox Kinect and standard graphics chips, MIT researchers have demonstrated the highest frame rate yet for streaming holographic video. Naturally, for the hologram demo one of the group's members donned full Princess Leia garb to re-enact the fabled "Help me, Obi-Wan Kenobi. You're my only hope" scene from Star Wars. The group is looking to develop lower-cost alternative versions of the diffraction screen, and is seeking to design a laptop-scale screen that would retail at around $200.

In November 2010, researchers at the University of Arizona made headlines with an experimental holographic-video transmission system that used 16 cameras to capture data and whose display refreshed only once every two seconds. The new MIT system uses only one data-capture device (the Kinect camera designed for Microsoft's Xbox gaming system) and averages about 15 frames per second. Moreover, the MIT researchers didn't get their hands on a Kinect until the end of December, and only in the week before the conference did they double the system's frame rate from seven to 15 frames per second. They're confident that with a little more time they can boost the rate even higher, to the 24 frames per second of feature films or the 30 frames per second of TV, rates that create the illusion of continuous motion.

DARPA Holographic Display Table

In early 2011, the Defense Advanced Research Projects Agency (DARPA) completed a five-year project called the "Urban Photonic Sandtable Display" (UPSD), which creates realtime, color, 360-degree 3D holographic displays. Without any special goggles, an entire team of planners can view a large-format (up to 6-foot diagonal) interactive 3D display.






