The ubiquitous keyboard and mouse that have dominated computing for the last 30 years are getting some company and competition as gesture interfaces become a reality outside the test lab.
Microsoft’s Project Natal for Xbox 360 promises an immersive user experience in which the interface all but disappears. With Natal, the user is the interface. Looking to take the user experience far beyond Nintendo’s Wii, Natal uses a 3-D depth camera and microphone for motion, gesture, and audio input. Microsoft claims Natal will let people steer an on-screen race car by moving their arms in steering motions and move an on-screen soccer ball with actual kicks. In one demo, Natal recognizes a person’s face and automatically logs them into their Xbox profile. Think Wii without the controller. Wikipedia has a brief article on Natal’s background and technology.
And if you think this is just going to be a high-tech gamer toy, look at the opportunities for communication and commerce in this post on Engadget. Imagine manipulating your TV’s menu system with the same gestures you’d use on an iPhone. No convoluted controller or touch screen required. It’s like Minority Report in your media room.
Motion-detecting interfaces aren’t limited to efforts as ambitious as Natal. Here’s a look at Pek Pongpaet using the accelerometer in the WiiMote to control an on-screen X-Wing fighter. Many areas of education, from aeronautics to architecture, could be revolutionized with touchable and movable experiences. In a recent demo at DePaul University in Chicago, Pek connected a Wii Balance Board to a website through WiiFlash Server and steered an on-screen car simply by leaning in the direction he wanted to go.
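The core idea behind demos like these is simple: read a tilt or weight-shift value from the sensor and map it to a steering angle. Here’s a minimal sketch in Python of that mapping. The function name, value ranges, and dead-zone threshold are illustrative assumptions, not part of the actual WiiFlash or WiiMote API.

```python
# Hypothetical sketch: map a lateral tilt reading (e.g., from a WiiMote
# accelerometer or a Balance Board weight shift) to an on-screen steering
# angle. Ranges and names are assumptions for illustration only.

def tilt_to_steering(tilt_g, max_angle_deg=45.0, dead_zone=0.05):
    """Convert a lateral tilt in g (-1.0 to 1.0) to a steering angle
    in degrees, with a small dead zone to ignore sensor jitter."""
    # Clamp the reading to the expected sensor range.
    tilt_g = max(-1.0, min(1.0, tilt_g))
    # Ignore tiny tilts so the car drives straight when nearly level.
    if abs(tilt_g) < dead_zone:
        return 0.0
    return tilt_g * max_angle_deg

# Leaning halfway to the right steers about 22.5 degrees right.
print(tilt_to_steering(0.5))   # 22.5
```

In a real setup, this function would run on every sensor update, with the dead zone tuned so that standing still on the Balance Board doesn’t wander the car.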
It’s clear that new forms of human-computer interaction are coming, thanks to multi-touch UIs and gestural interfaces. Gamers’ aching thumbs everywhere will rejoice.