Microsoft's boffins aim for the personal touch

With Project Lightspace, Microsoft researchers use ordinary furniture and sensor equipment to turn every surface in a room into a computer interface, writes MARIE BORAN

WHEN SCIENCE fiction author William Gibson famously said: “The future is already here – it’s just not evenly distributed,” he knew that the best way to predict what kind of technologies the majority of us will be using in the next decade was to look at the experimental research going on in labs right now.

A rare glimpse inside Microsoft Research labs at its Redmond, Washington, campus points to a future where every surface becomes a computer, and the human body becomes the controller.

Advanced display and sensor technologies are beginning to break down the barriers between the digital and physical world, and Star Trek's Holodeck doesn't seem like such a far-fetched idea after all.

Don’t expect this in the shops any time soon, though; it’s part of the non-commercial research carried out by the 850 PhD researchers Microsoft employs globally.

“We’re the ‘R’ in Microsoft’s R&D,” says Kevin Schofield, general manager of Microsoft Research.

“The vast majority of money is spent on development of new products. One per cent of the entire Microsoft staff works within research, but it’s very important to have this small number doing blue-sky stuff.”

The company’s research arm resembles academia more than big industry, both in how it operates and in the work it produces. Schofield says: “It’s like a really large computer science department. You don’t see published papers on our mission statement, but third-party peer review is very important to us.”

Despite being labelled blue-sky, many of the experimental technologies developed in these labs have gone on to become commercial products.

Xbox Kinect and the virtual keyboard for Windows Phone devices are the most recent examples.

The main areas of investment for Microsoft Research are natural language, machine learning, graphics, information retrieval and user experience.

You could say Project Lightspace is a combination of all of these. It is the brainchild of Andy Wilson, senior researcher and co-inventor of the first-generation Microsoft Surface touchscreen device.

We’re ushered into a small, dark room where Wilson brings us to a large but unimpressive table. He points upwards to an arrangement of three projectors and depth sensors that are essentially prototypes of the slimline Kinect, and explains that this is Lightspace.

This unlikely mix of ordinary furniture and sensor equipment is a 3D interactive computing space that may well find its way into homes in the years to come.

“It turns every surface in the room into a computer interface,” explains Wilson as he knocks on a plywood table to show that it is nothing more than reconstituted timber.

“With current computing, all of your motion is being reduced to a single point: the cursor. Lightspace is an interactive space that uses your body and objects around you as an interface.”

Projected onto the table are several images. It looks like an illuminated tabletop with photographs scattered across the surface. Wilson places his hand on an image and moves it across the table. What happens next is the interesting part. He slides the image off the table, at which point it turns into a sphere of light that he can hold in his hand.

Wilson asks for a volunteer, and passes the ball of light to him simply by touching his hand. This is because the depth camera has already begun to render a 3D mesh of the second person and “sees” him as a distinct entity.
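
How does a depth camera tell two people apart? In broad terms, it separates the scene into connected blobs of foreground pixels and labels each one. The Python sketch below illustrates that general idea under invented assumptions (depth frames and a background reference supplied as plain 2D lists); the actual Lightspace pipeline has not been published in this form.

from collections import deque

def segment_users(depth, background, tolerance=50):
    """Label map: 0 = background, 1..n = distinct foreground blobs."""
    rows, cols = len(depth), len(depth[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            # Foreground = noticeably closer to the camera than the empty room.
            if labels[r][c] or depth[r][c] >= background[r][c] - tolerance:
                continue
            next_label += 1
            labels[r][c] = next_label
            queue = deque([(r, c)])
            while queue:  # flood-fill one connected blob (one person or object)
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < rows and 0 <= nx < cols and not labels[ny][nx]
                            and depth[ny][nx] < background[ny][nx] - tolerance):
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
    return labels

Each labelled blob can then be meshed and tracked over time, which is what lets the system treat the volunteer as a separate entity and hand the sphere of light from one body to another.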

With the moves of a pro, Wilson takes back the sphere and rolls it up and down his arm before placing it on the table, whereupon it turns back into the digital photo.

Lightspace can also sense when you are touching two displays, and uses the body as a conduit to move data between the two areas. “It’s a very physical kind of operation. We work with the idea of ‘on-body projections’ or using your body as a display,” explains Wilson, as he touches a virtual menu hovering above the ground. The menu, or 3D widget as he calls it, jumps to his hand and rotates with him to stay oriented correctly.
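
In software terms, the body-as-conduit trick might reduce to a simple test, sketched below with invented names and building on the blob labels above: if one body blob overlaps two display regions at the same instant, treat that as a request to move data between them.

def find_conduits(labels, surface_a, surface_b):
    """Given the label map and two sets of (row, col) pixels covered by two
    displays, return the labels of bodies touching both at once."""
    touching_a, touching_b = set(), set()
    for r, row in enumerate(labels):
        for c, label in enumerate(row):
            if label and (r, c) in surface_a:
                touching_a.add(label)
            if label and (r, c) in surface_b:
                touching_b.add(label)
    return touching_a & touching_b  # each result is a body bridging the displays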

Steven Bathiche is developing a Minority Report-style interface called the Magic Window. It looks like a glass wall that can be used as a multi-touch screen, but also as a communications portal between two locations.

Bathiche is director of research at Microsoft’s Applied Sciences Group of 20 interdisciplinary researchers who investigate optics, computer vision, software and electronics.

His laboratory holds all manner of futuristic interactive displays, including one that tracks you if you get too close, and another that shows completely different 3D images to two separate people depending on where they are sitting.

This could be useful if you want to watch Come Dine With Me, but your partner is intent on catching the latest Mythbusters.

Demonstrating the Magic Window, he shows a scenario of two schoolchildren looking at what appears to be a glass wall between them, but which is in fact a kind of whiteboard connecting classrooms that are thousands of miles apart.

One of the more complicated aspects of this technology is that it must render correct perspective. As a child walks around the classroom, the view through the glass must shift as though she were peering through a real window frame. Now imagine several children all looking through the glass and moving about.
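
The graphics half of this problem is well understood: head-coupled, or off-axis, perspective projection. Below is a minimal Python sketch with invented parameter names (the Magic Window’s internals are not public): given a tracked eye position and the physical edges of the window, it computes the asymmetric viewing frustum to render from.

def off_axis_frustum(eye, window, near):
    """eye: (x, y, z) in the window's coordinate frame, z = distance from the
    glass; window: (left, right, bottom, top) physical edges in metres.
    Returns the asymmetric frustum edges at the near clipping plane."""
    ex, ey, ez = eye
    left, right, bottom, top = window
    scale = near / ez  # similar triangles: near plane vs. window plane
    return ((left - ex) * scale, (right - ex) * scale,
            (bottom - ey) * scale, (top - ey) * scale)

# Eye 0.3 m right of centre, 1.5 m from a 1 m x 0.8 m window:
print(off_axis_frustum((0.3, 0.0, 1.5), (-0.5, 0.5, -0.4, 0.4), 0.1))

The returned values are the kind fed to a graphics API’s asymmetric-frustum call, recomputed every frame as the head tracker updates.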

Luckily, Bathiche and his group have been developing optics technology to tackle exactly this.

“You need to create a 3D display, and we’re doing this with our unique technology called wedge optics. It allows us to create a very thin 3D display.”

Bathiche then invites the room to test out what he says is the world’s first and probably only steerable auto-stereoscopic display. This is the one that he explains could be used in our living rooms – along with projected, localised audio – to watch two different TV shows at the same time.
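
In rough terms, such a display time-multiplexes frames, steering each viewer’s programme (and, for 3D, each eye) towards their tracked position. The Python sketch below illustrates only the scheduling idea; the geometry, names and numbers are invented for illustration, not Microsoft’s design.

import math

def steering_angle(viewer_x, viewer_z):
    """Horizontal angle, in degrees, from the display centre to a viewer."""
    return math.degrees(math.atan2(viewer_x, viewer_z))

def schedule_frames(viewers):
    """viewers: list of (x, z, programme) tuples in metres.
    Returns (programme, angle) pairs for the display to cycle through."""
    return [(programme, steering_angle(x, z)) for x, z, programme in viewers]

# Two people on one sofa, watching different shows on one screen:
print(schedule_frames([(-0.5, 2.0, "Come Dine With Me"),
                       (0.6, 2.0, "Mythbusters")]))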

“The trend is for more interaction. More information is passing from the real world into the computing world,” he says.

“Your mobile devices today understand more context than any computer in the past. They know your location, who you’ve called, what temperature it is.”

One of the big trends is the computer understanding the context of the user so that it can anticipate and compute on the user’s behalf, explains Bathiche.
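
As a toy illustration of that idea, with entirely invented fields and rules: a device might fold a handful of signals into a context record and act on the user’s behalf.

from dataclasses import dataclass

@dataclass
class Context:
    location: str       # e.g. "car", "office"
    last_called: str    # most recent contact
    temperature_c: float

def suggest(ctx: Context) -> str:
    """Anticipate a likely next action from the current context."""
    if ctx.location == "car" and ctx.last_called:
        return f"Redial {ctx.last_called} hands-free?"
    if ctx.temperature_c < 5:
        return "Cold outside - allow extra travel time?"
    return "No suggestion."

print(suggest(Context(location="car", last_called="home", temperature_c=3.0)))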

As far as mobile computing goes, Windows Phone may be a laggard in the market but, frankly, competitors should be scared stiff by the natural user interfaces being developed at Redmond.

This is the future – and Xbox Kinect is only the tip of the iceberg.