If Jörg Müller has his way, managing your TweetDeck and cluttered inbox will simply involve perking up your ears as emails, texts and tweets swirl around your head in a swarm of sound.
Müller, a professor of human-computer interaction at the Technical University of Berlin, has designed the “BoomRoom,” an audio-enabled space equipped with 56 loudspeakers that direct sound to stationary and mobile positions. An array of 16 gesture-recognizing cameras allows users to steer and control this audio, essentially creating an isolated cocoon of sound that only the listener can hear.
Müller envisions emails, texts and tweets — each with its own unique audio stamp identifying the sender — fluttering around a user’s head. More urgent messages might buzz your scalp. Gesture recognition would allow users to “touch” an email to open it and have a computer read it out loud.
The BoomRoom uses wave field synthesis (WFS), a technique developed at the Delft University of Technology that builds 3-D sound fields by driving an array of speakers so that their individual wavefronts reinforce or cancel one another through constructive and destructive interference. This allows a sound to be placed at a pinpoint location in the room.
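To make the interference idea concrete, here is a minimal sketch of the delay-and-sum approximation that underlies WFS: each speaker replays the same signal delayed by its distance from a virtual source, so the emitted wavelets line up along the desired spherical wavefront. The speaker layout, function name and constants below are illustrative assumptions, not details of the BoomRoom system itself.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, assumed room temperature

def wfs_delays_and_gains(speaker_xy, source_xy):
    """Per-speaker delays (seconds) and amplitude weights that place a
    virtual point source at source_xy, in the simple delay-and-sum
    approximation of wave field synthesis."""
    delays, gains = [], []
    for sx, sy in speaker_xy:
        # Distance from this speaker to the virtual source
        d = math.hypot(sx - source_xy[0], sy - source_xy[1])
        delays.append(d / SPEED_OF_SOUND)
        gains.append(1.0 / max(d, 1e-3))  # 1/r spherical spreading
    # Normalize: nearest speaker fires first with unit gain
    t0 = min(delays)
    delays = [t - t0 for t in delays]
    peak = max(gains)
    gains = [g / peak for g in gains]
    return delays, gains

# Eight speakers spaced 0.5 m apart along one wall; a virtual source
# sits 1 m behind the fourth speaker (negative y = behind the wall).
speakers = [(i * 0.5, 0.0) for i in range(8)]
delays, gains = wfs_delays_and_gains(speakers, (1.5, -1.0))
```

Feeding each speaker the signal shifted by its computed delay and scaled by its gain makes the array radiate a wavefront that, to a listener, appears to originate from the virtual point — the basic trick a real WFS renderer performs with far more careful filtering.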
On a more practical note, a BoomRoom could streamline a living space and reduce our reliance on so many gadgets. Cueing a music track could be as simple as touching a chair. Controlling the volume, bass or treble might involve moving your hands together or apart. Answering the phone might be as easy as touching your ear — or picking up a banana, for that matter.
Furniture and objects could announce themselves to the visually impaired and messages could float in midair. The possibilities are endless, but the idea is that loudspeaker panels would be integrated into the walls, linked to inconspicuous cameras and connected to various devices to create the free-flowing, hands-free audio-enabled smart home of tomorrow.