In the 2011 movie “Source Code,” US Army Captain Colter Stevens has to stop a dangerous terrorist from detonating a bomb on a train. But because he is paralyzed in real life, Stevens is sent on the mission through an avatar he guides with his mind. Sounds far-fetched? Too sci-fi? Think again.
One Israeli professor is taking technology along those lines further than you could ever imagine with his latest project: controlling your very own clone-avatar.
Dr. Doron Friedman, head of the Advanced Virtuality Lab (AVL) at the Interdisciplinary Center (IDC) in Herzliya, Israel, has been studying and experimenting with the next generation of human-computer interfaces and their impact on individuals and society for the last three years, along with an international team of experts. The AVL’s main activity is to build the virtual worlds and interfaces that will be used in the future, and to investigate human behavior and the human mind in virtual reality settings.
One of its projects, called VERE (Virtual Embodiment and Robotic Re-embodiment), funded by the European Union, is trying to find a way to control a virtual or physical body using only the mind. For the project, AVL members Ori Cohen, Dr. Dan Drai, and Dr. Doron Friedman teamed up with Prof. Rafael Malach from the Weizmann Institute of Science in Rehovot, Israel. The team is one of the first worldwide to use a brain scanner (fMRI) to control a computer application interactively in real time – an innovation that could have a dramatic impact on communication with severely disabled patients, Friedman says. “You could control an avatar just by thinking about it and activating the correct areas in the brain.”
“Recently, with advances in processing power, signal analysis, and neuro-scientific understanding of the brain, there is growing interest in Brain Computer Interface, and a few success stories. Current BCI research is focusing on developing a new communication alternative for patients with severe neuromuscular disorders, such as amyotrophic lateral sclerosis, brain stem stroke, and spinal cord injury,” Friedman has written.
Another focus of the AVL is telepresence. “Although we have phones, emails and video conferences – we still prefer a real meeting. The question is why? What is missing in mediated communication, and how can we develop technologies that will feel like a real meeting?” Friedman asks.
The AVL’s answer to those questions is BEAMING (Being in Augmented Multi-modal Naturally-networked Gatherings), a project funded by the European Union with researchers from different countries. It aims to produce the feeling of a live interaction using mediated technologies such as surround video conferencing, virtual and augmented reality, virtual sense of touch (haptics), spatialized audio, and robotics.
Friedman tells NoCamels that the work package headed by AVL in the BEAMING project is called “Better than Being There.” How can something be better than actually being somewhere? “Because with BEAMING you can be in several places at once,” Friedman says. “With BEAMING proxies, the virtual character will not only look like you, but behave in the same way that you would, limited only by today’s artificial intelligence capabilities.”
BEAMING proxies were implemented by AVL members Peleg Tuchman and Oren Salomon in Second Life, the online virtual world. A study on religion online was conducted using those proxies as bots (virtual robots) to help Prof. Gregory Price of the Dept. of Religious Studies at the University of North Carolina. The bot wandered around the Second Life environment and collected data to evaluate the social and ethical implications of interactions between the bot and the other “real” players.
The future challenge is to enable physical attendance of proxies outside virtual spaces, meaning your avatar could attend a real-life meeting. Already, though, the most significant difference between BEAMING and computer games that involve avatars is full-body projection: with BEAMING, your avatar looks like you and interacts with the virtual surroundings and people the way you would.
To show how BEAMING could work in real surroundings, the BEAMING team created a virtual theater rehearsal between two actors and a director, based at University College London. One actor was located in a studio equipped with a virtual reality display system that used projections on three walls and the floor to imitate a room. The other, an actress, wore a full-body motion capture suit that transmitted her movements and voice to him. The director then watched this virtual rehearsal on a screen, with avatars enacting the actors’ movements.
BEAMING was also used by AVL researcher Dr. Beatrice Hasler, who was asked to give a lecture in Second Life. Because Hasler couldn’t be there at the time of the lecture to control her avatar, the lecture was pre-recorded and her avatar was programmed to answer questions based on keywords. Hasler also automated the way her avatar would interact with other users, as well as its body language. When the avatar was unsure how to answer a question, it could call Hasler and ask her for the answer.
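The keyword-matching behavior described above can be illustrated with a minimal sketch. This is not the AVL’s actual implementation; the keywords, canned answers, and the escalation callback (standing in for the avatar phoning its operator) are all illustrative assumptions.

```python
# Hypothetical sketch of a keyword-based Q&A avatar with human escalation.
# All keywords and answers below are invented for illustration.

KEYWORD_ANSWERS = {
    "avatar": "An avatar is a virtual body you control in the shared world.",
    "beaming": "BEAMING aims to make mediated meetings feel like real ones.",
    "lecture": "The lecture was pre-recorded; I answer follow-ups by keyword.",
}

def answer_question(question, escalate):
    """Return a canned answer when a keyword matches the question;
    otherwise hand the question to the human operator via `escalate`."""
    lowered = question.lower()
    for keyword, answer in KEYWORD_ANSWERS.items():
        if keyword in lowered:
            return answer
    # No keyword matched: the avatar "calls" its operator for help.
    return escalate(question)

# Usage: the lambda stands in for contacting the operator in real time.
reply = answer_question("What is an avatar?", escalate=lambda q: "One moment, let me ask.")
```

The design mirrors the article’s description: cheap pattern matching handles the common cases, and anything outside the table falls through to a human, so the bot never has to guess.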
AVL is also using the concept of BEAMING for “intelligent transformations” – essentially the translation of body language and gestures. For example, a backhanded “V” sign may not mean much to an American, but is considered rude in the United Kingdom. When speaking with a Brit, BEAMING can change that gesture into something more appropriate.
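In its simplest form, such a gesture translation could be a lookup keyed by gesture and target culture. The sketch below is an assumption of how such a mapping might work; the gesture names, culture codes, and fallback are invented for illustration, not taken from the AVL’s system.

```python
# Hypothetical gesture-translation table: (gesture, target culture) -> substitute.
# Names and mappings are illustrative assumptions only.

GESTURE_MAP = {
    ("v_sign_palm_in", "UK"): "thumbs_up",  # backhanded V reads as rude in the UK
    ("v_sign_palm_in", "US"): "v_sign_palm_in",  # harmless to an American viewer
}

def translate_gesture(gesture, target_culture):
    """Replace a gesture with a culturally appropriate equivalent,
    falling back to a neutral gesture when no mapping is known."""
    return GESTURE_MAP.get((gesture, target_culture), "neutral_nod")
```

A real system would of course classify gestures from motion-capture data before this step; the table only captures the final substitution decision the article describes.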
But Friedman’s VERE research is still in its early stages, and people who expect immediate results have to rethink their expectations, says Friedman. There’s a major difference between academic research and industrial workflow, he explains. “The EU body instructed us to focus on technologies that will have a great impact in 20 years. Good academic research is ahead of today’s technologies and/or focuses on things that are not commercial but are important to our future society.”