New brain-computer interfaces (BCIs) are poised to increase
human productivity, advance entertainment and transform social
interactions. A potential catalyst for all of the new media
emerging right now, such devices could play a game-changing role in
the near- to mid-term evolution of communications technology.
When I first read Emotiv’s announcement of a brain-wave-reading
headset, my reaction was lukewarm. But then, as my brain rattled off implication after
implication of this new communication device, it all sank in: “This is
telekinesis. And it’s nearly market-ready!”
Subsequently, a quick search through the Future Scanner for similar
material turned up
a helmet that allows Second Life users to navigate their avatar
simply by thinking about walking. Boom. Another BCI that’s nearly usable. And this one’s been around
for three months already.
After a bit of reflection, I’ve come to believe that these
technologies have the potential to truly revolutionize the way that
we play games, drive automobiles, learn in classrooms, surf
information and ultimately relate to other people—and not 20 years
from now, more like 5-10 years.
A product that in 2008 lets you control a video game by
adjusting your mental and emotional states is a big, big deal on
the macro timeline of innovations. It heralds the beginning of a
new era of human-computer interaction.
“The next major wave of technology innovation will change the
way humans interact with computers,” says Nam Do, co-founder and
CEO of Emotiv Systems. “As the massive
adoption of concepts such as social networking and virtual worlds
has proven, we are incorporating computer-based activities not only
into the way we work, learn, and communicate but also into the way
we relax, socialize and entertain ourselves.”
Nam Do may be selling his company’s system, but his message
resonates with me.
Imagine what the near-term successors to these early
BCIs will mean for brain-to-brain
bandwidth. Together with virtual worlds, augmented reality, new
semantic technologies and other emerging tools, they have the potential to “lube”
social network effects in a fashion that no human has ever experienced.
Sound the trumpets. Telekinetic interfaces have arrived and are
here to stay.
What potential near-term applications can you envision for these telekinetic interfaces?
A new higher-speed Internet2, now under development in labs
around the world, will one day offer holographic images
indistinguishable from reality, providing an array of applications that
we can only dream of today.
With digital video resolution four times finer than today’s
HDTV, and haptic technologies that
provide a realistic sense of touch, researchers can create
holographic images of people filmed thousands of miles away, enabling
lifelike virtual interaction indistinguishable from reality. The system
uses cameras that capture live images of people in two or more
places, merges the data, and feeds it back to all locations.
We could organize a meeting with friends or relatives from
cities scattered around the world without anyone actually
traveling. People will kiss, hug and reminisce as if they were in
the same room. And our senses will convince us that they are there.
We could even meet with a simulation of a favorite celebrity.
The day when anyone can create a stunning 3D Augmented Reality simulation is getting closer. Last month, General Electric's innovative AR media campaign to promote its 'Smart Grid' platform helped to push Augmented Reality out into the masses by giving users a chance to try it at home using a printable marker download and webcam.
When was the last time you saw fast-food restaurant employees
actually key prices into the register? Today, clerks behind the
counter press buttons with pictures of cups, burgers, or bags of
fries. They never need to read or remember the cost of the items.
Futurist William Crossman, author of Vivo [Voice-In/Voice-Out]:
The Coming Age of Talking Computers, believes that tomorrow’s
mobile and virtual reality devices, using visual displays like
those in fast-food restaurants, will render reading, writing, and
text obsolete in the not-too-distant future.
Crossman explains why this transformation will take place.
“Before Homo sapiens ever existed, ancient proto-humans accessed
information by speaking, listening, smelling, tasting, and
touching. They relied on memory to store information they heard.
Speaking and listening were civilization’s preferred methods of
communication for millions of years.
Then about 10,000 years ago an explosion of information emerged
with the onset of the agricultural revolution and memory overload
quickly followed. Human memories were no longer efficient and
reliable enough to store and share the huge volume of new ideas. To
overcome this problem, our forebears developed a remarkable
technology that has lasted for thousands of years: written language.”
Men have an infamous tendency to let their phallic impulses dictate what they create. It is perhaps why some of the most famous structures, like the Great Pyramids, the Taj Mahal and the Washington Monument, were built.
So, it didn’t surprise me when I recently read about an effort to create the world’s first male organ controlled computer.
So now that men have brought the inevitable to the realm of technology, I wonder how else humans of the future might interact with their computers?
With the recent (or not so recent) popularity of the Nintendo Wii and its motion-sensing features, the rest of the human-computer interface market seems to have entered an innovative period. It looks rather likely that we’ll soon be playing games through VR goggles, gesturing in the air to perform fluid dynamics calculations and maybe even writing Dear-John letters by thought alone.
Best of all, we won’t have to wait decades for many of these advances, as some amazing new products are already in prototype and will be market-ready in the very near term. Here are some of the particularly interesting interface candidates:
1. In 2004, four people, two of them partly paralyzed wheelchair users, successfully moved a computer cursor with a sensor cap that reads brain activity through electrodes. In late February, technology pioneer Emotiv Systems announced the EPOC neuroheadset, a lightweight, inexpensive ($300 USD), wireless headset that detects conscious thoughts, expressions, and emotions. Emotiv is aiming at the video game market, where the headset could open up a whole new generation of emotional immersiveness in games.
2. A modern take on a classic: the Livescribe Pulse smartpen is a pen that doubles as a stereo voice recorder, a music player, and, most distinctive of all, a tiny infrared camera that picks up commands from a specially designed notebook. The ‘Dot’ notebook has record, pause, stop, playback, and navigation ‘buttons’ that you can tap on the bottom of the page to control the pen.
3. How about turning ANY surface, wall, table, or floor into a primary input device that can read handwriting, act as a musical instrument, a touchpad, or even a keyboard if you’re so inclined? The technology is called Tangible Acoustic Interfaces for Computer-Human Interaction (TAI-CHI), and the power is in sound waves.
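The acoustic trick behind a system like TAI-CHI can be sketched simply: a tap radiates a sound wave through the surface, several sensors record when the wave arrives, and the tap’s position is the point that makes all those arrival times mutually consistent. The Python sketch below is purely illustrative; the sensor layout, propagation speed, and brute-force grid search are my assumptions, not the project’s actual algorithm:

```python
import math

def locate_tap(sensors, arrival_times, speed, grid_step=0.01, size=1.0):
    """Estimate a tap position on a surface from acoustic arrival times.

    sensors: list of (x, y) sensor positions in metres
    arrival_times: measured wave arrival time at each sensor (seconds)
    speed: propagation speed of the wave in the surface (m/s)

    Brute-force grid search: at the true tap point, the implied emission
    times (arrival - distance/speed) agree across all sensors, so we
    pick the grid point that minimises their spread.
    """
    best, best_err = None, float("inf")
    steps = int(size / grid_step) + 1
    for i in range(steps):
        for j in range(steps):
            x, y = i * grid_step, j * grid_step
            # Implied emission time at each sensor for a tap at (x, y)
            emits = [t - math.dist((x, y), s) / speed
                     for s, t in zip(sensors, arrival_times)]
            err = max(emits) - min(emits)  # zero at the true tap point
            if err < best_err:
                best, best_err = (x, y), err
    return best
```

In practice four sensors on a one-metre panel are enough for this toy version: simulate a tap, compute the arrival times, and the search recovers the position to within one grid step.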
Cyberkinetics of Foxborough, Massachusetts, has begun FDA-approved clinical trials with BrainGate, a device that enables paralyzed people to control computers directly with their brains, and that eventually could help them regain complete mobility.
Most handicapped people are satisfied if they can get a rudimentary connection to the outside world. BrainGate enables them to achieve far more than that. By controlling the computer cursor, patients can access Internet information, TV entertainment, and control lights and appliances – with just their thoughts.
And as this amazing technology advances, researchers believe it could enable brain signals to bypass damaged nerve tissue and restore mobility to paralyzed limbs. “The goal of BrainGate is to develop a fast, reliable, and unobtrusive connection between the brain of a severely disabled person and a personal computer,” said Cyberkinetics President Tim Surgenor.
BrainGate may sound like science fiction, but it’s not. The device is smaller than a dime and contains 100 wires, thinner than human hairs, that connect with the portion of the brain that controls motor activity. The wires detect when neurons fire and send those signals through a tiny connector mounted on the skull to a computer.
Implanted into the brains of five handicapped patients, the device is already showing great promise. A 25-year-old quadriplegic has successfully been able to switch on lights, adjust the volume on a TV, change channels, and read e-mail using only his thoughts. And he was able to do these tasks while carrying on a conversation and moving his head at the same time.
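The mechanics just described, electrodes picking up neuron firings that a computer turns into cursor movement, can be caricatured in a few lines of Python. This is an illustrative sketch only: the simple threshold detector and linear decoder below are textbook simplifications, not Cyberkinetics’ actual signal chain, and the weights would in reality be fitted per patient during calibration.

```python
# Stage 1: detect spikes on one electrode's voltage trace by counting
# rising-edge crossings of a fixed threshold (hypothetical simplification).
def detect_spikes(samples, threshold):
    """Count threshold crossings in a list of voltage samples."""
    spikes = 0
    for prev, cur in zip(samples, samples[1:]):
        if prev < threshold <= cur:  # rising edge through the threshold
            spikes += 1
    return spikes

# Stage 2: map per-electrode firing rates to a 2-D cursor velocity with
# a linear decoder (weights are assumed to come from a calibration fit).
def decode_velocity(rates, weights):
    """Cursor velocity = weighted sum of electrode firing rates."""
    vx = sum(w[0] * r for w, r in zip(weights, rates))
    vy = sum(w[1] * r for w, r in zip(weights, rates))
    return vx, vy
```

A trace such as `[0, 5, 0, 1, 6, 2]` with a threshold of 4 contains two spikes; feeding per-electrode spike counts through `decode_velocity` then yields the cursor update for that time window.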
As touch-screen interfaces become more responsive and computers get smarter, we’re bound to see faster, more reactive, and more forgiving interfaces. A case in point is a new product called Swype that lets users ‘swype’ through the letters of a word on a touch-screen keyboard in a single fluid motion, then statistically calculates what they intended to type.
If it sounds a lot like the next generation of T9, that’s because one of Swype’s founders, Cliff Kushler, also invented that huge time-saver. But make no mistake about it: Swype marks a big leap in next-gen productivity. Already garnering rave reviews, it works
“across a variety of devices such as phones, tablets, game consoles, kiosks, televisions, and virtual screens” and lets formerly slow texters achieve input speeds of over 50 words per minute. That’s right: many people can’t even type that quickly on a regular keyboard.
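The “statistical calculation” step can be illustrated with a toy decoder: the finger trace sweeps through a sequence of letters, and the candidate words are the dictionary entries whose letters occur in order along that trace and share its first and last letters. This is only a sketch of the general idea; Swype’s real engine also weighs path geometry and word frequency, which this hypothetical version ignores:

```python
def is_subsequence(word, trace):
    """True if the word's letters appear in order within the swiped trace."""
    it = iter(trace)
    # 'ch in it' consumes the iterator, so letters must match in order.
    return all(ch in it for ch in word)

def candidates(trace, dictionary):
    """Dictionary words compatible with a swipe trace.

    A word qualifies if it starts and ends where the swipe did and its
    letters appear in order along the trace. Longer matches are listed
    first as a crude stand-in for real ranking.
    """
    matches = [w for w in dictionary
               if w[0] == trace[0] and w[-1] == trace[-1]
               and is_subsequence(w, trace)]
    return sorted(matches, key=len, reverse=True)
```

For example, a trace like `"thne"` against a small dictionary keeps `"the"` but rejects `"then"` (its final n falls after the trace’s e) and `"than"` (wrong last letter).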
During the next decade we are likely to see commercial products that start to define the ‘Post PC’ era of smart, networked objects following a new path of product development. Users will interact with embedded devices beyond the keyboard and mouse. We know that OLEDs offer a clear path to flexible, transparent display screens, but what about the combination of sensors and low-power chips that makes the ‘screen’ irrelevant for new applications? If it is hard to imagine commercial Post PC applications for enterprise sectors, what about designs for education and entertainment markets based on visions like the Impress project from Sillenet [via Vimeo]?
France-based Easy Web develops 3D video projection systems for ‘monumental architecture’, but could it also be cultivating new cultural expectations for human-city interfaces, where everything becomes a template?
Microsoft recently unveiled to the public a new gadget
called the Sphere that it has been working on in its labs. The video
shows some pretty crazy applications that the Sphere could be used
for, the most amazing being the ‘earth’ demo, which depicts a
spinning interactive globe. Check it out for yourself:
If there’s one thing this video helps me to realize, it’s that
Google Earth would be incredible on this spherical display. But,
although it shows some ingenuity and outside-the-box thinking, this
display will most likely never make it past being a handy geography teaching tool.
The problems inherent in the Sphere are numerous. Flat displays
mean you don’t have to go searching all around for objects on your
desktop like photos or open windows. The game function is flat out
impossible in any competition-based scenario. The idea that you
would have enough time to react to a ball floating over the horizon
at a quick pace is laughable (the demonstrator himself has a hard
time finding the balls). And as for presentations, a large flat
screen will work better as a display tool than a ball of any size.
Last week, a colleague of mine at Future Blogger, Alvis Brigis, suggested that the coming reign of online video broadcasting as the "most ubiquitous and accessible form of communication" may be short-lived. In its stead, he suggested that brain-computer interfaces (BCIs) may replace it.
To many people the idea of brain-to-computer or even brain-to-brain communication might seem a little "out there." I disagree and think that Alvis is on the right track. As evidence, I submit this recent article on the U.S. Army’s plans to invest in a "thought helmet" for voiceless communication. And lest anyone think that voiceless communication is some far-off, fuzzy, futuristic technology, just check out this amazing video demonstrating an early prototype.
Until I can read your thoughts directly, I’d be interested in reading your reactions to this possibility, and your thoughts on what it might require us to unlearn, such as, perhaps, the way we communicate.