
Restoring Vision With Bionic Eyes: No Longer Science Fiction

Dr. Michael Beyeler wants to interface with your brain through bionic vision. Find out why he compares his work to a 'less creepy' version of the brain implants on 'Black Mirror.'

July 9, 2019
(Bionic vision illustration: Yuichiro Chino / Getty)

Bionic vision might sound like science fiction, but Dr. Michael Beyeler is working on just that.

Originally from Switzerland, Dr. Beyeler is wrapping up his postdoctoral fellowship at the University of Washington before moving to the University of California, Santa Barbara this fall to head up the newly formed Bionic Vision Lab in the Departments of Computer Science and Psychological & Brain Sciences.

We spoke with him about his "deep fascination with the brain" and how he hopes his work will eventually restore vision to the blind. Here are edited and condensed excerpts from our conversation.

Dr. Beyeler, give us an overview of the 'neural engineering' field that will lead to bionic sight in the future.
Neuroengineering is an emerging interdisciplinary field aiming to engineer devices that can interface with the brain. Kind of like the brain implants from Black Mirror, but much less creepy. [Laughs]

The human brain has roughly 100 billion nerve cells, or neurons, and trillions of connections between them, organized into different brain areas each supporting a particular task; for example, processing visual or auditory information, making decisions, or getting from A to B. You can imagine that understanding how these neural circuits give rise to perception and action requires bringing together skills from a variety of disciplines, such as neuroscience, engineering, computer science, and statistics.

Explain how these brain-machine interfaces (BMIs) work in your field. I've tested them for mood elevation, but never for visual states.
Right. "Brain-computer interfaces" can be used both for treating neurological and mental disorders as well as for understanding brain function, and now engineers have developed ways to manipulate these neural circuits with electrical currents, light, ultrasound, and magnetic fields. Remarkably, we can make a finger, arm, or even a leg move just by activating the right neurons in the motor cortex. Similarly, we can activate neurons in the visual cortex to make people see flashes of light. The former allows us to treat neurological conditions such as Parkinson's disease and epilepsy, whereas the latter should eventually allow us to restore vision to the blind.

Amazing. And what kinds of devices are currently in the field?
The idea of a visual prosthesis, or bionic eye, is no longer science fiction. You might have heard of the Argus II, a device developed by a company called Second Sight, which is available in the US, Europe, and some Asian countries. It's for people who have lost their sight to a retinal degenerative disease such as retinitis pigmentosa or macular degeneration.

How many people today have these retinal prostheses?
I believe there are now more than 300 Argus II users around the world, and the manufacturer, Second Sight, has also just started implanting ORION, a device that skips the eye entirely and directly interfaces with the visual cortex. Apart from that, we are also anxiously awaiting the first results of PRIMA, a new subretinal device developed at Stanford University and commercialized by a French company called Pixium Vision.

So this is a growing field?
Definitely. In fact, some 30 more devices are in development across the globe. Overall there should be a wide variety of sight restoration technologies available within the next decade.

(A woman wearing Argus II glasses tests Dr. Beyeler's system)

For clarity, explain how the current systems work.
When disease destroys an individual's photoreceptors—the light-gathering cells in the back of the eye—the idea is to replace those cells with a microelectrode array that mimics their functionality. Argus II users also wear a pair of glasses with a small embedded camera, so the camera's visual input can be translated into a series of electrical pulses that the implant delivers to the neural circuits in the eye. For most patients, Argus II provides "finger-counting" levels of vision—people can differentiate light from dark backgrounds and see motion, but their vision is blurry and often hard to interpret. Unfortunately, with current technology it turns out to be really hard to mimic the neural code of the eye and the visual cortex well enough to fool the brain into thinking that it saw something meaningful. This is where I come in.

My goal is basically to understand how to go from camera input to electrical stimulation and come up with a code that the visual system can interpret. This requires both a deep understanding of the underlying neuroscience as well as the technical skills to engineer a viable real-time solution.
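To make the encoding problem concrete: a naive baseline would simply downsample each camera frame to the implant's 6-by-10 electrode grid and map local brightness to pulse amplitude. The Python sketch below illustrates that baseline; the grid dimensions match Argus II's 60 electrodes, but the amplitude range and the linear mapping are illustrative assumptions, not Second Sight's actual encoder.

```python
import numpy as np

def naive_encoder(frame, grid_shape=(6, 10), max_amp_ua=100.0):
    """Map a grayscale camera frame to per-electrode pulse amplitudes.

    A deliberately simplistic baseline: average image brightness over
    a 6x10 grid (Argus II has 60 electrodes) and scale it linearly to
    stimulation amplitude. A real encoder must account for the
    retina's far more complex neural code.
    """
    h, w = frame.shape
    gh, gw = grid_shape
    amps = np.zeros(grid_shape)
    for i in range(gh):
        for j in range(gw):
            patch = frame[i * h // gh:(i + 1) * h // gh,
                          j * w // gw:(j + 1) * w // gw]
            amps[i, j] = patch.mean() / 255.0 * max_amp_ua
    return amps  # one amplitude (in microamps) per electrode, per frame

# Example: encode a random 240x320 grayscale frame
frame = np.random.randint(0, 256, size=(240, 320), dtype=np.uint8)
print(naive_encoder(frame).round(1))
```

Schemes like this are exactly what turns out to be too crude in practice, which is why Beyeler's work centers on finding a stimulation code the visual system can actually interpret.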

And how do you do this?
By using tools from computer science, neuroscience, and cognitive psychology. For example, we come up with mathematical equations that describe how individual neurons respond to electrical stimulation. We also perform simple psychophysical experiments, such as asking Argus II users to draw what they see when we stimulate different electrodes. We then use insights from these experiments to develop software packages that predict what people should see for any given electrical stimulation pattern, which device manufacturers can use to make the artificial vision their devices provide more interpretable for the user.

Are you focusing on bionic (artificial) rather than biomimicry (natural) vision?
Yes, because instead of focusing on "natural" vision, we might be better off thinking about how to create "practical" and "useful" artificial vision. We have a real opportunity here to tap into the existing neural circuitry of the blind and augment their visual sense, much as Google Glass or the Microsoft HoloLens augment the vision of sighted users. For example, we could make things appear brighter the closer they get, use computer vision to mark safe paths and combine it with GPS to give visual directions, warn users of impending dangers in their immediate surroundings, or even extend the range of "visible" light with an infrared sensor. Once the quality of the generated artificial vision reaches a certain threshold, there are a lot of exciting avenues to pursue.
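One of those ideas lends itself to a simple illustration: mapping scene depth to brightness so that nearer objects appear brighter. The Python sketch below is hypothetical; the distance range and the linear falloff are assumptions chosen for illustration, not a deployed algorithm.

```python
import numpy as np

def depth_to_brightness(depth_m, near=0.5, far=4.0):
    """Map scene depth (meters) to stimulation intensity in [0, 1].

    Nearby objects get the brightest stimulation, fading to zero
    beyond `far`. The ranges here are illustrative; a deployed
    system would tune them to the user's mobility needs and the
    depth sensor's working range.
    """
    d = np.clip(depth_m, near, far)
    return (far - d) / (far - near)

# A wall 1 m away appears brighter than one 3 m away
print(depth_to_brightness(np.array([1.0, 3.0])))  # approx. [0.86, 0.29]
```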

On a practical level, how are you using technology in your research?
Since we don't develop our own implants, we often collaborate with different device manufacturers. Recently we have been making extensive use of Argus II, which comes with its own quite sophisticated software development kit. Second Sight has been very forthcoming with us, both by providing access to patients and by enduring our nagging requests for minor software modifications so that we can field-test our crazy theories. In the end, these collaborations should be a win-win for both parties, ideally trading data for insight.

What other tools, and software, do you use in your work?
The field is currently dominated by different device manufacturers, who (understandably) can be very protective of their intellectual property. However, the Swiss in me regards it as important to provide a neutral academic voice promoting tools and resources that are available to all. We therefore focus heavily on open-science practices.

You've developed some open-source projects, right?
Yes, in this spirit, we were the first to make our simulation engine, pulse2percept, available as an open-source Python package. The goal of pulse2percept is to predict what a patient should see for any given input stimulus. Interestingly, this approach has already gained the attention of Second Sight and Pixium Vision, who expressed interest in using our software to predict what their patients are seeing. In the future, my goal is to adapt this software to other devices as they become available.
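For readers who want to try it, here is a minimal example of the kind of simulation pulse2percept performs, based on the package's documented API (circa version 0.7; exact names may differ across releases): place a virtual Argus II on the retina, stimulate one electrode, and predict the resulting percept.

```python
# Minimal pulse2percept example (API circa v0.7; names may differ
# in other releases): simulate what an Argus II user might see
# when a single electrode is stimulated.
import pulse2percept as p2p

# Place a simulated Argus II array on the retina
implant = p2p.implants.ArgusII()

# Stimulate electrode 'A1' with an amplitude of 20 microamps
implant.stim = {'A1': 20}

# The axon map model captures the elongated, streak-like
# phosphenes that Argus II users typically report
model = p2p.models.AxonMapModel()
model.build()

# Predict and visualize the resulting percept
percept = model.predict_percept(implant)
percept.plot()
```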

What brought you to the US, from Switzerland, in the first place?
I started out as an electrical engineer in Zurich because I've always been interested in how things work, but I became more and more interested in the brain itself, realizing my skills as an electrical engineer were directly transferable to understanding how it works. I could take signal processing, network theory, and information theory and—through biomedical engineering and neuroscience—work toward brain-inspired neural networks and robotics, and use all these concepts to do something really good. That's how I ended up at the University of California, Irvine to continue my studies and do my PhD. I was only planning to come to the US for nine months or so.

And you're still here a decade later.
[Laughs] Right.

Back to your research today: who are your main supporters?
My recent work would not have been possible without generous support from the Washington Research Foundation in combination with the Gordon & Betty Moore Foundation and the Alfred P. Sloan Foundation. On top of that, I've been very fortunate to receive a K99 Pathway to Independence Award from the National Eye Institute at NIH. This is a prestigious five-year grant that is meant to ease my transition into starting my own research group as an Assistant Professor.

You're about to join the Departments of Computer Science and Psychological & Brain Sciences at the University of California, Santa Barbara and build a Bionic Vision Lab, which is such a cool name.
Yes, I am super stoked about this opportunity. There are lots of clinical research groups studying the effects of blinding degenerative diseases, and several biomedical groups engineering new devices. But nobody is really focusing on novel methods and algorithms to improve the code with which these devices interact with the human visual system itself. Our group will be an interdisciplinary effort that tries to combine insights from neuroscience with computer science and engineering to build smarter brain-computer interfaces and dream up new ways to maximize the practicality of artificial vision.

Finally, how did you become interested in this field in the first place?
To me this is the ultimate scientific quest, and it has the potential to cure blindness. In the end, it all comes back to a deep fascination with the brain—this mysterious hunk of meat that uses less power than a light bulb to give rise to our conscious perception of the world. How on earth does the brain do it?

It is extraordinary, when you really think about it.
It's perhaps one of the last big remaining scientific mysteries. And what better way to test our understanding of the brain than to build a device that can safely and meaningfully interact with it? I mean, the technology to tap into this complex circuitry is coming, there is no way around that, and it will allow us to manipulate our perception, our decisions, our actions. We better start thinking about how to use these powers for good.

About S.C. Stuart, Contributing Writer

S.C. Stuart is an award-winning digital strategist and technology commentator for ELLE China, Esquire Latino, Singularity Hub, and PCMag, covering: artificial intelligence; augmented, virtual, and mixed reality; DARPA; NASA; US Army Cyber Command; sci-fi in Hollywood (including interviews with Spike Jonze and Ridley Scott); and robotics (real-life encounters with over 27 robots and counting).