How direct neural interfaces work

Technologies that seemed like science fiction only yesterday are entering our everyday lives. One such technology is the direct neural interface.

Direct neural interfaces: what they are and how they work

We are living in a fascinating era in which technologies that once seemed like science fiction are entering our everyday lives, or at least taking their first shaky steps toward becoming part of our everyday reality. One great example of such technology is the direct neural interface. On the surface it is just another method of human-machine interaction, but in reality it is something far more revolutionary.

Today we control computers with a mouse, a keyboard, or a touchscreen. Voice and gesture input are becoming more and more common, and a computer can already track your eye movements and tell where you are looking. The next stage of human-machine interaction is reading the signals of the nervous system directly, which is exactly what direct neural interfaces do.

How it all started

The first theoretical groundwork for this concept comes from the fundamental research of Sechenov and Pavlov, the founding fathers of conditioned reflex theory. In Russia, work on the theory underlying such devices began in the middle of the 20th century, and the first practical applications, both in Russia and abroad, date back to the 1970s.

Back then, scientists implanted various sensors into lab chimps and had them control robotic manipulators by thought alone in order to get bananas. Remarkably, it worked.

As they say, where there's a will, there's a way. The key challenge was that, to make the whole thing work, the scientists had to hook their 'mind machine' up to a set of electronic components that filled an entire adjacent room.

Now that challenge can be tackled: electronic components have become minuscule. Today, any geek can play the part of that chimp from the '70s, to say nothing of the practical value such technologies offer and the benefits they bring to disabled or paralyzed people.

How it works

To put it simply, the human nervous system generates, transmits, and processes electrochemical signals in different parts of the body. The 'electrical part' of those signals can be 'read' and 'interpreted'.

There are different ways to do this, each with its own advantages and drawbacks. For instance, you can pick up the signals via magnetic resonance imaging (MRI), but the required equipment is far too bulky.

It is also possible to inject special liquid markers to enable the process, but they may be harmful to the human body. Finally, one can use tiny sensors, and sensors of this kind are what direct neural interfaces generally rely on.

In everyday life you can find one such device in a neurologist's office. It looks like a rubber cap with a ton of sensors and wires attached to it. It is used for diagnostics, but who says it cannot serve other purposes?

We should distinguish between direct neural interfaces and brain-machine interfaces. The latter is a subset of the former and deals only with the brain, whereas direct neural interfaces can work with different parts of the nervous system. In essence, we are talking about a direct or indirect connection to the human nervous system that can be used to transmit and receive certain signals.

There are many ways to 'connect' to a human, and all of them depend on the sensors used. For example, sensors differ in how invasive they are; the main types are the following:

  • Non-invasive sensors: the electrodes are placed on the surface of the skin, or even slightly away from it, like those used in the aforementioned 'medical cap'.
  • Semi-invasive sensors: the sensors are placed on the surface of the brain or next to the nerves.
  • Invasive sensors: the sensors are implanted directly into the brain or nerves. This method is highly invasive and has a lot of side effects: an accidentally displaced sensor can trigger rejection. It sounds spooky, but it is used nevertheless.

To improve signal quality, the sensors may be moistened with special conductive liquids, or the signal may be preprocessed 'on the spot', and so on. The recorded signals are then processed by purpose-built hardware and software and, depending on the application, yield a certain result.
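
As a rough illustration of what that purpose-built software does, here is a minimal sketch in Python of a typical processing chain for a scalp-recorded signal: band-pass filtering followed by a band-power estimate that drives a simple on/off 'command'. The sampling rate, frequency band, and threshold are arbitrary assumptions chosen for this toy example, not parameters of any real device.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 250            # assumed sampling rate, Hz
BAND = (8.0, 12.0)  # alpha band, a common choice in simple demos
THRESHOLD = 0.05    # arbitrary band-power threshold for the toy 'command'

def bandpass(signal, low, high, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

def band_power(signal, fs, band):
    """Average spectral power inside the band, estimated with Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def decode_command(raw_window):
    """Turn one window of raw samples into a binary 'command'."""
    filtered = bandpass(raw_window, *BAND, FS)
    return band_power(filtered, FS, BAND) > THRESHOLD

# Toy usage: two seconds of fake 'EEG' (noise plus a 10 Hz rhythm).
t = np.arange(0, 2, 1 / FS)
fake_eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print("command active:", decode_command(fake_eeg))
```

Real systems add artifact rejection, per-user calibration, and far more sophisticated classifiers, but the overall shape of the pipeline (filter, extract features, map features to an action) is the same.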

Where it can be used

The first purpose that springs to mind is research. The early studies were animal experiments, and that is where it all started: mice or chimps were implanted with tiny electrodes, and the activity of their brain regions or nervous systems was monitored. The collected data made in-depth studies of brain processes possible.

Next comes medicine. Such interfaces have long been used for diagnostics in neurology, and if the person being examined is shown the results, a process called neurofeedback becomes possible.

This awakens an additional channel for the body's self-regulation: physiological data is presented to the user in an understandable form, and he or she learns to manage his or her own state based on that input. Such devices already exist and are in use.
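
To give a sense of how such a feedback loop might be structured, here is a hedged sketch that reuses the band-power idea from the previous example: every second, the alpha-band power of the latest signal window is turned into a simple text bar shown to the user. The read_window function, the scaling, and the update rate are invented placeholders, not a real acquisition API.

```python
import time
import numpy as np

FS = 250  # assumed sampling rate, Hz

def read_window(seconds=1.0):
    """Placeholder for reading one window of samples from an acquisition
    device; here it simply returns synthetic noise."""
    return np.random.randn(int(FS * seconds))

def alpha_power(window):
    """Crude alpha-band (8-12 Hz) power estimate via an FFT."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(window.size, d=1 / FS)
    mask = (freqs >= 8) & (freqs <= 12)
    return spectrum[mask].mean()

def feedback_loop(iterations=5):
    """Show the user a text 'bar' proportional to their alpha power."""
    for _ in range(iterations):
        power = alpha_power(read_window())
        bar = "#" * min(40, int(power / 20))  # arbitrary scaling for display
        print(f"relaxation level: {bar}")
        time.sleep(1)

feedback_loop()
```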

Another promising application is neuroprosthetics, where scientists have already achieved solid results. If there is no way to 'repair' the damaged nerves of a paralyzed limb, electrodes can be implanted to conduct signals to the muscles instead. The same applies to artificial limbs, which can be connected to the nervous system in place of lost ones. Or, in a more extravagant scenario, such systems could be used to control 'avatar' robots.
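
To make the idea concrete, here is a minimal, purely illustrative sketch of the very last stage of such a system: mapping an already-decoded intent (say, a grip-strength value between 0 and 1) onto a motor command for a hypothetical prosthetic hand. The Servo class and its angle range are invented for the example; real neuroprosthetics involve far more elaborate decoding and safety logic.

```python
from dataclasses import dataclass

@dataclass
class Servo:
    """Hypothetical actuator in a prosthetic hand (0 = open, 90 = closed)."""
    min_angle: float = 0.0
    max_angle: float = 90.0

    def move_to(self, angle: float) -> None:
        print(f"servo -> {angle:.1f} degrees")

def intent_to_angle(grip_intent: float, servo: Servo) -> float:
    """Linearly map a decoded grip intent in [0, 1] to a servo angle,
    clamping out-of-range values for safety."""
    grip_intent = max(0.0, min(1.0, grip_intent))
    return servo.min_angle + grip_intent * (servo.max_angle - servo.min_angle)

hand = Servo()
for decoded_intent in (0.0, 0.4, 1.3):  # 1.3 simulates a noisy, out-of-range decode
    hand.move_to(intent_to_angle(decoded_intent, hand))
```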

There is one more branch worth mentioning: sensory prosthetics. Cochlear implants, which help people restore their hearing, are already a reality, and there are also neural retinal implants that partially restore eyesight.

Games leave a lot of room for imagination, and not only in virtual reality: even such a down-to-earth idea as controlling RC toys via a neural interface sounds fabulous.

If the ability to read signals is complemented by the reverse process of transmitting signals back, stimulating certain parts of the nervous system, it would (in theory) open up a lot of exciting opportunities for the gaming industry.

Is it possible to read and write thoughts?

As for the current state of the technology, the answer is both yes and no. The signals we read cannot be considered thoughts per se, so one cannot simply 'read' what another person is thinking.

Those signals are just traces, imprints of the nervous system's activity, buried in noise and delivered with about a second's delay. What gets read is not even an individual neuron but merely the activity of a certain brain region or part of the nervous system. Catching a single thought in that pool of information does not seem feasible.

On the other hand, there are MRI-based studies that 'decipher' the images a person is looking at by reconstructing them from brain activity. The reconstructions are not very clear, but they are enough to piece together the general picture.

Writing thoughts into someone's head looks even more complex, and there are no openly available studies on the subject. Still, we can offer a warning based on adjacent fields of research. Take electroconvulsive therapy: it can be used to erase parts of a patient's memory and affect their cognitive abilities. Deep brain stimulation, meanwhile, is successfully used to treat Parkinson's disease.

What this has to do with information security

Strange as it may seem, this topic is directly related to information security. We don't think now is the time to discuss the ethics of using neural interfaces; only time will settle that. But we should bear in mind that, like any other sophisticated piece of technology, such devices need to be protected.

Now that everything is connected, neural devices are bound to become connected too. An obvious case that immediately comes to mind is using the Internet to send data obtained while diagnosing either the device or its user. And where there is a connection, there is a possibility of it being hacked.
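
As a purely illustrative sketch of the kind of baseline protection such a connected device would need, here is how a diagnostic record could be encrypted and integrity-protected before transmission using the widely available cryptography Python package. The payload fields, key handling, and transport are placeholder assumptions, not a description of any real product.

```python
import json
from cryptography.fernet import Fernet  # authenticated symmetric encryption

# In a real device the key would live in secure storage and be provisioned
# per user; generating it inline is only for this sketch.
key = Fernet.generate_key()
cipher = Fernet(key)

diagnostic_record = {
    "device_id": "demo-headset-01",   # made-up identifier
    "session_seconds": 1200,          # made-up telemetry values
    "mean_alpha_power": 4.2,
}

# Encrypt and authenticate before the record ever touches the network.
token = cipher.encrypt(json.dumps(diagnostic_record).encode("utf-8"))
print("ciphertext to transmit:", token[:40], b"...")

# Receiving side: decrypt and verify integrity in one step.
restored = json.loads(cipher.decrypt(token))
assert restored == diagnostic_record
```

The point is not this particular library but the principle: telemetry from a body-worn neural device should never leave it in plaintext or without integrity protection.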

And that is before we even consider the not-so-distant future in which direct neural interfaces are ubiquitous. Imagine using implants to enhance your eyesight or hearing, and someone exploiting them to spam you with visual or audio ads, or even to feed you false information for unfriendly purposes.

Mind reading sounds even scarier, to say nothing of recording memories. If noisy visual images can already be reconstructed today, give the technology several years to evolve, and what could happen then?

It may sound like geeky mumbo-jumbo today, but considering the pace at which new technology is developed and deployed at scale, neural devices, and the collateral damage resulting from their careless use, may become a real problem sooner than it seems.

P.S. By the way, check out a nice gizmo I happen to have at my work desk. Should anyone from Kaspersky Lab's Moscow office be interested, feel free to drop by and have a look whenever you're free.
