A very exciting piece of research came across my desk this week. We have been hearing about Brain-to-Machine Interfaces (BMIs) for a while now. Not that they’ve become humdrum at this point, but we’re familiar with them.
Two labs led by Miguel Nicolelis, one at Duke and one in Natal, Brazil, have successfully created a Brain-to-Brain-Interface (BTBI) using two rats.
That’s right. One rat gets data directly from the brain of another rat.
How, you ask? Through some fairly brilliant experimental design.
(Image from original paper. See below.)
We start with two rats, both fitted with an array of micro-wires implanted over the M1 motor cortex. M1 is the primary route by which signals are sent down the spinal cord to effect muscle movement.
The encoder rat’s incoming M1 activity is encoded into a protocol that can be transmitted. The decoder rat receives the encoded data, decodes it, and has its own M1 stimulated accordingly.
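As a rough sketch of that pipeline (the names, thresholds, and pulse counts here are hypothetical illustrations, not the authors’ actual encoding scheme), the encode/transmit/decode loop might look like:

```python
# Hypothetical sketch of the encode/transmit/decode loop described above.
# None of these thresholds come from the paper; the real system converts
# recorded M1 spike activity into intracortical microstimulation.

def encode(m1_spike_count, baseline):
    """Encoder side: reduce a window of M1 activity to a binary symbol."""
    # If activity exceeds the encoder rat's baseline, signal "1", else "0".
    return 1 if m1_spike_count > baseline else 0

def decode(symbol, pulses_per_symbol=10):
    """Decoder side: turn the received symbol into a stimulation pattern."""
    # A "1" becomes a train of microstimulation pulses; a "0" becomes none.
    return [1] * (symbol * pulses_per_symbol)

# One trial: the encoder rat's M1 fires 42 spikes against a baseline of 30,
# so the decoder rat's M1 receives a 10-pulse stimulation train.
symbol = encode(m1_spike_count=42, baseline=30)
pulses = decode(symbol)
print(symbol, len(pulses))  # 1 10
```

The point of the sketch is just that the channel between the rats is narrow: a window of recorded activity is collapsed to a symbol, and the symbol is expanded back into stimulation on the other side.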
Both rats were trained. Encoder rats were trained to expect a reward based on a stimulus; decoder rats were trained to expect a reward based on stimulation. Once the rats were trained, further trials were used to determine how to encode the M1 data. Then the rats were hooked up to one another: the encoder rat was given the stimulus, after which the decoder rat got a reward.
A second wrinkle of the experiment was to connect the two rats in the same way, but with the arrays implanted not in M1 but in S1: the somatosensory cortex responsible for tactile information. The researchers showed that passively applying a sensory stimulus to the encoder rat evoked similar responses in the S1 of the decoder rat.
Further, they showed that the encoder rat’s sensory data could influence the decoder rat’s behavior. Both rats were trained to expect a reward, when one was available, in either a “narrow aperture” or “wide aperture” portion of the cage; narrow or wide was defined by how the rat’s facial whiskers touched the sides of that portion of the cage. When the encoder rat found a reward in one of the two locations, the decoder rat found a reward in the same location a significant portion of the time.
This was done in real time. At one point, one rat was in the Duke lab while the other was in Natal, Brazil.
There are a number of interesting wrinkles to these experiments. First, in the M1 experiment, one rat was in effect trained to supply stimulation and the other to receive it. The rats were communicating that something had happened, but at a fairly low granularity. There’s little difference between the decoder rat responding to the encoder rat’s signal and responding to something synthesized. You could consider this more of a brain-to-machine-machine-to-brain interface.
The S1 experiments were much more interesting. In this case, sensory data that originated in the encoder rat was interpreted as sensory data by the decoder rat. This is much more akin to true brain-to-brain interaction.
A few caveats. First, the success rate was not great: 64% is significantly better than chance, but not terrific. Second, we’re talking about a simple binary choice. This isn’t telepathy; it’s a rat getting some data from another rat that it cannot know comes from another rat.
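To put that 64% in context: with a binary choice, chance is 50%, and whether 64% is “significantly better” depends on the number of trials, which isn’t stated here. Assuming, purely for illustration, 100 trials, a quick exact binomial check shows why the researchers could claim significance:

```python
from math import comb

def binom_p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of scoring this
    well or better by guessing alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical: 64 correct choices out of 100 trials against 50% chance.
p_value = binom_p_at_least(64, 100)
print(p_value)  # well below the conventional 0.05 threshold
```

With fewer trials the same 64% rate would be much less convincing, which is why the modest success rate is worth flagging even though it clears statistical significance.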
That said, Miguel Nicolelis has done other interesting BMI work, notably training a monkey to use a virtual arm and receive sensory feedback in return, among other experiments. This is clearly a significant, if early, step.