Physical models and electrical analysis of the brain
Authors – Benjamin Stott, Matthew Christie, Maya Clinton and Hannah O’Driscoll
The human brain is the most efficient information processing system on the planet. A typical computer requires about 100 watts to run, while the brain, which performs far more complex tasks, requires only around 10 watts. This efficiency comes from the brain's massively parallel structure and its long evolutionary history. One's personality, experiences and actions are the product of this wonderful organ. It is useful to create theoretical and physical models of the brain in order to understand its operations and potentially replicate them in building intelligent machines. Since the brain runs on electrical signals, circuit reconstructions of various phenomena within the brain can be made. This field is more commonly known as computational neuroscience and is deeply intertwined with physics.
It is also useful to look at the more experimental ways in which physics is involved with neuroscience: electroencephalography (the reading of electrical signals within the brain) to monitor brain activity and, in the case of some novel devices known as BCIs (brain-computer interfaces), the translation of this electrical activity into a command or some other technological output.
A brief overview of the brain's structure for the interested reader
Since we are using physics to model the processes of the brain, we are most interested in its information processing. The graphic above shows a neuron, which fires an electrical signal in response to incoming signals; dendrites, which act as cables carrying these signals; and a synapse, the bridging point that passes the signal on to the receiving neuron. That is the bare bones of how information is sent in the brain, and it provides the context for the achievements described below.
History of physical models of the brain
The Hodgkin-Huxley model of a neuron
One of the great breakthroughs of computational neuroscience was the development of a physical model of the neuron. Hodgkin and Huxley published a series of five landmark papers in The Journal of Physiology in 1952, the culmination of theoretical and experimental work begun in the late 1930s. Their theory rested on the framework that current passes through the neuron's membrane via sodium and potassium ion channels, producing a voltage spike called an action potential (the neuron "fires"). Using their knowledge of electromagnetism, they were able to construct a mathematical model of the neuron according to the following circuit.
Since this was a relatively long time ago, the tools at Hodgkin and Huxley's disposal were of much less accuracy and utility than those of today; to get around this, they conducted their experiments on the squid giant axon. They simply found a neuron big enough to experiment on. The model could reproduce many features of the squid giant axon, such as the shape and propagation of the action potential and its sharp firing threshold, among others. This electromagnetic circuit model of the neuron has proven extremely useful to computational neuroscience: one advantage is that the simple circuit can easily be translated into equations a computer can solve. Being the earliest of such achievements, and a mathematically and biophysically rigorous one at that, it paved the way for further physical models of the brain.
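As a rough illustration of how such a circuit description can be handed to a computer, the sketch below integrates the Hodgkin-Huxley membrane equation with a simple forward-Euler step, using the commonly quoted squid-axon parameter values. The code, step size and injected current are illustrative choices for this article, not the authors' original calculation.

import numpy as np

# Hodgkin-Huxley membrane equation, integrated with forward Euler.
# Units: mV, ms, mS/cm^2, uA/cm^2, uF/cm^2 (standard squid-axon constants).
C_m = 1.0                               # membrane capacitance
g_Na, g_K, g_L = 120.0, 36.0, 0.3       # maximal conductances of each branch
E_Na, E_K, E_L = 50.0, -77.0, -54.4     # reversal potentials (the "batteries")

def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))

dt, T = 0.01, 50.0                      # time step and duration (ms)
V = -65.0                               # start at rest
n, m, h = 0.317, 0.053, 0.596           # approximate resting gate values
I_ext = 10.0                            # injected current (uA/cm^2), enough to make the model fire

trace = []
for _ in range(int(T / dt)):
    # ionic currents through the sodium, potassium and leak branches of the circuit
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K  * n**4     * (V - E_K)
    I_L  = g_L             * (V - E_L)
    # membrane equation: C_m dV/dt = I_ext - I_Na - I_K - I_L
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    # gate kinetics: dx/dt = alpha_x(V) (1 - x) - beta_x(V) x
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    trace.append(V)

print("peak membrane potential: %.1f mV" % max(trace))   # spikes peak at roughly +40 mV

Running this produces a train of action potentials with the characteristic sharp upstroke and reset, which is exactly the behaviour the circuit model above was built to reproduce.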
Dendrites and Cable Theory
Dendrites are extensions of the neuron, or nerve cell, that splinter off from the soma, the cell body. Their primary functions are to receive information in the form of electrochemical signals from other neurons, specifically from the synapses of other neurons, and to carry these signals towards the soma.
Rall realised in the 1960s that techniques developed for analysing how current and voltage behave in telegraph cables running under the ocean could be applied to dendrites. This was the beginning of cable theory. Its main goal is to create models, ranging from very simple to extremely complex, of neurons and neural systems in order to understand how voltage propagates through them. These models are built by making a set of assumptions about the system and then constructing an equivalent circuit that correctly represents it. Two types of circuit can be modelled with cable theory: passive (linear) circuits, in which the effects of voltage-gated ion channels are ignored, and active (nonlinear) circuits, in which those channels are included.
For a good model to be produced, assumptions have to be made to simplify the system. The main assumption is that a dendrite is a very long, thin conductor: the voltage difference between two points separated by the dendrite's diameter is far smaller than that between points separated along its length, so it can be approximated as zero and the voltage treated as varying only along the cable. Other important assumptions are that every dendrite can be split into segments of uniform radius, that the dendrite's membrane has an inherent capacitance and resistance, and that the cytoplasm acts as a resistor.
These assumptions allow us to model a dendrite as the circuit above, which is the passive cable model. Further, more complex situations, such as 'leaky' dendrites, can be modelled using this theory.
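To give a flavour of how the equivalent circuit is actually solved, the sketch below splits a passive dendrite into compartments of uniform radius and steps the voltage forward in time. The resistance and capacitance values are purely illustrative numbers chosen for this article, not measured dendritic properties.

import numpy as np

# Passive (linear) cable as a chain of compartments: each compartment has a
# membrane resistance R_m and capacitance C_m, and neighbouring compartments
# are joined by an axial (cytoplasmic) resistance R_a.
N = 100                     # number of compartments
R_m = 100.0                 # membrane resistance per compartment (MOhm)   - illustrative
C_m = 0.1                   # membrane capacitance per compartment (nF)    - illustrative
R_a = 1.0                   # axial resistance between compartments (MOhm) - illustrative

dt, T = 0.01, 50.0          # time step and duration (ms)
V = np.zeros(N)             # membrane potential relative to rest (mV)
I_inj = np.zeros(N)
I_inj[0] = 0.1              # steady current injected at one end (nA)

for _ in range(int(T / dt)):
    # axial current arriving from the two neighbours (sealed ends: no flow past the boundary)
    V_left  = np.concatenate(([V[0]], V[:-1]))
    V_right = np.concatenate((V[1:], [V[-1]]))
    I_axial = (V_left - V) / R_a + (V_right - V) / R_a
    # membrane leak plus the injected current charges the membrane capacitance
    V += dt * (I_axial - V / R_m + I_inj) / C_m

# the steady voltage falls off roughly exponentially with distance from the injection site
print(V[:10].round(2))

This falling-off of voltage with distance is the hallmark of passive cable behaviour: signals arriving far out on a dendrite are attenuated by the time they reach the soma.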
Synapses
A synapse is the small junction at the end of a neuron that allows a signal to pass from one neuron to the next. Pre- and post-synaptic endings are typically half a micron across, which is about the same as the wavelength of green light! The small gap itself, called the synaptic cleft, is about 20 nanometers; to put this into perspective, a strand of human hair is typically up to 100,000 nanometers wide. An action potential propagates down towards the synaptic cleft, and a pulse of depolarizing voltage reaches the synaptic terminal. This depolarization opens ion channels that allow calcium ions to flow into the pre-synaptic terminal. These ions bind to the presynaptic proteins that dock vesicles onto the membrane facing the synaptic cleft, causing the vesicles to fuse with the membrane, open up and release their neurotransmitters. Thus, the neuron has "fired".
In 1972, Magleby and Stevens conducted an experiment to test the notion that when such an ion channel opens, it turns on a conductance. They measured this conductance directly in a voltage-clamp experiment using muscle fibers from a frog. Their results agree with the idea that the synapse can be modeled by a resistor and a battery, so that synaptic currents can be described as a change in the permeabilities of the sodium and potassium ion channels. This reinforces that the equivalent circuit diagram of a cell membrane popularized by Hodgkin and Huxley was truly a landmark accomplishment in understanding how our brains work and the electrical processes that take place in such a complex and fascinating organ.
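As a rough sketch of what "a resistor and a battery" means in practice, the snippet below computes a synaptic current from a conductance that switches on and then decays after transmitter release. The conductance, time constant and reversal potential are made-up illustrative numbers, not Magleby and Stevens' fitted values.

import numpy as np

# Conductance-based synapse: the opened channels behave like a resistor
# (conductance g_syn) in series with a battery (reversal potential E_syn),
# so the synaptic current is I_syn = g_syn(t) * (V - E_syn).
g_max = 1.0      # peak synaptic conductance after release (nS)     - illustrative
tau   = 5.0      # decay time constant of the conductance (ms)      - illustrative
E_syn = 0.0      # reversal potential of the synaptic channels (mV)
V     = -65.0    # holding potential of the post-synaptic membrane (mV)

t = np.arange(0.0, 30.0, 0.1)            # time after transmitter release (ms)
g_syn = g_max * np.exp(-t / tau)         # conductance turns on, then decays
I_syn = g_syn * (V - E_syn)              # current in pA; negative means inward

print("peak synaptic current: %.1f pA" % I_syn[0])

Clamping the membrane at different holding potentials and re-measuring the peak current gives an approximately straight line whose slope is the conductance and whose zero-crossing is the battery's reversal potential, which is essentially what a voltage-clamp experiment of this kind measures.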
We can see from our models of neurons, dendrites and synapses that it is possible for biophysicists to apply physical models to the workings of the brain in order to better understand its activity and, perhaps, to mimic it. Theory, however, goes hand in hand with experiment, which brings us to the next topic, electroencephalography: the technique that allows us to verify our physical models by directly monitoring electrical activity within the brain.
Electroencephalography (EEG) and Brain-Computer Interfaces (BCIs)
Electroencephalography, or EEG for short, is a test to measure and record brain activity using small metal electrodes attached to the scalp. The billions of cells in the human brain communicate by electrical impulses. The electrodes pick up these impulses and send signals to a computer, where the brain activity appears as wavy lines with peaks and valleys on an EEG recording. An EEG is used to diagnose epilepsy, sleep disorders, brain tumors, stroke, brain damage and other brain disorders.
Brain-computer interfaces, or BCIs for short, take brain signals, analyse them, and then translate them into instructions that are passed to an output device which carries out the desired action. The main goal of BCI research is to replace or restore useful function to people disabled by neuromuscular disorders such as amyotrophic lateral sclerosis, cerebral palsy, stroke or spinal cord injury. For example, a robotic limb can be fitted to a person: when they imagine moving their arm or leg, the associated brain activity is analysed and the robotic limb carries out the action.
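A very stripped-down sketch of this idea is given below: a synthetic stand-in for one EEG channel is converted into a power spectrum, and the strength of a single frequency band is turned into a command. The sampling rate, the 8-13 Hz "alpha" band and the threshold are assumed purely for illustration; real BCIs use trained classifiers on real multi-channel recordings.

import numpy as np

fs = 250                                   # assumed sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)                # two seconds of signal

# stand-in for an EEG channel: a strong 10 Hz rhythm buried in noise
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)

# power spectrum of the recording
power = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# compare average power in the alpha band with the 1-40 Hz range as a whole
alpha = power[(freqs >= 8) & (freqs <= 13)].mean()
broad = power[(freqs >= 1) & (freqs <= 40)].mean()

# a crude "decoder": strong alpha power is translated into an output command
command = "move" if alpha > 2.0 * broad else "idle"
print(command)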
Conclusion
In conclusion, physics can be applied to the brain in both theoretical and experimental ways, to understand it and to apply its functionality to computing (machine learning) or to brain-computer interface devices. As a greater understanding of nature's greatest work is achieved, so too will we be able to employ the mechanisms it spent ages developing.
References:
1. Yasmin Zakiniaeiz. The mind's machine: Foundations of brain and behavior, second edition. The Yale Journal of Biology and Medicine, 89(1):110, March 2016.
2. Constance Hammond. Cellular and Molecular Neurophysiology (Fourth Edition), 2015.
3. Hodgkin AL, Huxley AF (April 1952). "Currents carried by sodium and potassium ions through the membrane of the giant axon of Loligo". The Journal of Physiology, 116(4):449–72.
4. Editors, B., 2021. Dendrite. [online] Biology Dictionary.
5. Byrne, J., Heidelberger, R. and Waxham, M., 2014. From Molecules to Networks. Amsterdam: Elsevier/AP.
6. Spruston, N., 2013. Fundamental Neuroscience (Fourth Edition). Academic Press.
7. Hille, Bertil, 2001. Ion Channels of Excitable Membranes (3rd Ed.).
8. Magleby, K. L. and Stevens, C. F., 1972. A quantitative description of end-plate currents.
9. Mohammed Ubaid Hussein, 2018. Electrical Physics Within the Body.
10. Hongmei Cui et al., 2020. The Key Technologies of Brain-Computer Interface.