As simulations become more advanced, able to mimic everything from video games to complex star systems, it becomes increasingly difficult to determine whether we ourselves are living within such a simulation.

It may sound rather ridiculous, but Neil deGrasse Tyson, a well-regarded astrophysicist, and Elon Musk, the world's richest man, are among those who take seriously the possibility that life is all a computer simulation.

In 2003, the Oxford philosopher Nick Bostrom concluded in his paper entitled “Are You Living in a Computer Simulation?” that we probably are in such a simulation, and the theory that our reality is not the ‘base reality’ has been heavily discussed ever since. Indeed, more than 20 years ago the film The Matrix was built on the premise that our reality isn’t the true reality.[1]

One argument in support of the simulation theory is the Planck scale argument[2]. It suggests that the simulation was created at the earliest stage of the Big Bang, when cosmic time was equal to the Planck time. The Planck time would then be the reference point for the simulated “real time”, and the simulation would build itself using Planck units of mass, length, time and so on.

Basically, this theory suggests that by taking the Planck units as the units our simulator is built upon, we can construct our own simulation of the Universe as it would look if it were simulated, and then compare it with the Universe we actually observe. Complicated, but it can be done!

Quantum-gravity and spacetime models of the Universe use the Planck time as the smallest discrete unit of time. By taking the age of the Universe (about 14 billion years) and converting it into Planck times, it is possible to calculate values for the Cosmic Microwave Background, the observable radiation left over from the Big Bang[3].
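As a sanity check on the scale involved, converting the age of the Universe into Planck times takes only a few lines (constants rounded; purely illustrative):

```python
# The age of the Universe expressed in Planck times (constants rounded).
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # one Julian year
AGE_YEARS        = 13.8e9               # the ~14 billion years quoted above
PLANCK_TIME      = 5.39e-44             # seconds

age_seconds = AGE_YEARS * SECONDS_PER_YEAR
age_in_planck_units = age_seconds / PLANCK_TIME
print(f"Age of the Universe ~ {age_in_planck_units:.1e} Planck times")
```

The answer, roughly 8 × 10⁶⁰ ticks of the simulated clock, gives a feel for how much “real time” such a simulation would have to track.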

In the table below, the values calculated using the Planck mass and Planck length are compared with the observed values. They appear to be very close, with some matching exactly…[3]

Although this research has been conducted, no one can truly be sure. Neil deGrasse Tyson has since walked back his original support of the simulation theory and is now a firm sceptic.

His reasoning is that if the Universe were a simulation, then we should be able to simulate another high-fidelity Universe ourselves. The fact that simulations of that precision are still out of our reach, he argues, undermines the theory.[4]

But as technology continues to advance, who is to say this will not become possible, especially with quantum computing becoming such a large field of research.



This theory is, of course, just that: a theory. The ongoing development of computer simulations, and the huge steps being taken towards quantum computing, will be a big factor in whether something like this could be the answer to the Universe. Without more solid backing, though, it remains a strange conspiracy. For now…



[1] Jason Kehe, “Of Course We Live in a Simulation”. Wired, 9 March 2022.

[2] Anil Ananthaswamy, “Confirmed! We Live in a Simulation”. Scientific American, 1 April 2021.

[3] Malcolm Macleod, “Programming cosmic microwave background for Planck unit Simulation Hypothesis modeling”. ResearchGate, 26 March 2020. doi:10.13140/RG.2.2.31308.16004/7.

[4] Paul Sutter, “Do we live in a simulation? The problem with this mind-bending hypothesis”, 21 January 2022.

by Ava Morrissey, Aisling Hussey, Grace O’Sullivan, Ross Brandon and Sam O’Neill.

When we talk about aliens, what generally pops into mind is a short, bald, green man with massively clubbed fingers. I wouldn’t be the first to admit my perception of aliens has been tainted by Hollywood. When I use my (somewhat) scientific brain, I think of single-celled organisms, as depicted in Figure 1. The fact of the matter, however, is that my depiction and Hollywood’s hold equal merit when held to the standard of what science actually knows about what aliens might look like, especially when compared to the vastness of space.


Figure 1: Bacterium


Luckily, there have been great minds before us who thought about this; among many, Frank Drake and Sara Seager. It is generally agreed that there must be life of some sort, apart from our own, somewhere in the Universe. This idea gained support in 1996, when structures resembling fossilised bacteria were reported in a meteorite from Mars (though that claim remains disputed). Back in 1961, however, Frank Drake decided to tackle a more intimidating question: what is the possibility, or probability, of contacting another civilisation?


The Drake Equation

N = R* × fp × ne × fl × fi × fc × L

Where N = the number of civilisations with which humans could communicate,

R* = the average rate of star formation in our galaxy,

fp = the fraction of stars that have planets orbiting them,

ne = the number of those planets that are capable of supporting life as we know it,

fl = the fraction of exoplanets where life evolves,

fi = the fraction of life that develops intelligence,

fc = the fraction that develops communication through electromagnetic waves,

L = longevity of the civilisation.

The Drake equation is detailed above. Let’s try to understand it. R* is the average rate of star formation per year, generally agreed to be about 3 solar masses (3 times the mass of the Sun) per year. fp is the fraction of stars that have planets orbiting them. The most recent estimate is 1.6 planets per star, though in the past the probability of a star having a planet was put anywhere between 0.5% and 50%. ne is the number of those planets capable of supporting life as we know it. Such planets lie in ‘the goldilocks zone’, the range of distances from a star at which temperature and pressure allow liquid water to exist. Most recently, it appears that close to 50% of stars host at least one Earth-like planet, which is promising news.

The final two variables hint at the major problems with finding life, the first being communication. The nearest star system to our own, Alpha Centauri, is 4.3 light-years away, but we are, at best guess, between 24,000 and 28,000 light-years from the centre of the Milky Way. This means that if a civilisation is trying to contact us, it could take anywhere from about 5 to 50,000 years for its message merely to reach us. To put that in different terms, the strongest radio waves human beings have ever emitted into space are only about 200 light-years away. Modern humans have been around for 300,000 years, yet for only a tiny fraction of that time have we been able to emit any form of communication, never mind receive or look for messages. This emphasises the importance of the longevity of any species we hope to one day contact. Regardless of the probability of life being out there, Drake’s equation highlights the communication and longevity required to test any of the predictions for the other variables.
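Plugging numbers in is the easy part; choosing them is not. Here is a sketch using the estimates quoted above for the first three factors, with loudly labelled guesses for the rest, which nobody actually knows:

```python
# The Drake equation, N = R* x fp x ne x fl x fi x fc x L, with the
# estimates quoted above for the first three factors.  The last four
# values are GUESSES, chosen purely for illustration.
R_star = 3        # new stars formed in the Milky Way per year
f_p    = 1.0      # fraction of stars with planets (surveys now suggest ~all)
n_e    = 0.5      # habitable planets per star (the "closer to 50%" above)
f_l    = 0.1      # GUESS: fraction of those where life evolves
f_i    = 0.1      # GUESS: fraction of life that develops intelligence
f_c    = 0.1      # GUESS: fraction that develops radio communication
L      = 10_000   # GUESS: years a communicating civilisation lasts

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"N ~ {N:.0f} contactable civilisations")  # -> N ~ 15 with these guesses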


The Seager Equation

N = N* × FQ × FHz × FO × FL × FS

Where N = the number of planets with detectable biosignature gases,

N* = the number of stars within the sample,

FQ = the fraction of quiet stars,

FHz = the fraction of stars with rocky planets in the habitable zone,

FO = the fraction of observable systems,

FL = the fraction with life,

FS = the fraction with detectable spectroscopic signatures.


Seager’s equation is a variation of the Drake equation but with a more practical approach. Seager attempts to look for life in any form, microbial or civilised.

There are calculated values that can be plugged in for the first four variables, but Seager has said that the values for FL and FS are guesses. The aim of the equation is to estimate N, the number of planets with detectable biosignature gases.

A value of 30,000 can be calculated for N*, the number of stars in the sample, determined by the number of stars bright enough to be seen by the James Webb telescope. FQ is the fraction of quiet stars. Active stars are very volatile, changing temperature by thousands of degrees within minutes, making sustained life close to impossible. Beyond that, the brightness, or luminance, of an active star changes rapidly, making it difficult to look for exoplanets. This is because of how exoplanets are found: the planets themselves are too small to be seen, so telescopes look instead for the tiny dips in a star’s brightness as a planet passes in front of it. If the star’s luminance is itself changing, those already faint dips are hard to distinguish. FQ was calculated to be 0.2.

FHz, the fraction of stars with rocky planets in the habitable zone, was found to be 0.15, and FO, the fraction of observable systems, can be determined by basic geometry to be 0.001. FL, the fraction of planets that have life, and FS, the fraction with detectable biosignature gases, are educated guesses. FL was given a value of 1, which is extremely optimistic, while FS depends on what is defined as a biosignature gas.
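With those numbers, the arithmetic is a one-liner. The sketch below uses the values quoted above, with FS set to an arbitrary guess of 0.5 purely for illustration:

```python
# The Seager equation with the values quoted above.  FS has no agreed
# value; 0.5 below is an arbitrary guess purely for illustration.
N_star = 30_000   # stars bright enough to be in the sample
F_Q    = 0.2      # fraction of quiet stars
F_HZ   = 0.15     # fraction with rocky planets in the habitable zone
F_O    = 0.001    # fraction of observable (edge-on) systems
F_L    = 1.0      # fraction with life (the "extremely optimistic" value)
F_S    = 0.5      # GUESS: fraction with detectable biosignature gases

N = N_star * F_Q * F_HZ * F_O * F_L * F_S
print(f"N ~ {N:.2f} planets with detectable biosignature gases")  # -> N ~ 0.45
```

Even with the optimistic FL = 1, the geometry factor FO of 0.001 drags the result down to well under one detection per 30,000 stars, which shows how hard-nosed Seager’s version of the problem is.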


Comparing the equations

When comparing the two equations, there are three main differences. The first, and largest, is the intent of the equation. The Drake equation was written to theorise what it would take to contact an intelligent civilisation. Seager’s equation was written to create scientific steps towards a better understanding of the possibilities of life, whether microbial or intelligent. A big indicator of this is where each equation begins its analysis. The Drake equation begins at a level independent of our time and position, whereas Seager’s equation starts with a variable specific to our own time and to what we can currently observe: the average rate of star formation per year versus the number of stars in a sample visible to the James Webb telescope, respectively.

The second difference between the two equations is what they are looking for. The Drake equation is looking for a civilisation as advanced or more advanced than our own. The Seager equation is merely looking for evidence of life. The Drake equation’s final two variables signify the aim to contact life as intelligent or more intelligent than our own.

This brings us to the final and most intricate difference: the type of wave each equation relies on. To most people, who haven’t spent a large portion of their time learning about waves, be they harmonic, vibrational, longitudinal, translational or wave-particles, the matter probably seems insignificant. In fact it tells us a lot: how long a time span or distance, and what magnitude of signal, we should prepare for if something were found. Both equations focus on electromagnetic (EM) waves, which can travel enormous distances at the speed of light. Types of EM wave include light waves, radio waves and X-rays, and they lie on a spectrum, as seen in Figure 2. We use these waves to communicate and exchange energy, and we depend on them for vision. How each equation uses them is a defining factor. The Drake equation looks for long-wavelength EM waves permeating the vacuum of space: what could be found at radio frequencies. The Seager equation instead uses transit spectroscopy, a technique for inferring the atmospheric properties of exoplanets.


The electromagnetic spectrum

Figure 2: The electromagnetic spectrum (credit: NASA’s Imagine the Universe)

Spectroscopy is the study of the absorption and emission of light and other radiation by matter. It involves the splitting of light (or, more precisely, electromagnetic radiation) into its constituent wavelengths (a spectrum), in much the same way as a prism splits light into a rainbow of colours. The composition of exoplanetary atmospheres may be determined using transit transmission spectroscopy. As an exoplanet orbits its star, it blocks some of the light transmitted by the star, and some of that light must pass through the planet’s atmosphere. The composition of the atmosphere affects the transmitted light, as different molecules absorb light of different wavelengths to different extents. Again, this shows how Seager’s equation is intended to physically measure the number of habitable planets, even though we are not quite at that stage yet.
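For a sense of scale, the size of the dip a transit produces is easy to estimate: the fraction of starlight blocked is just the ratio of the planet’s disc area to the star’s. A quick sketch for an Earth-sized planet crossing a Sun-like star (standard radii, used here only for illustration):

```python
# Transit depth: the fraction of starlight blocked is the ratio of the
# planet's disc area to the star's, (Rp / Rs)^2.  Radii in km.
R_sun   = 696_000.0
R_earth = 6_371.0

depth = (R_earth / R_sun) ** 2
print(f"Transit depth ~ {depth:.1e}, i.e. about {depth * 1e6:.0f} ppm")
```

A dip of roughly 84 parts per million is tiny, which is exactly why Seager’s FQ term matters: only around a quiet star can such a faint, repeating signal be picked out.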


Be you a physicist or a philosopher, neither or both, the Drake and Seager equations offer great insight into what possibilities are out there, what we will need to be capable of to detect and contact life, and the probability that we are not alone. Although the equations are very different, they complement each other nicely. The Drake equation asks what it would take to contact life other than our own, and the Seager equation tells us what that first step requires on a more scientific level.



By: Colin Clarke, Sadhbh Leahy, Chloe Tap and Aidan Wright.

The human desire to explore the unknown has driven telescope technology to unforeseen levels. And we may now be on the verge of pushing telescope standards to new heights again.

Telescopes have been improving ever since 1608, when the first telescope design patent was submitted. That patent was for a refracting telescope capable of a magnification of up to 3 times. The design was quickly improved by astronomers such as Galileo, and within a year the magnification had reached 20 times. Over 400 years of development, telescopes have even been sent into space (e.g. Hubble) to orbit our planet, giving better images free of atmospheric attenuation. Modern telescopes and all proposed next-generation telescopes use a segmented mirror as the primary objective, in which multiple smaller curved mirrors are placed together to act as one larger mirror. They can focus on specific wavelengths and can have a light-gathering mirror larger than 20 metres in diameter.

Segmented Mirror primary objective for telescope. Credit:

As telescope designs improved and became more effective from the 17th century onwards, important astrophysical discoveries about our universe were made, and the ‘age of astronomy’ began. Discoveries such as Galileo’s moons of Jupiter continued to drive the development of astronomy and of the telescopes used, and the rate of improvement increased.

Galileo studying the stars (L) & Newton’s first ever reflecting telescope. Credit: Hulton Archive/Getty Images & ®Science Museum Group Collection

Modern and proposed ELTs (Extremely Large Telescopes), such as the ‘Thirty Metre Telescope’ now under construction, seem very impressive, but the idea of the terrascope could make even ELTs look small. Imagine a telescope with a lens the size of a planet. The concept could lead to a detector that, at only one metre across, collects as much light as a 150-metre mirror.

Rendering of ‘Thirty Metre Telescope’ currently under construction. Credit: Courtesy TMT International Observatory

Telescopes today are designed to provide ever higher magnification of celestial objects, but this comes at a steep price. The magnification a telescope provides is related to its focal length: the bigger the telescope, the greater the magnification. Large telescopes, however, are costly and laborious to build. What if we could use nature to provide long focal lengths, and so unimaginable magnification, at a fraction of the price? This is where the ideas of Albert Einstein come into play.

In 1916, Einstein published his theory of general relativity, which describes how a massive object distorts the spacetime around it. Part of the evidence for this theory lies in the phenomenon known as gravitational lensing: light rays are “bent” as they travel past a large mass, the mass acting as a “lens” that brings the rays to a focus. Einstein himself predicted such lensing by individual stars in 1936, and the effect was first observed in 1979, when two seemingly identical quasars were detected a short distance from one another. It was later shown to be a single quasar imaged twice by a foreground galaxy acting as a gravitational lens, and was aptly named the “Twin Quasar”. Thus far, gravitational lensing has been used to analyse the distribution of matter in galaxies and galaxy clusters, and to estimate the masses of the lensing objects, using instruments here on Earth. But what if we could get closer to the action?

An illustration of gravitational lensing by a galaxy cluster. Credit: Leon Koopmans

In the 1990s, Dr Claudio Maccone proposed applying gravitational lensing to spaceflight. In particular, he suggested using the Sun as a huge gravitational lens to magnify and converge light rays from a distant source onto a detector in space. The mission was dubbed FOCAL (Fast Outgoing Cyclopean Astronomical Lens) and would provide unprecedented magnification of celestial bodies. Using Einstein’s theory of general relativity, he found that light rays from a celestial object bending around the Sun’s surface converge at a minimum distance of about 550 AU from the Sun (1 AU being the Earth-Sun distance). Maccone outlined his mission to NASA and ESA in 1999, but at the time neither organisation was convinced such a feat could be achieved, and the project was left untouched; a particular issue is the enormous distance at which the detector must be placed. Recently, however, the idea has resurfaced, this time using the Earth as the lens. This was proposed by Professor David Kipping of Columbia University and aptly named the Terrascope.
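Maccone's 550 AU figure can be reproduced in a few lines from general relativity's deflection angle for light grazing the Sun; this is a small-angle sketch, not the full mission analysis:

```python
# Reproducing Maccone's ~550 AU: light grazing the Sun is deflected by
# theta = 4GM / (R c^2); rays bent at radius R converge at about R / theta.
G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c     = 2.998e8     # speed of light, m/s
M_sun = 1.989e30    # kg
R_sun = 6.957e8     # m
AU    = 1.496e11    # m (Earth-Sun distance)

theta = 4 * G * M_sun / (R_sun * c**2)   # deflection angle in radians
d_min = R_sun / theta                    # minimum focal distance
print(f"Deflection ~ {theta:.2e} rad; focal line begins ~ {d_min / AU:.0f} AU out")
```

The deflection is under ten microradians, and the focus lands at roughly 550 AU, more than ten times the distance to Pluto, which is exactly the practical obstacle that stalled the mission.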

The Terrascope is the concept of using a planet’s atmosphere as a lens for astronomy. Light from distant stars or objects refracts through the Earth’s atmosphere towards a detector on the far side of the planet. If a lens the size of a planet could be used in a telescope, the magnification would make today’s most sophisticated telescopes look unimpressive. We could potentially learn an enormous amount about our universe, from the chemical composition of exoplanet atmospheres to evidence of intelligent life. The future of astronomy is in Terrascopes.

Artwork showing how light would refract through the atmosphere.

Credit: James Tuttle Keane/David Kipping

So what makes the Terrascope different from these other ideas of natural astronomy, where light is bent by natural phenomena? The answer lies in what is doing the bending. Where light in projects such as FOCAL is bent due to the warping of space around a massive object, light incident on a Terrascope is refracted through the Earth’s atmosphere.

Why does it refract? Because the effective speed of light changes as it travels between media, and the Earth’s atmosphere can be thought of as layers of differing refractive index. Each time light passes from one medium to another it bends, just as a drinking straw appears bent where it enters a liquid.

So, picture this: light from a distant star strikes our atmosphere, bending about half a degree. This light continues until it leaves the other end of our atmosphere, bending an additional half a degree. Now imagine light from the same star striking the opposite side of the Earth’s atmosphere. It goes through the same bending process. The two rays will eventually meet and form a focus, a point where the intersection could form an image.
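That geometry gives a quick estimate of where the focus lies. Under the rough assumption of one degree of total bending, as described above, small-angle optics puts the focus at about the Earth's radius divided by the bending angle:

```python
# Where would the Terrascope's focus lie?  With ~1 degree of total bending
# (0.5 on entry + 0.5 on exit, as described above), small-angle geometry
# puts the focus at roughly f = R_earth / theta.
import math

R_earth = 6_371.0              # km
theta   = math.radians(1.0)    # assumed total bending angle

f = R_earth / theta
print(f"Focus ~ {f:,.0f} km out (the Moon orbits at ~384,400 km)")
```

The focus falls at a few hundred thousand kilometres, comparable to the Earth-Moon distance, which is why proposals place the detector in a distant orbit rather than hundreds of AU away as FOCAL requires.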

Light from a distant star is refracted on its way into the atmosphere, and again on its way out the other side, creating a focal line. Credit: Kipping

As with all things in science, the Terrascope has certain limitations. For example, there is a limit on how close the beam can pass to the Earth: if it comes too close it will strike the surface and never be refracted to a focal point. And while the atmosphere plays the key role in bending the light, it can also scatter or extinguish the beams, preventing them from reaching the detector. Similarly, clouds could block the rays outright, so the light would need to travel above them. At that higher altitude, however, the atmosphere is thinner, giving less refraction and pushing the focal point further away.

The Terrascope could also be significantly more cost-efficient than current space telescopes. The James Webb Space Telescope, for example, one of the most advanced telescopes to date, is due to launch from French Guiana in October 2021 and has so far cost over 9 billion US dollars; the Terrascope could offer greater collecting power at a fraction of the price.

James Webb Space Telescope which has cost in excess of 9 billion US dollars to date. Credit: Nasa Webb telescope.

While a Terrascope would no doubt provide an immense amount of data for research, there are still many variables that would have to be considered before such a system could be set up. For example, there is currently very little known about the effects that atmospheric turbulence would have on the effectiveness of the Terrascope. Further investigation of such factors could lead to the next big step in space observation and exploration for the human race.

Physical models and electrical analysis of the Brain

Authors – Benjamin Stott, Matthew Christie, Maya Clinton and Hannah O’Driscoll

The human brain is the most efficient information-processing system on the planet. The average computer requires about 100 watts to run, while the brain, which performs far more complex tasks, requires just 10. This is because the brain is structured with great parallelism and has had millions of years of evolution to reach this stage. One’s personality, experiences and actions are all products of this wonderful organ. Creating theoretical and physical models of the brain helps us understand its operations and potentially replicate them in building intelligent machines. Since the brain runs by firing electrical signals, circuit reconstructions of various phenomena within it can naturally be made. This field is known as computational neuroscience and is deeply intertwined with physics.

It is also useful to look at the more experimental side of physics in neuroscience: electroencephalography (the reading of electrical signals within the brain) to monitor brain activity, and, in some novel devices known as BCIs (brain-computer interfaces), the translation of this electrical activity into a command or other technological output.


Brief context of the brain’s structure to the interested reader

Basic structure of the brain’s information processing system


If we are using physics to model the processes of the brain, we are most interested in its information-processing side. Looking at the graphic above, we see a neuron, which fires an electrical signal in response to incoming signals; dendrites, which act as cables carrying signals to the neuron; and synapses, the bridging points that pass a signal on to the receiving neuron. That is the bare bones of how information is sent in the brain, and it gives us context for the achievements described below.


History of Physical models of the brain:

The Hodgkin-Huxley model of a neuron

One of the great breakthroughs of computational neuroscience was the development of a physical model of the neuron. Building on experiments begun in the late 1930s, Hodgkin and Huxley published a series of five landmark papers in The Journal of Physiology in 1952, containing both theory and experimental data. Their framework held that current passes through a neuron via sodium and potassium ion channels, releasing a voltage spike called an action potential (the neuron “fires”). Using their knowledge of electromagnetism, they constructed a mathematical model of the neuron as the following circuit.


Hodgkin-Huxley equivalent circuit of a neuron

Since this was a relatively long time ago, the tools at Hodgkin and Huxley’s disposal were far less accurate and capable than today’s, so to get around this they conducted their experiments on the giant axon of the squid: they simply found a neuron big enough to experiment on. The model reproduced many features of the squid giant axon, such as the shape and propagation of the action potential and its sharp threshold, among others. This circuit model of the neuron has proven extremely useful to computational neuroscience; one advantage is that such a simple circuit can easily be described to a computer to solve quantitative problems. As the earliest of such achievements, and a mathematically and biophysically rigorous one at that, it paved the way for further physical models of the brain.
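For readers curious to see the model run, here is a minimal sketch: the standard textbook Hodgkin-Huxley equations with their usual parameters, integrated by the simple Euler method (a toy simulation, not research code):

```python
# Minimal Hodgkin-Huxley simulation: the equivalent circuit above with
# standard textbook parameters, integrated by the simple Euler method.
import math

C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3   # uF/cm^2 and mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387          # reversal potentials, mV

# Voltage-dependent opening/closing rates for the gating variables m, h, n
def alpha_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * math.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * math.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + math.exp(-(V + 35) / 10))
def alpha_n(V): return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * math.exp(-(V + 65) / 80)

V, m, h, n = -65.0, 0.05, 0.6, 0.32   # start near rest
dt, I_ext = 0.01, 10.0                # time step (ms), injected uA/cm^2
peak = V
for _ in range(int(50 / dt)):         # simulate 50 ms
    I_Na = g_Na * m**3 * h * (V - E_Na)   # sodium channel current
    I_K  = g_K * n**4 * (V - E_K)         # potassium channel current
    I_L  = g_L * (V - E_L)                # leak current
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    peak = max(peak, V)

print(f"Peak membrane voltage: {peak:.1f} mV")  # action potentials overshoot 0 mV
```

With a sustained 10 µA/cm² injection the membrane fires repeatedly, overshooting 0 mV on each spike, just as the squid axon does; drop the current low enough and the spikes vanish, reproducing the sharp threshold mentioned above.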

Alan Hodgkin and Andrew Huxley


Dendrites and Cable Theory

Dendrites are extensions of the neuron, or nerve cell, that splinter off from the soma, the cell body. Their primary functions are to receive information, in the form of electrochemical signals, from other neurons (specifically from their synapses) and to transport these signals to the soma.

Rall realised in the 1960s that techniques developed for analysing how current and voltage behave in telegraph cables running under the ocean could be applied to dendrites. This was the beginning of cable theory. The main goal of cable theory is to create models, ranging from very simple to extremely complex, of neurons and neural systems to understand how voltage propagates through them. Each model consists of a set of assumptions about the system together with an equivalent circuit that correctly represents it. Two types of circuit can be modelled with cable theory: passive (linear) circuits, where the effects of voltage-gated ion channels are ignored, and active (nonlinear) circuits, where those channels are included.

For a good model to be produced, assumptions must be made to simplify the system. The main assumption is that a dendrite is a very long, thin conductor: the change in voltage across two points separated by the diameter of a dendrite is far smaller than across two points separated by its length, and so can be approximated as zero. Other important assumptions are that every dendrite can be split into segments of uniform radius, that the cell membrane has an inherent capacitance and resistance, and that the cytoplasm acts as a resistor.

The circuit model for passive cable theory in dendrites

These assumptions allow us to model a dendrite as the circuit above: passive cable theory. More complex situations, such as ‘leaky’ dendrites, can also be modelled with this theory.
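To give a flavour of what the passive model predicts: a steady voltage injected at one point decays exponentially along the dendrite with a characteristic length constant. The values below are assumed, textbook-style numbers purely for illustration:

```python
# Passive cable theory in the steady state: voltage injected at one point
# decays as V(x) = V0 * exp(-x / lam), with length constant
# lam = sqrt((d / 4) * (R_m / R_i)).  All values below are assumed,
# textbook-style numbers purely for illustration.
import math

d   = 2e-4      # dendrite diameter: 2 micrometres, expressed in cm
R_m = 20_000.0  # specific membrane resistance, ohm cm^2
R_i = 100.0     # axial (cytoplasmic) resistivity, ohm cm

lam = math.sqrt((d / 4) * (R_m / R_i))   # length constant, in cm
V0 = 10.0                                # mV injected at x = 0
for x_um in (0, 500, 1000, 2000):
    V = V0 * math.exp(-(x_um * 1e-4) / lam)
    print(f"x = {x_um:4d} um -> V = {V:.2f} mV")
```

With these numbers the length constant comes out at 0.1 cm (1000 µm), so a signal arriving from a distant branch can lose most of its amplitude before it ever reaches the soma, which is one reason real dendrites also use active channels.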




A synapse is a small gap at the end of a neuron that allows a signal to pass from one neuron to the next. Pre- and post-synaptic endings are typically half a micron across, about the same as the wavelength of green light! The gap itself, called the synaptic cleft, is about 20 nanometres wide; to put this in perspective, a strand of human hair can be up to 100,000 nanometres wide. An action potential propagates down towards the synaptic cleft, and a pulse of depolarising voltage reaches the synaptic terminal. This opens voltage-gated calcium channels, allowing calcium ions to flow into the pre-synaptic terminal. The calcium ions bind to presynaptic proteins that dock vesicles onto the membrane facing the synaptic cleft, causing those vesicles to fuse with the membrane, open up and release their neurotransmitters. Thus, the neuron has “fired”.

The scientists Magleby and Stevens conducted an experiment in 1972 to test the notion that when the ion channels open, they turn on a conductance. They measured this conductance directly in a voltage-clamp experiment using muscle fibres from a frog. Their results agree with the picture of the synapse as a resistor in series with a battery, so that synaptic currents can be modelled as changes in the permeabilities of the sodium and potassium ion channels. This reinforces how the equivalent-circuit diagram of the cell membrane popularised by Hodgkin and Huxley was truly a landmark in understanding how our brains work, and the electrical processes that happen in such a complex and fascinating organ.
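That resistor-and-battery picture amounts to Ohm's law: once the channels open, the synaptic current is the conductance times the distance of the membrane from the reversal potential. A tiny sketch with assumed, order-of-magnitude values:

```python
# The resistor-and-battery synapse: once the channels open, the current
# follows Ohm's law, I = g_syn * (V_m - E_rev).  Values are assumed,
# order-of-magnitude numbers for illustration only.
g_syn = 1e-9    # open synaptic conductance: 1 nS (assumed)
E_rev = 0.0     # reversal potential (the "battery"), volts
V_m   = -0.065  # membrane potential, volts (near rest)

I_syn = g_syn * (V_m - E_rev)
print(f"Synaptic current ~ {I_syn * 1e12:.0f} pA")  # negative sign = inward current
```

Tens of picoamps per synapse is tiny on its own, but a neuron receiving thousands of such inputs sums them towards (or away from) its firing threshold.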

We can see from our models of neurons, dendrites and synapses that biophysicists can apply physical models to the workings of the brain in order to better understand its activity and, perhaps, to mimic it. This is all well and good, but theory goes hand in hand with experiment, which brings us to our next topic, electroencephalography: the means by which our physical models can be checked by directly monitoring electrical activity within the brain.


Electroencephalography (EEG) and Brain Computer Interfaces (BCI)


Electroencephalography, or EEG for short, is a test that measures and records brain activity using small metal electrodes attached to the scalp. The billions of cells in the human brain communicate by electrical impulses; the electrodes pick up these impulses and send signals to a computer, where the activity appears as wavy lines of peaks and valleys on the EEG recording. EEG is used to diagnose epilepsy, sleep disorders, brain tumours, stroke, brain damage and other brain disorders.

Example of electroencephalography readings and setup

Brain-computer interfaces, or BCIs for short, take brain signals, analyse them, and translate them into instructions passed to an output device that carries out the desired action. The main goal of BCI research is to replace or restore useful function to people disabled by neuromuscular disorders such as amyotrophic lateral sclerosis, cerebral palsy, stroke or spinal cord injury. For example, a robotic limb can be fitted to a person, and when they imagine moving their arm or leg, the brain activity is analysed and the robotic limb carries out the action.



In conclusion, physics can be applied to the brain in both theoretical and experimental ways: to understand the brain and to apply its functionality to computing (machine learning) or to brain-computer interface devices. As a greater understanding of nature’s greatest work is achieved, so too will we be able to employ the mechanisms it spent ages developing.







Yasmin Zakiniaeiz. The Mind’s Machine: Foundations of Brain and Behavior, second edition. The Yale Journal of Biology and Medicine, 89(1):110, March 2016.

Constance Hammond. Cellular and Molecular Neurophysiology (Fourth Edition), 2015.

Hodgkin AL, Huxley AF (April 1952). “Currents carried by sodium and potassium ions through the membrane of the giant axon of Loligo”. The Journal of Physiology, 116(4): 449–72.

Editors, B., 2021. Dendrite. [online] Biology Dictionary.

Byrne, J., Heidelberger, R. and Waxham, M., 2014. From Molecules to Networks. Amsterdam: Elsevier/AP.

Spruston, N., 2013. Fundamental Neuroscience (Fourth Edition). Academic Press.

Hille, Bertil, 2001. Ion Channels of Excitable Membranes (3rd Ed.).

K. L. Magleby and C. F. Stevens, 1972. A quantitative description of end-plate currents.

Mohammed Ubaid Hussein, 2018. Electrical Physics Within the Body.

Hongmei Cui et al., 2020. The Key Technologies of Brain-Computer

Machine learning in physics


Machine learning is going to shape the future. In fact, it has already shaped the present. It is found in almost every facet of our lives, from speech recognition on our smartphones, to fraud prevention in banks, to medical diagnostics. As we browse the internet we are bombarded with recommendations and advertisements “based on your interests”, all products of some machine-learning algorithm taking in our browsing data and spitting out a video, article or product it thinks will pique our interest. Naturally, scientists wish to use this powerful tool to aid their own research.

This begs the question, how exactly does it work? How is the machine actually learning? There are a couple of different methods to “train” the machine, or algorithm, how to recognise patterns, the most common of which is known as supervised learning. Essentially, this means inputting a huge amount of data which the algorithm performs some operation on and returns a label for this data. In supervised learning, the algorithm is first trained with a large set of data which has the labels already assigned. The algorithm takes in the data, performs its operation, and compares its output to the correct label which is provided. It can then see how right (or wrong) its output was, go back and adjust its process, and hopefully be slightly less wrong next time. Training these algorithms generally takes an enormous amount of examples of the data, but eventually the goal is that it will be able to recognise things on its own.
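That train-compare-adjust loop can be sketched in a few lines of Python. Everything here (the points, the labels, the learning rate and the epoch count) is an illustrative toy, not any particular library's API:

```python
# Minimal supervised learning: a perceptron trained on labelled 2D points.
# The "correct label" is 1 if x + y > 0, else 0, and is provided up front.
data = [((2.0, 1.0), 1), ((1.5, 0.5), 1), ((0.5, 2.0), 1), ((3.0, -1.0), 1),
        ((-1.0, -2.0), 0), ((-0.5, -0.5), 0), ((-2.0, 1.0), 0), ((-3.0, 0.5), 0)]

w = [0.0, 0.0]   # weights: the internal "process" that gets adjusted
b = 0.0          # bias term
lr = 0.1         # learning rate: how big each adjustment is

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(50):                  # many passes over the training set
    for x, label in data:
        error = label - predict(x)       # compare output to the correct label
        w[0] += lr * error * x[0]        # go back and adjust the process
        w[1] += lr * error * x[1]
        b += lr * error

accuracy = sum(predict(x) == label for x, label in data) / len(data)
print(accuracy)
```

Each wrong answer nudges the weights so the model is slightly less wrong next time, exactly the loop described above, just with far fewer examples than a real algorithm would need.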

This is somewhat similar to how one could train a person to recognise something. If I showed someone a picture of an animal they had never seen before, for example a lemur, and asked them what it is, they would likely give some nonsense answer. Or, if they had "trained" in recognising other animals, they might say it is a skunk, due to the black and white tail. If I then told them "No, this is a lemur!", they could go back, adjust their thinking process, and if I show them more pictures of lemurs, they should be able to recognise them. The same thing is happening in a machine learning algorithm, except algorithms can be used to find patterns in massively complicated objects, provided they are trained with enough data.

A classic example of machine learning is image recognition. Say you want to train an algorithm to recognise pictures of elephants. For simplicity, the algorithm might first convert each image to greyscale. A greyscale photo is really just thousands of pixels, each with some amount of darkness assigned to it, which can be represented as a number. You feed these numbers into the algorithm, telling it "This is an elephant", and it will eventually be able to tell you whether something is or isn't an elephant independently. Of course, you would also have to feed it pictures of things that aren't elephants so it can tell the difference between the two.
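The "photo as numbers" idea can be made concrete with tiny invented images: each is flattened to a list of darkness values, and a new photo is labelled by whichever labelled example it is closest to, pixel for pixel. The images and labels below are made up purely for illustration:

```python
# A greyscale image is just a grid of darkness values (0 = white, 255 = black),
# here flattened 2x2 "photos" represented as plain lists of numbers.
labelled = ([200, 190, 180, 170], "elephant")
other    = ([ 10,  20,  15,  25], "not elephant")

def distance(a, b):
    # Sum of squared pixel differences: a crude measure of image similarity.
    return sum((p - q) ** 2 for p, q in zip(a, b))

new_image = [195, 185, 175, 165]    # unseen photo, also flattened to numbers
guess = min((labelled, other), key=lambda ex: distance(new_image, ex[0]))[1]
print(guess)
```

A real algorithm learns a far subtler function of those numbers than raw pixel distance, but the input really is nothing more than this list.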

The patterns the algorithm recognises, and which it uses to define an elephant, are often completely meaningless to us. Where we might recognise an elephant by seeing a trunk, big ears, tusks and a huge body, the algorithm may not be able to distinguish any of this. It may see completely different things as the defining features of an elephant.

The use of machine learning in scientific research is becoming increasingly useful and important. In particular, physicists and machine learning algorithms have very similar goals. Both are focused on developing a model, based on some experimental data, which describes the universal behaviour of whatever the experiment is investigating. Where they differ is in their methods. A physicist will try to find some elegant law describing the fundamental workings of nature. A machine learning algorithm will brute-force its way to some pattern that fits the data and can predict new results, but that will often be completely meaningless to those who try to interpret it. This can be extremely useful: there are many processes which are still mysterious to science, but with machine learning we can use these processes to generate predictions anyway, without yet having a true understanding of them.

Here we will discuss just a few of the ways in which machine learning is being used in scientific research. First, we will explore how machine learning is being used to design new objects by mimicking evolution through natural selection. We will also see how machine learning is being used to better understand and generate designs for new molecules, before going deeper and seeing how it is used to understand the fundamental forces governing our universe.

A wonderful introduction to some concepts of machine learning, and in particular neural networks (a type of machine learning algorithm), can be found on 3Blue1Brown’s YouTube series on the topic.


Generative design of hardware and mechanical devices

The same design process is used to create every form of hardware from microscopes to spacecraft. This design process always involves developing a design which meets a set of physical constraints while minimising the cost of producing the object. This process has two main drawbacks. Firstly, it requires technical expertise and is highly manual; every dimension and feature of each part must be precisely defined using domain-specific software tools and knowledge. As well as this, the creativity and level of exploration of the design space is limited by the capabilities of the software available and how fast the designer can iterate through and generate new designs. This results in much of the viable design space being left unexplored.

A common misconception about machine learning is that it will design highly mechanical and logical structures to meet the required constraints. However, the designs produced using generative algorithms only use material where it is absolutely necessary to meet the constraints. This results in very organic structures which appear to mimic those found in nature, such as a tree's branches or an animal's skeleton. This biomimicry arises from how the machine learning algorithm produces its final solution.

Most generative algorithms are genetic algorithms which act in a similar way to the process of natural selection. These recursive algorithms take some randomly generated initial designs and of those designs the ones which best fit the constraints are kept and mixed together to produce new and better designs. This process repeats while disregarding poor designs until the algorithm converges on a small number of possible solutions which may differ greatly depending on the initial designs they were evolved from.
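A toy version of this select-mix-repeat loop, with a one-number "design" and an invented fitness function whose best possible design is x = 3, might look like the following sketch:

```python
import random

random.seed(1)

def fitness(x):
    # Hypothetical constraint score: the ideal design is x = 3.
    return -(x - 3) ** 2

# Randomly generated initial designs
population = [random.uniform(-10, 10) for _ in range(30)]

for generation in range(60):
    # Keep the designs which best fit the constraints, disregard the rest
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Mix survivors together (crossover), with a small random mutation
    children = []
    while len(children) < 20:
        a, b = random.sample(survivors, 2)
        children.append((a + b) / 2 + random.gauss(0, 0.1))
    population = survivors + children

best = max(population, key=fitness)
print(best)
```

Real generative design evolves thousands of geometric parameters against physical simulations rather than one number against a parabola, but the recursion is the same: select, recombine, mutate, repeat until the population converges.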

The use of machine learning in design generation has found many applications in recent years. NASA has used this process to evolve antennas for their ST5 and TDRS-C missions. The main motivation for using generative design over the classical design process was to overcome the significant amounts of domain expertise, time and labour required when designing new antennas. The generative method also took into account the effects of the antenna's surroundings, which even the most skilled antenna designers find difficult to do because of the complexity this adds to the signals the antenna is trying to detect and emit. The fitness of each design was measured by evolving designs which minimise the fraction of the signal energy that isn't picked up by the antenna (the VSWR), the amount of error in the received signal (the RMSE) and the amount of data that would be lost when receiving the signal. The antennas were also designed to transmit signals using a similar measure of fitness.

Generative design of molecules

The discovery of new molecules and materials can usher in enormous technological progress in fields such as drug synthesis, photovoltaics, and redox flow battery optimisation. However, the chemical space of potential materials is of the order of 10^60 candidates, far too large to search with traditional computational optimisation methods. This is apparent in the time scale for deployment of new technologies: historically, 15 to 20 years from laboratory discovery to commercial product. The underlying discovery process in material development involves generation, simulation, synthesis, incorporation in devices or systems, and characterisation, with each step potentially lasting years at a time.

Generative (or inverse) design using machine learning allows this discovery process to close the loop, concurrently proposing, creating, and characterising materials, with each step transmitting and receiving data simultaneously. In traditional quantum chemical methods, properties are revealed only after the essential parameters have been defined and specified, i.e. we can only investigate the properties of a material after we create it. Inverse design, as the name suggests, inverts this: the input is now the desired functionality, and the output is a distribution of probable structures. In practical settings, a combination of functionality and suspected materials is used as the input to the generative neural network. In certain applications this can decrease the time from molecular discovery to deployment by a factor of 5.

The mechanism for such neural networks is a joint probability distribution P(x, y): the probability of observing both the molecular representation (x) and the physical property itself (y). Without going too deep into the details, a generative model is trained on large amounts of data and attempts to generate data like it. A loss function encodes the notion of likeness, measuring the difference between the empirically observed probability distribution and the generated P(x, y).
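As a toy illustration of a loss that encodes "likeness", the sketch below scores two candidate models of P(x, y) against the empirical distribution of a small invented dataset using the Kullback-Leibler divergence (one standard choice of such a loss); the "molecules" and properties are entirely made up:

```python
import math
from collections import Counter

# Invented observations of (molecular representation x, physical property y)
data = [("A", "soluble"), ("A", "soluble"), ("A", "insoluble"),
        ("B", "insoluble"), ("B", "insoluble"), ("B", "soluble"),
        ("A", "soluble"), ("B", "insoluble")]

counts = Counter(data)
empirical = {pair: n / len(data) for pair, n in counts.items()}

def kl_loss(model):
    # KL divergence between data and model: zero only when they match exactly.
    return sum(p * math.log(p / model[pair]) for pair, p in empirical.items())

uniform = {pair: 0.25 for pair in empirical}   # naive model: all pairs equal
fitted  = dict(empirical)                      # model matched to the data

print(kl_loss(uniform), kl_loss(fitted))
```

A generative network minimises exactly this kind of quantity, except over an astronomically larger space of molecular representations, and by adjusting millions of parameters rather than copying frequencies.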

Generative machine learning methods for molecules, materials, and reaction mechanisms have only recently been applied, with the majority of publications emerging in the past three years. One such paper, from William Bort and colleagues, published in February 2021, demonstrated that AI could generate novel chemical equations that were stoichiometrically viable. Startup companies such as Kebotix recently received $11.5 million in funding to continue their work in the promising field of automated material discovery.


Improving models in particle physics

Beyond discovering patterns in the maze of forces working to hold atoms in a material together, AI is used at the current bedrock of our understanding of matter: helping us understand the substructure of subatomic particles. Subatomic particles are the protons and neutrons that sit at the centre of an atom; their substructure comprises quarks glued together by particles which mediate the strong force between them, known as gluons. Exactly how these completely fundamental particles fit together to give the structure of matter as we know it is accounted for in what particle physicists call the Standard Model.

The notion of symmetry is critical to particle physics: symmetries explain the underlying dynamics by dictating which interactions are allowed, and they necessitate the existence of particles that have since been experimentally shown to exist. It is important that, in solving particle physics problems, these symmetries are leveraged to simplify the physics, since the problems are so big that it is no longer as easy as plugging in the numbers. There is not enough supercomputing in the world to compute Standard Model predictions for dark matter scattering, for example.

The reason the physics is so computationally heavy is that in low-energy limits the theory is non-perturbative, meaning physicists can't solve an easier problem and nudge their answer towards the harder situation they really want to solve. They have to be exact. The solution is to take the region of four-dimensional space-time that the quark and gluon fields dance within and "grid-ify" it. To give a two-dimensional example of why we would do this: if you were at a beach with waves gentle enough that they weren't crashing, you can imagine placing a large floating net of lights on the water. When night arrives, you can't see the waves, but you can see the lights, and the way the waves move the lights gives a really good impression of the waves themselves.

To take this analogy one step further, imagine that it's very costly to run a light (to collect data at a grid point, that is), but you still want to see the waves, or at least know where they are. You then have to sample the grid with a couple of lights, but you want your sampling to be maximally informative since it's so costly. We have a priori information that there is, say, 10 metres between successive waves, so there's little point looking 5 metres after one wave expecting to see another. There's also no use in sampling lights too close together, as the information they carry is probably not that different (they're correlated). Non-perturbative Standard Model sampling methods are similar; the most common is a combination of Hamiltonian mechanics and Markov Chain Monte Carlo. The Hamiltonian mechanics aspect is what brings the a priori physics knowledge to the sampling process, telling us where it is more useful to look, and the Markov Chain Monte Carlo optimises for choosing grid points that are as uncorrelated as possible, to squeeze the most information out. However, the Markov method quickly becomes more expensive in its checking for correlation between neighbouring grid points than it would be to just sample the grid.
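A stripped-down Markov Chain Monte Carlo sampler (the Metropolis algorithm, without the Hamiltonian ingredient) shows the flavour of such sampling; the target distribution here is a simple bell curve rather than any real field theory:

```python
import math
import random

random.seed(0)

def target(x):
    # Unnormalised probability of a "field value" x (a standard bell curve).
    return math.exp(-x * x / 2)

samples = []
x = 0.0
for step in range(30000):
    proposal = x + random.uniform(-1, 1)    # nudge to a nearby "grid point"
    # Always accept moves to higher probability; accept downhill moves
    # with probability equal to the ratio of probabilities.
    if random.random() < target(proposal) / target(x):
        x = proposal
    if step > 1000:                         # discard the warm-up phase
        samples.append(x)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)
```

Successive samples are visibly correlated (each is a small nudge of the last), which is exactly the inefficiency the text describes: squeezing independent information out of the chain is what costs the compute.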

It happens in particle physics calculations that some grid points are less expensive to compute than others. It also happens that the aforementioned symmetries mean the state of the field at one point strongly implies the state of the field nearby, further than what the Markov chain can check for without bloating its computing power requirement. This is where AI steps in. The AI can be taught the symmetries at play, so that we may sample in an inexpensive grid region close to an expensive one, and let the AI (a convolutional neural network) transform the result of an inexpensive sampling, under the rules of Standard Model symmetries, into a sampling of the expensive region. What's really cool is that the mathematics responsible for the way the AI builds up its transformation means that its inference is provably exact, no approximation necessary, which is good because we're working within a non-perturbative theory.


Best practices for physicists making ML models

To put all of this into practice, and to ensure that physicists are not wasting too much of their time wrestling with code, there needs to be a relationship between the physics and computer science faculties similar to that which exists between the physics and mathematics faculties. As with the introduction of calculus, machine learning and computational physics have pushed, and will continue to push, physics in new and interesting directions. Calculus provided a whole new way to model and think about physical systems: the language of change that describes our ever-so-dynamic universe. If physicists wished to take advantage of this new way of thinking, they first needed to learn the corresponding formalisms, or to put it simply, they needed to learn the mathematics; there is no getting around the derivatives, limits, integrals, proofs and theorems (with theoretical physicists being the bastard child of both the maths and physics departments). It is necessary, then, to establish a common language that will enable computational physicists to collaborate and push forward our understanding of the universe in a manner that minimises the amount of time some post-grad spends reinventing the wheel. Established fields such as mathematics and physics don't even use the same convention for spherical coordinates, let alone agree on how to ensure that one's machine learning model is usable across computational environments; I'm willing to bet not many undergraduates know what Docker is. The point is that physicists need to learn and embrace the principles and tools of software engineering to succeed.


Before even getting into the particulars of ML and the corresponding pipelines/workflows for training and evaluation using open source platforms such as Kubeflow, we need to discuss some of the basic foundations that every physicist must know, because I am nearly certain that most physics students have yet to complete their first pull request.


The most fundamental of these working principles is project structure and version control. This upholds the very nature of scientific enquiry and empiricism by enabling scientific results and evidence to be reproduced and confirmed by other researchers. So, where in the repository is the raw data stored? The transformed training data? The script that runs the model? These are the sort of project structure issues that frameworks like Cookiecutter Data Science are designed to solve; they facilitate the implementation of more streamlined workflows. Being able to communicate your work and have others reproduce it is essential, and version control, in a nutshell, is a way of tracking and managing changes to a set of files, in this case computer code, more than likely .py and .ipynb files. The most popular piece of software for implementing it is Git. Once you have turned your project into a Git repository, you need a way to manage it, and this is where tools such as GitLab, Bitbucket or GitHub come into play. These management tools then allow you to easily work on and develop your project or research in a constructive manner.


Now that you can share your code and how it has developed over time, you need a way to make sure new changes do not end up breaking it. This is where testing comes in. It is essential and, without going into too much detail (it's in the name), this is code that makes sure your code works: unit testing, integration testing and system testing. Unit tests, the easiest to start implementing, are tests for individual elements of your code, such as functions. There are many fantastic open-source libraries, such as Pytest, that will have you writing unit tests faster than you can do your stat phys homework. Unit tests are essential for checking the correctness and internal consistency of individual code components before they are placed in more complex contexts, such as a pipeline for training your model.
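A first unit test can be as small as the sketch below. Pytest discovers any function whose name starts with `test_` automatically; here the tests are also called directly so the snippet runs on its own. The `kinetic_energy` function is just an invented example of the kind of small component worth testing:

```python
def kinetic_energy(mass, velocity):
    # Classical kinetic energy, E = (1/2) m v^2
    if mass < 0:
        raise ValueError("mass must be non-negative")
    return 0.5 * mass * velocity ** 2

def test_kinetic_energy_at_rest():
    # A body at rest carries no kinetic energy
    assert kinetic_energy(10.0, 0.0) == 0.0

def test_kinetic_energy_is_quadratic_in_velocity():
    # Doubling the velocity should quadruple the energy
    assert kinetic_energy(2.0, 6.0) == 4 * kinetic_energy(2.0, 3.0)

test_kinetic_energy_at_rest()
test_kinetic_energy_is_quadratic_in_velocity()
print("all tests passed")
```

Once tests like these exist, any change that breaks the function is caught the moment the test suite runs, rather than weeks later inside a training pipeline.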


Now, your project is structured in such a way that you, and more importantly your colleagues, can understand what you are doing, and it affords you a way of stopping and dealing with any bugs that arise during development. All you need from here is a way for other people to run your models on their own systems or resources. This brings us nicely onto environments. What Linux distro are you using? What dependencies do you need? These questions are solved by using virtual environments (if you use Anaconda, conda is already doing this for you). They are a tool that helps keep the dependencies required by different projects separate, which helps ensure you do not brick your project by creating dependency or package conflicts. Finally, you want to run your code on something other than your laptop; it's now time to become familiar with Kubernetes and the concept of containers. Containerising allows for scalability and for you to run your code in the cloud or on whatever compute resources your network has access to. In this context, scalable means that your project can access more compute resources when needed and then release them when they are not. To do this a few years ago, you would have needed to be a software developer or an engineer as well as a physicist, but due to the maturity curve for technology, things that started off as custom implementations can, if they become popular enough, emerge as products (e.g. Kubernetes), and the tech starts to become a commodity. We are now in the exciting situation where we can yet again stand on the shoulders of giants and make use of these powerful tools.


Once you have a container image/images for your project you will be able to distribute your ML workloads when needed, which means you can go to bed and not need to leave your laptop running overnight.


Now you have a project that can be understood and replicated by others. The only question left is how can you share the final model? And again, we will look to another discipline outside, yet still connected to physics: computational neuroscience. ModelDB is a fantastic example of how a particular discipline makes their models accessible. To directly quote their website: “ModelDB provides an accessible location for storing and efficiently retrieving computational neuroscience models. A ModelDB entry contains a model’s source code, concise description, and a citation of the article that published it.”


So on a final note, a theoretical physicist not including in their work the proof to a given theorem should be as shocking as a computational physicist not linking their source code.

Computer processors are an integral part of the modern world. Most, if not all, of our day-to-day lives require processors to run smoothly. On-board computers in modern cars, phones, tablets, shop tills, smart TVs, other smart home technologies and so on all require processors to function.

The constant push for faster, more efficient technologies helps to drive research and development in the area of semiconductor and microchip technologies that will continue to allow technology to improve our lives for years to come.

Processors in a computer are made of different parts that perform different functions. Each part is a chip made of integrated circuits built from semiconductor wafers that are connected with copper or aluminium wiring. A wafer is just a thin slice of semiconductor material. Semiconductors are materials that conduct electricity more easily than insulators (e.g. wood), but not as easily as conductors such as copper. The semiconductors and wiring are used to produce components that make up transistors, and this allows different computing processes to occur.

The metal oxide semiconductor field effect transistor (MOSFET) is the most produced artifact in human history, with more units manufactured since its creation than any other device. It was invented by Mohamed Atalla, an Egyptian engineer, and Dawon Kahng at Bell Labs in the 1950s.

In 1965, Intel co-founder Gordon Moore estimated that the number of transistors that could be fit into an integrated circuit would double every year, and in 1975 revised this to claim that the number would double every two years. This has become a sort of self-fulfilling prophecy, as Moore's Law, as it was dubbed, is still used for long-term R&D planning.
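The two-year doubling claim is easy to turn into arithmetic. The sketch below projects transistor counts from the roughly 2,300 transistors of Intel's first microprocessor, the 4004 of 1971 (an illustrative starting point of ours, not one Moore used):

```python
def projected_transistors(year, base_year=1971, base_count=2300):
    # Moore's Law, 1975 version: the count doubles every two years.
    return base_count * 2 ** ((year - base_year) / 2)

for year in (1971, 1981, 2001, 2021):
    print(year, round(projected_transistors(year)))
```

Fifty years of doubling gives a factor of about 33 million, landing the 2021 projection near 7.7e10 transistors, which is at least the right order of magnitude for the largest chips of that era.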

Around 2010, reports began to suggest that advancement in semiconductors was beginning to slow down, yet if you glance at a plot containing the transistor count over time for processors, there does not really seem to be a significant shift in the rate of progress.

Moore himself expects that his Law will cease to apply around 2025.

The Huawei Mate 40 and the Apple iPhone 12 were released in late 2020, sporting chips with transistors that were only 5nm across. It is expected that the first commercially available products containing 3nm chips will be available in 2024/25, with the next expected size being 2nm. However, experts do not know whether transistors smaller than 2nm will be possible (consider that 1nm is roughly the width of only five silicon atoms). Currently, the transistor components are so small that any defects (which are common in materials), such as missing atoms, can hinder the transistor's performance.

Other potential issues include the density of wiring in the wafers. Now that there are millions of transistors in each square millimetre, vast amounts of wiring are required in these chips, and as the transistor count increases, so too will the amount of wiring. In spite of increasing clock frequencies within the chips, signals will transit more slowly because of the increasingly long and narrow wires they must traverse for an operation to occur.

As the size of processors continues to decrease, and the density of transistors in them continues to increase, the energy consumption of the chips is not decreasing proportionally. This is the breakdown of Dennard scaling, a hypothesis that ran parallel to Moore's Law but stopped describing the trend in energy consumption of these components in the 2000s. As a result, in order to maintain performance and efficiency, some transistors in these chips must be left unused. This effect slows the CPU down, and as a result IBM and Microsoft believe that it spells the end of multi-core processor scaling.

Physical constraints are also at the mercy of economic constraints, as some potential solutions could present more inconvenience for the consumer than is worthwhile, or would require increasing chip size, which would defeat the purpose of decreasing chip size for cost and energy efficiency. However, decreasing wire cross-section will increase the resistance in the wires, so even spacing out smaller wires would present problems.

For the time being, it seems that we must continue with our dedication to the status quo and continue in the pursuit of miniaturisation.

This then begs the question, what will happen when we reach the atomic scale and cannot make transistors smaller?  Many large businesses rely on the ‘outdating’ of old models to continuously generate profit. When we reach the limit of transistor size, this will no longer be the case, and will likely force many businesses to either adapt and restructure, or develop new ways of maintaining continued growth and improvement.

One consideration would be to do away with transistors and use something that better suits our purpose. However, thus far, the study of solid-state devices has been incapable of producing a device that could succeed the transistor.

Making technology larger to accommodate larger chips, and hence greater computing power, also has its limits. Phones and laptops are popular for their convenience, and so making them larger defeats their purpose. As a result, we will soon need to turn to new methods of computing.

We must somehow adapt the transistor or replace it if we wish to maintain the continued rate of advancement in technology. That said, with the lithography lasers in use, we expected to run into issues when transistors decreased below 65nm in size; however, new methods of using the lasers accurately have meant that this supposedly insurmountable barrier has long been forgotten about.

Regardless of whether we were capable of overcoming the issues we previously came up against, what ideas do we have that could present a possible solution?

In 2016, professors at Stanford University and the University of Texas successfully produced a transistor gate that was only 1nm in size. This gate was produced from a transistor made from molybdenum disulfide (MoS2) and carbon nanotubes, as opposed to the standard silicon. This is possible because MoS2 has a lower dielectric constant (roughly 4) than silicon (roughly 11.7). The study showed promising results; however, scalability and suitability for mass production were left in question by the study, as certain properties of the material were prone to cause unwanted effects.

Another proposed idea has been that reading an electron's spin state could provide information in the same way that the on/off gates in transistors do. Electrons have a property known as spin, which is measured to be either "spin up" or "spin down", and using this could hypothetically be a solution to our enigma. However, again, some issues have arisen, the main one being that transistors are extremely useful because they are capable of changing the resistance in the circuit by switching from one state to the next. When this is attempted in a system in which electron spins are used as the bits, the difference in resistance between the states produced by the components that measure the spins is minimal.

Further alternatives include quantum computing. With quantum computing, as with the electron spin measurement alternative, we deal with spin states. However, for this proposal, instead of having a binary system of states, the information is stored in quantum bits (qubits) that are a superposition of states, that is, a combination of spin up and spin down states, and so the parameters that describe the system are continuous. This means that the computer has much greater capacity than classical computers. However, this also has limits, as even simple systems are described by vast numbers of parameters. If you wish to build a system that has some practical purpose, the number of parameters required becomes huge, meaning that quantum computing is impractical with current technology.

Yet another alternative proposed has been to use light in CPUs in order to vastly speed up systems and decrease energy expenditure.

Optical components are difficult to make as they require what are known as quasi-particles in order to mediate the signal. In most cases the setup must be maintained at extremely low temperatures; however, IBM's most recent offering has succeeded in operating at room temperature. Within this device, switching times of below a picosecond were achieved.

Many discussions about the viability of these devices have focused on the high energy requirements of optical logic. It is possible that the technology can be used to mitigate the predicted drastic increase in energy loss arising from the increasing lengths of chip interconnects.

In another paper, a team of researchers successfully produced an optical transistor using a single dye molecule. The setup required cryogenic cooling, however, and also cannot be scaled down below the size of a micron.

Using cadmium sulfide, researchers from the University of Pennsylvania[4] have successfully produced a photonic transistor with similar properties to those of the electrical transistors used in devices today. The authors of that paper believed that, using the devices they had developed, on-chip photonic devices were very much a possibility in the near future.

The scaling down of transistors may possibly be continued through the use of carbon nanotubes. These are tubes produced from graphene and are very conductive. They have a much higher carrier mobility than silicon FETs and so could be more energy efficient and faster; however, they are not without their complications.

For example, there is a layer of insulation between the gate and channel known as the gate dielectric. The issues arising in silicon transistors due to tunnelling effects were dealt with using hafnium dioxide; however, in carbon nanotube transistors, creating dielectrics thin enough to control the devices proved difficult. In the end, researchers were able to produce a dielectric only 4nm across with an on/off ratio similar to that of silicon devices.

One other issue is that the nanotubes are difficult to reliably orient and pack densely; however, researchers at Peking University reported promising results in 2020 from an experiment in which they successfully packed 200 nanotubes per micrometre in alignment.

Power dissipation occurs due to high resistance between the nanotubes and the contact metals. Some research has been undertaken to try to eradicate this problem; however, only minor success has been achieved, with p-type devices. Doping with molybdenum oxide has been shown to decrease the resistance substantially.

Small percentages of nanotube transistors are metallic, and the presence of these metallic transistors can lead to high leakage currents and possibly even incorrect logic functionality. This is certainly not wanted in computers and other devices, and so more work is required to rectify these issues. All in all, carbon nanotubes present a very interesting case and do have the potential to revolutionise the semiconductor industry; however, this will be some years away yet!

Quantum computing, meanwhile, uses the concept of superposition to perform computations. Compared to the standard on/off setup of current transistors, the states in a quantum computer are continuous, defined through combinations of different basis vectors. The result is that a very small number of qubits would be sufficient to surpass the capabilities of a classical computer.
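One simulated qubit in plain Python makes the "combination of vectors" concrete; the Hadamard gate used below is the standard operation that takes a definite 0 state to an equal superposition:

```python
import math

# A qubit state is a pair of complex amplitudes for the basis vectors |0> and |1>.
state = [1 + 0j, 0 + 0j]                       # definitely "off": the |0> state

# Hadamard gate: rotates |0> into an equal mix of |0> and |1>
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

# Apply the gate: ordinary matrix-vector multiplication of the amplitudes
state = [H[0][0] * state[0] + H[0][1] * state[1],
         H[1][0] * state[0] + H[1][1] * state[1]]

# Measurement probabilities are the squared magnitudes of the amplitudes
probs = [abs(a) ** 2 for a in state]
print(probs)    # ~[0.5, 0.5]: equally likely to read "off" or "on"
```

The two continuous amplitudes are exactly the extra parameters the text describes: a classical bit holds one of two values, while this single qubit carries a whole vector of them.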

The difficulty is that, for the system to remain isolated from its surroundings, it relies on superconducting circuits that must be cooled far below 77 kelvin, typically to within a few hundredths of a kelvin of absolute zero. Room-temperature alternatives are being developed, but they are even more costly. Qubit interactions are mediated through entanglement; even so, it is unlikely that we will see a quantum computer available for consumer use in the near future!

Again, economics will always be a limiting factor in the production of these devices, and the falling prices of high-speed processors do not help the case for high-cost quantum computing.








[] Strong modulation of second-harmonic generation with very large contrast in semiconducting CdS via high-field domain, Nature Communications.

[1] Dürkop, T., Getty, S. A., Cobas, E. and Fuhrer, M. S., Nano Lett. 4, 35 (2004).

[2] NSM Archive – Physical Properties of Semiconductors.

[4] Liu, L. et al., Aligned, high-density semiconducting carbon nanotube arrays for high-performance electronics, Science 368 (6493), 850–856 (2020). DOI: 10.1126/science.aba5980.

[5] Franklin, A. D. and Chen, Z., Length Scaling of Carbon Nanotube Transistors, Nat. Nanotechnol. 5 (12), 858–862 (2010).

[6] Park, R. S. et al., Molybdenum oxide on carbon nanotube: doping stability and correlation with work function, J. Appl. Phys. 128, 045111 (2020).

[7] Hills, G., Lau, C., Wright, A. et al., Modern microprocessor built from complementary carbon nanotube transistors, Nature 572, 595–602 (2019).

Pitner, G. et al., Low-Temperature Side Contact to Carbon Nanotube Transistors: Resistance Distributions Down to 10 nm Contact Length, Nano Lett. 19 (2), 1083–1089 (2019).


Rocket propulsion refers to a body being accelerated (its velocity changed) by the expulsion of stored "propellant" material from that body. The ejected matter carries momentum, so by conservation of momentum the body's momentum must change in the opposite direction; this is the thrust on the rocket.

The physics behind rockets was put to use as early as 1232 A.D., unsurprisingly motivated by warfare. The Chinese used fire arrows to defend against Mongol invaders. These were solid-propellant rockets containing gunpowder in a tube, which once lit produced gases which escaped through a hole in the back of the arrow, causing thrust. Fire arrows are the basis of the legend of Wan-Hu, a Chinese general who attached forty-seven rockets to a chair and disappeared after ignition, never to be seen again. These rockets also laid early foundations for the bazooka, used by the U.S. in WWII, which launches rockets out of a tube to better control their direction.

The idea of staging, discovered by a German fireworks maker, is fundamental to modern rocketry. This is where a primary rocket is ignited and carries smaller rockets which ignite once it burns out. The first stage rocket may then detach to reduce the weight of the overall rocket and allow for greater heights and speeds to be reached, as the smaller rockets have reached a certain height ‘for free’, i.e. they haven’t had to use up their own propellant yet. This can also be done with several small booster rockets attached in parallel to a big rocket, for example the NASA Space Shuttle.
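The advantage of dropping dead weight can be made concrete with the ideal (Tsiolkovsky) rocket equation, Δv = v_e ln(m_initial/m_final). The masses and exhaust velocity below are invented round numbers for illustration, not figures for any real vehicle.

```python
import math

def delta_v(v_exhaust, m_initial, m_final):
    """Tsiolkovsky rocket equation: ideal velocity change from a burn."""
    return v_exhaust * math.log(m_initial / m_final)

v_e = 3000.0  # m/s, illustrative exhaust velocity

# Single stage: 100 t at lift-off, 10 t of structure + payload at burnout.
single = delta_v(v_e, 100.0, 10.0)

# Two stages burning the same 90 t of propellant in total, but the first
# stage's 8 t of dry casing is dropped before the second stage fires.
stage1 = delta_v(v_e, 100.0, 30.0)   # burn 70 t, then discard the casing
stage2 = delta_v(v_e, 22.0, 2.0)     # 22 t remains aloft, 2 t at burnout
two_stage = stage1 + stage2

print(round(single), round(two_stage))  # staging yields a higher total delta-v
```

The second stage starts its burn without hauling the first stage's empty casing, which is exactly the "for free" height and speed described above.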

Rockets may be differentiated by the type of energy source they use. These include chemical combustion, nuclear reaction and radiation, although radiation methods are not limited to the sun – they can also involve microwaves and laser beams sent from Earth to a flying receiver, known as a light sail. These three methods define three categories which rockets can be divided into: chemical propulsion, nuclear propulsion and electric propulsion.

Chemical rockets are the most widely used type, the two most common varieties being solid propellant and liquid propellant. These rockets have a combustion chamber in which chemical reactions between fuel and oxidizer (which together make up the propellant) produce exhaust gases, ejected to provide thrust. Solid-propellant designs are often simpler, but liquid-propellant rockets allow for greater efficiency and control of flight.

Nuclear thermal rocket (NTR) designs use nuclear fission to heat propellant and are efficient in generating extremely high thrust. These rockets have undergone ground tests, the earliest taking place in the 1950s, although they have never been flown. The Pentagon plans to launch an NTR in 2025, which would allow for more agile and rapid transit through space than current rockets.

Electric rockets come in three main types – electrothermal, electrostatic and electromagnetic – each with its own pros and cons. Electrothermal rockets have relatively high thrust but low efficiency compared with electrostatic and electromagnetic types. Electrostatic thrusters have very low thrust but very high efficiency. Electromagnetic thrusters can in theory combine the high thrust and high efficiency of the other two types, but they pose many difficulties in both theory and engineering.

Electrothermal rockets work by electrically heating a propellant and then accelerating it out of a nozzle. The main limitations on these types of thrusters are:

  1. How hot you can heat the propellant before melting the rocket.
  2. How much propellant you can heat per unit time.

Due to these limitations, electrothermal rockets only produce a few Newtons of thrust. However, they are much more efficient than chemical rockets and so are useful once in space.

Electrostatic thrusters ionise their propellant and use the Coulomb force to accelerate the ionised propellant out of the back of the rocket to produce thrust. This produces extremely high exhaust velocities, although electrostatic thrusters only produce a few hundredths of a newton of thrust. Despite this, they are the most common type of electric rocket thruster in use today because of their extremely high efficiency. They are often called ion engines.

Electromagnetic thrusters also ionise their propellant. However, instead of using the Coulomb force to accelerate the ionised propellant, they use the Lorentz force. This allows for much greater thrusts and, in theory, lets electromagnetic thrusters combine the high efficiency of electrostatic thrusters with the relatively high thrust of electrothermal thrusters. However, the power requirements of electromagnetic thrusters are very large, and superconducting magnets are needed to control the magnetic fields used. This makes for a difficult and expensive engineering endeavour, which is why research into electromagnetic thrusters lags behind that into electrothermal and electrostatic thrusters.
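The thrust/efficiency trade-off across these three families follows from two basic relations: thrust F = ṁ·v_e and jet power P = ½·ṁ·v_e², so at a fixed power budget F = 2P/v_e. The power budget and exhaust velocities below are illustrative values only, not specifications of real thrusters.

```python
# At a fixed jet power P, thrust and exhaust velocity trade off:
#   F = m_dot * v_e  and  P = 0.5 * m_dot * v_e**2   =>   F = 2 * P / v_e
def thrust_at_power(power_w, v_exhaust):
    return 2.0 * power_w / v_exhaust

P = 5000.0  # W, illustrative power budget for a small spacecraft

electrothermal = thrust_at_power(P, 5_000.0)    # modest exhaust velocity
electrostatic  = thrust_at_power(P, 30_000.0)   # ion-engine-like exhaust velocity

print(f"{electrothermal:.2f} N vs {electrostatic:.3f} N")
```

Doubling the exhaust velocity (i.e. the propellant efficiency) halves the thrust available from the same power supply, which is why high-efficiency ion engines are inevitably low-thrust.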


By Adam Bourke, Liam McManus, William Cosgrave and Jason Basquill.

For a long time, holograms have captured people's imagination, and they appear in many of our favourite works of science fiction, be it the hologram of Princess Leia in the first Star Wars film or the holographic advertising in Back to the Future Part II.

While we may not be at the level of science fiction yet, it is true that we can create holograms in the present day. Invented in 1948 by Dennis Gabor, an achievement that later earned him the Nobel Prize in Physics, holography has many uses ranging from data storage to fraud prevention.

Recent research from the University of Glasgow has now shown us a new quantum technique for holography which overcomes certain obstacles imposed by classical physics. This development will greatly advance the applications of holography.

How classical holograms work

A laser beam is split into two separate beams, one of which illuminates the object we want to recreate as a hologram; this is known as the object beam. The second, known as the reference beam, travels a different path, but the two eventually recombine on the holographic plate. Because a laser is a coherent light source, the two beams have the same frequency and were initially identical. The object beam, however, now carries a slightly different phase because of its interaction with the object, so when the two beams recombine, the interference pattern they create encodes the object we want to recreate as a hologram.

fig1.Classical Holography
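A one-dimensional numerical sketch of this recording step (using NumPy; the beam angle, wavelength and phase imprint below are invented toy values, not the parameters of any actual setup): the intensity fringes on the plate encode the object's phase shift.

```python
import numpy as np

# Position along the holographic plate (m) and a HeNe-like wavenumber.
x = np.linspace(0.0, 10e-6, 1000)
k = 2 * np.pi / 633e-9

reference = np.exp(1j * k * x * 0.01)                    # reference beam
object_phase = 2.0 * np.exp(-((x - 5e-6) / 2e-6) ** 2)   # toy phase imprint
obj = np.exp(1j * (k * x * 0.01 + object_phase))          # object beam

intensity = np.abs(reference + obj) ** 2   # what the plate records

# Where the object adds no phase, the beams interfere fully constructively
# (intensity near 4); where the imprinted phase is largest, the fringes dim.
print(intensity.min(), intensity.max())
```

Reading the fringe pattern back out is what reconstructs the object: the phase information is stored entirely in how the two beams interfere.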

Quantum Holography with Entanglement

We have established that optical coherence is needed for a classical hologram, because light must be coherent to interfere. But light made of entangled photons can interfere without being coherent. Entanglement means that two photons remain correlated even when separated by a great distance: measurements made on one photon of the pair are correlated with measurements on its entangled twin. This is the principle underlying the University of Glasgow experiment. One photon has the image phase information encoded on it, and this is reflected in its partner even though the two never overlap as they would in the classical case. Separate cameras detect the photons, and phase correlation measurements are taken to reconstruct the image. This is astounding, as it means an object could effectively be imaged with one photon and the image reconstructed from its entangled twin anywhere else on the planet.

fig2. How the quantum hologram is created in the University of Glasgow experiment.

This breakthrough will no doubt lead to significant advances in all applications that utilise holography such as quantum information and medical imaging.




[3] Defienne, H., Ndagano, B., Lyons, A. et al., Polarization entanglement-enabled quantum holography, Nat. Phys. (2021).

Infrared imaging and spectroscopy have become two of the most fascinating areas of modern research in physics, with applications ranging from the study of plant tissue structures all the way to investigating the structures of galaxies. Indeed, despite not being as flashy as the string theories or general relativities of the world, the science behind IR imaging has become a pivotal part of modern technology and provides a key insight into the fabric of our universe.

So, what is IR imaging? IR, or infrared, refers to a specific wavelength band of the electromagnetic spectrum, with IR wavelengths slightly longer than those of visible light and shorter than those of microwaves and radio waves.

FIG.1. The electromagnetic spectrum, with IR light highlighted [1]

The important feature to note is that all objects emit some IR radiation. Thus the main benefit of studying IR spectra is that it allows one to study an object whose visible-light spectrum either doesn't exist or is blocked, and to determine properties and features of its inner workings.
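The claim that everything glows in the infrared can be quantified with Wien's displacement law, λ_peak = b/T, which gives the wavelength at which a body's thermal emission peaks:

```python
# Wien's displacement law: the peak emission wavelength of a body at
# temperature T (in kelvin) is lambda_peak = b / T.
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_um(temp_k):
    return WIEN_B / temp_k * 1e6  # result in micrometres

print(peak_wavelength_um(310))   # human skin (~37 C): ~9.3 um, mid-infrared
print(peak_wavelength_um(5800))  # the Sun's surface: ~0.5 um, visible light
```

A human body at roughly 310 K peaks deep in the infrared, which is exactly why thermal cameras can image us in total darkness, while the much hotter Sun peaks in the visible.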


IR Imaging in Medicine

Perhaps the best-known, and arguably the most useful, application of IR imaging in general society is in medicine. Infrared thermography has been present in the medical landscape for decades, beginning in 1928 when Professor Czerny in Frankfurt presented the first IR image of a human body. For the first time, technology could accurately describe the temperature and radiation produced by nearly any part of the human body, without even making contact with or disturbing the skin being measured.

Of course, as with all discoveries made in history, there were, and still are, drawbacks. Early systems suffered from poor thermal and geometric resolution; the original prototypes offered little in quality or even usability, as their stability and measurement accuracy were often questionable. The biggest downside, however, was reproducibility. It was often too expensive to build even the most basic infrared cameras, for several reasons: the scarcity of the required raw materials, the heavy machinery needed for construction, the manpower required to build and operate that machinery, and the belief that, with X-rays as successful as they were in medicine, IR imaging would not be needed.

Most of these obstacles were overcome eventually, through funding from large corporations as well as the invention of better and more cost-effective cameras. Despite the setbacks experienced, the end products have been able to pave the way for a new era of disease detection. IR images are used to show abnormally increased focal surface temperature on specific parts of the human body, which are related to radiation produced by cancers, tumours, etc. There have also been uses in rheumatology and orthopaedics, neurology as well as vascular imaging.

FIG.2. Breast cancer detected from radiation spike [2]

In more recent news, scientists from the Institute of Image Communication and Information Processing at Shanghai Jiao Tong University have developed a way to efficiently detect respiratory infections such as Covid-19. Health screenings are performed by combining RGB and thermal videos obtained from a FLIR (Forward Looking IR) camera and an Android phone. The data used in the experiments were collected at Ruijin Hospital and from the researchers themselves, and the method achieved 83.7% accuracy against real-world datasets. That accuracy is still quite low, but with more research and testing this could become a means of quick, precise screening in schools, workplaces and hospitals.


IR Astronomy

The use of IR imaging has not been limited to Earth; it has made its way onto our satellites and probes travelling into the farthest reaches of space. Infrared astronomy is an area of active research that is hugely beneficial to our understanding of galaxies, stars, planets and other astronomical structures. It involves both the imaging of structures in space based on their infrared radiation and the spectroscopic analysis of electromagnetic radiation in the infrared range that either comes from or passes through astronomical structures, which gives information about the composition of astronomical objects.

FIG.3. Two pictures of the Orion constellation, the left being an image in the visible spectrum and the right in the infrared spectrum. This shows the power of making observations outside the visible spectrum, allowing us to observe much more detail [3]

Light from stars and galaxies has a continuous spectrum whose intensity peaks at a certain wavelength. When this light passes through matter, however, certain wavelengths are absorbed, depending on the type of molecules present. By analysing the light spectrum around an exoplanet as it transits a star, we can determine the composition of its atmosphere, which is hugely important in determining whether an exoplanet could harbour life.

Many observatories and telescopes are capable of detecting infrared light, and a significant number have been specifically designed to detect it or carry instruments that can. These include ground-based, airborne and space-based telescopes.

FIG.4. The infrared windows of our atmosphere


Applications to Agriculture

One area in which one might not expect to see IR imaging methods, and more specifically spectroscopy, is the agricultural and food industries; in fact, it has been in use there for decades.

Today the powerful compositional analysis provided by various spectroscopic methods is invaluable in the production of food and other agricultural goods. Traditional methods of determining the quality of produce are often time consuming, destructive and have a negative impact on the environment due to the use of harmful reagents and intensive water costs. 

Spectroscopy, specifically in the near-IR and mid-IR regions, is employed in all areas of the agricultural industry, from soil analysis (one interesting example close to home being Johnstown Castle) to post-harvest produce quality control. Quality control is a recurring theme with NIR spectroscopy; one example is its extensive use in the grain industry to study the characteristics of flours and grains. Using this technology, flour mills can quickly determine the nutritional value and moisture content of wheat and flour, allowing them to ensure it is up to standard. Another example is ensuring that the crisps in your cupboard stay crisp and are uniformly flavoured. Seabrook Crisps in Bradford, UK, use an NIR spectrometer to check that their flavouring machine is running optimally, performing tests throughout the work shift and between flavour changeovers. By testing the spectra of crushed crisps, the machine determines the percentage of flavouring present; if the concentration is not within specification, the flavour application machine is recalibrated to deliver an optimal, evenly coated crisp.

MIR, and specifically Fourier transform infrared (FTIR), spectroscopy is an important method for managing the quality and safety standards of fats and oils used in the preparation and production of food. A large problem when storing and processing fats and oils is deterioration due to oxidation on contact with atmospheric oxygen. This reaction produces hydroperoxides, which can break down further into many secondary oxidation products, giving rise to off-tastes and bad odours. The oxidative stability of an oil is therefore an important factor in its quality, and FTIR spectroscopy provides an effective means of testing an oil's oxidative state by measuring the concentration profiles of important oxidation products such as hydroperoxides.


Where to next for IR imaging?

Of course, as is the case with most modern technologies, the use of AI and machine learning algorithms in IR imaging has led to many interesting advancements in the field. For many, the thought of AI technology slowly creeping into more of our lives isn’t the most comforting thought. It cannot be denied however that these new technologies possess a vast well of potential when it comes to advancing our current systems, with a few notable possible applications to the world of IR imaging. 

Machine learning is a way of creating artificial intelligence: a computer is programmed to make decisions and is told when those decisions are correct. By repeating this process, the computer improves with each round, eventually yielding an algorithm that can reliably make the right decision. This can be applied to many different areas of technology, including thermal imaging.
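As a toy illustration of this decide-and-correct loop, the sketch below "learns" a temperature threshold for flagging abnormal thermal readings. The data, labels and update rule are entirely invented for illustration; no real screening system is this simple.

```python
import random

random.seed(0)

# Synthetic training data: (surface temperature in deg C, label),
# where label 1 means "flag as abnormal". Purely illustrative numbers.
data = [(random.uniform(30.0, 36.0), 0) for _ in range(50)] + \
       [(random.uniform(38.0, 42.0), 1) for _ in range(50)]

threshold = 20.0   # deliberately poor starting guess
step = 0.5

# Repeat: make a decision, compare with the correct answer, nudge the rule.
for _ in range(50):                       # passes over the data
    for temp, label in data:
        prediction = 1 if temp > threshold else 0
        threshold -= step * (label - prediction)   # only moves on mistakes

print(round(threshold, 2))   # settles between the two temperature groups
```

After training, the threshold sits between the "normal" and "abnormal" groups, so every example is classified correctly: the algorithm was never told the answer directly, only when it was wrong.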

FIG.5. [4]

One example of this is how IR Imaging can be used as a basis for developing an intelligent microwave oven, which uses machine learning to accurately determine what food is placed inside and to what temperature it should be heated, and then uses thermal imaging to measure the temperature so that it is cooked for the right length of time. Over time the microwave becomes more intelligent, by learning the types of food consumed by each member of the household and how long they usually cook it for.

Looking further afield, IR sensors can be integrated into drones and, together with deep-learning algorithms, used to identify ground-nests of birds on agricultural land before they are damaged by machinery.

FIG.6. [5]

The thermal imaging cameras on the drones take pictures of the field, which are then analysed by artificial intelligence to determine where the birds’ nests are. The farmer is then alerted to where the nests are so they can be protected.

To sum up, it’s clear that the areas of IR imaging and spectroscopy have a wide variety of applications to modern science and certainly possess a lot of potential for further development into the future.





[3] Encyclopedia Britannica



Maria Babu, Deaglan Farren, Thomas Long, and Myles Power.


We cannot look outside of our windows without noticing the beauty and vibrancy of the natural world, from the crisp red petals of a tulip to the interesting purples and greens of a pigeon's plumage. The range of colours we observe corresponds to light of different wavelengths, with shorter wavelengths corresponding to violet light, longer ones to red light, and the whole visible spectrum in between. Usually objects have their colour because they absorb some wavelengths of light but reflect others. However, some creatures in nature have found ways of manipulating light so as to stand out against the more ordinary backdrop of the natural world, contributing to their biological processes and behaviours. These biological structures are the products of millions of years of evolution and, in some cases, are so sophisticated that modern technology can be explored and even advanced through studying them.

Figure 1: Blue morpho butterfly displaying iridescence.


This process of observing and adapting structures found in nature to our own technology is called biomimetics, or biomimicry. Biomimetics in photonics involves combining the aforementioned ways in which naturally occurring structures can interact with light and the application of this knowledge to research and development.

Some animals, such as the blue morpho, naturally exhibit a unique wing structure that manipulates incident light, rather than simply having pigment in their cells as we usually find in colourful animals. This results in their signature, uncanny blue colour. The butterfly's wings accomplish this through interference and diffraction of the incident light, reflecting blue light of a highly specific wavelength. Usually, such iridescent colours vary depending on the angle at which light is incident, as in a soap bubble: observed from different angles, a soap bubble displays multiple colours. Thanks to the wing's specific structure, however, it manages to reflect only the dazzling blue. For this to work, the structures must be on the nanometre scale, comparable to the wavelength of the light they manipulate.

The wings themselves are composed of stacked layers of almost crystal-like structures, with a protective scaled layer on top that gives them their glossy appearance. The spacing between these layers is constant, at about 300 nanometres. This specific spacing, along with the way the different layers diffract light, is the key to the butterfly's iridescent blue at all observable angles[1]. By manipulating the light as it passes through these structures, the wing diffracts the blue light into a larger than normal angular range. Reflecting light in this structural manner has several benefits; for example, it allows the butterflies to appear much brighter in their vibrant natural habitats, enabling them to frighten off even the fiercest of predators.

Figure 2: Lamprocyphus augustus, an iridescent green weevil.[14]

Several other, less dainty creatures, such as the green weevil Lamprocyphus augustus, exhibit a similar iridescent colouration. Unlike the butterfly, weevils and beetles owe their colouration to chitinous photonic-crystal scales that coat their protective shells[2]. This particular weevil has already been of interest to scientists studying photonic crystals. These weevils take on a brilliant iridescent emerald green rather than blue, perhaps as a means of camouflage rather than of drawing attention to themselves[2]. This variety shows the versatility of the tiered photonic-crystal structure in reflecting a wide array of brilliant iridescent colours.

We can also see similar structures and photonic effects in plants, the most obvious of which are flowers. Flowers are not just colourful, vibrant, beautiful additions to our gardens; they also serve an important reproductive function in angiosperms, or flowering plants. The anther of the flower is the structure that produces pollen, which is then transferred to the stigma, the female reproductive organ, of a different flower. The vehicles of pollen transport are insects such as bees and butterflies. In order to attract these animal pollinators, flowers have developed various visual cues such as striking colours, iridescence and glossiness.

As previously mentioned, colour mainly comes from pigment, which absorbs certain colours of light whilst reflecting others. It is this reflected colour that we see exhibited by the petals. For example, in the buttercup flower, the yellow colour comes from carotenoid pigments that absorb blue-green light and reflect light corresponding to a yellow colour [6].

Microscopically, the shape of the cells of the flower can affect how intense the colour is. If the cells are flat, this means that only light that travels straight down into the cell can interact with the pigments. Most of the light that arrives at an angle gets bounced off the cell and is lost. However, if the shape of the cell is conical, then this can act as a lens, concentrating the light into the cell. Any light that gets reflected will travel into a neighbouring cell, increasing the contact with the pigment [3].

Figure 3: Reflection of the buttercup on a woman’s chin.

Buttercups also have a very distinct glossiness to their petals. If you hold a buttercup under your chin, the brilliant yellow hue is reflected onto your skin. This special quality is unique to this flower and is a result of the combination of the pigments and the different layers within the petals interacting. This effect utilises the same mechanism that allows you to see the sheen of a soap bubble or an oil slick. The top layer of the flower is called the epidermal layer, and in the buttercup, the epidermis, only one cell thick, is covered by a thin wax coating allowing for an ultra-smooth surface. Below that, there is a starch layer that is separated from the top cells by pockets of air. Interference of light between the smooth top surface of cells and the air layer creates the mirror-like reflection of the buttercup [3]. 

Figure 4: Queen of the Night Tulips displaying iridescence.

Another visual feature that plants use to attract insects and other pollinators is iridescence. In the Queen of the Night tulip, the shimmering quality is due to cellulose, the main structural component of plant cell walls, arranging itself into ridges that act as a diffraction grating [5]. A diffraction grating is a surface with slits occurring at regular intervals, which can disperse white light into its spectrum. The spectrum comprises different wavelengths of light, each corresponding to a different colour. Iridescence is caused by the light waves interfering with each other, either constructively or destructively. Constructive interference occurs when the crests and troughs of the waves line up and amplify, reinforcing the colour; destructive interference reduces the vibrancy of the colour, as crests and troughs cancel each other. As we change our viewing angle, the degree of constructive and destructive interference changes, leading to a change in the colours observed, i.e. iridescence [4].
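The angle-dependence can be made concrete with the grating equation, d·sin(θ_m) = m·λ: for a fixed groove spacing, each wavelength leaves at a different angle, so the colour seen shifts as the viewer moves. The groove spacing below is a made-up illustrative value, not a measured figure for the tulip.

```python
import math

# Grating equation: d * sin(theta_m) = m * lambda.
# Illustrative groove spacing; real petal ridges differ.
d = 1.2e-6  # spacing between ridges (m)

def first_order_angle_deg(wavelength_m):
    """Angle of the first-order (m = 1) diffracted beam."""
    return math.degrees(math.asin(wavelength_m / d))

print(first_order_angle_deg(450e-9))  # blue light leaves at a smaller angle
print(first_order_angle_deg(650e-9))  # red light leaves at a larger angle
```

Because blue and red exit at angles roughly ten degrees apart, tilting the petal (or your head) sweeps different colours into your eye, which is exactly the shimmering iridescence described above.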

Now we turn our attention to biomimetic photonics, such as the examples we have seen in animals and plants, being put to practical use in technology. One area that has benefited greatly from this field is optical sensing, which deals with the detection of light. Most animals, including humans, have sensors that can detect light. The kind of light our eyes can see is known as visible light, but other types exist. One of these is infrared (IR) radiation, which some animals, including certain beetles, snakes and vampire bats, can detect. Scientists have naturally taken great interest in studying how these creatures detect IR radiation, both out of curiosity and to help develop technology that lets us 'see' in the infrared.

Let's briefly look at how a team at CAESAR mimicked the IR-sensing capabilities of a certain kind of beetle. Anyone who has been outside on a sunny day knows how effective sunlight is at heating things up: light can raise the temperature of objects. So if IR 'light' is shone on, for example, a liquid, its temperature will increase. If we then introduce a device that can detect the temperature change of the liquid, we have a way to detect the IR radiation itself. It is through this kind of setup that Melanophila acuminata beetles (a species of fire beetle) can see IR radiation.

Figure 5: Simplified cross section of one infrared sensillium of the Pyrophilous beetle.[12]

These IR-sensing organs are called sensilla, and they are filled with a liquid that expands when heated by IR radiation, with the resulting pressure rise detected by the beetle. The technology that emulated this used a similar setup, with only the devices used to detect the pressure changes differing from the beetle's, naturally.

While we may be inspired by the interesting quirks of other creatures in the animal kingdom, looking at how our own eyes work can also bring about impressive technologies. One recent project is VODKA, or Vibrating Optical Device for the Kontrol of Autonomous robots [10]. This technology attempts to replicate a phenomenon termed 'hyperacuity'. Visual acuity refers to the 'sharpness' of our eyesight and is measured during those vision tests at the optician's that we all know and love, where one must try to read out all the letters through the mirror. Hyperacuity occurs when we can see details far better than is predicted from the biology of our eyes' receptors alone. One example, known as vernier acuity, is our ability to notice the misalignment of borders with a precision up to ten times better than expected from our normal visual acuity.

So what gives us this ability? There are two main factors that have been identified, one is the brain itself processing the information and enhancing our sight with its mysterious algorithms. Another is what has been dubbed ‘tremor’, which refers to irregular movements of the eye. It was these movements that inspired the VODKA optical sensor, which used a pair of very small sensors behind a lens and had them move repetitively while signal processing worked with the information to track a moving target. The results were impressive – with only two pixels, the device was able to locate the angular position of an edge with a precision 900 times greater than if the sensors hadn’t been moving. However, while applying these biological structures to technology is undoubtedly promising in some areas, nothing is perfect and there are some notable disadvantages to go with the advantages of these applications.

There are many advantages and disadvantages associated with biomimetics. One of the key advantages is how easy and efficient it is to apply design principles from nature to new technologies. Some photonic structures observed in nature might never be achieved through ordinary research. It is much more efficient to use inspiration from naturally occurring structures which already display desirable properties. Sources of inspiration from naturally occurring structures are also extremely abundant.[7] This means that similar properties may be achieved in different animal and plant species providing alternative routes to the desired effects. It follows from this that biomimetics accelerates the development of new technologies. Natural design principles not only have the potential to be directly copied into new technologies but may also inspire new research. Biological materials, structures and processes also have the advantage of being intensively trialled and gradually improved and developed over millions of years and have stood the test of time.[8] This knowledge can be applied to new technologies to avoid unnecessary testing and trialling processes.

Another advantage of biomimetics is sustainability, now one of the key considerations in the development of new technologies. Biological organisms have managed to establish efficient structures and processes using sustainable materials and with virtually no adverse effects on the environment. Most of these materials are organic compounds or generally nontoxic minerals.[9] Biomimetics therefore currently plays, and will continue to play, a vital role in climate change mitigation.

However, there are a number of disadvantages. The main obstacle to biomimetics is the complexity of the structures and processes found in nature. The elaborate architecture of naturally occurring structures can make replication with current technologies almost impossible. Simplification may allow a structure to be reproduced, but this is often accompanied by some loss of efficiency or functionality.[7] Even where a biomimetic device can be developed, factors such as expense and time constraints come into play. Furthermore, biomimetics is a highly interdisciplinary field, requiring expertise from biologists, physicists, chemists, materials scientists and engineers.[9] Effective collaboration between these disciplines may be impeded by conflicting technical vocabularies and approaches to problem solving.

In conclusion, we can see that there is much to learn about photonics from the natural world. Creatures and plants we would normally consider mundane, such as weevils, butterflies and buttercups, boast incredible, nanometre-accurate, yet completely organic structures capable of manipulating light. Some of these naturally occurring structures are so impressive that they still overshadow even modern technology. These specialised abilities and intricately complex structures have been honed over millions of years of evolution. It seems natural, then, that we humans, a relatively young species, should have much to learn from these evolutionary veterans in the field of photonics.


[1] Barrera-Patiño, C. P., Vollet-Filho, J. D., et al., (2020), “Photonic effects in natural nanostructures on Morpho cypris and Greta oto butterfly wings”, Scientific Reports, 10(5786).

[2] Ebihara, R., Hashimoto, H., Kano, J., Fujii, T., Yoshioka, S., (2018), “Cuticle network and orientation preference of photonic crystals in the scales of the weevil Lamprocyphus augustus”, J. R. Soc. Interface, 15(20180360).

[3] Karthaus, O., (2013), Biomimetics in Photonics, 1st ed, CRC Press Taylor & Francis Group, pp 1-15.

[4] Vignolini, S., et al., (2013), “Analysing photonic structures in plants”, J. R. Soc. Interface, 10(20130394).

[5] Rofouie, P., et al., (2015), “Tunable nano-wrinkling of chiral surfaces: Structure and diffraction optics”, J. Chem. Phys., 143(113701).

[6] Van der Kooi, C. J., (2017), “Functional optics of glossy buttercup flowers”, J. R. Soc. Interface, 14(127).

[7] Yu, K., Fan, T., Lou, S., Zhang, D., (2013), “Biomimetic optical materials: Integration of nature’s design for manipulation of light”, Progress in Materials Science, 58(6), pp 825–873, doi:10.1016/j.pmatsci.2013.03.003.

[8] Xu, J., Guo, Z., (2013), “Biomimetic photonic materials with tunable structural colors”, Journal of Colloid and Interface Science, 406, pp 1–17.

[9] Hwang, J., Jeong, Y., Park, J. M., Lee, K. H., Hong, J. W., Choi, J., (2015), “Biomimetics: forecasting the future of science, engineering, and medicine”, Int. J. Nanomedicine, 10, pp 5701–5713, doi:10.2147/IJN.S83642.

[10] Kerhuel, L., Viollet, S., Franceschini, N., (2012), “The VODKA sensor: a bio-inspired hyper-acute optical position sensing device”, IEEE Sensors Journal, 12(2), pp 315–324, doi:10.1109/JSEN.2011.2129505.

[11] Martín-Palma, R. J., Kolle, M., (2019), “Biomimetic photonic structures for optical sensing”, Optics and Laser Technology, 109, pp 270–277.

[12] Siebke, G., Holik, P., Schmitz, S., Tätzner, S., Thiesler, J., Steltenkamp, S., (2015), “The development of a μ-biomimetic uncooled IR-Sensor inspired by the infrared receptors of Melanophila acuminata”, Bioinspiration & Biomimetics, 10(2), 026007.

[13] Strasburger, H., Huber, J., Rose, D., (2018), “Ewald Hering’s (1899) On the Limits of Visual Acuity: A Translation and Commentary: With a Supplement on Alfred Volkmann’s (1863) Physiological Investigations in the Field of Optics”, i-Perception, 9(3), 2041669518763675, doi:10.1177/2041669518763675.

[14] Ebihara, R., Hashimoto, H., Kano, J., Fujii, T., Yoshioka, S., (2018), “Cuticle network and orientation preference of photonic crystals in the scales of the weevil Lamprocyphus augustus”, J. R. Soc. Interface, 15(20180360).