Our Earth is teeming with millions of diverse life forms. From the domestic to the feral, bipedal, quadrupedal, insect and reptile, the natural world hosts an incredible abundance of animal life. The bumblebee (Bombus) and honey bee (Apis) are among the very few animals that can boast of having baffled mathematicians and physicists alike for centuries. At a glance, the bee appears far too heavy and stout for flight. Its wings are too small for its round body, which already seems poorly suited to traditional aerodynamic models and pales in comparison with other airborne creatures. Bees, of course, with secrets of their own, turn these impossibilities into trivialities. In recent years, however, with our current physical knowledge and computational advances, these secrets have been unravelled, and we can now decisively pinpoint the physics of the flight of the bumblebee.

I generalise this blog to the bumble or honey bee, but these physics apply to all bees in the superfamily Apoidea. The basis of bee flight rests on two core aspects. Firstly, we must directly examine the concept of flight unique to the bee, and how it contrasts with the modes of aviation found in birds or aeroplanes. We have all seen man-made plane wings, or can imagine the great pinions of an eagle in flight. They are blunt and rounded in the direction of the airflow, designed to make the air on top of the wing flow faster than the air on the bottom. This creates a pressure differential, which is balanced by an upward force on the wing that pulls the plane upward. The bee's wing, on the other hand, is much more akin to a thin foil, and as such it can manipulate the airflow around it, creating tiny tornadoes, or vortices, at its tips. These are known as Leading Edge Vortices (LEVs), and they are a core factor in the flight of bees, wasps, moths and many other insects.
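For the conventional wing picture, the pressure differential is usually summarised by the standard lift equation, L = ½ρv²SC_L. Here is a quick sketch with illustrative, made-up numbers for a small aircraft (none of these figures come from a real bee or a real 747):

```python
# Hypothetical numbers for a small fixed-wing aircraft, chosen for illustration.
rho = 1.225      # air density at sea level, kg/m^3
v = 70.0         # airspeed, m/s
area = 16.0      # wing planform area, m^2
c_lift = 0.5     # lift coefficient (assumed; depends on wing shape and angle)

# Standard lift equation: L = 1/2 * rho * v^2 * S * C_L
lift = 0.5 * rho * v**2 * area * c_lift
print(f"Lift: {lift:.0f} N")  # upward force produced by the pressure differential
```

The same equation breaks down for a bee: at such small sizes and speeds, the steady-flow assumptions behind it no longer hold, which is exactly why the LEVs described next are needed.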

The Leading Edge Vortices provide the insect with increased mobility and lifting power in the air.

Figure 1: The effect of the leading-edge vortex.

As can be seen in the above diagram, the leading-edge vortex provides the minibeast with an "induced downwash" of air. This supports a major portion of the creature's weight and offers it a huge amount of stability mid-flight. It is through this method that bees are capable of hovering in place. It is also worth noting that all insects that fly in this manner must beat their wings hundreds of times a second to achieve this feat. This stunning amount of motion in a short time frame is what causes the signature buzz of the bee.
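To put a number on that buzz: honey bees beat their wings roughly 230 times per second (an often-quoted ballpark figure, assumed here), and each beat drives one pressure oscillation in the air:

```python
# Assumed wingbeat rate for a honey bee; figures around 200-250 per second are typical.
wingbeats_per_second = 230

# Each full wingbeat drives one pressure oscillation, so the buzz we hear
# has roughly the same frequency as the wingbeat itself.
buzz_frequency_hz = wingbeats_per_second
period_ms = 1000 / wingbeats_per_second
print(f"Buzz pitch: ~{buzz_frequency_hz} Hz, one wingbeat every {period_ms:.1f} ms")
```

A tone around 230 Hz sits comfortably in the range of human hearing, which is why the buzz is so unmistakable.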

Crucially, these leading-edge vortices allow the bee to maintain a distinct wing path. Rather than just flapping its wings up and down, the bee follows a curved loop with them. These are not rigid appendages; they can bend and twist at will, which not only gives the bee directional control in flight but is in fact paramount to the performance of the aforementioned wing path. The LEVs produced at the tips of its wings allow the bee to angle its wings higher against the air, and this higher angle, ingrained into the wing path, provides the insect with enough force to become airborne. A delicate yet essential pressure difference between the top and bottom of the wing is maintained exactly and constantly by these vortices. (3)

Figure 2: The wing path of the bee.

In a way, one can see that the flight of a humble bumblebee and a Boeing 747 are (perhaps) not too dissimilar after all. The natural solution to flight is both wonderfully elegant and simple. By means of updrafts, induced downwashes, pressure differences, advanced aerodynamics, muscle control or jet fuel, the bee, the bird and the Boeing 747 are all capable of achieving flight in their own unique way.



References:

Figure 1:

Figure 2:

(3):

Have you ever gotten into a car and, as soon as you drive onto the main road, particularly at rush hour, found yourself in some sort of traffic congestion? The answer is yes for pretty much everyone reading this, of course. Now, imagine what it would be like if you never experienced a traffic jam again. What about being able to sit down and read a book while your car takes you to your destination? Now, you're probably thinking either that I am crazy, or you probably already know about self-driving cars.

Self-driving or automated cars are cars which incorporate vehicular automation, using artificial intelligence among other things. According to SAE International, there are six levels of automation in vehicles: level 0 provides no automation whatsoever, level 1 provides hands-on or hybrid control, level 2 is hands off, level 3 is eyes off, level 4 is mind off, and at level 5 the steering wheel is totally optional.
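The six levels map naturally onto a small lookup table. The wording here is my own paraphrase of the scheme described above, not SAE's official definitions:

```python
# The six SAE automation levels, paraphrased (not SAE's official wording).
sae_levels = {
    0: "No automation: the driver does everything",
    1: "Hands on: driver assistance, e.g. adaptive cruise control",
    2: "Hands off: car steers, accelerates and brakes; driver supervises",
    3: "Eyes off: driver may look away but must be ready to intervene",
    4: "Mind off: no driver attention needed within a defined domain",
    5: "Steering wheel optional: full automation everywhere",
}

for level, description in sae_levels.items():
    print(f"Level {level}: {description}")
```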

Currently, we are at level 2, where we can let the car do the steering, accelerating and braking, while we humans still need to be well aware of what is happening and ready to intervene when needed.

There has been steady progress in this field over the past number of years, although unfortunately a little slower than expected. However, Alphabet, the parent company of Google, has recently announced that it has begun carrying employees in electric Jaguar I-Pace SUVs without human backup drivers. So, it is clear that we are getting somewhere.

So, how long will it be until we realise our dream of escaping those terrible traffic predicaments and indulging in the latest Sally Rooney novel in the driving seat (or "driving" seat, I should note)? Well, fully autonomous vehicles will probably take a good few decades before we see them flying down the M50, but their safety, efficiency, convenience and cost, along with many other factors, will make them ever-present in the years to come.

Atoms, the building blocks which make up our universe, are made up of three important particles: the proton, the neutron and the electron. You may recognize this image from The Big Bang Theory.

The protons are the particles in red and have a positive electric charge; the neutrons, in blue, have no charge. Together they comprise the nucleus. The grey particles are the electrons, which have a negative charge and are attracted to the protons, much like the moon is attracted to the Earth. They execute an orbital motion around the nucleus.

When a very high energy ray of light comes near an atom, it has a chance to spontaneously disappear and leave behind an electron and a positron (the antimatter counterpart of the electron: like an electron, except with a positive charge). This is called pair production. At first glance, this is a very strange prospect, that a ray of light can spontaneously turn into matter and antimatter. But this process obeys all of the laws of physics (obviously, because it happens), i.e. conservation of momentum, conservation of charge and conservation of energy. Conservation of energy is achieved when you take into account the rest energy of the electron and the positron.

Einstein figured out that matter is a form of bundled-up energy, described by the famous equation E = mc², which says that the rest energy of an electron or a positron is equal to its mass multiplied by the speed of light squared. So if the energy of the light ray is more than twice mc², with m being the mass of the electron or positron (both have the same mass), then conservation of energy can be satisfied.
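We can put numbers on this threshold. A quick sketch using the standard values for the electron mass and the speed of light:

```python
# Checking the pair-production threshold from E = mc^2.
m_e = 9.109e-31      # electron (and positron) rest mass, kg
c = 2.998e8          # speed of light, m/s
eV = 1.602e-19       # joules per electronvolt

rest_energy_J = m_e * c**2
rest_energy_MeV = rest_energy_J / eV / 1e6
threshold_MeV = 2 * rest_energy_MeV   # the photon must create both particles

print(f"Electron rest energy: {rest_energy_MeV:.3f} MeV")
print(f"Minimum photon energy for pair production: {threshold_MeV:.3f} MeV")
```

This gives the familiar result: a photon needs at least about 1.022 MeV, twice the 0.511 MeV rest energy of the electron, which places it firmly in the gamma-ray part of the spectrum.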

The figure above shows the light ray coming from the left near the atom, and transforming into the electron positron pair on the right. So it obeys the laws of physics, but why does it happen? And why does it have to happen near an atom?

Well, it has to happen near an atom due to a subtle interaction between the atom and the light ray. Every atom produces electromagnetic fields, which the ray interacts with. The result of this interaction is probabilistic; the probability is determined by taking into account the total number of possible outcomes. It turns out, through the study of a field called quantum electrodynamics, that the probability of pair production depends on the energy of the light ray (the higher the energy, the more likely) and on the square of the number of protons in the nucleus (again, the more there are, the more likely). This interaction with the atom, through its electromagnetic field, includes the atom in the equations for the conservation of energy and momentum, which is why the atom is seen in the figure to have extra momentum after the interaction.
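The Z² dependence is easy to illustrate. A toy comparison between light and heavy nuclei (the scaling is the point here; real pair-production cross-sections involve many more factors):

```python
# Relative pair-production likelihood scales with Z^2, the square of the
# number of protons in the nucleus, as described above.
elements = {"carbon": 6, "aluminium": 13, "lead": 82}

z_carbon = elements["carbon"]
for name, z in elements.items():
    relative = (z / z_carbon) ** 2
    print(f"{name}: Z={z}, pair production ~{relative:.0f}x more likely than in carbon")
```

This is why heavy elements like lead are used in experiments and detectors where pair production is the desired outcome.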

Pair production can also happen backwards, in a process called pair annihilation. Imagine running the process in the figure above backwards in time: the electron and the positron run into each other near an atom and annihilate to create a gamma ray, with all of the various conservation laws still being obeyed.


Image 1: Big Bang Theory: Why Leonard & Sheldon Spent Exactly 139.5 Hours Rebuilding the Model, , accessed May 2020

Image 2: Conversion of energy into mass, , accessed May 2020

I think the process of learning about science can create unique experiences which students studying physics can relate to: the actual process of comprehending new mathematical concepts and understanding old information in new frameworks; the process of continuously leaving behind past beliefs about the world and updating them until you don't describe your knowledge of the world as a belief anymore; the inherent surety and yet also doubt in the scientific worldview. These experiences are so relatable to scientists yet so distant from people who haven't travelled down those paths. The purpose of this piece is that a reader may relate to one of these experiences without ever having to learn physics. I started thinking on this topic today while listening to a public lecture given by Feynman (who is arguably one of the best educators to have ever existed). I thought he was able to explain what the process of learning these facts and theories about the world feels like. The video was in some ways cathartic and comforting. I thought that if I could get my parents or grandparents to listen to this, maybe they could relate, to a better degree, to some of the emotions that are brought on by studying physics.

To this end, I want to talk about the framework in which you understand a concept, and what it's like to constantly relearn something using a different framework. Firstly, to understand anything you need to be existing in a space where some things are taken as given. If you asked "why did the chicken cross the road?", a satisfactory answer could be "because there was chicken feed on the other side". Here we accept this as a reasonable answer because the concept of being motivated by food is taken as given. However, a sentient plant would be confused by this explanation and would need to be told that mammalian life requires the consumption of food. The process of physics often requires you to list what you take about the world as given and then to show what this would mean for phenomena. Similarly, you spend time relearning things you thought you knew, but in a different framework of assumptions. For example, in Newtonian mechanics we take time as an absolute entity that exists separately from the particles and location of any events. This proved to be wrong, but it is approximately true at small enough speeds, which allows Newtonian mechanics to remain useful. In the modern day we have pushed the basic axioms of this world back further and further. One assumption made today is that the laws of physics are the same now as they were yesterday or a hundred years ago. Another would be that the speed of light is the same in every reference frame. Yet at no stage do we have any idea as to why these givens are the case; just that the implications of these assumptions match up with nature as we have observed her.

To illustrate this process, I will describe my understanding of the concept of energy. Firstly, I thought energy was to do with motion and light and explosions and food. It was something objects could have, or something that could be passed between forms, but never destroyed or created. Further study had me thinking of energy as just a quantity that could be calculated in any scenario. What made it special was that after some time the same calculations would give you the same result. As such, it was useful to physics because the calculation involved the speed and so on of objects, so that if one term in the sum took a smaller value, another would have to take a larger value. I took the existence of such a constant quantity as given. I had no idea why that would be the case. Now I understand energy as a quantity that must stay the same if the laws of physics are to be the same today as they were yesterday. If the value of the energy were different, the motion of the particles would be different for different values of time. As such, energy is now a consequence of the assumption that the laws of nature are unchanged in time. This assumption is the new given, and I understand energy in this framework. I hope this illustrates that learning physics can sometimes feel like a constant relearning of old ideas in a new framework with new givens.
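The "one term shrinks, another grows" picture can be checked numerically. Here is a minimal sketch of a falling ball, where gravitational potential energy trades off against kinetic energy while their sum stays (numerically almost) constant; all numbers are illustrative:

```python
# Toy check that kinetic + potential energy stays constant for a ball in free fall.
g = 9.81            # gravitational acceleration, m/s^2
m = 1.0             # mass, kg
h, v = 100.0, 0.0   # initial height (m) and speed (m/s)
dt = 1e-4           # time step, s

initial_energy = m * g * h + 0.5 * m * v**2
for _ in range(10000):        # simulate 1 second of falling
    v += g * dt               # speed grows: kinetic term gets larger...
    h -= v * dt               # ...height shrinks: potential term gets smaller
total_energy = m * g * h + 0.5 * m * v**2
print(f"Initial: {initial_energy:.2f} J, after 1 s: {total_energy:.2f} J")
```

The two printed values agree to within the small numerical error of the time-stepping, which is the calculational face of the conservation law described above.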

Figure 1: Can quantum mechanical effects play a significant role in biology?

Since the dawn of the quantum revolution in physics during the early 1900s, there has always been the lingering question of whether the phenomena described by quantum mechanics could ever significantly influence the behaviour of complex organisms. This is of course a very natural question to ask, due to the fundamental nature of quantum mechanics, which describes the way in which the particles that make up everything in our universe behave. Superficially, it seems valid to assume that quantum theory plays a non-trivial role in biology (i.e. affecting the actions and behaviours of organisms), since it is so inextricably linked with mathematics and with physics and chemistry, which govern the underlying principles of biology. However, as with most things in life, the answer to such a question is not as simple as it may first appear, and it is still up for debate to this day. In this piece, I will discuss one of the many possible examples of "quantum biology": namely, the theorised mechanism by which migratory birds navigate (avian magnetoreception). In order to truly understand the source of all of this debate, we must first look back at the beginning of quantum mechanics.


The Brave New World of Physics

The development of quantum mechanics was a total paradigm shift for the field of physics, with our understanding of the universe at the macroscopic level being shown to be incompatible with the behaviour of the universe at very small length scales (at the atomic scale, of the order of approximately 10⁻¹⁰ m). At the heart of this revolutionary theory was the idea that the quantum world is probabilistic, contrary to the deterministic nature of the world we experience. In essence, this means that we can never exactly predict the location of particles at the atomic scale; we can only know that there is some probability of finding a particle in a given region of space. The discovery of this weird and wonderful world, within the one we thought we knew so well, was truly groundbreaking. So much so that many of the architects of quantum mechanics, such as Niels Bohr, Pascual Jordan and Erwin Schrödinger, began to question the true nature of our reality, going so far as to probe what it means to be alive. This new-found interest in the meaning of life is clearly demonstrated by Schrödinger's book 'What is Life?' from 1944, in which he predicted the mechanism by which genetic traits are stored. Thus, it was in this age of unrivalled discovery that the idea of quantum mechanics influencing the biological function of organisms was born.


Figure 2: The European Robin is one of the species of birds that is theorised to use magnetoreception to navigate.


A Biological Compass

One of the most interesting examples of "quantum biology" is that of avian magnetoreception (the ability to sense magnetic fields), which is proposed to be the mechanism by which certain species of birds navigate. This theory was first described using quantum mechanics in 1978 by K. Schulten, C. Swenberg and A. Weller at the Max Planck Institute in Göttingen, Germany. The origins of this radical theory can be traced back to experiments carried out on European robins, from which it was determined that their magnetoreception was photoreceptor-based and dependent on changes in the intensity of external magnetic (or B) fields. In addition to this, it was also found that their navigational abilities were impaired by weak external oscillating magnetic fields. Combining these observations, the researchers concluded that the robins must navigate using their ability to sense the Earth's magnetic field. It should be noted that the Earth's magnetic field, at approximately 3.05 × 10⁻⁵ T, is relatively weak: over 30 times weaker than an average fridge magnet.


The mechanism suggested by the team of researchers was called the Radical Pair mechanism and involves the quantum mechanical effect of entanglement. The mechanism is initiated by photons of light falling onto the retina of the bird and interacting with a light-sensitive family of proteins called cryptochromes. If the photons are of a suitable energy (or equivalently wavelength), it is possible for a single photon to eject an electron from one protein molecule and for the electron to subsequently become bound to another protein molecule, creating what is known as a radical pair (i.e. each molecule in these unstable states is called a radical). Since these radicals are created at the same time, the molecules are said to be entangled particles, as a result of one of the many quirks of quantum mechanics. Entangled particles have the downright weird property that affecting the state of one particle also instantaneously affects the other particle, no matter the distance between them. In this entangled state, the molecules switch between two distinct chemical states, with one of the states supposedly causing a buildup of a certain compound (which currently has not been identified) in the retina, influencing how the bird's eyes send signals to the visual cortex of the bird's brain. However, the presence of an external magnetic field causes the radical pair to spend more time in one of the chemical states, meaning that the signals sent from the bird's eyes to its brain are dependent on the magnetic field, effectively giving the bird a biological compass.
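To get a rough sense of scale, we can check how quickly an electron spin precesses in a field as weak as the Earth's. This is a back-of-envelope sketch, not part of the original study; the gyromagnetic ratio used is the standard value for a free electron:

```python
import math

# Back-of-envelope: how fast does an electron spin precess in the Earth's
# magnetic field? If precession is fast compared with the radical pair's
# lifetime, the field can plausibly influence the chemistry.
gamma_e = 1.761e11        # free-electron gyromagnetic ratio, rad s^-1 T^-1
b_earth = 3.05e-5         # Earth's magnetic field strength (value from above), T

omega = gamma_e * b_earth                # precession (angular) frequency, rad/s
period_us = 2 * math.pi / omega * 1e6    # one full precession, in microseconds
print(f"Precession period: {period_us:.2f} microseconds")
```

A period of about a microsecond means the Earth's weak field can still nudge the electron spins many times within the lifetime of the radical pair, which is what makes the mechanism plausible at all.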


Figure 3: The structure of a cryptochrome protein.


The Debate

As previously mentioned, there is still no consensus in the physics and biology communities as to whether quantum mechanics has a non-trivial role in biology, with avian magnetoreception being a prime example of this discourse. Sceptics of "quantum biology" in general, a position going back as far as Niels Bohr, have suggested that sustaining quantum entanglement for relatively long periods of time (of the order of 100 μs), as required by the radical pair mechanism, is impossible in the wet and hot conditions of a bird's eye. This is because quantum mechanical effects tend only to be significant at very low temperatures, with the theories of classical physics, which do not include entanglement, dominating at high temperatures. Of course, it is possible that nature has found a better way to create and sustain the quantum entanglement of particles; however, this seems unlikely based on our current understanding of quantum mechanics. Nevertheless, even if "quantum biology" is proven to be an impossibility, the discoveries made in the process of disproving these theories will be vital additions to our understanding of the quantum world and the strangeness it entails.






TIME CRYSTALS!!! A name and concept that seem to come straight out of a science fiction or fantasy novel. But are they as fictional as we expect them to be?

A time crystal is a novel phase of matter. It is a system which oscillates repeatedly from one stable ground state to another without absorbing or "burning" any energy in the process. Despite being a constantly evolving system, a time crystal is perfectly stable. Analogous to regular crystals in space, which break spatial-translation symmetry, time crystals spontaneously break time-translation symmetry: the usual rule that a stable system will remain the same throughout time. No work is carried out by such systems and no usable energy can be extracted from them, so finding them would not violate the well-established principles of thermodynamics.

In 2012, the Nobel laureate in physics Frank Wilczek proposed the existence of the titular time crystals. Wilczek envisioned a diamond-like multi-part object which breaks time-translation symmetry in its equilibrium state, moving in periodic, continuous motion and eventually returning to its initial state. However, this model turned out to be impossible, as the laws of thermodynamics dictate that in order to minimise their energy, quantum particles in the thermodynamic limit prefer to stop rather than to move. Scientists had to come up with other, slightly different models to make the creation of time crystals more plausible.

“If you think about crystals in space, it’s very natural also to think about the classification of crystalline behavior in time,”

– Frank Wilczek

Over the past few years, researchers have developed various methods and approaches to create systems which very closely resemble the theorised time crystals. Such systems require some "ingredients", or specific techniques, to be constructed. Consider a one-dimensional chain of spins. First, particles such as electrons are prepared in a polarised spin state. Naturally, these particles will try to settle into an arrangement which minimises their energy. However, random destructive interference will trap them in higher-energy configurations. Our system is now experiencing many-body localisation. These many-body localised systems exhibit a very special kind of order: if you flip the spin orientation of each particle, you will create another stable many-body localised state.

If you act on this system with a periodic driver, such as a very specific laser, you will find that the spin orientations flip back and forth, repeatedly and indefinitely moving between the two many-body localised states. It is important to note that the particles do not heat up or absorb any net energy from the driving laser. By definition, our system has formed a time crystal.
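The back-and-forth flipping can be caricatured in a few lines of code. This is only a cartoon of the period-doubled response (the system repeats every two drive periods, not every one), not a simulation of the actual many-body physics:

```python
# Toy cartoon of a discrete time crystal's period-doubled response:
# each drive pulse flips every spin, so the system returns to its initial
# many-body state only every *second* drive period.
spins = [1, -1, 1, 1, -1, 1, -1, -1]   # an arbitrary initial spin configuration
initial = list(spins)

history = []
for pulse in range(4):                  # four drive pulses
    spins = [-s for s in spins]         # the periodic driver flips every spin
    history.append(spins == initial)    # back to the starting state?

print(history)  # [False, True, False, True]: recurrence every 2nd pulse
```

That factor-of-two mismatch between the drive period and the response period is exactly the broken time-translation symmetry described above.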

In 2021, a new development in the fields of quantum computing and theoretical condensed matter physics made the headlines. Researchers at Google, along with physicists at Stanford, Princeton and other universities, were able to demonstrate the existence of such time crystals using Google's revolutionary quantum computers.

Quantum computers operate on qubits – controllable quantum particles. The controllable aspects of the qubits prove to be especially useful in creating a time crystal. We can randomise the interaction strengths between the qubits, creating the necessary destructive interference between them, which in turn, allows us to achieve the many-body localisation. In this experiment, microwave lasers act as our periodic drivers, flipping the spins of the qubits. By running thousands of such demonstrations for various initial configurations, the researchers were able to observe that the spins were flipping back and forth between two many-body localised states. During these processes the particles never absorbed or dissipated any energy from the microwave laser, keeping the entropy of the system unchanged. They were able to create an extremely stable time crystal within a quantum computer.

“Something that’s as stable as this is unusual, and special things become useful,”

–  Roderich Moessner, director of the Max Planck Institute for the Physics of Complex Systems in Dresden, Germany and co-author of the Google quantum computer time crystal paper.

Time crystals have the potential to finally allow us to take condensed matter research into the fourth dimension. They may help us create a whole new generation of novel devices and technologies. Their applications might include new techniques for more precise timekeeping, simulating ground states in quantum computing schemes, and even serving as a robust method of storing memory in quantum computers. However, due to the exotic nature of these systems and our poor knowledge of their physics, it might be a while until we are able to grasp time itself in the palm of our hands.


  • Classical Time Crystals, A. Shapere and F. Wilczek, Phys. Rev. Lett. 109, 160402 (2012),
  • Eternal Change for No Energy: A Time Crystal Finally Made Real, Natalie Wolchover,
  • Time crystals enter the real world of condensed matter, P. Hannaford and K. Sacha,
  • Viewpoint: Crystals of Time, Jakub Zakrzewski,
  • How to Create a Time Crystal, Phil Richerme,
  • Physicists Create World’s First Time Crystal,

Quantum mechanics is regarded as one of the crowning achievements of 20th century physics, with its predictions backed up by countless experiments in the decades since its formulation in the 1920s. Along with its success as a physical theory for all things microscopic, it has also garnered notoriety in the mainstream for its perceived complicated and abstract subject matter and its reputation for being impenetrable to any layman. The reason for this singling out of quantum mechanics from the great canon of physical theories is its unique philosophical position in relation to the physics it describes, or better put: it's not necessarily what quantum mechanics tells us, rather how we interpret what we are told.

To begin this discussion of interpretations of quantum mechanics, it is best to start with the most widely accepted and universally taught interpretation: the probabilistic interpretation. This is the belief that quantum mechanics doesn't tell us what has happened in a given system, but rather how likely each outcome is to happen. Within this framework we can imagine that all possible results of a measurement of a quantum system have assigned to them a particular probability, which represents how likely one is to find the system in that arrangement when measured.

To clarify this idea, we can consider a widely understood concept: Pokémon cards! We know that the only way to 'measure' what Pokémon we have is to remove the card from the pack and 'observe' it. Suppose we know beforehand that there are only three types of Pokémon available, let's say fire, water and electric, and we have heard from others who have bought the same cards that 20% of people get fire cards, 50% get water cards and 30% get electric cards. We can describe our card before we open it as follows:

(card type) = 0.2 (fire) + 0.5 (water) + 0.3 (electric).

All information about the card is represented in this statement (i.e. the type of card and the probability of getting it). We also know that after we open the card and ‘observe’ it we will only possess a single card belonging to only a single card type and it will not change. Therefore, (assuming we got a fire card) after it has been opened the card can be described as:

(card type) = 1.0 (fire)

This change in the description of the card is fundamental to this interpretation of quantum mechanics and is said to take place instantly upon measurement.
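The card analogy is easy to simulate. A minimal sketch, using the made-up type names and probabilities from above:

```python
import random

# Simulating the card-opening 'measurement' from the probabilities above.
probabilities = {"fire": 0.2, "water": 0.5, "electric": 0.3}

# Before opening: the description is the whole probability distribution.
# Opening the pack picks one outcome with those weights...
card = random.choices(list(probabilities), weights=probabilities.values())[0]

# ...and afterwards the description 'collapses' to certainty:
state_after = {card: 1.0}
print(f"Before: {probabilities}")
print(f"After:  {state_after}")
```

Run it several times and the "before" line never changes, while the "after" line differs from run to run, which is the essence of the probabilistic interpretation.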

The many-worlds interpretation of quantum mechanics is an alternative interpretation which proposes a different view of the 'collapse' of the description of quantum systems. It states that every possible result of an observation is realised in its own universe. Using this point of view, at the moment the card is opened we can imagine the arrow of time branching into three distinct paths, with a different type of card obtained in each new path.

This interpretation was first formulated by Hugh Everett in the 1950s and further developed and popularised in the 1970s by physicists such as Bryce DeWitt, aiming to resolve some of the paradoxes of quantum mechanics, the most famous of which being Schrödinger's cat. While sounding like nothing more than a purely science-fiction concept, it remains a very real and respected academic hypothesis to this day.

One of the biggest threats to the world's telecommunications infrastructure is large emissions of radiation and magnetic energy from solar flares and coronal mass ejections (CMEs), also known as solar storms. As human civilisation has become more and more dependent on the internet and technological infrastructure, the rare occurrence of severe space weather events poses a much larger threat to industry and to civilisation as a whole than ever before. Researcher Sangeetha Abdu Jyothi of the University of California, Irvine, in her research paper, termed the impact of a solar superstorm event the 'Internet Apocalypse', examining the worst-case scenario of global internet outages from electronic systems damaged by rare solar superstorms.

The unique behaviour of the sun's magnetic field gives rise to the ejection of radiation, particles, and matter from the surface of the sun, called space weather. The sun is made up of plasma, which is an extremely hot gas of ionised particles. The magnetic field of the sun is created in a system called the solar dynamo [2], where the motion of the electrically charged plasma in a magnetic field induces a current, which in turn generates more magnetic field [10]. Astrophysicists have deduced the shape of the magnetic fields at the surface of the sun by examining the motion of the plasma in coronal loops in the sun's atmosphere.

Image 1: Coronal loops of plasma at the surface of the sun

Sunspots are regions of relatively lower temperatures on the surface of the sun where very strong magnetic fields prevent heat from within the sun from reaching the surface. In these regions, strong magnetic fields become entangled and reorganized. This causes a sudden explosion of energy in the form of a solar flare often accompanied by a coronal mass ejection, which is the ejection of electrically charged solar matter from the sunspot [3].

Some of the electromagnetic energy released by the flares, in the form of x-rays, and ejected particles can reach the earth. However, the earth has its own protective mechanisms against the regular occurrence of mild solar flares and CMEs. The upper layers of the earth’s atmosphere absorb the influx of x-rays. The earth is also surrounded by its own magnetic field, called the magnetosphere, which acts as a protective shield against the ejected solar matter from a CME that reaches the earth. Therefore, telecommunication infrastructure on the surface of the earth avoids the harmful effects. However, telecommunications satellites and GPS satellites further away from the earth’s surface are left more exposed and have been damaged or rendered inoperable due to solar flares [4]. Human health can also be compromised by direct exposure to harmful radiation and high energy particles emitted due to solar activity. On the surface of the earth, we are shielded from the harmful effects of space weather, however, astronauts in space must use special protective gear due to the extra exposure.

One of the positive side effects of space weather interacting with the earth is the spectacular display of the aurora borealis, more commonly known as the northern or southern lights, where charged particles become trapped in the earth’s magnetosphere and accelerate towards the earth’s poles. They collide with atoms and molecules in the earth’s atmosphere, releasing a burst of light and a colourful display in the night sky [5].

Image 2: The deflection of coronal mass ejections by the earth’s magnetic field

Image 3: The Aurora Borealis

More worryingly, there is the small but real chance of a large-scale coronal mass ejection striking the earth head-on and causing widespread damage to electrical infrastructure even on the surface of the earth. Such an event has been named a ‘solar superstorm.’ The enormous ejection of electrically charged solar matter sends shock waves through the magnetosphere and releases its energy toward the earth as a geomagnetic storm. As the earth’s magnetic field varies, electromagnetic induction drives electric currents, called geomagnetically induced currents (GICs), through the earth’s conducting surface [1]. These, in turn, induce currents in the power grid and other grounded conductors, potentially destroying the electrical transformers and repeaters which keep the power grid running and damaging the vast network of long-distance cables which provide internet.
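As a rough, back-of-the-envelope illustration of why long conductors are the vulnerable part of the grid, the quasi-DC voltage a storm drives along a grounded line scales with the geoelectric field and the line’s length. Both numbers below are assumed, order-of-magnitude values for the sake of example, not measurements:

```python
# Illustrative, order-of-magnitude sketch of a geomagnetically induced
# voltage.  Both input values are assumptions, not measured data.
E_field = 2.0         # horizontal geoelectric field during a severe storm, V/km (assumed)
line_length = 1000.0  # length of a long-distance transmission line, km (assumed)

# The voltage driven between the line's grounded endpoints is roughly
# the geoelectric field integrated along the line: V ~ E * L.
induced_voltage = E_field * line_length
print(f"Induced voltage along the line: {induced_voltage:.0f} V")  # 2000 V
```

Even a quasi-DC offset of a few thousand volts can push large unwanted currents through transformer neutrals, which is one reason transformers fare so badly in geomagnetic storms.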

There has also been growing concern about the weakening of the earth’s magnetic field over the past few centuries, which some physicists believe is due to a long-overdue flip of the earth’s magnetic poles, something that occurs roughly every 200,000 years on average but has not happened in over 750,000 years. A weakened field could leave humans and telecommunication infrastructure on earth more exposed to even the more moderate and frequent space weather events.

The largest recorded geomagnetic storm, known as the Carrington Event, occurred in September 1859. Its main impact was on the mode of telecommunication at the time, the telegraph network, with reports of telegraph wires catching fire, operators receiving electrical shocks, and messages sending even when equipment was disconnected from power. The CME was so strong that auroras could be seen as far south as the Caribbean! More recently, in March 1989, magnetic disturbances caused by a strong solar storm knocked out the entire electrical grid of the Canadian province of Quebec [7]. Of course, since 1859 modern civilisation has become deeply dependent on electrical infrastructure to provide homes and businesses with power and internet for our constant connectivity demands, so a storm on the scale of the Carrington Event could have catastrophic implications for the world’s economy and society in general. A study by the National Academy of Sciences estimated that a Carrington-like event today could cause over $2 trillion in damage and take multiple years to repair [8]. By analysing records of solar storms over the past 50 years, Peter Riley of Predictive Science Inc. calculated that the probability of such an event happening in the next 10 years is 12%.

So what can be done to minimise the damage caused by a large-scale geomagnetic storm? As the sun is just coming out of a quiet period in its solar cycle, modern technological infrastructure has not yet had its resilience tested against a significant solar storm. However, we now have a series of satellites monitoring solar activity, such as NASA’s Advanced Composition Explorer [9], which give forewarning of a large incoming solar storm; even the fastest CMEs take around 13 hours or more to travel from the sun to the earth. This gives power grid operators enough time to shut down their stations and minimise the damage caused as the storm passes. Even with this precaution, an unprecedented Carrington-like event would likely cause widespread damage to the earth’s telecommunications and internet infrastructure, so better damage prevention and recovery plans will need to be put in place to ensure the maintenance of the vital technological systems that the world’s population depends on so heavily.



[1] Sangeetha Abdu Jyothi. 2021. Solar Superstorms: Planning for an Internet Apocalypse. In ACM SIGCOMM 2021 Conference (SIGCOMM ’21), August 23–27, 2021, Virtual Event, USA. ACM, New York, NY, USA, 13 pages.

[2] NASA. 2022. Understanding the Magnetic Sun. [online] Available at: <>.

[3] 2022. Sunspots and Solar Flares | NASA Space Place – NASA Science for Kids. [online] Available at: <>.

[4] Encyclopedia Britannica. 2022. How solar flares can affect the satellites and activity on the surface of the Earth. [online] Available at: <>.

[5] NASA. 2022. Aurora: Illuminating the Sun-Earth Connection. [online] Available at: <>.

[6] 2022. [online] Available at: <>.

[7] NASA. 2022. The Day the Sun Brought Darkness. [online] Available at: <>.

[8] 2022. Near Miss: The Solar Superstorm of July 2012 | Science Mission Directorate. [online] Available at: <>.

[9] O’Callaghan, J., 2022. New Studies Warn of Cataclysmic Solar Superstorms. [online] Scientific American. Available at: <>.

[10] Paul Bushby, Joanne Mason. Understanding the Solar Dynamo. Astronomy & Geophysics, Volume 45, Issue 4, August 2004, Pages 4.7–4.13.

[11] 2020. Could Solar Storms Destroy Civilisation? Solar Flares & Coronal Mass Ejection [online] Available at: <>.

Image Sources

Image 1: Vatican Observatory. 2022. Coronal Loops on the Sun – Vatican Observatory. [online] Available at: <>.

Image 2 & 3: GoOpti low-cost transfers. 2022. Aurora Borealis: where to see the Northern Lights in 2021?. [online] Available at: <>.

At this stage Albert Einstein is a household name across the globe, synonymous with the word ‘genius’. His theories and thought experiments have had an immense impact on our understanding of physics, and he seemed able to imagine ideas that no one else possibly could. This post tells the story of how, after a landmark discovery in 1929, Einstein came to retract one of his theories – calling it the “biggest blunder” of his life.

Einstein had included in his equations of gravity what he called ‘the cosmological constant’, a constant represented by the capital Greek letter lambda (Λ), which allowed him to describe a static universe. This model complied with the generally accepted view of the time, in 1917, that the universe was indeed stationary.

Then, in 1929, Edwin Hubble (for whom the Hubble telescope is named) presented convincing evidence that the universe is in fact expanding. This caused Einstein to abandon his cosmological constant (i.e. presuming its value to be zero), believing it to have been a mistake.

But that wasn’t the end of the story. Years went by and physicists repeatedly inserted, removed and reinserted lambda into the equations describing the universe, unable to decide whether or not it was necessary. Finally, in 1997–98, two teams of astronomers, one led by Saul Perlmutter, published papers outlining the need for Einstein’s cosmological constant.

Through their analysis of the most distant supernovae ever observed – one of which was SN1997ap – and their redshifts, they reached the conclusion that the distant supernovae were roughly fifteen percent farther away than the prior models placed them. This could only mean that the expansion of the universe was accelerating. The only known thing that ‘naturally’ accounts for such an acceleration is Einstein’s lambda, and so it was reinserted into Einstein’s equations one last time. The equations now matched the observed state of the universe.
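In modern notation (standard cosmology, not part of the original papers’ wording), lambda enters the equation governing the universe’s acceleration as a positive term:

```latex
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right) + \frac{\Lambda c^{2}}{3}
```

Here $a$ is the scale factor of the universe. Ordinary matter and radiation (the $\rho$ and $p$ terms) only ever decelerate the expansion, so a positive $\Lambda$ is the natural way to make $\ddot{a} > 0$, exactly as the supernova observations demanded.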

So while Einstein’s initial use for the cosmological constant was incorrect, it proved vital to forming an accurate picture of our world. The great theorist had once again foreseen a factor no one else could – this time a good 70 years before anyone, including himself, was able to prove it.

As a rule, the universe tends towards disorder. It can seem like a rather depressing fact to some, but no matter how concerted and deliberate you try to be, physics guarantees that your actions will always act to increase the overall amount of disorder in the world. Want to have a spoon of sugar in your tea? You’ve just ruined your sweetener’s fine crystal structure by letting it dissolve. Take it without sugar? In boiling the kettle you’ve already set the water molecules in your drink into ever faster and disordered motion just by heating them up. There’s no stopping it. This universal law is codified physically in the second law of thermodynamics, which dictates that after carrying out any irreversible process (irreversible in the sense that you cannot stir the sugar out of your tea), entropy, a measure of disorder, must necessarily have increased. 


Beyond the depression, at first this principle can seem somewhat elusive. Why does Nature decide things must be messied? The answer lies in probability. Take again the example of our cup of tea and sugar. Each sugar molecule, given the chance, can move relatively freely through the tea. They’ll bump into a water molecule here or there, another sugar molecule, or potentially a caffeine molecule (should you not take decaf), but on average, over time, they get around the entire cup. If you consider the probability of different arrangements of the sugar molecules, you can see that an unmixing of a spoon of sugar is incredibly unlikely. For this to happen, we’d need every sugar molecule from all around the cup to conspire to stick back onto our spoon all at once, while enough of the water molecules would have to decide to get out of the way to make room for our spoonful (presuming your sugar was dry to begin with). The odds of this happening are staggeringly small. They’re so astronomically small, in fact, that in principle we can say it’ll essentially never happen, even if we stood and stared diligently at our cup for a few billion years. The second law of thermodynamics, under this guise, and once we note that generally there’s just a higher chance of things being disordered, is simply a statement that Nature does the most probable.
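To make “astronomically small” concrete, here is a toy calculation under crude assumptions (independent molecules, each equally likely to sit in either half of the cup, and a made-up molecule count) of the chance that every sugar molecule is back in the spoon’s half at once:

```python
import math

# Toy model: N independent sugar molecules, each with probability 1/2
# of being in the "spoon" half of the cup at any given instant.
N = 10**21  # very rough count of molecules in a spoonful of sugar (assumed)

# The probability that all N sit in the spoon half at once is (1/2)^N,
# which underflows any float, so we work with its base-10 logarithm.
log10_p = -N * math.log10(2)
print(f"P(unmixed) ~ 10^({log10_p:.3g})")
```

The exponent comes out around minus 3 × 10²⁰: the probability is a decimal point followed by hundreds of quintillions of zeros, which is why waiting a few billion years changes nothing.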


Now, in 1867, James Clerk Maxwell, feeling rather devious, proposed a simple thought experiment regarding entropy that went without a complete solution from physicists for some 115 years. What Maxwell tried to do was concoct a system whose entropy would decrease rather than increase over time, thus becoming more ordered and flying in the face of the second law of thermodynamics. To do this he imagined a container split in two. One side of the container, say the right, is completely empty to start, the other is full of gas, and the two are separated by an impenetrable wall with a small tap to let gas flow between them. First, we open the tap and let things progress according to the laws of physics. Predictably, the gas spreads out between the two halves, diffusing from the left portion into the right until everything settles down into equilibrium, the gas pressure having dropped due to the expansion. This so far is completely ordinary. If we check at this point that nothing has broken, we’ll be assured that everything is in order. The system has undergone an irreversible process (the gas won’t magically saunter back into its original place on one side of the container), and so it should be more disordered than before, with a higher entropy. This we find to be true since, having spread out, the positions of all the molecules making up the gas are now inherently more random. The second law remains law and we have no problems so far.
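We can even put a number on that entropy increase. For an ideal gas expanding freely into an equal empty volume, the textbook result is ΔS = nR ln(V_final/V_initial) = nR ln 2. A quick sketch (the one mole of gas is an assumed amount for illustration):

```python
import math

R = 8.314  # ideal gas constant, J/(mol K)
n = 1.0    # amount of gas in the container, in moles (assumed)

# Free expansion into an equal empty half doubles the volume, so for an
# ideal gas the entropy change is Delta S = n R ln(V_final/V_initial) = n R ln 2.
delta_S = n * R * math.log(2)
print(f"Entropy increase: {delta_S:.2f} J/K")  # prints "Entropy increase: 5.76 J/K"
```

A handful of joules per kelvin sounds small, but at the molecular level it corresponds to that staggering multiplication of possible arrangements.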


Now comes the demon. Maxwell allows for a tiny creature, you can imagine them with horns if you like, to open and close the tap at will. This demon, conspiring to annoy physicists, decides it rather liked things the way they were before. Should it see a particle moving to the left from the right side of the container, it will open the tap and let it into the left side; if it sees a particle moving to the right from the left side, it’ll refuse to budge the tap. Over time, this leads to a gradual return to our original situation: all the gas is back on the left, and there’s nothing but vacuum on the right. Thus our demon, through a slight bit of trickery wiggling a tap open and closed, seems to have restored order and reversed entropy! More startling still, if we had placed a small turbine at the tap that spins as gas flows between the containers, we could have harvested some of the energy from our experiment, and, since we’ve returned everything to its starting position, there’s nothing stopping us doing that again, and again, and again. We’d get more energy out of the system each time, and a free lunch.
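The demon’s sorting rule is simple enough to simulate. The following sketch is a one-dimensional toy model with made-up numbers, not a physical simulation: particles bounce around an interval with a wall at the centre, and the demon only lets right-to-left crossings through the tap:

```python
import random

random.seed(0)

# Particles bounce around the interval [-1, 1]; an impenetrable wall
# with a tap sits at x = 0.  The demon opens the tap only for particles
# crossing right-to-left and bounces everything else off the wall.
class Particle:
    def __init__(self):
        self.x = random.uniform(-1.0, 1.0)
        self.v = random.choice([-1, 1]) * random.uniform(0.01, 0.05)

particles = [Particle() for _ in range(500)]

for _ in range(2000):
    for p in particles:
        new_x = p.x + p.v
        if (p.x > 0) != (new_x > 0):      # particle is trying to cross the wall
            if p.v < 0:
                p.x = new_x               # right-to-left: demon opens the tap
            else:
                p.v = -p.v                # left-to-right: tap stays shut, bounce
        else:
            p.x = new_x
        if abs(p.x) > 1.0:                # reflect off the outer container walls
            p.v = -p.v
            p.x = max(-1.0, min(1.0, p.x))

left = sum(p.x <= 0 for p in particles)
print(f"{left} of {len(particles)} particles ended up on the left")
```

Run long enough, every particle ends up on the left: the one-way rule steadily undoes the diffusion, which is exactly the entropy reversal Maxwell was teasing physicists with.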


There’s no such thing of course. The solution to the problem of Maxwell’s demon lies not in trying to manufacture a replica in a lab and generate infinite power using little devilish creatures and boxes of gas; rather, it lies in trying to prove that either no such demon could exist, or that insofar as one could, it wouldn’t break the laws of physics. One idea first tried out on the problem was that the demon would necessarily have to be able to see the gas molecules in order to know when to open the tap, and to do this it’d have to shoot light at them, using up energy and thus squashing our hopes of a sustainable future powered by Satan. Equally, this light must reflect off the molecules, jostling them about and thus increasing the randomness of their motion. This solution falls through, however, since the demon, outwitting us, can always decide to use lower and lower intensity light to sense the particles, and thus use an absolute minimum of energy, eking out an advantage and still breaking physics (this might mean they’re worse at seeing the particles, but they’ve got time and can miss a few so long as overall gas only ever travels one way through the tap).


The beginnings of a solution to the problem interestingly came from the work of pioneering mathematician and engineer Claude Shannon. In 1948, Shannon showed that the information content of any message could be directly quantified mathematically by what he called information entropy. The more information a message holds, the higher its information entropy. On the face of it, Shannon’s notion of information entropy seems quite some distance from what physicists mean when they talk about entropy. What’s the connection between the amount of information stored in a text message and how disordered my cup of tea is? The answer lies in a distinction between the measurements we physicists normally take of systems and measurements of their exact states. Normally, we don’t measure where every single gas molecule is in a system; in fact, normally we can’t possibly do this. To store all the position data for just one gram of hydrogen gas at one instant in time would require something like a quadrillion gigabytes of storage. Every bit of data storage on the planet currently amounts to less than 3% of that, and that’s just for a snapshot of the particles at an instant. We’d be in an even worse state trying to measure how the particles move over time. What we normally measure in a lab are things like pressure, temperature, or volume, which all come from the behaviour of large numbers of particles rather than individual ones, and so require much less data to store. Physicists say that what we typically measure is the macrostate of a system, or how it appears on a macroscopic scale, rather than its microstate, or how it appears down to the level of atoms and molecules. In general, many different microstates can correspond to the same macrostate, since they give the same pressure, temperature, etc., just with the particles moved around a bit.
Information entropy, when applied to a gas in a box, is a measure of how much additional information we’d need to figure out the exact microstate our gas is in (where all the molecules are, how fast they’re moving, etc.) once we know what macrostate it’s in. Thus, if we know all the particles are on the left side of the box, we’d need less information to figure out exactly where they are, and the system therefore has lower entropy than if they were spread out over the entire box.
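Shannon’s formula fits in a few lines of code. Here is a minimal sketch (a four-cell toy example of my own, not from Shannon’s paper) showing that confining a particle to half the box cuts the number of bits needed to pin down its position:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Divide the box into four equal cells.  A particle confined to the left
# half occupies one of two equally likely cells...
confined = shannon_entropy([0.5, 0.5, 0.0, 0.0])
# ...while a particle spread over the whole box occupies one of four.
spread = shannon_entropy([0.25, 0.25, 0.25, 0.25])

print(confined, spread)  # 1.0 2.0
```

Fewer accessible cells means fewer bits of missing information, which is precisely the sense in which the confined gas has the lower entropy.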


Information entropy alone doesn’t solve the problem, however. The second step was made by physicist Rolf Landauer who, in 1961, showed that there is a theoretical lower limit to the energy efficiency of computation, in what’s now known as Landauer’s principle. What Landauer proved is that whenever a computer performs a logically irreversible operation, one that throws information away, such as overwriting or deleting data, there is a quantifiable minimum energy cost in doing so, and a corresponding entropy increase for having gone to the bother. This entropy increase comes from the fact that, in using up some power, whatever circuit we run our computations on must heat up, and thus either it or its surroundings must become more disordered, since this heat causes its molecules to jiggle about in more random motion.
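Landauer’s bound is concrete enough to evaluate: erasing one bit at temperature T must dissipate at least k_B T ln 2 of energy. A quick sketch, with room temperature as an assumed operating point:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # operating temperature, K (room temperature, assumed)

# Landauer's principle: erasing one bit dissipates at least k_B * T * ln 2.
E_min = k_B * T * math.log(2)
print(f"Minimum cost of erasing one bit at {T:.0f} K: {E_min:.2e} J")
```

That works out to roughly 3 × 10⁻²¹ joules per bit: minuscule by everyday standards, but strictly greater than zero, and that is all the resolution of the paradox will need.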


With all of this at hand, it still took another 21 years for physicists to finally come up with a solution to Maxwell’s demon when, in 1982, Charles Bennett finally figured it out. Bennett’s stroke of genius was to reduce the demon down to its core function: an information-processing machine connected to a tap. The tap connection isn’t too important, since we can always in principle minimise its energy consumption and corresponding entropy increase, and so the solution comes directly from considering the information-processing (the problem isn’t in the demon’s body, just whatever they have between the ears). Bennett reasoned that the demon is going to need to store some data, since they need to record both where the particles near the tap are and how they’re moving, alongside when they have to open and close the tap to let the right ones through. He equally reasoned that the demon doesn’t have infinite storage space, and so will eventually run out of room and have to start overwriting their storage to record when next to open the tap. This overwriting, however, must take at least the theoretical minimum of energy given by Landauer’s principle, with a corresponding minimum increase in entropy. This entropy increase, when calculated, will always match or exceed the decrease the demon gets out of playing with the tap, and thus the demon is thwarted and the second law persists.
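The bookkeeping behind Bennett’s argument can be written out directly. This is a sketch of the accounting under the tidy one-bit-per-particle assumption: trapping a particle back on the left halves its accessible volume, lowering the gas’s entropy by k_B ln 2, while erasing the one bit the demon recorded about it raises entropy by at least k_B ln 2:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

# Entropy the demon removes from the gas by halving one particle's
# accessible volume: k_B * ln(V_before / V_after) = k_B * ln 2.
gas_entropy_drop = k_B * math.log(2)

# Minimum entropy created when the demon erases the single bit it
# recorded about that particle (Landauer's principle): also k_B * ln 2.
erasure_entropy_cost = k_B * math.log(2)

print(f"Entropy removed per particle sorted:     {gas_entropy_drop:.2e} J/K")
print(f"Minimum entropy created per erased bit:  {erasure_entropy_cost:.2e} J/K")
```

At best the two cancel exactly; any real demon does worse, so the total entropy of box plus demon never decreases.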


Maxwell’s demon is one of many striking examples in physics of the power of a simple thought experiment, and particularly of how their solutions, sometimes more than a century in the making, can draw results from seemingly unrelated fields of study into a unified whole. Physicists from Galileo to Newton to Einstein have all pondered mock setups like this in their minds, daydreaming about lobbing cannonballs into orbit, jumping in elevators, or poisoning cats, and from these imaginings have sprung some of the greatest developments in the history of science and physics, alongside ever deeper insights into the minutiae of the Universe. The whole process of carrying out thought experiments, or Gedankenexperimente as Einstein liked to call them, gets to the core of what physicists do: simplify the world to its most fundamental parts, play with them, learn something, then build it back up again.