For almost 70 years, humanity has been launching rockets, satellites, people and animals into space. While most of the people and some of the animals came back, far less of the equipment does. For most satellites there is no plan to return them to sender: when their lifetimes run out, they simply keep orbiting Earth. This build-up of space junk and debris is a growing problem that requires constant surveillance. The US Department of Defense even runs a global space surveillance network whose sensors constantly monitor more than 27,000 pieces of space junk. The true number of dangerous objects is much higher, since much of the debris is too small to be detected, yet at orbital speeds even a tiny fragment is a serious hazard. The orbits of functioning satellites are carefully monitored, along with that of the International Space Station; if a collision between the station and some orbital debris is predicted, an emergency manoeuvre may have to take place. Even with this surveillance and tracking, however, costly accidents still occur. So, what can be done to stop this from being an ever-growing problem?

The one saving grace we have so far is that some space junk drops low enough into the Earth's atmosphere to begin to feel its presence in the form of atmospheric drag, which causes the debris to burn up and disintegrate. This will eventually happen to all of Earth's orbital debris, but it may take decades, and the rate at which new debris is being created vastly outpaces the rate at which it burns up.

The first planned mission to remove orbital debris is slated for 2025, with the European Space Agency funding a company to send an experimental four-armed robot into space to capture a large payload adapter and drag it into the Earth's atmosphere, where both the adapter and the robot will burn up. While this is a positive step, it removes only one piece of large debris at quite a high cost. A better, more economical solution is required.

A newer idea is to add drag sails to future satellites. Unlike a light sail, which propels a spacecraft away from the Sun using the momentum of solar photons, these act more like parachutes. They will be very thin and extremely sensitive to any force, so even when a satellite just grazes the top of the Earth's atmosphere and encounters barely any air, it will be slowed and forced down into the atmosphere, burning up much more quickly.

So hopefully with more creative ideas and solutions, the amount of junk in Earth’s orbit will soon begin to decrease.


Have you heard of the Irish Woodstock1? It isn't a strange retro festival coming this summer, but an important historical event that has affected Irish politics and energy generation to this day, despite happening in the long-distant days of 1978. 'Get to the Point', the event's proper title, was the culmination of five years of effort by citizens concerned about government plans to build four nuclear power plants at Carnsore Point in Co. Wexford: a protest concert headlined by none other than Irish musical luminary Christy Moore. In every way, the protestors succeeded. The government discreetly scuttled the plans and, in 1999, made it illegal to use nuclear fission for the purposes of electrical generation. While it is an admirable example of non-violent citizen action changing an unpopular government policy, in light of the current climate crisis and Ireland's commitment under the Paris Agreement to cutting emissions, was this decision beneficial for the country in the long run?

The main fears1 around the proposed Carnsore Point power plants, setting aside the political dimensions of the Cold War era, were the safety risks posed by the by-product of the fission process, radioactive nuclear waste, and the danger posed by a failure of the reactor, which could leave the land around the plants deeply irradiated. Similar concerns are likely first and foremost in the minds of those who continue to oppose nuclear power in Ireland today, though the most recent surveys indicate that the pro- and anti-nuclear camps are roughly even in size.2

Are these concerns founded? Taking the latter first: there is no doubt that the consequences of an accident like Chernobyl can be catastrophic and should not be taken lightly. On the other hand, most nuclear accidents are caused by a combination of human and equipment error, and such combinations are exceedingly rare, especially since nuclear power plants are held to high safety standards precisely because of the danger an unmitigated accident poses. Consider France3, which opened its first nuclear power plant in 1962 and now has over 50 fission reactors supplying about 70% of the nation's electricity. Its power plants have not had a serious accident since 1980, and that incident was resolved without the endangerment or loss of human life. France's record runs counter to the common anti-nuclear narrative that every power plant is a disaster waiting to happen. As such, it is unlikely that the fears of a nuclear disaster in Ireland would have come to pass, though for some people any chance of such a disaster might be too much of a risk.

The concern over nuclear waste is more insidious. The physics cannot be denied: the by-products of uranium-235 fission can remain dangerous to humans for thousands of years and need to be carefully managed to avoid endangering human life4. The best solution is deep geological disposal, either in natural caverns or in purpose-drilled boreholes using repurposed oil-drilling technology, with hundreds of metres of bedrock shielding civilisation from the harmful radiation5. Attempts to create large-scale waste disposal facilities, however, are often kneecapped by protests from citizens in the selected locality, who are unwilling to take what they perceive to be an unfair risk. Only in Finland has serious work been done on a facility purpose-built for nuclear waste disposal. It can be said, then, that the concerns of the protestors at Carnsore Point were perhaps well founded here: if the Irish Government could not safely dispose of nuclear waste, sooner or later someone would be hurt. Analysing the situation, however, a certain cycle emerges: without a way to dispose of nuclear waste safely, people oppose the construction of nuclear power plants; yet when nuclear waste disposal facilities are to be constructed, these too are opposed, because people do not view them as safe.

This latter point dovetails neatly into the final point I wish to make in this blog post. It may be that the concerns of the protestors at Carnsore remain valid in the modern day, but only because those same concerns made taking action to alleviate them difficult, if not impossible. The risk of an accident can never be made zero. Nuclear waste cannot magically disappear. At the same time, we cannot generate energy out of nothing. It may be that accepting the risks of nuclear power plants at Carnsore would have been better for the island of Ireland in the long run. There is no single silver bullet to solve the current energy crisis, but rather a chain of connected solutions that require us to consider every option fairly, and not fall into the trap of 20th-century hysteria.



1: Ireland’s Woodstock: the anti-nuclear protests at Carnsore Point – HeadStuff







To a layperson, the words "Time Crystal" probably evoke images of scientific magic, like phasers, flying DeLoreans and flubber. Real-world time crystals are likely not as exciting as whatever sci-fi gobbledygook you might have imagined (physicists are very good marketers), but their potential applications in quantum computing are genuinely something to pay attention to in the coming years.

Google's Sycamore processor. In 2021, Google's Quantum AI team, in collaboration with researchers from the Max Planck Institute for Physics of Complex Systems, Stanford and Oxford, managed to create a 20-qubit time crystal on the Sycamore chip. Credit: Erik Lucero


The earliest modern proposal of time crystals was made in 2012 by Nobel laureate Frank Wilczek in his paper 'Quantum Time Crystals'. To understand Wilczek's proposal, we should first consider a regular crystal. Crystals are ordered periodically in space: the pattern of a crystal repeats at regular intervals across space.

This is 'discrete space translational symmetry'. In a crystal structure, you have to walk some discrete distance to get from one black atom to the nearest black atom, and the same distance to get from one white atom to the nearest white atom. Crystals are said to spontaneously break 'continuous space translational symmetry', as the crystal is not spatially homogeneous. Wilczek wondered whether a system could spontaneously break 'continuous time translational symmetry'. Rather than a discrete ordering of atoms in space, such a system would be characterised by a discrete temporal ordering of events: it would repeat some state periodically after some period of time. He dubbed such a system a 'Time Crystal'1. What could this look like? Wilczek originally supposed a superconducting ring of atoms carrying a current, threaded by a small magnetic flux, which generates a constant current in the lowest-energy or 'ground' state. The charged particles could then be ordered to travel in "lumps" that complete the loop in a constant period, indefinitely, giving us our conditions for a time crystal [1].

Wilczek's time crystal exists in thermal equilibrium: the system does not require any energy exchange with its surroundings. If the indefinite motion of charge around the ring seems wrong to you, you're not alone. What Wilczek proposed is perpetual motion in the ground state, and the idea of macroscopic perpetual motion has been discredited among scientists since the 1700s. Technically, Wilczek's proposal doesn't violate the laws of thermodynamics, as no mechanical work can be extracted from a system in its ground state [2]. However, Wilczek's superconductor model was shown to be impossible in equilibrium in a 2012 paper by Patrick Bruno.2 Kablamo! Your theory is dead, Wilczek. It was good while it lasted.

Woah, not so fast. What if we also let go of our equilibrium condition? What if we allowed an external energy source to drive the system? And what if our system was arranged so that none of this energy was absorbed or dissipated by the system, so it could behave as though it were in equilibrium? A paper from Yao et al. suggested just this in 2016! Yao et al. posited that a 1D array of ions could be created as shown.

Certain isotopes have a 'spin state'. The concept of spin is too complex to dissect in this blog, but the spin axes align parallel and antiparallel with each other because these states are lower in energy than random orientations, due to their interacting magnetic fields. Suppose these alignments are along the z-axis. Normally, as the system tends towards equilibrium, the arrangement of these spins would randomise; this is 'thermalisation'. But we want temporal order, so we can't permit this to happen. What if the atoms were irradiated with pulses of electromagnetic radiation which caused each of the spins to rotate 180 degrees about the x-axis?

This sort of system is a 'Discrete Time Crystal', because the atoms return to their initial state after some integer multiple of the period of the driving force. It's not spontaneous like Wilczek originally suggested, but it's good enough. The period of the time crystal is twice the period of the laser, as the system oscillates between the two states. Imperfections in the laser could destabilise the ions if the interaction strength between ions is not strong enough; conversely, if the interaction is too strong, the system thermalises. The existence of the system outlined by Yao et al. was confirmed by Monroe et al. in 2016, who used an array of ten 171Yb+ ions oscillated using a Raman laser.
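The period-doubling logic can be sketched with a deliberately classical toy model (plain array flips, not a genuine quantum simulation):

```python
import numpy as np

# Classical toy model (NOT a real quantum simulation): represent N spins
# as +1/-1 values. Each drive pulse is a perfect 180-degree flip of every
# spin, so the chain only returns to its initial state after every SECOND
# pulse -- the period-doubling signature of a discrete time crystal.
N = 10
state = np.ones(N, dtype=int)      # all spins "up" along z
initial = state.copy()

history = []
for pulse in range(4):
    state = -state                 # one pi-pulse flips every spin
    history.append(np.array_equal(state, initial))

print(history)   # [False, True, False, True]: the state recurs every 2 pulses
```

The real systems are far subtler (imperfect pulses, ion-ion interactions), but the bookkeeping of "returns after two drive periods" is exactly this.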

The paper from Yao et al. also suggests that this periodic behaviour should continue for extended periods of time if the driving beam is stopped (provided that the frequency of the laser was sufficiently high). These so-called 'Pre-Thermal Time Crystals' were observed by Monroe et al. in 2021.

In 2021, researchers at Google's Quantum AI team, in collaboration with researchers from Stanford, Oxford and the Max Planck Institute for Physics of Complex Systems, managed to create a time crystal of 20 qubits on Google's Sycamore processor [3-4]. Not to be outdone by Google, two physicists from the University of Melbourne, Philipp Frey and Stephan Rachel, managed to create a time crystal of 57 qubits using IBM's Brooklyn and Manhattan processors [5-6].


Quantum Computing

One possible pathway for quantum computing is spintronics. Currently, data in computers is binary: it consists of arrays of 1s and 0s, called bits. In quantum computing, information is stored in 'Qubits', which can be 1, 0 or a 'superposition' of the two. The essence of spintronics is using the spin states of particles as our qubits. One of the challenges of quantum computing is finding a sustainable way of storing these 1s and 0s as information across long periods of time in an energy-efficient manner. Qubit systems are easily disturbed by heat, and preserving their memory often entails cooling the qubits to temperatures on the scale of tens of millikelvins [7]. The stable oscillations of discrete time crystals could open the door to long-term, energy-efficient quantum memory storage.
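As a rough sketch of what 'superposition' means here (a toy state-vector calculation, not a spintronics implementation):

```python
import numpy as np

# Minimal sketch of a qubit as a 2-component complex state vector.
# A general qubit is a normalised superposition a|0> + b|1> with
# |a|^2 + |b|^2 = 1; measuring it yields 0 or 1 with those probabilities.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Equal superposition: the qubit is neither 0 nor 1 until measured.
psi = (ket0 + ket1) / np.sqrt(2)

p0 = abs(psi[0]) ** 2   # probability of measuring 0
p1 = abs(psi[1]) ** 2   # probability of measuring 1
print(round(p0, 3), round(p1, 3))   # 0.5 0.5
```

A classical bit is always one of the two basis vectors; the interesting (and fragile) part of a qubit is everything in between.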


The future of quantum computers is, however, still uncertain. What does the future hold for time crystals? Time will tell.


1 Wilczek was not the first to use the term ‘Time Crystal’ to describe periodically repeating systems. Biologist Arthur Winfree gave the name to biological systems that repeat periodically in his book, The Geometry of Biological Time in 1980.

2 Additionally, in 2015 a paper from Haruki Watanabe and Masaki Oshikawa purported to disprove time crystals in equilibrium. However, this proof contained an error, as outlined in the appendix of a paper from Khemani et al. [8], though the authors of that paper agreed that the conclusion of the flawed proof is likely correct.




[1]: Zakrzewski, J., 2012. Crystals of Time. [online] Physics. Available at: <> [Accessed 10 May 2022].

[2] Andersen, T., 2021. Here’s how time crystals really work. [online] Medium. Available at: <> [Accessed 12 May 2022].

[3] Mi, X., Ippoliti, M., Quintana, C. et al. Time-crystalline eigenstate order on a quantum processor. Nature 601, 531–536 (2022).

[4] Stanford University, 2021. Time crystal in a quantum computer | Stanford News. [online] Stanford News. Available at: <> [Accessed 11 May 2022].

[5] Frey, P. and Rachel, S., 2022. Realization of a discrete time crystal on 57 qubits of a quantum computer. [online] Available at: <> [Accessed 13 May 2022].

[6] Cho, A., 2022. Physicists produce biggest time crystal yet. [online] Available at: <> [Accessed 13 May 2022].

[7] Moss, S., 2021. Cooling Quantum Computers. [online] Available at: <> [Accessed 10 May 2022].

[8] Khemani, V., Moessner, R. and Sondhi, S., 2019. A Brief History of Time Crystals. [online] Available at: <> [Accessed 12 May 2022].


The Cosmic Symphony: A Brief Exploration of String Theory

According to theoretical physicist Michio Kaku, "in string theory, all particles are vibrations on a tiny rubber band; physics is the harmonies on the string; chemistry is the melodies we play on vibrating strings; the universe is a symphony of strings, and the 'Mind of God' is cosmic music resonating in 11-dimensional hyperspace" [1]. This post will not attempt to cover the physics behind the need for an 11-dimensional hyperspace. Instead, it will provide an insight into the beauty of a remarkable theory, one with the potential to be the theory physics has strived for since its inception: the theory of everything.


The Edge of Knowledge

Currently, there are two core theories upon which the entirety of modern physics is built. The first is quantum mechanics, which allows us to understand the universe at the smallest scales, giving us insight into the behaviour of atoms and their even smaller constituents, electrons and quarks. The second is Einstein's theory of general relativity, which allows us to understand the universe at the largest scales, giving us insight into the behaviour of stars and galaxies. Both theories have been tested rigorously, and their predictions are remarkably precise. We have used them to make great advances in technology, and the world has derived great benefit from them. However, quantum mechanics and general relativity cannot both be right: our best understanding of the movement of the heavens and our best understanding of the building blocks of the universe directly contradict each other.


This seems like it should create an urgent problem needing immediate resolution, but the contradiction has not caused many issues in most physics research, as each theory applies to very different extreme circumstances which rarely overlap. Quantum mechanics is used to study things that are small and light (like atoms), and general relativity to study things that are large and heavy (like stars and galaxies). However, when we encounter things that mix these properties, we get into trouble. To study the centre of a black hole or the beginning of the universe, where immense masses are crushed into a tiny region (small and heavy), which theory do we use? It would appear that we need a combination of the two. Yet when we try to bring these theories together, we get chaotic and nonsensical results; the laws of physics break down. Clearly, if we want to advance our understanding of the centres of black holes, we need something more.


Superstring Theory

Physicists have discovered that through the lens of superstring theory, the conflicts between quantum mechanics and general relativity are resolved. In superstring theory (or string theory for short) we no longer need to change the theory we use depending on the situation. One theory of the universe fits all situations. For the first time in human history, we have a theory that has the capacity to explain the entirety of the known universe. We have a candidate for the theory of everything. 


The Fabric of The Universe

Figure 1    [2]

String theory states that the elementary particles in our universe (the smallest known building blocks of our universe) are not actually indivisible little balls, but instead tiny loops called strings. In Figure 1, the apple is shown on increasingly smaller scales, starting off with the whole apple, then the atoms, the protons and neutrons, the elementary particles (electrons and quarks) and then finally, strings.


Musical Strings

Figure 2     [2]

In order to understand how the strings in string theory operate, it's helpful to first think about more familiar strings, such as those on a violin. Each violin string can support a huge number of different vibrational patterns called resonances; examples of these patterns are shown in Figure 2. Each different vibrational pattern creates a different musical note. A resonance pattern consists of a number of peaks (tops of the wave) and troughs (bottoms of the wave) that are equally spaced along the length of the string, and each different pattern has a different number of peaks and troughs fitting between the two ends of the string.
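These resonances are standing waves, and the counting argument can be checked numerically. A small sketch, assuming a string of unit length clamped at both ends:

```python
import numpy as np

# Standing-wave (resonance) patterns on a string of length L fixed at both
# ends: the nth pattern is y_n(x) = sin(n * pi * x / L), which fits exactly
# n half-wavelengths between the two fixed ends.
L = 1.0
x = np.linspace(0, L, 1001)

for n in (1, 2, 3):
    y = np.sin(n * np.pi * x / L)
    # Both ends are nodes (the string is clamped there).
    assert abs(y[0]) < 1e-9 and abs(y[-1]) < 1e-9
    # Count interior extrema: the number of peaks + troughs equals n.
    extrema = np.sum((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])) + \
              np.sum((y[1:-1] < y[:-2]) & (y[1:-1] < y[2:]))
    print(n, int(extrema))   # 1 1, 2 2, 3 3
```

The nth pattern on a violin string sounds the nth harmonic; on a fundamental string, as described next, the analogous patterns set particle properties instead of notes.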


The strings in string theory operate in a similar way: different resonance patterns have different numbers of peaks and troughs fitting across a given length, but instead of fitting along a straight line, the peaks and troughs now fit around a loop, as shown in Figure 3.

Figure 3     [2]

The first loop has two peaks and two troughs, the second has four peaks and four troughs, the third has eight peaks and eight troughs. Just like the resonance patterns in violin strings give rise to different musical notes, the different resonance patterns on a fundamental string give rise to different masses and force charges. So, the properties of a ‘particle’ are determined by the vibrations of its internal string.


Mass Visualised Through Strings

Figure 4     [3]

Let us cast our minds back to the violin strings. The energy of a given vibrational pattern depends on its amplitude (the vertical distance between peaks and troughs) and its wavelength (the horizontal distance between two peaks), as shown in Figure 4. The energy of the vibrational pattern increases as the amplitude increases and the wavelength decreases. This makes sense intuitively: more frantic vibrational patterns have higher energy, and calmer ones have lower energy. In Figure 2, the vibrational patterns increase in energy as you move downwards, as their wavelengths decrease and they become more frantic in appearance. We can also imagine that plucking a violin string more vigorously (supplying more energy) causes it to vibrate more frantically, while plucking it less vigorously (supplying less energy) causes it to vibrate more calmly.


From special relativity, we know that energy and mass are equivalent (E=mc2), which means that if the mass of an object increases, its energy increases, and vice versa. Therefore, the mass of an elementary particle is determined by the vibrational pattern of its internal string: a heavier particle has an internal string vibrating more energetically, while a lighter particle has one vibrating less energetically. The forces of the universe are explained by more detailed aspects of the string's vibrational pattern.
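To make the equivalence concrete, here is E = mc2 applied to the electron, using standard values for the constants:

```python
# E = m c^2: a particle's rest energy from its mass.
c = 2.998e8             # speed of light, m/s
m_electron = 9.109e-31  # electron mass, kg

E_joules = m_electron * c ** 2
E_MeV = E_joules / 1.602e-19 / 1e6   # convert J -> eV -> MeV

print(f"{E_joules:.3e} J = {E_MeV:.3f} MeV")   # 8.187e-14 J = 0.511 MeV
```

That 0.511 MeV is the familiar rest energy of the electron; in string theory's picture, it reflects how energetically the electron's internal string vibrates.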


So, we can see that in string theory the properties of matter can be determined by investigating the vibrations of the fundamental strings that make up our universe. This is a sharply different perspective from that of pre-string-theory physics, in which each of the elementary particles was viewed as being "cut from a different fabric", made of its own kind of "stuff": electrons of "stuff" with negative electric charge, neutrinos of "stuff" with no electric charge. String theory breaks this notion and declares that all "stuff" is the same: tiny vibrating strings. Different elementary particles are strings vibrating at different notes, joining together in enormous numbers to form planets, stars and galaxies, creating a cosmic symphony.



Glossary of terms:
The Theory of Everything: A hypothetical theoretical framework explaining all known physical phenomena in the universe.
Elementary particle: The smallest known building blocks of the universe (examples include electrons, quarks and neutrinos).
Quarks: Elementary particles that make up the nucleus of an atom.
Neutrinos: Elementary particles with no electrical charge.


Further Reading

The Elegant Universe ~ Brian Greene

The Cosmic Landscape ~ Leonard Susskind



[1] Lubin, G., 2014. String Field Theory Genius Explains The Coming Breakthroughs That Will Change Life As We Know It. [online] Business Insider. Available at: <> [Accessed 12 May 2022].

[2] GREENE, B. (1999). The elegant universe: superstrings, hidden dimensions, and the quest for the ultimate theory.

[3] 2022. Waves and Wavelengths | Introduction to Psychology. [online] Available at: <> [Accessed 12 May 2022].

In 1990, we launched the Hubble Space Telescope into orbit as the first sophisticated orbital observatory. This was an incredible achievement that allowed us to study things never before possible. Its high-resolution spectrograph lets us observe and record ultraviolet light that could never make it through the Earth's atmosphere. This is hugely impactful to our observations and allows us to see the universe more clearly than ever before.

Upon its launch, the telescope malfunctioned and could not make precise observations, but through multiple servicing missions and spacewalks it became fully functional and reached its full potential. With fully functioning instruments, we were able to make some remarkable discoveries. Through observations of nearby Cepheid variable stars, we were finally able to make an accurate measurement of the Hubble constant. While this had been estimated previously, we now had a reliable value for the universe's rate of expansion. And not only did we pin down important constants, we also got a clearer picture of the universe's history as a whole: in the Hubble Deep Field, a photo containing over 1,500 galaxies, we saw some of the "story" of the universe.
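The logic of that measurement can be sketched in a few lines. Hubble's law says v = H0 d, so H0 is the slope of recession velocity against distance; the numbers below are made-up illustrative values, not actual Hubble data:

```python
# Sketch of how a Hubble-constant estimate works in principle: Cepheid
# period-luminosity measurements give distances, redshifts give recession
# velocities, and H0 is the slope of v against d (Hubble's law, v = H0 * d).
# These numbers are hypothetical, chosen only to illustrate the fit.
distances_mpc = [10, 20, 40, 80]          # Mpc (hypothetical)
velocities_kms = [720, 1440, 2880, 5760]  # km/s (hypothetical, exactly linear)

# Least-squares slope through the origin: H0 = sum(v*d) / sum(d*d)
num = sum(v * d for v, d in zip(velocities_kms, distances_mpc))
den = sum(d * d for d in distances_mpc)
H0 = num / den
print(H0)   # 72.0 km/s/Mpc
```

Real data scatter around the line, and the hard part is the distance ladder, but the arithmetic of the final step really is this simple.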

Hubble Deep Field (Credit: ESA/Hubble)

This telescope was a huge success: it far outlived its expected lifespan and brought numerous incredible discoveries to mankind. So yes, the Hubble telescope was arguably the most important advancement in the study of the universe to date. Now that the James Webb Space Telescope has taken over the mission, we can fully appreciate the impact the Hubble Space Telescope has had on our understanding of the universe.

God particle and physicists’ grail, what is the Higgs boson and why is it so important?


What is it?

The Higgs boson is an elementary particle postulated in 1964 by three researchers: two Belgians, François Englert and Robert Brout, and then independently by the Scotsman Peter Higgs, even if history has mostly retained only the name of the latter.

To summarise its function, it has often been said that "the Higgs boson is responsible for the mass of everything around us", a shortcut that needs to be unpacked.

Why do we need the Higgs boson?

The Standard Model of physics postulates that all the phenomena that surround us in the Universe are the work of four elementary interactions, or fundamental forces: the electromagnetic interaction (the origin of light and magnetism), the gravitational interaction, the strong nuclear interaction (which explains the cohesion of the atomic nucleus) and the weak nuclear interaction (which explains the radioactivity of certain atomic nuclei).

Even though these are four different forces, physicists are trying to unify them in order to arrive at a theory of everything.

The electromagnetic interaction and the weak nuclear force, for example, are unified in the form of a single underlying force that links them: the electroweak force. Except that in bringing them together this way, the Standard Model faces a big problem: each force has its own type of elementary particle.

The electromagnetic force is associated with the photon, while the weak force is associated with the W and Z bosons. This is where unification gets stuck: the expected symmetry is broken, because the photon has zero mass (which is why it travels at the speed of light), while the W and Z bosons are very massive. How can a unified theory be sustained when its forces differ so much in such an essential ingredient as mass? This raises several questions, such as how to take mass into account and how to explain the fact that different particles have different masses.

The Standard Model already included all types of elementary particles, such as quarks, leptons and bosons.

Figure 1: Ordinary matter content of the Standard Model of Particle Physics.


The protons and neutrons inside the nucleus of the atom are made up of quarks. The most familiar leptons are the negatively charged electrons outside the nucleus. Bosons are responsible for carrying forces such as electromagnetism, which underlies electricity and light. But none of these particles explains mass.

In 1964, the theory of the Higgs field was postulated to solve this problem.


Higgs field

To understand the mechanism of the Higgs field, we can compare it to a group of people who initially fill a room uniformly. When a celebrity enters the room, she draws people around her, giving her a large 'mass'. This gathering corresponds to the Higgs mechanism, and it is the Higgs mechanism that assigns mass to particles.

It is not the boson that directly gives mass to the particles: the boson is a manifestation of the Higgs field and the Higgs mechanism, which give mass to the particles. In the metaphor, it is comparable to the following phenomenon: an outsider in the corridor spreads a rumour to the people near the door. A crowd forms in the same way and spreads, like a wave, across the room to pass on the information: this crowd corresponds to the Higgs boson.

In more physical terms, the Higgs field is everywhere; it has an effect even in the most total vacuum of space, where it gives mass. Since the quantum vacuum is full of the Higgs field, aligned along a particular direction of an abstract internal space, particles "parallel" to this direction can propagate without constraint, but those "perpendicular" to it suffer a slowdown due to incessant interactions with the Higgs field.


Origin of the universe:

According to this theory, the Universe is filled with a specific field that gives mass to elementary particles. This field was present at the Big Bang, but its value was zero, so the force bosons, including the W and Z, were also massless. As the Universe cooled, the field spontaneously acquired a non-zero value. All the elementary particles that interacted with the Higgs field acquired mass, and the stronger the interaction, the higher the mass turned out to be. The photon does not interact with the field because of its nature, so its mass is zero; but the W and Z bosons interact so strongly that they acquired their large masses.


The theory looks good on paper, but to be proven it must be observed. Where they exist, the fields of the Universe are each manifested by a detectable particle; in the case of the Higgs field, this particle is called the Higgs boson. Only one thing was missing: detecting it.


Some words about the LHC:

Figure 2: The tunnel and part of the particle accelerator at the Large Hadron Collider

The lifetime of the scalar boson is too short to detect it directly: we can only hope to observe its decay products, or even the decay products of those. Events involving ordinary particles can also produce a signal similar to that produced by a Higgs boson.

Enter the Large Hadron Collider (LHC), the particle accelerator that started operating in 2008 near Geneva. In this 27-kilometre circular tunnel, protons are accelerated to enormous energies and made to collide, an ideal tool for particle physics: it makes it possible to recreate conditions similar to the primordial environment of the universe.
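To give a sense of "accelerated to enormous energies": assuming a beam energy of 6.5 TeV per proton (the Run 2 figure; the design energy is 7 TeV), the relativistic factors work out as follows:

```python
import math

# Relativistic bookkeeping for an LHC proton, E = gamma * m * c^2.
# Beam energy of 6.5 TeV per proton is assumed (LHC Run 2).
E_beam_GeV = 6500.0       # total energy per proton, GeV
m_proton_GeV = 0.938272   # proton rest energy, GeV

gamma = E_beam_GeV / m_proton_GeV       # Lorentz factor, roughly 6900
beta = math.sqrt(1 - 1 / gamma ** 2)    # speed as a fraction of c

print(f"gamma = {gamma:.0f}, v/c = {beta:.9f}")
```

The proton's energy is thousands of times its rest energy, and its speed differs from the speed of light only in the eighth decimal place.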

Two parallel LHC experiments, the ATLAS and CMS detectors, detected a boson in a mass region of the order of 126 GeV, exactly where the Higgs boson was expected to be. It could be nothing other than the long-sought particle, proving (fifty years after the theory emerged) the existence of the Higgs field.

The experimental proof of the Higgs Boson led to the award of the Nobel Prize in Physics to François Englert and Peter Higgs in 2013.


To conclude, the impact of this discovery is huge: it completed the Standard Model of physics and further supported the idea of a unification of forces. Knowledge of the Higgs boson's properties can also guide research beyond the Standard Model and pave the way for the discovery of new physics, such as supersymmetry or dark matter.



Le boson de Higgs. (n.d.). CERN.

Universalis, E. (n.d.). Boson de Higgs. Encyclopædia Universalis.

Pourquoi se préoccuper du boson de Higgs? (n.d.). Parlons Sciences.

Introduction to Physical Astronomy – Elementary Particles. (n.d.).


There are three main ways to get faster on a bike: get stronger, get a lighter bike, or become more aerodynamic.


Since getting stronger is hard and a lighter bike is expensive, the easiest way to go faster on a bike is by optimising aerodynamics.


One component of drag is the difference in air pressure in front of and behind you. Shapes that minimise this are developed using computational fluid dynamics (CFD) and wind tunnels. The most aerodynamic shapes are often teardrop-shaped “aerofoils”, and these shapes influence the design of helmets and the tubes that make up the bike frame.


A popular joke in CFD is the aerodynamic cow, in which the colours indicate the intensity of the wind drag.


Some designers have even used the CFD simulation heatmap to inspire the bike’s livery.


The force of wind drag can be calculated by the following equation:

F_D = (1/2) ρ C_D A v²

Where ρ is the air density, A is the frontal area, C_D is the drag coefficient and v is the velocity. Because drag grows with the square of velocity, it increases dramatically at higher speeds – and since slowing down defeats the purpose, the variable a cyclist can most easily manipulate is the frontal area. A cyclist wants to make their frontal area as small as possible to decrease drag. They can do this by using a bike designed to be more aerodynamic, with complicated carbon layups that incorporate aerofoil shapes into the leading edges of the frame and deep-section wheels that reduce turbulence as air passes over them. But the easiest, and cheapest, way to decrease frontal area is to bring your elbows in or lean forward more.
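To get a feel for the numbers, here is a minimal sketch of the drag equation in Python; the density, drag coefficient and frontal area values are illustrative assumptions, not measurements from this article:

```python
def drag_force(rho, c_d, area, v):
    """Aerodynamic drag force F_D = 0.5 * rho * C_D * A * v^2, in newtons."""
    return 0.5 * rho * c_d * area * v ** 2

# Illustrative values: sea-level air, an upright rider.
rho = 1.225   # air density, kg/m^3
c_d = 0.9     # drag coefficient (assumed, upright position)
area = 0.5    # frontal area, m^2 (assumed)

print(drag_force(rho, c_d, area, 10.0))  # ~27.6 N at 36 km/h
print(drag_force(rho, c_d, area, 20.0))  # ~110 N: doubling speed quadruples drag
```

The v² term is why the same position change matters far more in a sprint than on a gentle climb.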



The third cyclist in the figure is the most aerodynamic but is also the least able to deliver power to the pedals. Hence, this position is usually only adopted when the rider is going so fast that they can no longer pedal at a cadence that adds speed, and must rely solely on aerodynamics to gain more. Because of this, many different variations of the “supertuck” have been developed by prominent cyclists.
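The trade-off can be quantified: the power needed to overcome drag is the drag force times velocity, so it scales with v³. A minimal sketch of the watts saved by shrinking frontal area at race speed (all figures here are illustrative assumptions, not data from the article):

```python
def drag_power(rho, c_d, area, v):
    """Power needed to overcome drag: P = 0.5 * rho * C_D * A * v^3, in watts."""
    return 0.5 * rho * c_d * area * v ** 3

v = 40 / 3.6          # 40 km/h in m/s
rho, c_d = 1.225, 0.9  # sea-level air, assumed drag coefficient

p_upright = drag_power(rho, c_d, 0.50, v)  # assumed upright frontal area, m^2
p_tucked = drag_power(rho, c_d, 0.40, v)   # assumed tucked frontal area, m^2
print(f"{p_upright - p_tucked:.0f} W saved")  # ~76 W saved
```

Saving tens of watts for free is why riders contort themselves into these positions at all.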



In 2021, the Union Cycliste Internationale (UCI), the governing body of professional cycling, banned the “supertuck” position (sitting on the top tube, as seen in the “Froome” and “Top tube safe” positions) from races, deeming it too dangerous. An even more extreme and dangerous method of reducing frontal area can be seen in the following video.

Cymatics is a subset of vibrational phenomena describing the motion of a material under a vibrational signal. Typically a material (e.g. a liquid, a paste or a group of particles) is placed upon a plate of arbitrary shape. Different shapes are formed by the material as the surface of the plate is vibrated. The nature of the shape depends on the geometry of the plate and the driving frequency of the vibration.


The phenomenon was observed by the notable scientist Robert Hooke in 1680, when he saw nodal patterns emerge in flour as he applied a vibration across a glass plate. The method was improved upon by the German musician and physicist Ernst Chladni, who realised that such vibrations could be used to visualise the resonance of musical instruments. He achieved this by drawing a violin bow across a plate covered with a fine dust (sand or flour) until the plate reached resonance. The phenomenon can be explained using classical physics. When the plate is vibrated, the dust always travels to points of zero vibration, following the nodal lines of the standing waves set up in the plate. The dust moves away from the antinodes, where the standing wave amplitude is at a maximum, and toward the nodes, where it is at a minimum. The resulting figures are known as Chladni patterns, and they can take many different shapes depending on the frequency mode and the plate shape.
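For a square plate, a classic textbook approximation writes the standing-wave amplitude as a mix of two modes, and the sand settles where that amplitude is zero. A minimal numerical sketch (the mode numbers are illustrative, and a real plate's patterns also depend on how its edges are supported):

```python
import math

def chladni_amplitude(x, y, m, n):
    """Approximate standing-wave amplitude on a unit square plate for modes m, n.
    Sand collects where this amplitude is zero (the nodal lines)."""
    return (math.cos(n * math.pi * x) * math.cos(m * math.pi * y)
            - math.cos(m * math.pi * x) * math.cos(n * math.pi * y))

# The diagonal x == y is always a nodal line for this mode combination:
print(chladni_amplitude(0.3, 0.3, m=2, n=5))  # 0.0
# Off the diagonal the plate vibrates, so sand is pushed away:
print(chladni_amplitude(0.2, 0.7, m=2, n=5))
```

Evaluating this function over a grid and plotting its zero contours reproduces the familiar star- and grid-like figures, with higher mode numbers giving more intricate patterns.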


Two Chladni patterns on the same plate under different frequency modes


The applications of cymatics and Chladni patterns hold huge potential in the fields of contemporary music and art. For example, the 2022 Eurovision Song Contest uses cymatics in its logo and poster.

The principles of formation of Chladni patterns can also be extended to liquids and dusts in 3-D orientations. The following video is a perfect example of such, and the fusion of science and music.


Cymatics is certainly an interesting facet of physics that shows the link between sound and vibrational motion.


  1. Oxford Dictionary of Scientists, Oxford University Press, 1999, p. 101.
  2. T. Forrister, “How Do Chladni Plates Make It Possible to Visualize Sound?”, COMSOL Blog, 2018.

When it comes to science and the supernatural, the common consensus is that the two areas are polar opposites and will never meet. The area of quantum mechanics, which is approaching the ripe old age of 100 years, continues to provide a great deal of confusion within the scientific community as to what it is that makes such an utterly baffling concept possible.

Quantum in a Nutshell

Quantum mechanics is, at its core, an explanation of why particles act the way they do. In classical mechanics a wave always acts like a wave: it travels at a speed set by its medium, with some frequency and corresponding wavelength, and experiences effects such as refraction, reflection and diffraction. Similarly, a particle in classical mechanics always acts like a particle: a solid mass with a momentum. Quantum mechanics is used to explain the motion of particles that are so small they do not act like just a particle or just a wave, but as both.

Now you may be thinking “big deal, particle go brrr” but I can assure you it gets weirder. In order to illustrate why the scientific community was, and remains, so perplexed by this field, we must first observe the results of the double slit experiment, the first example of wave-particle duality.

The Double-Slit Experiment

First performed by Thomas Young in 1802, the double slit experiment, as the name suggests, uses two slits in the surface of a solid material to create an interference pattern from an incident beam of light. This experiment was revolutionary in demonstrating the physical properties of wave motion.

Figure 1: Young’s Double Slit Experiment. Credit: [1]
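The geometry of the pattern is captured by a simple formula: for small angles, the bright fringes on a screen a distance L from the slits are separated by Δy = λL/d, where d is the slit separation. A quick sketch with illustrative numbers (assumed for the example, not taken from Young's actual setup):

```python
wavelength = 550e-9   # green light, metres (illustrative)
slit_gap = 0.1e-3     # slit separation d, metres (illustrative)
screen_dist = 1.0     # slit-to-screen distance L, metres

fringe_spacing = wavelength * screen_dist / slit_gap
print(f"{fringe_spacing * 1e3:.1f} mm between bright fringes")  # 5.5 mm
```

Millimetre-scale fringes from sub-micrometre wavelengths are what made the pattern visible to Young with the equipment of 1802.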

In 1927, in an experiment performed at Western Electric by Clinton Davisson and Lester Germer, it was shown that electrons undergo diffraction and produce a diffraction pattern, confirming the hypothesis of wave-particle duality, a fundamental building block of modern-day quantum mechanics. In 1961, the double slit experiment was carried out with a beam of electrons instead of light to see if it would produce the same result. Sure enough, the electrons produced an interference pattern on the screen.
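Electrons can diffract because they carry a de Broglie wavelength λ = h/p. A minimal sketch for a 50 keV beam (a typical energy for the 1961 electron experiment, assumed here for illustration), ignoring relativistic corrections:

```python
import math

H = 6.626e-34         # Planck's constant, J*s
M_E = 9.109e-31       # electron mass, kg
E_CHARGE = 1.602e-19  # joules per electronvolt

energy_j = 50e3 * E_CHARGE                # 50 keV in joules
momentum = math.sqrt(2 * M_E * energy_j)  # non-relativistic p = sqrt(2mE)
wavelength = H / momentum                 # de Broglie wavelength, metres

print(f"{wavelength * 1e12:.1f} pm")  # a few picometres
```

A wavelength of a few picometres, roughly 100,000 times shorter than visible light, is why electron interference demands such extraordinarily fine slits.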

So the electron version of Young’s experiment shows us that electrons behave like waves. But we know that electrons interact with other particles the way a particle would. This wave-particle duality is what gives rise to quantum-mechanical theory. An electron may act as either a particle or a wave at any given time. The real question is: how does an electron know when it has to act like one or the other? A common thought experiment places a detector at the two slits so that the electron is observed going through them. Hypothetically, the interference pattern would no longer appear on the other side, since the electron is observed passing through the slit as a particle and must therefore continue to act like a particle. Richard Feynman used this thought experiment to argue that an electron must always act like a wave in this circumstance, since the experiment cannot actually be performed at such an impossibly small scale ([2] Harrison, 2006).

Varying Interpretations

Feynman’s thought experiment can cause some confusion as it gives the impression that an electron could choose to act like a particle at the slit because it somehow knows it is being observed. Similar to other natural phenomena, if left unexplained by physics, many will jump to believe that a higher power could control such behaviour. Science has always been used to explain natural phenomena that were previously considered to be acts of witches, gods or the supernatural.

The idea that a particle could be granted some form of consciousness by a higher power is something that would very much excite those who believe in or are searching for a god. Simply put, however, that’s not how it works. The particle does not “choose” its state, it simply is. If placed in a condition where a wave would experience a certain effect (as in the double slit experiment), it will do so and will undergo the same process as anything else with wave motion, much as a non-Newtonian fluid under stress alters its form to adapt to its surroundings. An electron is still a particle but simply acts like a wave sometimes.

The probabilistic nature of quantum mechanics does give rise to theories about alternate worlds in which fundamental particles act like particles where, in our world, they would act like waves. This “Many Worlds” theory is more the stuff of science fiction, as there is truly no way for it to be proven. The fact that it cannot be disproven, however, is an interesting notion that finds itself appearing more in cinemas than in the lab.

The Final Message

The scientific community’s interpretations of quantum mechanics vary from the perfectly normal to the apparently bizarre. The notion that this field of study could prove the existence of parallel dimensions seems to be pulled directly out of science fiction. The idea that a particle can choose its state of being gives rise to a plethora of philosophical and potentially religious questions.

Quantum mechanics is clearly one of the most vital areas of physics today, and the sooner we can come to an explanation that can be understood by all, the better.



[1] Dr. Doug Davis, Adventures in Physics, 20.2 Young’s Double Slit, Eastern Illinois University.

[2] David M Harrison, 2006, The Feynman Double Slit, Dept. of Physics, University of Toronto.

The northern lights, also known as the aurora borealis, are stunning luminous phenomena visible around the Earth’s North Pole. Around the South Pole they are called the southern lights, or aurora australis. The colorful dance of these lights in the night sky has fascinated humans for millennia. Once thought to be spiritual entities, these magical events can today be explained by physics.

Figure 1: The Northern Lights, Alaska, night of Feb. 16, 2017. Credit: NASA/Terry Zaperach. [1]

The name aurora borealis was coined by the Italian astronomer Galileo Galilei in 1619, after Aurora, the Roman goddess of dawn, and Boreas, the Greek god of the north wind. The earliest reliable account of the aurora borealis comes from Babylon, in an astronomical notebook dated 567 B.C.: the Babylonian observer recorded seeing a “red glow” on the night of 12/13 March. This observation, part of a series of astronomical records, was made when the geomagnetic latitude of Babylon was about 41°N, compared with the present value of 27.5°N, giving reason to believe that auroras occurred there more often in 567 B.C. than they do today. In the 20th century, the Norwegian scientist Kristian Birkeland proposed a scientific explanation for the phenomenon: the display of coloured lights is a consequence of the interaction of the solar wind with the Earth’s magnetic field and atmosphere.

The Earth can be approximated as a magnetic dipole, with a magnetic south pole near the geographic North Pole and a magnetic north pole near the geographic South Pole [Fig. 2]. The Earth’s magnetic field forms an envelope around our planet, the magnetosphere, and its strength at the surface varies roughly between 25,000 and 65,000 nT (0.25 – 0.65 Gauss) depending on location. The magnetosphere protects the planet from high-energy particles carried by the solar wind.

Figure 2: Earth’s magnetic field. [2]
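The quoted surface range follows directly from the dipole model: the field magnitude at the surface is B = (μ₀m/4πR³)·√(1 + 3sin²λ), where λ is the magnetic latitude. A minimal sketch, using the commonly quoted dipole moment of about 8.0×10²² A·m² (an assumed value, not taken from this article):

```python
import math

MU0_OVER_4PI = 1e-7     # T*m/A
DIPOLE_MOMENT = 8.0e22  # Earth's magnetic dipole moment, A*m^2 (approximate)
EARTH_RADIUS = 6.371e6  # m

def surface_field_nT(mag_latitude_deg):
    """Dipole-model field strength at Earth's surface, in nanotesla."""
    lat = math.radians(mag_latitude_deg)
    b = (MU0_OVER_4PI * DIPOLE_MOMENT / EARTH_RADIUS ** 3
         * math.sqrt(1 + 3 * math.sin(lat) ** 2))
    return b * 1e9

print(f"equator: {surface_field_nT(0):.0f} nT")   # ~31,000 nT
print(f"pole:    {surface_field_nT(90):.0f} nT")  # ~62,000 nT, twice the equator
```

The dipole model reproduces the roughly factor-of-two difference between equatorial and polar field strengths quoted above.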

The solar wind is produced by plasma escaping from the Sun’s corona (its outermost atmosphere). The plasma at the Sun’s surface is heated continuously, up to the point where the Sun’s gravity can no longer hold it down. Strong solar eruptions, called solar flares, and coronal mass ejections produce streams of high-energy particles which flow through the solar system and reach the Earth. Here, electrons and protons are drawn into the magnetic field, moving along its field lines towards the poles. Since the Earth’s magnetic field lines enter and exit the planet at the poles, the shielding against high-energy particles is weakest there, and particles are able to penetrate the atmosphere. The aurora forms in the region between 100 and 500 km above the ground.

Figure 3: Interaction between solar wind and Earth’s magnetic field. (Credit: NASA). [3]

Electrons from the solar wind collide with oxygen and nitrogen in the ionosphere, exciting them (and sometimes knocking electrons out of their shells to form ions). The excited oxygen and nitrogen then release energy in the form of light to return to a stable state. The aurora’s colours depend on the wavelength of the light emitted and hence on the type of gas that is excited. For instance, atomic oxygen (O) in the higher layers of the atmosphere is responsible for the red colour, whereas oxygen at lower altitudes produces the most commonly seen green colour. Nitrogen is hit more rarely and produces pink and dark red light.
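Each auroral colour corresponds to a fixed photon energy, E = hc/λ. A quick sketch for the two standard oxygen emission lines (557.7 nm green and 630.0 nm red are textbook values, not figures taken from this article):

```python
H = 6.626e-34         # Planck's constant, J*s
C = 2.998e8           # speed of light, m/s
E_CHARGE = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    """Photon energy in electronvolts for a given wavelength in nanometres."""
    return H * C / (wavelength_nm * 1e-9) / E_CHARGE

print(f"green O line (557.7 nm): {photon_energy_ev(557.7):.2f} eV")  # ~2.22 eV
print(f"red O line   (630.0 nm): {photon_energy_ev(630.0):.2f} eV")  # ~1.97 eV
```

These few-electronvolt energies match the gaps between excited states of oxygen, which is why each gas paints the sky in its own characteristic colours.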

Figure 4: Colorful aurora taken in Delta Junction, Alaska, on April 10, 2015.
Credit: Image courtesy of Sebastian Saarloos. [4]




[3] Lagrangian Coherent Structures in Ionospheric-Thermospheric Flows, Scientific Figure on ResearchGate. Available from: [accessed 12 May, 2022]