In 1990, we launched the Hubble Space Telescope into orbit as the first sophisticated orbital observatory. This was an incredible engineering achievement, and it allowed us to study things never before possible. Its high resolution spectrograph lets us observe and record ultraviolet light that could never make it through the earth’s atmosphere. This is hugely impactful to our observations and allows us to see the universe more clearly than ever before.

Upon its launch, the telescope was malfunctioning and ineffective at making precise recordings, but through multiple servicing missions and spacewalks, the telescope became fully functional and met its full potential. With fully functioning parts, we were able to make some remarkable discoveries. Through the observation of nearby Cepheid variable stars, we were finally able to make an accurate calculation of the Hubble constant. While this had been estimated previously, we now had a reliable measurement of the universe’s rate of expansion. Not only did we pin down values of important constants, we were able to get a clearer picture of the universe’s history as a whole. In the Hubble Deep Field, a photo including over 1,500 galaxies, we saw some of the “story” of the universe.
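The Hubble constant itself comes from a simple ratio: a galaxy’s recession velocity divided by its Cepheid-calibrated distance. A minimal sketch of the idea, using made-up illustrative numbers rather than real measurements:

```python
# Sketch: H0 = v / d for each galaxy, then average.
# The galaxy values below are illustrative, not real data.

def hubble_constant(velocity_km_s, distance_mpc):
    """Return an H0 estimate in km/s/Mpc from one galaxy."""
    return velocity_km_s / distance_mpc

# Hypothetical sample: (recession velocity in km/s, Cepheid distance in Mpc)
galaxies = [(1400, 20.0), (2160, 30.0), (3500, 48.0)]

estimates = [hubble_constant(v, d) for v, d in galaxies]
h0 = sum(estimates) / len(estimates)
print(f"H0 ~ {h0:.1f} km/s/Mpc")
```

In practice the hard part is the distance: Cepheids’ period–luminosity relation is what turns an observed pulsation period into a distance in the first place.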

Hubble Ultra Deep Field | ESA/Hubble

Hubble Deep Field

This telescope was a huge success. It far outlived its expected lifespan and brought numerous incredible discoveries to mankind. So yes, the Hubble telescope was potentially the most important advancement in the study of the universe to date. Now that the James Webb Space Telescope has taken over the mission, we can fully appreciate the impact the Hubble Space Telescope has had on our understanding of the universe.

Mercury, Venus, Mars, Jupiter and Saturn are visible in the night sky. They were known by the ancient Babylonians and their current names are derived from Roman gods. Uranus was first sighted in 1690 and recognised as a planet almost a century later, the first discovered since ancient times. Neptune was predicted mathematically, based on its gravitational effects, leading to its observation in the 1840s, along with a whole host of other solar system objects.


It wasn’t until 1930 that Pluto was discovered. Using a blink comparator to scan the night sky for small changes in position, 23-year-old Clyde Tombaugh took ten months to find it, and the discovery made headlines around the world. Initial proposals were to name it after Percival Lowell, the observatory’s founder, or his wife. It was an 11-year-old English schoolgirl, Venetia Burney, who proposed Pluto. She had been learning about the Romans and Greeks, and a classical name was deemed appropriate. Like the god of the underworld, Pluto sits alone in a cold, dark and distant realm.


Pluto was thought to be the mysterious Planet X, responsible for perturbations in Neptune’s orbit, and several times the mass of the Earth. However, this was clearly not the case, as 1950s observations by Gerard Kuiper showed that it had a much smaller radius than the Earth. Later it was found that Pluto is highly reflective, and if it were as big as previously thought, it should be incredibly bright. Finally, the discovery of Pluto’s moon Charon in the ‘70s determined Pluto’s size once and for all: a radius of about 1,200 km and 0.2% the mass of the Earth, nowhere near as large as originally proposed.


In the 1990s, many objects similar to Pluto in size and location became known, and its classification as a planet grew controversial. The discovery of Eris, with a greater mass than Pluto, led to a desire to formally define what a planet is. In 2006, the International Astronomical Union defined a planet as follows:


  1. It orbits the Sun.
  2. It has formed a spherical shape under its own gravity.
  3. It has cleared its neighbourhood of bodies of comparable size, due to its own gravitational dominance.


Unfortunately, Pluto fails to meet the third criterion, making up only a fraction of the total mass of all the objects in its orbit. Thus, it was stripped of its status as a planet. A new designation, “dwarf planet”, was created for Pluto, along with Eris and several other large non-planets. Between its confirmation as a planet and the point at which it ceased to be one, Pluto had completed only a fraction of its 250-year orbit around the sun.


In 2015, the New Horizons spacecraft came within 12,500 km of Pluto, sending back stunning images of its surface. 

One of the biggest threats to the world’s telecommunications infrastructure is large emissions of radiation and magnetic energy from solar flares and coronal mass ejections (CMEs), also known as solar storms. As human civilization has become more and more dependent on the internet and technological infrastructure, the rare occurrence of severe space weather events poses a much larger threat to industry and human civilization as a whole than ever before. Researcher Sangeetha Abdu Jyothi of the University of California, in her research paper, termed the impact of a solar superstorm event the ‘Internet Apocalypse’, examining the worst-case scenario of global internet outages from electronic systems damaged by rare solar superstorms.

The unique behaviour of the sun’s magnetic field gives rise to the ejection of radiation, particles, and matter from the surface of the sun, called space weather. The sun is made up of plasma, which is an extremely hot gas of ionised particles. The magnetic field of the sun is created in a system called the solar dynamo [2], where the motion of the electrically charged plasma in a magnetic field induces a current, which in turn generates more magnetic field [10]. Astrophysicists have deduced the shape of the magnetic fields at the surface of the sun by examining the motion of the plasma in coronal loops in the sun’s atmosphere.

Image 1: Corona loops of plasma at the surface of the sun

Sunspots are regions of relatively lower temperatures on the surface of the sun where very strong magnetic fields prevent heat from within the sun from reaching the surface. In these regions, strong magnetic fields become entangled and reorganized. This causes a sudden explosion of energy in the form of a solar flare often accompanied by a coronal mass ejection, which is the ejection of electrically charged solar matter from the sunspot [3].

Some of the electromagnetic energy released by the flares, in the form of x-rays, and ejected particles can reach the earth. However, the earth has its own protective mechanisms against the regular occurrence of mild solar flares and CMEs. The upper layers of the earth’s atmosphere absorb the influx of x-rays. The earth is also surrounded by its own magnetic field, called the magnetosphere, which acts as a protective shield against the ejected solar matter from a CME that reaches the earth. Therefore, telecommunication infrastructure on the surface of the earth avoids the harmful effects. However, telecommunications satellites and GPS satellites further away from the earth’s surface are left more exposed and have been damaged or rendered inoperable due to solar flares [4]. Human health can also be compromised by direct exposure to harmful radiation and high energy particles emitted due to solar activity. On the surface of the earth, we are shielded from the harmful effects of space weather, however, astronauts in space must use special protective gear due to the extra exposure.

One of the positive side effects of space weather interacting with the earth is the spectacular display of the aurorae – the aurora borealis and aurora australis, more commonly known as the northern and southern lights – where charged particles become trapped in the earth’s magnetosphere and accelerate towards the earth’s poles. They collide with atoms and molecules in the earth’s atmosphere, releasing bursts of light and a colourful display in the night sky [5].

Image 2: The deflection of coronal mass ejections by the earth’s magnetic field

Image 3: The Aurora Borealis

More worryingly, there is the unlikely chance of a large-scale coronal mass ejection striking the earth in its direct path causing widespread damage to electrical infrastructure even on the surface of the earth. Such an event has been named a ‘solar superstorm.’ The enormous ejection of electrically charged solar matter causes shock waves in the magnetosphere and releases its energy toward the earth in a geomagnetic storm. As the earth’s magnetic field varies, electric currents are induced on the earth’s conducting surfaces by electromagnetic induction. These are called geomagnetically induced currents (GIC) [1]. This, in turn, induces electrical currents in the power grid and other grounded conductors, potentially destroying the electrical transformers and repeaters which keep the power grid running and damaging the vast network of long-distance cables which provide internet.

There has also been growing concern about the weakening of the earth’s magnetic field over the past few centuries, with some physicists believing it is because of the long overdue flip of the earth’s magnetic poles, something which occurs around every 200,000 years, but has not happened in over 750,000 years. This could potentially leave humans and telecommunication infrastructure on earth more exposed to more moderate and frequent space weather events.

The last large-scale geomagnetic storm, called the Carrington Event, was recorded in September 1859. Its main impact was on the mode of telecommunication at the time, the telegraph network, with reports of telegraph wires catching fire, operators receiving electrical shocks, and messages being sent even when equipment was disconnected from power. The CME was so strong that auroras could be seen from as far south as the Caribbean! In March 1989, magnetic disturbances caused by a strong solar storm wiped out the entire electrical grid in the Canadian province of Quebec [7]. Of course, since 1859, modern civilisation has become very dependent on electrical infrastructure to provide homes and businesses with power and internet for our constant connectivity demands, so a storm on the scale of the Carrington Event could have catastrophic implications for the world’s economy and society in general. A study by the National Academy of Sciences estimated that the damage caused by a Carrington-like event today could cost over $2 trillion and take multiple years to repair [8]. By analysing the records of solar storms over the past 50 years, Peter Riley of Predictive Science Inc. calculated that the probability of such an event happening in the next 10 years is 12%.
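That per-decade figure compounds over longer horizons. Assuming the 12% estimate holds and each decade is independent (a back-of-the-envelope sketch, not a claim from Riley’s paper), the chance of at least one such storm in 50 years is:

```python
# Probability of at least one Carrington-scale storm in 5 decades,
# assuming 12% per decade and independence between decades.
p_decade = 0.12
p_50yr = 1 - (1 - p_decade) ** 5
print(f"P(at least one in 50 years) ~ {p_50yr:.0%}")
```

That works out to roughly a coin flip over half a century, which is why grid operators take the scenario seriously.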

So what can be done to minimise the damage caused by a large-scale geomagnetic storm? As the sun is just coming out of a period of inactivity in its solar cycle, we have not yet experienced a significant solar storm that tests the resilience of modern technological infrastructure. However, we now have a series of satellites which monitor solar activity, such as NASA’s Advanced Composition Explorer [9], giving forewarning of a large incoming solar storm, which takes at least 13 hours to reach earth. This gives power grid operators enough time to shut down their stations and minimise the damage caused as it passes. Even with this precaution, an unprecedented Carrington-like event would likely cause widespread damage to the earth’s telecommunications and internet infrastructure, so better damage prevention and recovery plans will need to be put in place to ensure the maintenance of the vital technological systems that the world’s population depends on so heavily.



[1] Sangeetha Abdu Jyothi. 2021. Solar Superstorms: Planning for an Internet Apocalypse. In ACM SIGCOMM 2021 Conference (SIGCOMM ’21), August 23–27, 2021, Virtual Event, USA. ACM, New York, NY, USA, 13 pages.

[2] NASA. 2022. Understanding the Magnetic Sun. [online] Available at: <>.

[3] 2022. Sunspots and Solar Flares | NASA Space Place – NASA Science for Kids. [online] Available at: <>.

[4] Encyclopedia Britannica. 2022. How solar flares can affect the satellites and activity on the surface of the Earth. [online] Available at: <>.

[5] NASA. 2022. Aurora: Illuminating the Sun-Earth Connection. [online] Available at: <>.

[6] 2022. [online] Available at: <>.

[7] NASA. 2022. The Day the Sun Brought Darkness. [online] Available at: <>.

[8] 2022. Near Miss: The Solar Superstorm of July 2012 | Science Mission Directorate. [online] Available at: <>.

[9] O’Callaghan, J., 2022. New Studies Warn of Cataclysmic Solar Superstorms. [online] Scientific American. Available at: <>.

[10] Paul Bushby, Joanne Mason. Understanding the Solar Dynamo. Astronomy & Geophysics, Volume 45, Issue 4, August 2004, Pages 4.7–4.13.

[11] 2020. Could Solar Storms Destroy Civilisation? Solar Flares & Coronal Mass Ejection [online] Available at: <>.

Image Sources

Image 1: Vatican Observatory. 2022. Coronal Loops on the Sun – Vatican Observatory. [online] Available at: <>.

Image 2 & 3: GoOpti low-cost transfers. 2022. Aurora Borealis: where to see the Northern Lights in 2021?. [online] Available at: <>.

Featured image: Cassini’s narrow angle camera captures three images of Epimetheus (the smaller moon) passing between the spacecraft and Janus, on 14 February 2010 [4].

Two of Saturn’s many moons, Janus and Epimetheus, have an interesting relationship with each other. They are both small rocky worlds, only 196 km and 135 km across at their widest [1,2], and both have nearly circular orbits at a distance of about 151,000 km from the centre of Saturn. If you had observed where these moons were in the Saturn system last year, in 2021, you would have found that Epimetheus was about 50 km closer to Saturn than Janus, its slightly smaller orbit lying inside that of Janus. But if you were to look again next year, in 2023, this would no longer be the case: Epimetheus will be further from Saturn than Janus and its orbit will be completely outside Janus’. This is because in 2022, Janus and Epimetheus will do something that they only do once every four years – swap orbits [5].

This interaction took place multiple times while NASA’s Cassini spacecraft was investigating the Saturn system before the mission ended in 2017. It first imaged the moons in 2005 two months before the moons switched places [3], and the featured image of this article is a close approach of the two moons that the spacecraft captured in 2010.

How does this switching of places between these moons work exactly? To explain it, it’s useful to think about some of the basic physics of orbits.

Suppose we have a rocket in a perfectly circular orbit around a planet (figure 1). If we point the rocket in the direction it’s moving in, and fire the engine for a short amount of time, it will of course speed up. As well as that, the extra energy given to the rocket from firing its engine increases the size of the orbit. The orbit will be stretched from a circle into an ellipse. The point in the orbit directly opposite the rocket, 180 degrees or half an orbit away, will move away from the planet. This point in the orbit, that is now the furthest point from the planet, is called the apoapsis. The point where the rocket fired its engine is now the closest point in the orbit to the planet, and is called the periapsis.

At the periapsis the rocket is now moving faster than it was in the circular orbit, since it sped up by firing the engine. But as it moves along the orbit, getting further away from the planet, it will slow down. It will be travelling slowest at apoapsis. When it passes this point and starts to get closer to the planet again, its speed will increase once more. This is all a result of Kepler’s second law of planetary motion. Without going into detail, it basically says that the further an orbiting object is from the object that it is orbiting, the more slowly it moves.
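This can be made concrete with the vis-viva equation, v² = μ(2/r − 1/a), which relates speed to position on the orbit. A sketch for a hypothetical rocket in low orbit (μ is Earth’s gravitational parameter; the altitude and burn size are illustrative):

```python
import math

MU = 3.986e14  # Earth's gravitational parameter, m^3/s^2

def orbit_after_boost(r, dv):
    """Start from a circular orbit of radius r (m), apply a short prograde
    burn of dv (m/s). Returns (r_periapsis, r_apoapsis, v_peri, v_apo)."""
    v = math.sqrt(MU / r) + dv       # speed right after the burn
    a = 1 / (2 / r - v**2 / MU)      # vis-viva rearranged for semi-major axis
    r_apo = 2 * a - r                # the burn point becomes the periapsis
    v_apo = v * r / r_apo            # angular momentum conservation
    return r, r_apo, v, v_apo

rp, ra, vp, va = orbit_after_boost(6778e3, 100)  # 400 km altitude, 100 m/s burn
print(f"apoapsis raised to {ra/1e3:.0f} km; {vp:.0f} m/s at periapsis, {va:.0f} m/s at apoapsis")
```

As the text describes, the rocket is fastest at the point where it burned (now periapsis) and slowest at the raised apoapsis on the far side.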

Figure 1: (a) A rocket in a perfectly circular orbit around a planet, arrows indicating the direction it orbits in. (b) The rocket fires its engine for a short enough time that it doesn’t move too far in its orbit. (c) After it has finished firing the engine, the rocket is in an elliptical orbit, moving fastest at periapsis and slowest at apoapsis. Diagrams by yours truly.

Now what about these moons of Saturn that swap orbits? Suppose they start out with the situation in figure 2, with the orbit of one moon being slightly outside the orbit of the inner moon (since they switch, either moon can be Janus or Epimetheus). Now another of Kepler’s laws, the third one, essentially says that the larger an orbit is, the more time it takes the object to go around once. But this is not only because the orbiting object has more distance to travel, it’s also because it moves more slowly. If you google the average orbital velocities of the planets, you’ll see that they decrease as the planets get further from the Sun, Mercury moving the fastest and Neptune the slowest.

Figure 2: Janus and Epimetheus, either being the inner or outer moon, have roughly a 50km difference in their orbits. The inner moon in red, moves slightly faster than the outer moon in blue, meaning as time passes it gets further and further ahead of the outer moon. Distances/sizes not to scale.

So the outer moon in the slightly larger orbit moves a little more slowly than the inner moon in the smaller orbit. This inner moon gradually moves further and further ahead of the outer, slower one. The inner moon will take 4 years to “lap” the outer moon, i.e. to go all the way around and start catching up on the outer moon from behind, as seen in figure 3 when the moons are at position 1.
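Kepler’s third law makes that four-year lap time easy to estimate. Using Saturn’s gravitational parameter and round numbers from the text (~151,000 km orbits separated by ~50 km):

```python
import math

MU_SATURN = 3.793e16  # Saturn's gravitational parameter, m^3/s^2

def period(a):
    """Kepler's third law: orbital period (s) for semi-major axis a (m)."""
    return 2 * math.pi * math.sqrt(a**3 / MU_SATURN)

a_inner = 151_000e3          # m
a_outer = a_inner + 50e3     # 50 km farther out

t_inner = period(a_inner)
t_outer = period(a_outer)

# Synodic period: how long the inner moon takes to lap the outer one
lap = t_inner * t_outer / (t_outer - t_inner)
print(f"orbit ~ {t_inner/3600:.1f} h, lap time ~ {lap/86400/365.25:.1f} years")
```

Each orbit takes under 17 hours, and the periods differ by under a minute, which is why it takes roughly four years for the inner moon to gain a full lap.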

They never actually get closer than about 15,000 km from each other [2]. But what happens is that the two attract each other gravitationally. The inner moon, being behind the outer moon, is pulled forward in its orbit by gravity. The outer moon, being in front of the inner one, experiences the same force pulling it backward.

Figure 3: What WOULD happen if the gravitational force between the two moons were a short burst like that from a rocket, force F in the diagram. The moons start out on the black orbits. The inner moon is accelerated so that it ends up on the red orbit, with its new apoapsis above its old orbit (red dot, position 2). The outer moon is decelerated so that its new periapsis is below its old orbit (blue dot, position 2). Distances/sizes once again not to scale.

Just like the rocket in figure 1, the inner moon has a force pulling it forward, in the direction it is orbiting, so its orbit is stretched such that the point directly opposite it moves outward, the red dot in position 2, figure 3. Similarly, the outer moon is pulled back by the inner moon, so a force is pushing it in the opposite direction to its motion, so it slows down. What happens in that situation is the opposite of figure 1 – the point directly opposite the moon in its orbit will move inward. That point, the blue one at position 2 in figure 3, will now be the closest point in the outer moon’s orbit to Saturn, or periapsis.

If these gravitational forces were just short lived boosts to the moons’ speeds, as if they had rocket engines on them, their orbits might look like what’s seen in figure 3. But this is not exactly the case. Technically gravitational forces extend to infinity, so there is no sharp start or stop point for the moons’ interaction. During their close encounter, they are continuously attracting each other. The outer moon is continuously being slowed by the inner moon behind it. Its orbit is continuously being decreased in size so that in the end, it is on a nearly circularly orbit closer to Saturn than the used-to-be inner moon. Similarly, the inner moon’s orbit is continuously being increased by the forward pull of the outer moon so that it ends up on a nearly circular orbit further from Saturn than the used-to-be outer moon.

So now the inner moon is the outer moon, and the outer moon is the inner moon! The new inner moon, being on a smaller orbit, moves more quickly and gets ahead, as predicted by Kepler’s third law. Now the moons can return to figure 2 where the whole process can start over again, only with the moons switched.

One other thing to note: Janus has about four times the mass of Epimetheus [3]. This means when the switching of orbits occurs, the change in Janus’ orbit radius will be less than the change in Epimetheus’ orbit radius. The gravitational forces attracting the moons during their close approach are equal. The same force on Janus, the heavier moon, will mean less acceleration – heavier objects require more force to accelerate them at the same rate. Less acceleration ultimately means less of an increase or decrease in the size of Janus’ orbit after the encounter.
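Because the forces are equal and opposite, the radius changes split inversely with mass. A toy calculation, assuming the ~4:1 mass ratio from the text and an illustrative combined shift of 100 km (the real figures differ):

```python
mass_ratio = 4.0       # Janus is roughly 4x as massive as Epimetheus
total_shift = 100e3    # assumed combined change in the two orbital radii, m

# Equal and opposite forces: each moon's orbit shift is inversely
# proportional to its mass, so Epimetheus takes 4/5 of the total.
shift_epimetheus = total_shift * mass_ratio / (1 + mass_ratio)
shift_janus = total_shift / (1 + mass_ratio)
print(f"Epimetheus moves {shift_epimetheus/1e3:.0f} km, Janus only {shift_janus/1e3:.0f} km")
```

So the lighter moon does most of the moving, while Janus’ orbit barely budges.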

Janus and Epimetheus are the only known example of this particular kind of “co-orbital relationship” in our solar system [1], and an example of some of the strange things that are allowed by the laws of orbital mechanics.


[1] “Epimetheus”, NASA Science, solar system exploration, last updated December 19 2019

[2] “Janus”, NASA Science, solar system exploration, last updated December 19 2019

[3] Emily Lakdawalla, “The Orbital Dance of Epimetheus and Janus”, Planetary Society, February 7 2006

[4] “Cruising Past Janus”, NASA Science, solar system exploration, last updated October 4 2018

[5] “Janus Moon and its Dance around Saturn: The Co-Orbitals”, The

At this stage Albert Einstein is a household name across the globe, his name being synonymous with the word ‘genius’. His theories and thought experiments have had an immense impact on our understanding of physics, and he seemed able to imagine ideas that no one else possibly could. This post tells the story of how, in 1929, Einstein retracted one of his theories – calling it the “biggest blunder” of his life.

Einstein had included in his equations of gravity what he called ‘the cosmological constant’, a constant represented by capital lambda, which allowed him to describe a static universe. This model of the universe complied with what was the generally accepted theory at the time in 1917, that the universe was indeed stationary.

Then, in 1929, Edwin Hubble (for whom the Hubble telescope is named) presented convincing evidence that the universe is in fact expanding. This caused Einstein to abandon his cosmological constant (i.e. presuming its value to be zero), believing it to be a mistake.

But that wasn’t the end of the story. Years went by and physicists repeatedly inserted, removed and reinserted lambda into the equations describing the universe, unable to decide whether or not it was necessary. Finally, in 1997/8, two teams of astronomers, one led by Saul Perlmutter, published papers outlining the need for Einstein’s cosmological constant.

Through their analysis of the most distant supernovae ever observed – one of which was SN1997ap – and their redshifts, they had reached the conclusion that the distant supernovae were roughly fifteen percent farther away than where the prior models placed them. This could only mean they were accelerating away from us. The only known thing that ‘naturally’ accounts for this acceleration was Einstein’s lambda, and so it was reinserted into Einstein’s equations one last time. Einstein’s equations now perfectly matched the observed state of the universe.

So while Einstein’s initial use for the cosmological constant was incorrect, it proved vital to forming an accurate picture of our world. The great theorist had once again foreseen a factor no one else could – this time a good 70 years before anyone, including himself, was able to prove it.

With the success of detecting gravitational waves, a new branch of gravitational astronomy emerged, allowing us to explore the universe in a radical new way. While detectors like LIGO can successfully detect waves from stellar mass mergers with frequencies on the order of 100 Hz, they don’t have the sensitivity to probe more interesting, lower frequency waves from sources such as galaxy mergers or the gravitational-wave background. To increase sensitivity, the scale of the detector must increase. This imposes a serious limitation to earth-based detectors. Future projects such as LISA, a space-based interferometer, will greatly increase the scale, with promises of millihertz detection. However, there is another method, Pulsar Timing Arrays, that can effectively create a galaxy-sized detector, allowing us to probe the most exotic gravitational waves in the nanohertz regime.

Neutron stars are stellar remnants left over from the cataclysmic deaths of 10 – 25 solar mass stars. They are extremely dense, at around 1 solar mass with only a 10 km radius. From angular momentum conservation, these neutron stars can reach incredibly fast rotational rates. The fastest ever recorded reached over 700 revolutions per second, translating to an equatorial velocity of nearly 25% of light speed! Neutron stars slowly convert their rotational energy into magnetic dipole radiation. This radiation (predominantly radio waves) is channelled via powerful, billion Tesla magnetic fields into tight beams coming from each pole. If the magnetic poles are misaligned with the rotational axis, a distant observer may see a pulsar, characterised by a “lighthouse” effect as the beams pulse past the observer in extremely regular intervals.
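That equatorial speed follows directly from v = 2πrf. Assuming a radius of 16 km (neutron star radii are typically estimated at 10–16 km) and the record 716 Hz spin:

```python
import math

f = 716        # spin frequency of the fastest known pulsar, Hz
r = 16e3       # assumed equatorial radius, m
c = 2.998e8    # speed of light, m/s

v = 2 * math.pi * r * f   # equatorial speed
print(f"Equatorial speed ~ {v/c:.0%} of c")
```

With a smaller assumed radius of 10 km the figure drops to about 15% of c, so the exact fraction depends on the poorly known radius.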

A pulsar timing array (PTA) uses several such pulsars scattered throughout the Milky Way galaxy and searches for tiny deviations in the recorded pulses, potentially caused by gravitational wave distortion. By analysing correlated deviations across several pulsars, extremely low frequency gravitational waves can be detected, and their source location estimated. There are many potential phenomena that could be observed using PTAs that otherwise would be impossible to detect. While detectors like LIGO are limited to stellar mass mergers, the main targets for PTAs are supermassive black hole binaries formed from galaxy mergers. Almost all large galaxies have supermassive black holes at their centres. When these galaxies merge, the black holes orbit each other at vast distances, producing very low frequency gravitational waves. By studying the properties of these waves, novel information can be discovered about the distribution of galaxies and their formation and evolution throughout the universe [1, 2].
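The “correlated deviations” have a specific expected signature: for an isotropic gravitational-wave background, the correlation between two pulsars’ timing residuals depends only on the angle between them, following the Hellings–Downs curve. A sketch of that function:

```python
import math

def hellings_downs(zeta):
    """Expected correlation between the timing residuals of two pulsars
    separated by angle zeta (radians), for an isotropic GW background."""
    x = (1 - math.cos(zeta)) / 2
    if x == 0:
        return 0.5          # limit as the separation goes to zero
    return 1.5 * x * math.log(x) - x / 4 + 0.5
```

The curve starts at 0.5 for nearby pulsar pairs, dips negative near 90° separation, and recovers to 0.25 for pulsars on opposite sides of the sky; finding this pattern across many pairs is what would distinguish a genuine background from noise.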

A more ambitious goal of PTAs is to detect the gravitational wave background (or stochastic background): a superposition of randomly emitted waves from innumerable independent sources scattered throughout cosmic space and time. The information contained in this background has large potential for cosmology. It could prove the existence of cosmic strings, 1D topological defects formed in the early universe, predicted by both quantum field theory and string theory [3]. Detection of these objects could further test and differentiate between these theories. Another interesting target is primordial gravitational waves, created by cosmic inflation a fraction of a second after the big bang [4]. Currently, we can only use electromagnetic radiation to probe back in time to the age of the cosmic microwave background, around 400,000 years after the big bang. No electromagnetic radiation can be detected from before this time, as the universe was opaque to light. Primordial gravitational waves don’t have these limitations and are one of the most promising ways to look back further, to the very beginning of the universe.

There are currently three well-established PTA projects in the world. Europe’s EPTA, Australia’s PPTA and the North American NANOGrav. Combined they form the International Pulsar Timing Array (IPTA), using an array of over a hundred pulsars scattered throughout the Milky Way. No direct measurement of gravitational waves has yet been achieved, but upper limits to the gravitational wave background amplitude have already significantly constrained galaxy formation and evolution models.

The current limitation of PTAs is our radio detectors. Current telescopes struggle to isolate deviations in pulsar timings from the intrinsic noise in the observed radio spectrum. However, there are many exciting projects planned in the upcoming decades such as the Square Kilometre Array, that will drastically improve radio wave detection. With these next-generation detectors, and with more pulsars being discovered every year, Pulsar Timing Arrays will become very powerful detectors that will very likely revolutionise our understanding of the universe.


[1] arXiv:2002.01954v2 [astro-ph.IM] 20 Mar 2020

[2] arXiv:1802.05076v1 [astro-ph.IM] 14 Feb 2018

[3] X. Siemens, V. Mandic and J. Creighton, Phys. Rev. Lett. 98, 111101 (2007) doi:10.1103/PhysRevLett.98.111101 [astro-ph/0610920].

[4] A. A. Starobinsky, JETP Lett. 30, 682 (1979) [Pisma Zh. Eksp. Teor. Fiz. 30, 719 (1979)].


With the recent imaging of Sagittarius A* (Sgr A*), the world was given the first image ever captured of the black hole at the center of our galaxy, the Milky Way. First predicted by Albert Einstein’s general theory of relativity in 1916, black holes have been among astrophysics’ hardest-to-observe phenomena; the astrophysical community’s interest in them was sparked by Jocelyn Bell Burnell’s discovery of neutron stars in 1967. These remnants of gravitationally collapsed stars could not be directly observed until 2019, but were commonly identified by the effects they have on stellar objects close to them, such as the bending of light known as “gravitational lensing”. Larger black holes, such as Sgr A*, could also be indirectly observed through the paths of stars as they travelled around them.

Fig. 1: Animation of stars orbiting around Sgr A*.

Black holes are generally active in nature and usually have glowing surroundings (known as accretion disks), against which they can be observed as silhouettes. These silhouettes are about 2.5 times the actual size of the black hole, which makes imaging one sound like a simple feat. However, no single telescope yet built has enough resolution to discern a black hole from its surroundings. Because of this, we were limited to artists’ renditions of what black holes might look like until the day the Event Horizon Telescope (EHT) released its first image of the black hole known as M87*, located ~55 million light-years from Earth and with a mass of ~6.5 billion Suns.
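To see why the resolution problem is so severe, consider M87*’s shadow. In general relativity the shadow diameter is about 2√27 gravitational radii (roughly 5.2 Schwarzschild radii); at 55 million light-years that subtends only tens of microarcseconds. A rough estimate:

```python
import math

G, c = 6.674e-11, 2.998e8
M_SUN, LY = 1.989e30, 9.461e15           # kg, metres per light-year

m = 6.5e9 * M_SUN                        # M87* mass
d = 55e6 * LY                            # distance to M87*
r_g = G * m / c**2                       # gravitational radius GM/c^2
shadow = 2 * math.sqrt(27) * r_g         # shadow diameter

theta_uas = shadow / d * (180 / math.pi) * 3600 * 1e6  # microarcseconds
print(f"Shadow angular diameter ~ {theta_uas:.0f} microarcseconds")
```

That is around 40 microarcseconds, comparable to measuring a doughnut on the Moon from Earth, and far beyond any single-dish telescope.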

Fig. 2: Image of M87* obtained by the EHT.

The EHT is not one single telescope but an international collaboration of 60 institutions in over 20 regions of the world. The EHT uses a technique known as Very Long Baseline Interferometry (VLBI), in which the institutions each contribute telescopes that are synchronised as an array, focusing on the same object at the same time and acting as a telescope as large as the Earth itself. In such a VLBI set-up, the aperture of this “virtual” telescope is the distance between the two farthest telescopes from each other, which in the case of the EHT is the distance between Antarctica and Spain, almost the diameter of the planet. This allows the EHT to image black holes with a relatively large apparent size (the size of the black hole in the night sky as seen from Earth) [1]. The only black holes that can really be imaged by the EHT are supermassive black holes, such as the ones found at the center of most galaxies.
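The resolution of such a virtual telescope is set by the diffraction limit, θ ≈ λ/D. Using the EHT’s 1.3 mm observing wavelength and a baseline of roughly Earth’s diameter (assumed round numbers):

```python
wavelength = 1.3e-3   # m, the EHT observes at 1.3 mm
baseline = 1.274e7    # m, roughly Earth's diameter

theta_rad = wavelength / baseline   # diffraction-limited resolution
theta_uas = theta_rad * (180 / 3.141592653589793) * 3600 * 1e6
print(f"Angular resolution ~ {theta_uas:.0f} microarcseconds")
```

Around 20 microarcseconds: just fine enough to resolve M87*’s roughly 40-microarcsecond shadow, which is why an Earth-sized baseline was needed at all.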

These telescopes, while not physically connected together, work by taking images of the same object timed using hydrogen maser atomic clocks, which precisely timestamp the images obtained. Weather forecasts were used to schedule a capture, specifically targeting ranges of days with considerably clear skies at as many sites as possible. Once a suitable range of days was determined, the telescopes throughout the EHT took images of the object over the course of several days. During the capture of M87*, each telescope in the EHT produced data at around ~350 TB/day, stored on high-performance helium-filled drives. At the end of the data-capturing period, all these data were sent to supercomputers, which combined the images into one overall image. [2]

Fig. 3: The locations of the telescopes in the EHT for the capture of M87*.

M87* was the first of the two targets of the EHT; it is a very active supermassive black hole. It is also one of the largest black holes in terms of apparent size from the Earth, which made it an ideal target. Its active state made it interesting to image as well, as matter is constantly falling into it and being spewed out as jets of particles. After the imaging of M87*, the second target of the EHT was Sgr A*, which sits in a considerably noisier environment at the centre of our own galaxy. However, the same procedure was used as for M87*, with more institutions having joined the EHT since, and the final result was released on 12th May 2022.

Fig. 4: Compiled image of Sgr A* by EHT.

The images obtained of Sgr A* could be split into four clusters of similar features, with the average of each cluster shown below the main Sgr A* image. Three of the four clusters show ring-like features, with some parts of the ring brighter than others. The last cluster also contained images that fit the data but were not ring-like. The bars associated with each image show the proportion of the obtained data belonging to each cluster, with thousands of images in the first three and only hundreds in the fourth.


Information References:

  1. Lutz, O., 2019. How Scientists Captured the First Image of a Black Hole. [online] Jet Propulsion Laboratory. Available at: <> [Accessed 12 May 2022].
  2. 2019. Press Release (April 10, 2019): Astronomers Capture First Image of a Black Hole. [online] Available at: <> [Accessed 12 May 2022].
  3. 2022. Astronomers reveal first image of the black hole at the heart of our galaxy. [online] Available at: <> [Accessed 12 May 2022].

Image References:

  1. 2015. Animation of the Stellar Orbits around the Galactic Center. [online] Available at: <> [Accessed 12 May 2022].
  2. 2019. Press Release (April 10, 2019): Astronomers Capture First Image of a Black Hole. [online] Available at: <> [Accessed 12 May 2022].
  3. 2019. Locations of the EHT Telescopes. [online] Available at: <> [Accessed 12 May 2022].
  4. 2022. Astronomers reveal first image of the black hole at the heart of our galaxy. [online] Available at: <> [Accessed 12 May 2022].

When observing the Universe from our scaled-down perspective, the distribution of galaxies seems random and sporadic, with no clear pattern or structure. It is only when we zoom out and look at the Universe on a larger scale that its structure begins to reveal itself. This structure, just like the structure of stars and planets, arises primarily from gravity. Once galaxies form, they clump into clusters or even superclusters. This arrangement of the Universe mimics a spider web or a foam-like composition, known as ‘the cosmic web’, and is comprised of filaments and voids.

The branches of galactic density in the cosmic web are known as galactic filaments and are the largest known structures in the Universe. They are comprised of walls of superclusters and can be as large as 80 Mpc. Filaments form the borders between voids, the vast open spaces between them. Voids were first discovered in the 1970s by means of redshift surveys of galaxies. Their sizes vary from 10 to 100 Mpc, and they make up most of the volume of the Universe, roughly 80%. Voids are defined as regions of space containing very few galaxies, distributed far from one another. If voids are large enough, they can even be dubbed supervoids. The largest known void is the Boötes void, discovered by Robert Kirshner et al., with a diameter of 0.27% of the observable Universe.

Figure 1: simulation of the cosmic web.[1]

In this figure, the blue threads represent filaments, and the vacant spaces represent voids.

The dominant theory of void formation is that they were created by baryon acoustic oscillations (BAO) in the early Universe. BAO can be described as fluctuations in the density of baryonic matter (also known as visible matter), seeded by quantum fluctuations. It is believed that in the early Universe, these density fluctuations led to increased concentrations of dark matter forming. Baryonic matter was then drawn towards them by gravitational attraction and formed stars and galaxies. This resulted in areas of high density becoming denser and areas of low density becoming even less dense. Thus, filaments mark areas of high dark matter density while voids are areas of low dark matter density. From this we can postulate that dark matter dictates the structure of the Universe at the largest scales.
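The "dense gets denser" instability described above can be illustrated with a deliberately simple toy model: let the density contrast of each region (its density minus the mean) grow by a fixed factor per step. All numbers here are arbitrary illustrative values, not cosmological data, and this is not a real structure-formation simulation:

```python
# Toy illustration of gravitational instability: density contrast grows
# by a fixed factor each step, so overdense regions become denser and
# underdense regions empty out, while the mean density is conserved.
densities = [1.2, 0.8, 1.0, 1.5, 0.5]   # arbitrary initial densities, mean = 1.0
growth = 1.1                             # contrast growth factor per step (arbitrary)

mean = sum(densities) / len(densities)
for _ in range(5):
    densities = [mean + growth * (d - mean) for d in densities]

# Overdense regions (future "filaments") pull ahead; underdense ones
# (future "voids") fall further behind.
print([round(d, 2) for d in densities])
```

Because the growth acts only on the contrast, the total amount of matter stays fixed while the spread between dense and empty regions widens, which is the essence of how filaments and voids emerge from nearly uniform initial conditions.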

Voids are often overlooked as mere empty space, but they are a key component in understanding the expansion of the Universe and dark energy. The existence of supervoids implies that roughly 70% of the energy in the Universe must consist of dark energy. This number is consistent with the estimate of 68.3% obtained in 2013 from observations made by the Planck spacecraft, and thus with the Lambda-CDM model. Voids are extremely sensitive to cosmological changes: the shape of a void reflects the expansion of the Universe and is partly governed by dark energy. By studying the shapes of voids over time, we move one step closer to modelling an equation of state for dark energy.

Image credit:

  1. NASA, ESA, and E. Hallman (University of Colorado, Boulder)

Figure 1: The terrestrial planets of our solar system: Mercury, Venus, Earth, and Mars[5]

Planetary science is a relatively new sub-field of astrophysics devoted to studying the nature of planetary formation both in and outside of our solar system. The field employs techniques across many disciplines, namely physics and geophysics. The beauty of planetary science is that one can reasonably assume all terrestrial bodies evolve similarly, so studying the visible features and characteristics of other planets and moons leads scientists to a greater understanding of the hidden or past features of our own planet. One such feature is heat-pipe cooling.

In 2017, a new way of understanding the cooling and heat transfer of terrestrial planets was proposed by a team of scientists from NASA and Louisiana State University[1].

Figure 2: Image of Jupiter’s moon Io, showing its surface eruptions.[6]

The theory was born from observations of Jupiter’s tidally heated moon Io, shown in figure 2. Heat-pipe cooling was developed to explain why Io has such a thick lithosphere, one able to support the numerous mountains and calderas that result from its volcanism. If the lithosphere were not thick enough, any mountain formed on the moon’s surface would collapse under the stress. Based on these observations, scientists concluded that our solar system’s terrestrial planets evolved in a manner consistent with heat-pipe cooling. In this way, the theory provides an explanation for Earth’s surface volcanic materials, its thick lithosphere, and its mountains.

Heat-pipe cooling (or heat-pipe tectonics) is a mode of cooling for terrestrial planets in which the main heat-transport mechanism is volcanism originating below the lithosphere, shown at the top of figure 3 (stagnant-lid convection, also shown, is discussed below)[4]. Melted rock and volatile materials move from the liquid mantle through the lithosphere via vents and volcanic eruptions. These eruptions lead to global resurfacing of the planet, by which older layers are buried and pushed down to form the thick, cool lithospheres that contain the tectonic plates we are all familiar with.

Figure 3: Modeled lithospheric thickness for heat-pipe and stagnant-lid planets.[7]

Since these first observations, scientists have hypothesized that this method of cooling has been involved in the evolution of all terrestrial planets, including Earth. They went a step further and proposed that heat-pipe cooling is the last significant endogenic (occurring below the surface of the planet) resurfacing process experienced by terrestrial bodies, and that it therefore preserves information from this period of their formation, such as magnetic fields and gravitational anomalies[4]. The time taken for a planet to cool via this method is directly related to its size, so larger terrestrial planets in other solar systems may still be in heat-pipe cooling mode[4]. Observing such planets may therefore lead to a greater understanding of the role this process played in the formation of the Earth as we know it today. Unfortunately, all of the terrestrial bodies in our solar system, including the Moon, show evidence of heat-pipe cooling in their past but are no longer actively undergoing the process.

The hallmark of heat-pipe cooling is the resulting strong lithosphere, together with the constant resurfacing of the body due to persistent volcanic activity. The implications of heat pipes for the tectonic history of terrestrial planets are shown in figure 3 above. Planets that evolve through a heat-pipe cooling phase develop a thick lithosphere early in their history, which subsequently thins as volcanism wanes and then thickens again as stagnant-lid convection takes over. In this regime the surface of a terrestrial planet has no active plates and is instead locked into one giant plate, and the surface material does not experience subduction[3]. Currently the Earth does have active plates, as evidenced by our abundant seismic activity, but this form of convection will eventually become dominant, and the lithosphere will no longer be recycled. At that stage, whatever condition the Earth’s surface is in will be preserved for extraterrestrials to view and study, much as we study other planets.


Image sources: