
The Big Crunch

The Big Crunch is one proposed theory for how the Universe may end. The concept grew out of the Friedmann equations, which the physicist Alexander Friedmann derived in 1922. Under the assumption that the Universe is homogeneous and isotropic, these equations describe the Universe's expansion or contraction. The force of gravitational attraction and the outward momentum created by the Big Bang were thought to be the two principal forces determining whether the Universe would expand or contract. If gravity were to overpower expansion, the Universe would simply contract. This happens when the Universe's actual density exceeds the critical density determined by the Friedmann equations. For comparison, our cosmos has a critical density of roughly 5.9 protons per cubic meter and an actual density of roughly 1 proton per 4 cubic meters. Exceeding the critical density would result in the collapse of the cosmos or the creation of a super-dense black hole.
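To see where these densities come from, here is a minimal sketch of the critical-density calculation, rho_c = 3H^2 / (8 pi G), assuming a round Hubble constant of 70 km/s/Mpc (the exact output shifts a little with the measured value):

```python
# Critical density from the Friedmann equations: rho_c = 3 H^2 / (8 pi G).
# Assumes H0 ~ 70 km/s/Mpc; the quoted 5.9 protons/m^3 corresponds to a
# slightly different choice of H0.
import math

G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
m_proton = 1.673e-27      # proton mass, kg
H0 = 70 * 1000 / 3.086e22 # Hubble constant, converted from km/s/Mpc to 1/s

rho_c = 3 * H0**2 / (8 * math.pi * G)  # critical density, kg/m^3
print(f"critical density: {rho_c:.2e} kg/m^3")
print(f"equivalent to {rho_c / m_proton:.1f} protons per cubic meter")
# ~9.2e-27 kg/m^3, i.e. roughly 5 to 6 protons per cubic meter
```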

The Big Bounce

You might be curious about what occurs after such a collapse. Many scientists believe the Universe would simply "bounce" back, an idea known as 'the Big Bounce'. When the Universe collapses, it would shrink to around a Planck length, roughly 10^-35 m. At this scale the Universe's volume is very close to zero, yet it still has mass, so its density is almost infinite. In that situation, the processes of the Big Bang, such as inflation, would all repeat, recreating the Universe. This is currently theorised to be a consequence of loop quantum gravity. The Universe could repeat itself endlessly as a result of this process, restarting time and time again. Possibly, our own Universe is not the first in this cycle but just another repeat in a long line of repeats. It is also speculated that every time the Universe begins again it will be the same Universe, i.e., the timeline we exist in today will be the same timeline in the new Universe, and all the events of our timeline will once again occur.
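For a sense of scale, the Planck length follows from just three constants of nature, l_P = sqrt(hbar G / c^3); a quick sketch:

```python
# The Planck length, l_P = sqrt(hbar * G / c^3), sets the scale at which
# the Big Bounce is conjectured to occur.
import math

hbar = 1.0546e-34  # reduced Planck constant, J s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

l_planck = math.sqrt(hbar * G / c**3)
print(f"Planck length: {l_planck:.2e} m")  # ~1.6e-35 m
```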

Dark Energy

The Big Crunch was an acceptable theory up until 1998, when two international teams of scientists discovered dark energy, the real driving force behind expansion. In 2004, more in-depth measurements of dark energy were possible thanks to the Chandra X-ray Observatory. They discovered that although the density of dark energy appears to be fixed, the total amount of dark energy increases as the Universe gets bigger. This measurement suggests that the Universe will keep expanding, and there is little room for a Big Crunch in an ever-expanding cosmos. However, we still don't fully understand dark energy. Fortunately for us, the Nancy Grace Roman Space Telescope is set to launch in May 2027. Its objective is to learn more about the origins of dark energy and its function in the early cosmos. While the likelihood of the Big Crunch currently looks minimal, there is still a possibility that, with greater knowledge of the Universe, collapse could turn out to be possible after all. This is something the Nancy Grace Roman telescope will hopefully clarify.

 

References:

Dr. Edward J. Wollack (2015) What is the Ultimate Fate of the Universe? – NASA https://map.gsfc.nasa.gov/universe/uni_fate.html

William Harris (2021) How the Big Crunch Theory Works – HowStuffWorks https://science.howstuffworks.com/dictionary/astronomy-terms/big-crunch.htm

Ashley Balzer (2020) NASA's WFIRST Will Help Uncover the Universe's Fate – NASA https://www.nasa.gov/feature/goddard/2019/nasa-s-wfirst-will-help-uncover-universe-s-fate

Anna Heiney (2004) Chandra Discovery Sheds Light on Dark Energy – NASA https://www.nasa.gov/missions/science/f_dkenergy.html

Britt Griswold (2014) What is the Universe Made Of? – NASA https://wmap.gsfc.nasa.gov/universe/uni_matter.html

Spaghettification may sound like a new diet trend for getting as thin as spaghetti. If that were true, it would be the most extreme method yet. Spaghettification is a term in astrophysics coined by Stephen Hawking in his book A Brief History of Time; it describes the vertical stretching and horizontal compression of objects into elongated, thinned shapes in a very strong, non-homogeneous gravitational field. Basically, spaghettification is a process that turns you (a human) into spaghetti when the difference in the strength of the gravitational field between your head and your toes is very large. Your body's length is stretched and, to keep your volume unchanged, its width is compressed. Your head is pulled up, your feet are pulled down, your right side is pulled to your left and your left side is pulled to your right. It is almost like a nightmare rollercoaster that leaves you in the shape of spaghetti and, ultimately, dead from the extreme stresses the process causes.

A visual of an astronaut undergoing spaghettification.

For spaghettification to occur, the difference in gravitational field strength needs to be quite noticeable. Spaghettification is most commonly associated with black holes, specifically beyond the event horizon. The event horizon is the boundary past which an object cannot escape a black hole and is ultimately pulled into the singularity (a point of effectively infinite density at the black hole's centre). Past this point, the gravitational gradient (the difference in the strength of the gravitational field) produced by the singularity becomes noticeably different from head to toe. Because the force of gravity is not constant across the entire body, it stretches the body. If it were equal across the body, there would be no observable spaghettification.
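To get a feel for the numbers, here is a rough sketch of the head-to-toe tidal acceleration at the event horizon, using the standard Newtonian estimate delta_a ~ 2GMh/r^3; the black hole masses below are illustrative choices, not values from this post:

```python
# Rough tidal acceleration across a 2 m tall astronaut at a black hole's
# event horizon: delta_a ~ 2 G M h / r^3, with r the Schwarzschild radius
# r_s = 2 G M / c^2.
G = 6.674e-11
c = 2.998e8
M_sun = 1.989e30
h = 2.0  # astronaut height, m

def tidal_at_horizon(M):
    r_s = 2 * G * M / c**2       # Schwarzschild radius, m
    return 2 * G * M * h / r_s**3

print(f"10 solar masses:          {tidal_at_horizon(10 * M_sun):.2e} m/s^2")
print(f"4 million solar masses:   {tidal_at_horizon(4e6 * M_sun):.2e} m/s^2")
# ~2e8 m/s^2 for the stellar-mass case, ~1e-3 m/s^2 for the supermassive one
```

Counterintuitively, the bigger the black hole, the gentler the tides at its horizon: at a stellar-mass black hole you are shredded well outside the event horizon, while at a supermassive one you could cross the horizon without noticing.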

Luckily for us, no human has ever undergone spaghettification, but it happens all the time to other objects in space. Every time an object with mass falls into a black hole, this process occurs. Anything from a small asteroid to a star undergoes spaghettification on passing the black hole's event horizon. Our solar system is about 1,000 light-years away from the nearest known black hole, so thankfully we have a while before we have to imagine our Earth turning from a meatball into spaghetti.

Introduction

Have you ever envisioned yourself sitting alongside Han and Chewie in the Millennium Falcon, travelling through hyperspace and watching the stars form streaks across your windscreen? Or do you rather fancy yourself on the USS Enterprise with its iconic warp drive? Worry not, because as I am about to explain in this blog, warp drives and faster-than-light (FTL) travel might actually "be possible" according to physics, with some caveats.

 

Image depicting travel of a spaceship through a wormhole by curving spacetime around it. Source: [4].

Interstellar travel and its difficulties

It is quite well known that, given our current technological limitations, feasible interstellar travel is impossible. One of the reasons is the large amount of fuel required to carry out these journeys, which we simply do not have an efficient source of. Another reason is special relativity, which states that the faster an object travels, the more its 'relativistic' mass (mass due to the object's motion) increases, becoming infinite at the speed of light. The effect is that more and more energy is needed as the spaceship goes faster and faster, until an infinite amount of energy would be required to reach the speed of light! With these in mind, let us see how a hypothetical warp drive would work.
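A quick sketch of that energy blow-up, using the relativistic kinetic energy E_k = (gamma - 1) m c^2; the 1,000 kg craft mass is an illustrative assumption:

```python
# Kinetic energy of a spaceship as v -> c, from special relativity:
# E_k = (gamma - 1) m c^2, with gamma = 1 / sqrt(1 - v^2/c^2).
import math

c = 2.998e8  # speed of light, m/s
m = 1000.0   # spaceship mass, kg (arbitrary illustrative value)

for frac in (0.5, 0.9, 0.99, 0.9999):
    gamma = 1 / math.sqrt(1 - frac**2)
    e_k = (gamma - 1) * m * c**2
    print(f"v = {frac:.4f} c  ->  E_k = {e_k:.2e} J")
# The energy grows without bound as v approaches c: this is the origin of
# the light-speed barrier for any conventional rocket.
```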

 

General and special relativity

Special relativity forms a subset of general relativity which holds in very small regions of 4D spacetime that can be considered flat[2]. Thus, general relativity does not impose a global restriction on speeds; it only requires that the rules of special relativity, including the speed-of-light limit, hold 'locally'[2].

Warp drives and general relativity

In 1994, the theoretical physicist Miguel Alcubierre published a paper proposing a new solution to Einstein's field equations of general relativity. Einstein's field equations let one calculate the deformation of the 4D spacetime we live in from a given 'distribution of mass and energy'[1]. Alcubierre worked in reverse: starting from a particular configuration of spacetime, he figured out the mass and energy distribution that would produce it. The configuration he proposed was a bubble of flat spacetime containing the spaceship, with a region of compressed spacetime in front of the ship (which can be thought of as spacetime being destroyed, similar to the Universe collapsing in a Big Crunch) and a region of expanded spacetime behind it (which can be thought of as spacetime being created, just as the Universe expands after the Big Bang)[2]. This "warped" region can push the pocket of flat spacetime containing the spaceship to arbitrary velocities, including faster than light, without the spaceship itself having to move.
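For the mathematically inclined, the line element Alcubierre proposed (in units where c = 1) takes the form

$$ds^2 = -dt^2 + \bigl(dx - v_s\, f(r_s)\, dt\bigr)^2 + dy^2 + dz^2,$$

where $v_s(t)$ is the velocity of the bubble's centre along $x$ and $f(r_s)$ is a smooth "top-hat" function equal to 1 inside the bubble and falling to 0 far away. Inside the bubble, where $f = 1$, the metric reduces to flat spacetime, which is why the ship itself feels nothing unusual.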

 

 

Image depicting the warped spacetime as proposed by Alcubierre. The region dipping down compresses spacetime and is felt as the familiar force of gravity. The region projecting up is more unusual: it expands spacetime as described above and can be thought of as "antigravity"[1]. These two regions work together to push the flat region in the center, containing the spaceship, forward at any arbitrary velocity. Source: [5]

Thus, a spaceship at the center of this bubble finds itself at rest, or moving only slightly, with respect to a relatively flat portion of spacetime, so its velocity stays below the "local" speed of light within the flat region of the bubble. Moreover, being at rest or close to it in the flat region, the spaceship would not experience relativistic effects such as length contraction, time dilation or mass increase. So the spaceship stays at rest within the pocket of flat spacetime, but the pocket itself moves, pushed along by the "warped" region of spacetime.

 

Too good to be true?

Although the warp drive configuration of spacetime, known as the "Alcubierre metric", is a valid solution of Einstein's field equations, actually producing it opens up a whole new assortment of problems.

One of the major problems is that producing the Alcubierre metric requires a negative energy density[2]! This seems impossible to achieve; however, there are observed effects in quantum field theory that give rise to negative energy densities under certain scenarios, such as the Casimir effect[2]. But what is this Casimir effect?

In the quantum vacuum, quantum field theory predicts fluctuations in electromagnetic energy that produce electromagnetic waves at all wavelengths[3]. If two perfectly conducting, uncharged plates are brought close together, only those waves having nodes at both plates can fill the space between them, much like standing waves on a guitar string[3]. This reduces the number of possible waves compared to free vacuum, which lowers the energy density inside the cavity relative to outside[3]. With the energy density outside being close to zero, a small negative energy density is created inside the cavity, producing a negative pressure that pushes the plates together, so they appear attracted to one another. This is known as the Casimir effect.
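For ideal parallel plates, the Casimir pressure has the closed form P = pi^2 hbar c / (240 a^4), with a the plate separation. A quick sketch of its magnitude:

```python
# Magnitude of the Casimir pressure between two ideal parallel plates:
# P = pi^2 * hbar * c / (240 * a^4), where a is the plate separation.
import math

hbar = 1.0546e-34  # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s

for a in (1e-6, 1e-7, 1e-8):  # separations of 1 um, 100 nm, 10 nm
    P = math.pi**2 * hbar * c / (240 * a**4)
    print(f"a = {a:.0e} m  ->  P = {P:.2e} Pa")
# ~1e-3 Pa at a micrometre, growing as 1/a^4 at smaller gaps
```

Measurable in the lab at sub-micron separations, but nowhere near warp-drive quantities.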

 

Image depicting the Casimir effect: the limited number of electromagnetic modes inside the cavity results in a negative energy density with respect to the surroundings, producing a force on the plates. Source: [6]

As the Casimir effect shows, negative energy densities do exist on the quantum scale; however, this is far too small for the amount of negative energy required. Another valid solution of Einstein's equations is the wormhole (also called an Einstein-Rosen bridge), which requires the same negative energy to work[2]. Producing a stable wormhole requires "exotic matter" with negative energy density, which is completely hypothetical[2].

 

Another problem the Alcubierre warp drive brings is that it creates the possibility of "closed time-like curves", which open up violations of causality[2] (an effect happening before its cause!). A well-known example is the grandfather paradox: if you travel back in time and kill your grandfather before your parent was born, your parent would never have been born and neither would you, so who went back in time to kill your grandfather?

 

 

Conclusion

Thus, it can be seen that no matter how exciting building a warp drive to cruise past countless galaxies may seem, we are still limited by our current technology. And although the need for impossible negative energy can be overcome by using positive energy with a slightly modified spacetime metric[1], the amount of energy required would still be phenomenal. Moreover, care must be taken to ensure that the spaceship and its inhabitants do not fall prey to the massive tidal forces at the boundaries of the moving "bubble" of spacetime.

So, to conclude: although warp drives still seem like they belong on our television screens and on the pages of our books, the promising physics behind them gives us hope that somewhere a long time from now, we may visit a galaxy far, far away.

 

REFERENCES:

[1]: Gast R, Spektrum. Star Trek's warp drive leads to new physics [Internet]. Scientific American. [cited 2022 May 13]. Available from: https://www.scientificamerican.com/article/star-treks-warp-drive-leads-to-new-physics/

[2]: Alternate view column AV-81 [Internet]. Washington.edu. [cited 2022 May 13]. Available from: https://www.npl.washington.edu/av/altvw81.html

[3]: Stange A, Campbell DK, Bishop DJ. Science and technology of the Casimir effect. Phys Today [Internet]. 2021;74(1):42–8. Available from: http://dx.doi.org/10.1063/pt.3.4656

 

IMAGE SOURCES:

[4]: https://commons.wikimedia.org/wiki/File:Wormhole_travel_as_envisioned_by_Les_Bossinas_for_NASA.jpg

[5]: https://commons.wikimedia.org/wiki/File:Alcubierre.png

[6]: https://commons.wikimedia.org/wiki/File:Casimir_plates.svg

Mercury, Venus, Mars, Jupiter and Saturn are visible in the night sky. They were known by the ancient Babylonians and their current names are derived from Roman gods. Uranus was first sighted in 1690 and recognised as a planet almost a century later, the first discovered since ancient times. Neptune was predicted mathematically, based on its gravitational effects, leading to its observation in the 1840s, along with a whole host of other solar system objects.

 

It wasn't until 1930 that Pluto was discovered. Using a blink comparator to scan the night sky for small changes in position, it took 23-year-old Clyde Tombaugh ten months to find it, and the discovery made headlines around the world. Initial proposals were to name it after Percival Lowell, the observatory's founder, or his wife. It was an 11-year-old English schoolgirl, Venetia Burney, who proposed Pluto. She had been learning about the Romans and Greeks, and a classical name was deemed appropriate. Like the king of the underworld, Pluto sits alone in a cold, dark and distant realm.

 

Pluto was thought to be the mysterious Planet X, responsible for perturbations in Neptune's orbit, and several times the mass of the Earth. However, this was clearly not the case: observations by Gerard Kuiper in the 1950s showed that it had a much smaller radius than the Earth. Later it was found that Pluto was highly reflective, and if it were as big as previously thought, it should be incredibly bright. Finally, the discovery of Pluto's moon Charon in the '70s determined Pluto's size once and for all, at a radius of about 1,200 km and 0.2% the mass of the Earth; nowhere near as large as originally proposed.

 

In the 1990s, many objects similar in size and location to Pluto became known, and its classification as a planet grew controversial. The discovery of Eris, with a greater mass than Pluto, led to a desire to formally define what a planet is. In 2006, the International Astronomical Union defined a planet as follows:

 

  1. It orbits the Sun.
  2. It has formed a spherical shape under its own gravity.
  3. It has cleared its neighbourhood of bodies of comparable size, due to its own gravitational dominance.

 

Unfortunately, Pluto fails to meet the third criterion, making up only a fraction of the total mass of all the objects in its orbit. Thus, it was stripped of its status as a planet. A new designation, "dwarf planet", was created for Pluto, Eris and several other large non-planets. Between its confirmation as a planet and the point at which it ceased to be one, Pluto had completed only a fraction of its 250-year orbit around the Sun.

 

In 2015, the New Horizons spacecraft came within 12,500 km of Pluto, sending back stunning images of its surface. 

Featured image: Cassini's narrow-angle camera captures three images of Epimetheus (the smaller moon) passing between the spacecraft and Janus on 14 February 2010 [4].

Two of Saturn's many moons, Janus and Epimetheus, have an interesting relationship with each other. They are both small rocky worlds, only 196 km and 135 km across at their widest [1,2], and both have nearly circular orbits at a distance of about 151,000 km from the centre of Saturn. If you had observed where these moons were in the Saturn system in 2021, you would have found that Epimetheus was about 50 km closer to Saturn than Janus, its slightly smaller orbit sitting inside that of Janus. But if you were to look again in 2023, this would no longer be the case: Epimetheus will be further from Saturn than Janus, its orbit completely outside Janus's. This is because in 2022, Janus and Epimetheus will do something they only do once every four years: swap orbits [5].

This interaction took place multiple times while NASA's Cassini spacecraft was investigating the Saturn system before the mission ended in 2017. It first imaged the moons in 2005, two months before they switched places [3], and the featured image of this article is a close approach of the two moons that the spacecraft captured in 2010.

How does this switching of places between these moons work exactly? To explain it, it’s useful to think about some of the basic physics of orbits.

Suppose we have a rocket in a perfectly circular orbit around a planet (figure 1). If we point the rocket in the direction it's moving and fire the engine for a short time, it will of course speed up. In addition, the extra energy given to the rocket by firing its engine increases the size of the orbit: the orbit is stretched from a circle into an ellipse. The point in the orbit directly opposite the rocket, 180 degrees or half an orbit away, moves away from the planet. This point, now the furthest from the planet, is called the apoapsis. The point where the rocket fired its engine is now the closest point in the orbit to the planet, and is called the periapsis.

At the periapsis the rocket is now moving faster than it was in the circular orbit, since it sped up by firing the engine. But as it moves along the orbit, getting further from the planet, it slows down, travelling slowest at apoapsis. When it passes this point and starts to get closer to the planet again, its speed increases once more. This is all a result of Kepler's second law of planetary motion, which, without going into detail, basically says that the further an orbiting object is from the object it is orbiting, the more slowly it moves.
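One handy way to put numbers on this is the vis-viva equation, v^2 = GM(2/r - 1/a), which gives the speed anywhere on an orbit of semi-major axis a. The sketch below uses Saturn's gravitational parameter and an invented elliptical orbit, so the distances are illustrative only:

```python
# Speeds at periapsis and apoapsis of an elliptical orbit, from the vis-viva
# equation v^2 = GM (2/r - 1/a).
import math

GM_saturn = 3.793e16  # Saturn's gravitational parameter, m^3/s^2

r_peri = 1.50e8       # periapsis distance, m (illustrative)
r_apo = 1.60e8        # apoapsis distance, m (illustrative)
a = (r_peri + r_apo) / 2  # semi-major axis of the ellipse

def speed(r):
    return math.sqrt(GM_saturn * (2 / r - 1 / a))

print(f"speed at periapsis: {speed(r_peri):,.0f} m/s")  # fastest point
print(f"speed at apoapsis:  {speed(r_apo):,.0f} m/s")   # slowest point
```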

Figure 1: (a) A rocket in a perfectly circular orbit around a planet, arrows indicating the direction it orbits in. (b) The rocket fires its engine for a short enough time that it doesn't move too far in its orbit. (c) After it has finished firing the engine, the rocket is in an elliptical orbit, moving fastest at periapsis and slowest at apoapsis. Diagrams by yours truly.

Now what about these moons of Saturn that swap orbits? Suppose they start out as in figure 2, with the orbit of one moon slightly outside the orbit of the inner moon (since they switch, either moon can be Janus or Epimetheus). Another of Kepler's laws, the third, essentially says that the larger an orbit is, the more time the object takes to go around once. This is not only because the orbiting object has more distance to travel; it also moves more slowly. If you look up the average orbital velocities of the planets, you'll see that they decrease as the planets get further from the Sun, Mercury moving the fastest and Neptune the slowest.

Figure 2: Janus and Epimetheus, either being the inner or outer moon, have roughly a 50 km difference in their orbits. The inner moon, in red, moves slightly faster than the outer moon, in blue, meaning that as time passes it gets further and further ahead of the outer moon. Distances/sizes not to scale.

So the outer moon, in the slightly larger orbit, moves a little more slowly than the inner moon in the smaller orbit. The inner moon gradually moves further and further ahead of the outer, slower one. It takes the inner moon about four years to "lap" the outer moon, i.e. to go all the way around and start catching up on the outer moon from behind, as seen in figure 3 when the moons are at position 1.
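That four-year figure falls straight out of Kepler's third law, T^2 = 4 pi^2 a^3 / GM. A sketch using the ~151,000 km orbital radius and ~50 km gap quoted above:

```python
# How long the inner moon takes to "lap" the outer one, from Kepler's
# third law. A 50 km difference at the moons' orbital radius gives only a
# tiny difference in orbital period.
import math

GM_saturn = 3.793e16      # m^3/s^2
a_inner = 1.51e8          # inner moon's orbital radius, m
a_outer = a_inner + 50e3  # outer moon is ~50 km further out

def period(a):
    return 2 * math.pi * math.sqrt(a**3 / GM_saturn)

T_in, T_out = period(a_inner), period(a_outer)
print(f"orbital periods: {T_in/3600:.2f} h and {T_out/3600:.2f} h")

# A lap takes one period divided by the fractional period difference:
lap = T_in * T_out / (T_out - T_in)
print(f"time to lap: {lap / 3.15e7:.1f} years")  # ~4 years, as observed
```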

They never actually get closer than about 15,000 km to each other [2]. But the two moons do attract each other gravitationally. The inner moon, being behind the outer moon, is pulled forward in its orbit by gravity. The outer moon, being in front of the inner one, experiences the same force pulling it backward.

Figure 3: What WOULD happen if the gravitational force between the two moons acted like a short burst from a rocket, force F in the diagram. The moons start out on the black orbits. The inner moon is accelerated so that it ends up on the red orbit, with its new apoapsis above its old orbit (red dot, position 2). The outer moon is decelerated so that its new periapsis is below its old orbit (blue dot, position 2). Distances/sizes once again not to scale.

Just like the rocket in figure 1, the inner moon has a force pulling it forward, in the direction it is orbiting, so its orbit is stretched such that the point directly opposite it moves outward: the red dot at position 2 in figure 3. Similarly, the outer moon is pulled back by the inner moon; a force pushes it in the direction opposite to its motion, so it slows down. What happens then is the opposite of figure 1: the point directly opposite the moon in its orbit moves inward. That point, the blue one at position 2 in figure 3, is now the closest point of the outer moon's orbit to Saturn, its periapsis.

If these gravitational forces were just short-lived boosts to the moons' speeds, as if they had rocket engines on them, their orbits might look like those in figure 3. But this is not exactly the case. Gravitational forces technically extend to infinity, so there is no sharp start or stop to the moons' interaction. During their close encounter, they continuously attract each other. The outer moon is continuously slowed by the inner moon behind it, and its orbit continuously shrinks until, in the end, it is on a nearly circular orbit closer to Saturn than the used-to-be inner moon. Similarly, the inner moon's orbit is continuously enlarged by the forward pull of the outer moon, so that it ends up on a nearly circular orbit further from Saturn than the used-to-be outer moon.

So now the inner moon is the outer moon, and the outer moon is the inner moon! The new inner moon, being on a smaller orbit, moves more quickly and gets ahead, as predicted by Kepler’s third law. Now the moons can return to figure 2 where the whole process can start over again, only with the moons switched.

One other thing to note: Janus has about four times the mass of Epimetheus [3]. This means that when the switching of orbits occurs, the change in Janus's orbital radius is smaller than the change in Epimetheus's. The gravitational forces attracting the moons during their close approach are equal and opposite, and the same force on Janus, the heavier moon, produces less acceleration: heavier objects require more force to accelerate at the same rate. Less acceleration ultimately means less of an increase or decrease in the size of Janus's orbit after the encounter.
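A back-of-the-envelope sketch of that asymmetry: since the moons swap a roughly 50 km gap, their orbital radii change by about 100 km combined, split in inverse proportion to their masses. The exact numbers here are illustrative:

```python
# Equal and opposite gravitational forces, unequal masses: each moon's
# orbit change scales inversely with its mass. Janus is ~4x heavier than
# Epimetheus [3], so Epimetheus's orbit shifts ~4x as much.
mass_ratio = 4.0     # m_Janus / m_Epimetheus, approximate [3]
total_shift = 100e3  # combined change in orbital radii, m (illustrative)

shift_epimetheus = total_shift * mass_ratio / (1 + mass_ratio)
shift_janus = total_shift / (1 + mass_ratio)
print(f"Epimetheus moves ~{shift_epimetheus/1e3:.0f} km, "
      f"Janus ~{shift_janus/1e3:.0f} km")  # ~80 km vs ~20 km
```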

Janus and Epimetheus are the only known example of this particular kind of "co-orbital relationship" in our solar system [1], and an example of some of the strange things allowed by the laws of orbital mechanics.

References:

[1] “Epimetheus”, NASA Science, solar system exploration, last updated December 19 2019

https://solarsystem.nasa.gov/moons/saturn-moons/epimetheus/in-depth/

[2] “Janus”, NASA Science, solar system exploration, last updated December 19 2019

https://solarsystem.nasa.gov/moons/saturn-moons/janus/in-depth/

[3] Emily Lakdawalla, “The Orbital Dance of Epimetheus and Janus”, Planetary Society, February 7 2006

https://www.planetary.org/articles/janus-epimetheus-swap

[4] “Cruising Past Janus”, NASA Science, solar system exploration, last updated October 4 2018

https://solarsystem.nasa.gov/resources/14926/cruising-past-janus/

[5] “Janus Moon and its Dance around Saturn: The Co-Orbitals”, The Planets.org

https://theplanets.org/moons-of-saturn/janus-moon/

Is there a limit to how cold something can become? Does a minimum temperature exist? The Kelvin temperature scale was designed with this in mind: 0 Kelvin (equal to -273.15 degrees Celsius) would be the lowest possible temperature. But is 0 Kelvin even possible?

To explore this idea we first have to state what temperature is. Temperature is simply a measure of a system's internal kinetic energy. The system in question is composed of particles, each moving randomly with its own momentum and thus its own kinetic energy. The greater the intensity of this random motion, the higher the system's temperature. Theoretically, 0 Kelvin means no internal kinetic energy; the Kelvin temperature scale was set with this definition. So at 0 Kelvin the particles in the system would cease to move or even vibrate. This is perfectly fine in classical mechanics, but the particles we are talking about are not classical particles; they are quantum particles, governed by the laws of quantum mechanics that apply at the atomic and sub-atomic level, and every system we could consider is composed of them. Given that our particles are quantum in nature, at 0 Kelvin each particle would have a fixed position and zero momentum. But this idea breaks the Heisenberg Uncertainty Principle, one of the most fundamental laws of quantum mechanics.
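To make "temperature is internal kinetic energy" concrete: for an ideal monatomic gas, the average kinetic energy per particle is (3/2) k_B T, giving a root-mean-square speed v_rms = sqrt(3 k_B T / m). A quick sketch (helium is just a convenient example gas):

```python
# Temperature as average kinetic energy: (1/2) m <v^2> = (3/2) k_B T,
# so v_rms = sqrt(3 k_B T / m) for an ideal monatomic gas.
import math

k_B = 1.381e-23       # Boltzmann constant, J/K
m_helium = 6.646e-27  # mass of a helium atom, kg

for T in (300, 4, 0.001):  # room temperature, liquid helium, a millikelvin
    v_rms = math.sqrt(3 * k_B * T / m_helium)
    print(f"T = {T:g} K  ->  v_rms = {v_rms:.3g} m/s")
# Cooling reduces the random motion, but (as explained below) quantum
# mechanics stops it from ever reaching exactly zero.
```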

The Heisenberg Uncertainty Principle states that the product of the uncertainty in position and the uncertainty in momentum must be greater than or equal to a set number (Planck's constant divided by 4 pi). Put simply, it gives a lower limit to how precisely we can simultaneously know an object's momentum and position: the more precisely we define a quantum particle's position, the less precisely we can define its momentum. So if we fix a quantum particle's position, its momentum becomes unknown and will fluctuate, with the possibility of very high values.

These fluctuations in momentum give rise to a little kinetic energy. This is the zero-point energy: the theoretical minimum energy a system can have, which is NOT zero. So, due to quantum mechanics, specifically the Heisenberg Uncertainty Principle, it is impossible for an object to be cooled to absolute zero, because it will always maintain a small amount of kinetic energy from the fluctuations in each particle's momentum.
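A back-of-the-envelope sketch of how large this leftover energy can be: confining a particle to a region of size dx forces dp >= hbar / (2 dx), and the associated kinetic energy is roughly dp^2 / 2m. The particle and confinement scale below are illustrative choices:

```python
# Minimum kinetic energy from the uncertainty principle: confining a
# particle to a region of size dx forces dp >= hbar / (2 dx), so
# E_min ~ dp^2 / (2 m). Example: a helium atom confined to 1 angstrom.
hbar = 1.0546e-34     # reduced Planck constant, J s
k_B = 1.381e-23       # Boltzmann constant, J/K
m_helium = 6.646e-27  # mass of a helium atom, kg
dx = 1e-10            # confinement scale, m (illustrative)

dp = hbar / (2 * dx)
E_min = dp**2 / (2 * m_helium)
print(f"residual kinetic energy: {E_min:.2e} J")
print(f"equivalent temperature:  {E_min / k_B:.2f} K")
# Even at 'absolute zero' the atom retains motion worth about a kelvin:
# the zero-point energy.
```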

With the success of detecting gravitational waves, a new branch of gravitational astronomy emerged, allowing us to explore the universe in a radically new way. While detectors like LIGO can successfully detect waves from stellar-mass mergers with frequencies on the order of 100 Hz, they don't have the sensitivity to probe more interesting, lower-frequency waves from sources such as galaxy mergers or the gravitational-wave background. To increase sensitivity, the scale of the detector must increase, which imposes a serious limitation on Earth-based detectors. Future projects such as LISA, a space-based interferometer, will greatly increase the scale, with promises of millihertz detection. However, there is another method, pulsar timing arrays, that can effectively create a galaxy-sized detector, allowing us to probe the most exotic gravitational waves in the nanohertz regime.

Neutron stars are stellar remnants left over from the cataclysmic deaths of stars of roughly 10 to 25 solar masses. They are extremely dense, packing around 1 solar mass into a radius of only about 10 km. By conservation of angular momentum, neutron stars can reach incredibly fast rotation rates: the fastest ever recorded spins at over 700 revolutions per second, translating to an equatorial velocity of nearly 25% of light speed! Neutron stars slowly convert their rotational energy into magnetic dipole radiation. This radiation (predominantly radio waves) is channelled by powerful, billion-Tesla magnetic fields into tight beams emerging from each pole. If the magnetic poles are misaligned with the rotational axis, a distant observer may see a pulsar, characterised by a "lighthouse" effect as the beams sweep past the observer at extremely regular intervals.
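That equatorial speed is a one-line calculation, v = 2 pi R f. The rotation rate below is the record-holding pulsar's; the radius is the uncertain input and is an assumed value:

```python
# Equatorial speed of the fastest-known pulsar: v = 2 * pi * R * f.
import math

c = 2.998e8  # speed of light, m/s
f = 716      # rotation frequency, Hz (PSR J1748-2446ad)
R = 16e3     # assumed equatorial radius, m (neutron star radii are uncertain)

v = 2 * math.pi * R * f
print(f"equatorial speed: {v:.2e} m/s  ({v / c:.0%} of light speed)")
```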

A pulsar timing array (PTA) uses several such pulsars scattered throughout the Milky Way and searches for tiny deviations in the recorded pulses, potentially caused by gravitational-wave distortion. By analysing correlated deviations across several pulsars, extremely low frequency gravitational waves can be detected and their source location estimated. There are many potential phenomena that could be observed using PTAs that would otherwise be impossible to detect. While detectors like LIGO are limited to stellar-mass mergers, the main target for PTAs are supermassive black hole binaries formed from galaxy mergers. Almost all large galaxies have supermassive black holes at their centres. When these galaxies merge, the black holes orbit each other at vast distances, producing very low frequency gravitational waves. By studying the properties of these waves, novel information can be gathered about the distribution of galaxies and their formation and evolution throughout the universe [1, 2].

A more ambitious goal of PTAs is to detect the gravitational-wave background (or stochastic background): a superposition of randomly emitted waves from innumerable independent sources scattered throughout cosmic space and time. The information contained in this background has large potential for cosmology. It could prove the existence of cosmic strings, 1D topological defects formed in the early universe, predicted by both quantum field theory and string theory [3]. Detection of these objects could further test and differentiate between these theories. Another interesting target is primordial gravitational waves, created by cosmic inflation a fraction of a second after the big bang [4]. Currently, we can only use electromagnetic radiation to probe back in time to the age of the cosmic microwave background, around 400,000 years after the big bang. No electromagnetic radiation can be detected from before this time, as the universe was opaque to light. Primordial gravitational waves don't have this limitation and are one of the most promising ways to look back further, to the very beginning of the universe.

There are currently three well-established PTA projects in the world: Europe's EPTA, Australia's PPTA and North America's NANOGrav. Combined, they form the International Pulsar Timing Array (IPTA), using an array of over a hundred pulsars scattered throughout the Milky Way. No direct measurement of gravitational waves has yet been achieved, but upper limits on the gravitational-wave background amplitude have already significantly constrained galaxy formation and evolution models.

The current limitation of PTAs is our radio detectors: current telescopes struggle to isolate deviations in pulsar timings from the intrinsic noise in the observed radio spectrum. However, there are many exciting projects planned in the upcoming decades, such as the Square Kilometre Array, that will drastically improve radio-wave detection. With these next-generation detectors, and with more pulsars being discovered every year, pulsar timing arrays will become very powerful detectors that may well revolutionise our understanding of the universe.

References:

[1] arXiv:2002.01954v2 [astro-ph.IM] 20 Mar 2020

[2] arXiv:1802.05076v1 [astro-ph.IM] 14 Feb 2018

[3] X. Siemens, V. Mandic and J. Creighton, Phys. Rev. Lett. 98, 111101 (2007) doi:10.1103/PhysRevLett.98.111101 [astro-ph/0610920].

[4] A. A. Starobinsky, JETP Lett. 30, 682 (1979) [Pisma Zh. Eksp. Teor. Fiz. 30, 719 (1979)].

 

With the recent imaging of Sagittarius A* (Sgr A*), the black hole at the center of the Milky Way, the world was given the first image ever captured of our galaxy's central black hole. First predicted by Albert Einstein's general theory of relativity in 1916, black holes have long been among astrophysics' hardest phenomena to observe; the community's interest in them was sparked by Jocelyn Bell Burnell's discovery of neutron stars in 1967. These remnants of gravitationally collapsed stars could not be directly observed until 2019, but were commonly identified by the effects they have on nearby stellar objects, such as the bending of light known as "gravitational lensing". Larger black holes, such as Sgr A*, could also be indirectly observed through the paths of stars travelling around them.

Fig. 1: Animation of stars orbiting around Sgr A*.

Black holes are generally active in nature and usually have glowing surroundings (known as accretion disks) against which they can be observed as silhouettes. These silhouettes are about 2.5 times the actual size of the black hole, which makes imaging one sound like a simple feat. However, no single telescope yet built has enough resolution to discern a black hole from its surroundings. Because of this, we were limited to artists' renditions of what black holes may look like until the day the Event Horizon Telescope released its first image of the black hole known as M87*, located ~55 million light-years from Earth with a mass of ~6.5 billion Suns.

Fig. 2: Image of M87* obtained by the EHT.

The EHT is not one single telescope but an international collaboration of 60 institutions in over 20 regions of the world. The EHT uses a technique known as Very Long Baseline Interferometry (VLBI), in which the institutions each contribute telescopes that are synchronised as an array focused on the same object at the same time, acting together like a telescope as large as the Earth itself. In such a VLBI set-up, the effective aperture of this "virtual" telescope is the distance between the two farthest telescopes, which in the case of the EHT is the distance between Antarctica and Spain. This baseline is almost the same as the diameter of the planet, effectively giving a telescope with a planet-sized aperture, which allows the EHT to image black holes with a relatively large apparent size (the size of the black hole in the night sky as seen from Earth) [1]. The only black holes that can realistically be imaged by the EHT are supermassive black holes, such as those found at the centers of most galaxies.
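The resolving power follows from the diffraction limit, theta ~ lambda / D, with D the longest baseline rather than any single dish's diameter. A sketch with the EHT's 1.3 mm observing wavelength and a roughly Earth-sized baseline (the numbers are approximate):

```python
# Diffraction-limited angular resolution of an interferometer:
# theta ~ lambda / D, where D is the longest baseline.
import math

wavelength = 1.3e-3  # observing wavelength, m (EHT observes at ~1.3 mm)
D = 1.2e7            # longest baseline, m (roughly an Earth diameter)

theta_rad = wavelength / D
theta_uas = theta_rad * (180 / math.pi) * 3600 * 1e6  # radians -> microarcseconds
print(f"resolution: ~{theta_uas:.0f} microarcseconds")
# M87*'s ring spans ~42 microarcseconds, so an Earth-sized baseline is
# just big enough to resolve it.
```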

These telescopes, while not physically connected, work by taking images of the same object timed using hydrogen-maser atomic clocks, which precisely timestamp the data. Weather forecasts were used to schedule captures, specifically over ranges of days with largely clear skies at as many sites as possible. Once a suitable range of days was determined, the telescopes across the EHT took images of the object over the course of several days. Each telescope produced data at roughly 350 TB/day during the capture of M87*, stored on high-performance helium-filled drives. At the end of the data-capturing period, all of this data was sent to supercomputers to be combined into one overall image. [2]

Fig. 3: The locations of the telescopes in the EHT for the capture of M87*.

M87* was the first of the EHT's two targets, being a very active supermassive black hole. It is also one of the largest black holes in terms of apparent size from Earth, making it an ideal target. Its active state also made it interesting to image, as matter is actively falling into it and spewing out as jets of particles. After the imaging of M87*, the EHT's second target was Sgr A*, which sits in a considerably noisier environment at the center of our own galaxy. The same procedure was used as for M87*, with more institutions having joined the EHT since, and the final result was released on 12 May 2022.

Fig. 4: Compiled image of Sgr A* by EHT.

The images obtained of Sgr A* could be split into four clusters of similar features, with their averages shown below the main Sgr A* image. Three of the four clusters show ring-like features, with different parts of the ring brighter than others. The last cluster also contained images that fit the data but were not ring-like. The bars associated with each cluster show what proportion of the obtained images fell into it, with thousands of images in the first three clusters and only hundreds in the fourth.

 

Information References:

  1. Lutz, O., 2019. How Scientists Captured the First Image of a Black Hole. [online] Jet Propulsion Laboratory. Available at: <https://www.jpl.nasa.gov/edu/news/2019/4/19/how-scientists-captured-the-first-image-of-a-black-hole/> [Accessed 12 May 2022].
  2. Eventhorizontelescope.org. 2019. Press Release (April 10, 2019): Astronomers Capture First Image of a Black Hole. [online] Available at: <https://eventhorizontelescope.org/press-release-april-10-2019-astronomers-capture-first-image-black-hole> [Accessed 12 May 2022].
  3. Eventhorizontelescope.org. 2022. Astronomers reveal first image of the black hole at the heart of our galaxy. [online] Available at: <https://eventhorizontelescope.org/blog/astronomers-reveal-first-image-black-hole-heart-our-galaxy> [Accessed 12 May 2022].

Image References:

  1. Youtube.com. 2015. Animation of the Stellar Orbits around the Galactic Center. [online] Available at: <https://www.youtube.com/watch?v=tMax0KgyZZU> [Accessed 12 May 2022].
  2. Eventhorizontelescope.org. 2019. Press Release (April 10, 2019): Astronomers Capture First Image of a Black Hole. [online] Available at: <https://eventhorizontelescope.org/press-release-april-10-2019-astronomers-capture-first-image-black-hole> [Accessed 12 May 2022].
  3. ESO.org. 2019. Locations of the EHT Telescopes. [online] Available at: <https://www.eso.org/public/images/eso1907p/> [Accessed 12 May 2022].
  4. Eventhorizontelescope.org. 2022. Astronomers reveal first image of the black hole at the heart of our galaxy. [online] Available at: <https://eventhorizontelescope.org/blog/astronomers-reveal-first-image-black-hole-heart-our-galaxy> [Accessed 12 May 2022].

When observing the Universe from our scaled-down perspective, the distribution of galaxies seems random and sporadic, with no clear pattern or structure. It's only when we zoom out and look at the Universe on a larger scale that its structure begins to reveal itself. This structure, just like the structure of stars and planets, arises primarily from the gravitational force. Once galaxies form, they clump into clusters or even superclusters. This arrangement of the Universe mimics a spider web or foam-like composition, also known as 'the cosmic web', and is comprised of filaments and voids.

The branches of galactic density in the cosmic web are known as galactic filaments and are the largest known structures in the Universe. They are comprised of walls of superclusters and can be as large as 80 Mpc. Filaments form the borders between voids, the vast open spaces between them. Voids were first discovered in the 1970s by means of redshift surveys of galaxies. Their sizes vary from 10 to 100 Mpc, and they make up most of the volume of the Universe, roughly 80%. Voids are defined as regions of space containing very few galaxies, distributed far from one another. If voids are large enough, they can even be dubbed supervoids. The largest known void is the Boötes void, discovered by Robert Kirshner et al., with a diameter of 0.27% of the observable Universe.

Figure 1: simulation of the cosmic web.[1]

In this figure, the blue threads represent filaments, and the vacant spaces represent voids.

The dominant theory of void formation is that they were created by means of baryon acoustic oscillations (BAO) in the early Universe. BAO can be described as fluctuations, quantum in origin, in the density of baryonic matter, also known as visible matter. It is believed that in the early Universe, fluctuations in density resulted in increased concentrations of dark matter. Baryonic matter was then attracted to these concentrations by gravity and formed stars and galaxies. This resulted in areas of high density becoming denser and areas of low density becoming even less dense. Thus, filaments trace areas of high dark matter density while voids are areas of low dark matter density. From this we can postulate that dark matter dictates the structure of the Universe at the largest scales.

Voids are often overlooked as mere empty space, but they are a key component in understanding the expansion of the Universe and dark energy. Studies of supervoids suggest that roughly 70% of the energy in the Universe must consist of dark energy. This number is consistent with the estimate of 68.3% obtained in 2013 from observations made by the Planck spacecraft, and thus consistent with the Lambda-CDM model. Voids are extremely sensitive to changes in cosmological parameters: the shape of a void reflects the expansion of the Universe and is partly governed by dark energy. By studying the shapes of voids over time, we can come one step closer to modelling an equation of state for dark energy.

Image credit:

  1. NASA, ESA, and E. Hallman (University of Colorado, Boulder)

Figure 1: The terrestrial planets of our solar system: Mercury, Venus, Earth, and Mars [5]

Planetary science is a relatively new sub-field of astrophysics that is devoted to studying the nature of planetary formations both in and outside of our solar system. This field employs techniques across many disciplines, namely physics and geophysics. The beauty of planetary sciences is that one can reasonably assume all terrestrial bodies evolve similarly, so studying visible features and characteristics of other planets/moons leads scientists to a greater understanding of the hidden or past features of our planet. One such feature is heat-pipe cooling.

In 2017, a new way of understanding the cooling and heat transfer of terrestrial planets was proposed by a team of scientists from NASA and Louisiana State University[1].

Figure 2: Image of Jupiter's moon Io, showing its surface eruptions. [6]

The theory was born from observations of Jupiter's tidally heated moon Io, shown in figure 2. Heat-pipe cooling was developed to explain why Io has such a thick lithosphere, one able to support the numerous mountains and calderas that result from its volcanism. If the lithosphere were not thick enough, any mountain formed on the moon's surface would collapse under the stress. Based on these observations, scientists concluded that our solar system's terrestrial planets evolved in a manner consistent with heat-pipe cooling. In this way, the theory provides an explanation for Earth's surface volcanic materials, its thick lithosphere, and its mountains.

Heat-pipe cooling/tectonics is a method of cooling for terrestrial planets wherein the main heat-transport mechanism on the planet is volcanism originating below the lithosphere, shown in the top of figure 3 (stagnant-lid convection, shown below it, is discussed later)[4]. Melted rock and volatile materials are moved from the liquid mantle through the lithosphere via vents and volcanic eruptions. These eruptions lead to global resurfacing of the planet, by which older layers are buried and pushed down to form the thick, cooler lithospheres that contain the tectonic plates we are all familiar with.

Figure 3: Modeled lithospheric thickness for heat-pipe and stagnant-lid planets.[7]

Since these first observations, scientists have hypothesized that this method of cooling has been involved in the evolution of all terrestrial planets, including Earth. They went a step further to say that heat-pipe cooling is the last significant endogenic (occurring below the surface of the planet) resurfacing process experienced by terrestrial bodies, and as such preserves information from this period of their formation, such as magnetic fields and gravitational anomalies[4]. The time taken for a planet to cool via this method is directly related to its size, so larger terrestrial planets in other solar systems may still be in heat-pipe cooling mode[4]. The significance is that observing larger terrestrial planets still in their heat-pipe phase may lead to a greater understanding of the role this process played in the formation of the Earth we know today. Unfortunately, all of the terrestrial bodies in our solar system, including the Moon, show evidence of heat-pipe cooling in their past but are no longer actively undergoing the process.

The hallmark of heat-pipe cooling is the resulting strong lithosphere, in addition to the constant resurfacing of the body by persistent volcanic activity. The implications of heat pipes for the tectonic history of terrestrial planets are shown in figure 3 above. Planets that evolve through a heat-pipe phase develop a thick lithosphere early in their history, which subsequently thins as volcanism wanes and thickens again as stagnant-lid convection takes over. In stagnant-lid convection, the surface of a terrestrial planet has no active plates and is instead locked into one giant plate, and the surface material does not experience subduction[3]. Currently, Earth does have active plates, as evidenced by our abundant seismic activity, but this form of convection will eventually become dominant and the lithosphere will no longer be recycled. At that stage, whatever condition the Earth's surface is in will be preserved for extraterrestrials to view and study, similar to how we study other planets.

References:

[1]https://www.nasa.gov/press-release/scientists-propose-new-concept-of-terrestrial-planet-formation

[2]https://agupubs.onlinelibrary.wiley.com/doi/10.1029/JB094iB03p02779

[3]https://www.ucl.ac.uk/seismin/explore/convection-seismology.html

[4]https://reader.elsevier.com/reader/sd/pii/S0012821X17303242?token=73C931FE15DBD35C37DA2C96C433469E88F52DECB1E47D5F682C25DE4B7BE3D4150B372850444879131FDD450CDBD971&originRegion=us-east-1&originCreation=20220512121740

Image sources:

[5]https://solarsystem.nasa.gov/resources/687/terrestrial-planet-sizes/

[6]https://solarsystem.nasa.gov/resources/1039/galileo-sees-io-erupt/

[7]https://www.sciencedirect.com/science/article/pii/S0012821X17303242