Atoms, the building blocks which make up our universe, are made up of three important particles: the proton, the neutron and the electron. You may recognise this image from The Big Bang Theory.

The protons are the particles in red and carry a positive electric charge; the neutrons, in blue, carry no charge. Together they make up the nucleus. The grey particles are the electrons, which carry a negative charge and are attracted to the protons, much like the Moon is attracted to the Earth. They execute an orbital motion around the nucleus.

When a very high energy ray of light comes near an atom, it has a chance to spontaneously disappear and leave behind an electron and a positron (the antimatter counterpart of the electron: like an electron, except with a positive charge). This is called pair production. At first glance this is a very strange prospect, that a ray of light can spontaneously turn into matter and antimatter. But the process obeys all of the laws of physics (obviously, because it happens), i.e. conservation of momentum, conservation of charge and conservation of energy. Conservation of energy is satisfied once you take into account the rest energy of the electron and the positron.

Einstein figured out that matter is a form of bundled-up energy, described by the famous equation E = mc², which says that the rest energy of an electron or a positron is equal to its mass multiplied by the speed of light squared. So if the energy of the light ray is more than twice mc², with m being the mass of the electron or positron (both have the same mass), then conservation of energy can be satisfied.
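To put a number on that threshold, here is a quick back-of-the-envelope check (a minimal sketch in Python, using standard values for the electron mass and the speed of light):

```python
# Back-of-the-envelope check of the pair-production threshold, 2*m_e*c^2.
m_e = 9.109e-31      # electron (and positron) rest mass in kg
c = 2.998e8          # speed of light in m/s
eV = 1.602e-19       # joules per electronvolt

rest_energy = m_e * c**2       # E = mc^2 for one electron
threshold = 2 * rest_energy    # the photon must supply both rest energies

print(f"Electron rest energy: {rest_energy / (1e6 * eV):.3f} MeV")       # ~0.511 MeV
print(f"Pair-production threshold: {threshold / (1e6 * eV):.3f} MeV")    # ~1.022 MeV
```

Any gamma ray below roughly 1.022 MeV simply cannot produce an electron-positron pair, no matter what atom it passes.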

The figure above shows the light ray coming from the left near the atom, and transforming into the electron positron pair on the right. So it obeys the laws of physics, but why does it happen? And why does it have to happen near an atom?

Well, it has to happen near an atom because of a subtle interaction between the atom and the light ray. Every atom produces electromagnetic fields, which the ray interacts with. The result of this interaction is probabilistic, with the probability determined by weighing up all of the possible outcomes. It turns out, through the study of a field called quantum electrodynamics, that the probability of pair production grows with the energy of the light ray (the higher the energy, the more likely) and with the square of the number of protons in the nucleus (again, the more there are, the more likely). This interaction with the atom through its electromagnetic field brings the atom into the equations for the conservation of energy and momentum, which is why the atom is seen in the figure to carry extra momentum after the interaction.
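Just to illustrate the "square of the number of protons" part of that statement, and nothing else about the real QED cross-section, here is a toy comparison between a few arbitrarily chosen nuclei:

```python
# Toy illustration of the Z^2 scaling mentioned above: per atom, and with all
# other factors held equal, the pair-production probability grows with the
# square of the number of protons Z. This ignores everything else in the
# real quantum-electrodynamics cross-section.
def relative_likelihood(Z):
    return Z**2

for name, Z in [("carbon", 6), ("iron", 26), ("lead", 82)]:
    print(f"{name}: {relative_likelihood(Z) / relative_likelihood(6):.0f}x carbon")
# Lead comes out roughly (82/6)^2, i.e. about 190 times more likely per atom than carbon.
```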

Pair production can also happen in reverse, in a process called pair annihilation. Imagine running the process in the figure above backwards in time: the electron and the positron run into each other near an atom and annihilate, creating a gamma ray, with all of the various conservation laws again being obeyed.

References:

Image 1: Big Bang Theory: Why Leonard & Sheldon Spent Exactly 139.5 Hours Rebuilding the Model, https://screenrant.com/big-bang-theory-leonard-sheldon-139-hours-model-why/ , accessed May 2020

Image 2: Conversion of energy into mass, https://www.jick.net/~jess/hr/skept/EMC2/node9.html , accessed May 2020

TIME CRYSTALS!!! A name and concept that seem to come straight out of a science fiction or fantasy novel. But are they as fictional as we expect them to be?

A time crystal is a novel phase of matter. It is a system which oscillates repeatedly from one stable ground state to another without absorbing or “burning” any energy in the process. Despite being a constantly evolving system, a time crystal is perfectly stable. Analogous to regular crystals in space, which break spatial translational symmetry, time crystals spontaneously break time-translational symmetry – the usual rule that a stable system will remain the same throughout time. No work is carried out by such systems and no usable energy can be extracted from them, so finding them would not violate the well-established principles of thermodynamics.

In 2012, the Nobel laureate in physics Frank Wilczek proposed the existence of the titular time crystals. Wilczek envisioned a diamond-like multi-part object which breaks time-translation symmetry in its equilibrium state, moving in periodic, continuous motion and eventually returning to its initial state. However, this model turned out to be impossible, as the laws of thermodynamics dictate that, in order to minimise their energy, quantum particles in the thermodynamic limit prefer to stop rather than to move. Scientists had to come up with other, slightly different models to make the creation of time crystals more plausible.

“If you think about crystals in space, it’s very natural also to think about the classification of crystalline behavior in time,”

– Frank Wilczek

Over the past few years, researchers have developed various methods and approaches to create systems which very closely resemble the theorised time crystals. Such systems require some “ingredients”, or specific techniques, to be constructed. Consider a one-dimensional chain of spins. First, particles such as electrons are prepared in a polarised spin state. Naturally, these particles will try to settle into an arrangement which minimises their energy. However, random destructive interference will trap them in higher-energy configurations. Our system now exhibits many-body localisation. These many-body localised systems exhibit a very special kind of order: if you flip the spin orientation of each particle, you create another stable many-body localised state.

If you act on our system with a periodic driver, such as a very specific laser, you will find that the spin orientations flip back and forth, repeatedly and indefinitely moving between two many-body localised states. It is important to note that the particles do not heat up or absorb any net energy from the driving laser. By definition, our system has formed a time crystal.
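As a cartoon of that back-and-forth, and only of the back-and-forth (the many-body localisation that makes it robust is deliberately not modelled here), here is a minimal sketch in which an idealised drive pulse flips every spin once per period, so any starting pattern returns after exactly two periods:

```python
import numpy as np

# Cartoon of the driven spin chain: each drive period an ideal pulse flips
# every spin, so the chain cycles between two configurations with a period
# of exactly two drive cycles. Real experiments rely on many-body
# localisation to make this robust against imperfect pulses; that physics
# is not modelled here.
rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=10)   # a random initial spin pattern

state = spins.copy()
for t in range(6):                     # six drive periods
    print(t, state)
    state = -state                     # the "laser" flips every spin

# The printout shows the initial pattern reappearing every second period,
# the period-doubling signature discussed above.
```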

In 2021, a new development in the fields of quantum computing and theoretical condensed matter physics made the headlines. Researchers at Google, together with physicists at Stanford, Princeton and other universities, were able to demonstrate the existence of such time crystals using Google’s revolutionary quantum computers.

Quantum computers operate on qubits – controllable quantum particles. The controllable aspects of the qubits prove to be especially useful in creating a time crystal. We can randomise the interaction strengths between the qubits, creating the necessary destructive interference between them, which in turn, allows us to achieve the many-body localisation. In this experiment, microwave lasers act as our periodic drivers, flipping the spins of the qubits. By running thousands of such demonstrations for various initial configurations, the researchers were able to observe that the spins were flipping back and forth between two many-body localised states. During these processes the particles never absorbed or dissipated any energy from the microwave laser, keeping the entropy of the system unchanged. They were able to create an extremely stable time crystal within a quantum computer.

“Something that’s as stable as this is unusual, and special things become useful,”

–  Roderich Moessner, director of the Max Planck Institute for the Physics of Complex Systems in Dresden, Germany and co-author of the Google quantum computer time crystal paper.

Time crystals have the potential to finally allow us to take our condensed matter research into the fourth dimension. They may help us create a whole new generation of novel devices and technologies. Their applications might include serving as a new technique for more precise timekeeping, simulating ground states in quantum computing schemes and even being implemented as a robust method of storing memory in quantum computers. However, due to the exotic nature of these systems and our poor knowledge of their physics, it might be a while until we are able to grasp time itself in the palm of our hands.

References:

  • Classical Time Crystals, A. Shapere and F. Wilczek, Phys. Rev. Lett. 109, 160402 (2012), https://link.aps.org/doi/10.1103/PhysRevLett.109.160402
  • Eternal Change for No Energy: A Time Crystal Finally Made Real, Natalie Wolchover, https://www.quantamagazine.org/first-time-crystal-built-using-googles-quantum-computer-20210730/
  • Time crystals enter the real world of condensed matter, P. Hannaford and K. Sacha, https://physicsworld.com/a/time-crystals-enter-the-real-world-of-condensed-matter/
  • Viewpoint: Crystals of Time, Jakub Zakrzewski, https://archive.ph/20170202102150/http://physics.aps.org/articles/v5/116#selection-625.0-625.16
  • How to Create a Time Crystal, Phil Richerme, https://physics.aps.org/articles/v10/5#c2
  • Physicists Create World’s First Time Crystal, https://www.technologyreview.com/2016/10/04/157185/physicists-create-worlds-first-time-crystal/

Quantum mechanics is regarded as one of the crowning achievements of 20th century physics, with its predictions backed up by countless experiments in the decades since its formulation in the 1920s. Along with its success as a physical theory for all things microscopic, it has also garnered notoriety in the mainstream for its perceived complicated and abstract subject matter and its reputation for being impenetrable to any layman. The reason for this singling out of quantum mechanics from the great canon of physical theories is its unique philosophical position in relation to the physics it describes, or better put: it’s not necessarily what quantum mechanics tells us, rather how we interpret what we are told.

To begin this discussion of interpretations of quantum mechanics, it is best to start with the most widely accepted and universally taught interpretation, the probabilistic interpretation. This is the belief that quantum mechanics doesn’t tell us what has happened in a given system but, more precisely, how likely each outcome is to happen. Within this framework we can imagine that every possible result of a measurement of a quantum system has assigned to it a particular probability, which represents how likely one is to find the system in that arrangement when measured.

To clarify this idea we can consider a widely understood concept: Pokémon cards! We know that the only way to ‘measure’ which Pokémon we have is to remove the card from the pack and ‘observe’ it. Suppose we know beforehand that there are only three types of Pokémon available – let’s say fire, water and electric – and we have heard from others who have bought the same cards that 20% of people get fire cards, 50% get water cards and 30% get electric cards. We can then describe our card, before we open it, as follows:

(card type) = 0.2 (fire) + 0.5 (water) + 0.3 (electric).

All information about the card is represented in this statement (i.e. the type of card and the probability of getting it). We also know that after we open the card and ‘observe’ it we will only possess a single card belonging to only a single card type and it will not change. Therefore, (assuming we got a fire card) after it has been opened the card can be described as:

(card type) = 1.0 (fire)

This change in the description of the card is fundamental to this interpretation of quantum mechanics and is said to take place instantly upon measurement.
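For the sake of illustration, here is the card analogy written out as a little sketch (purely a toy of the analogy above, not of the real quantum formalism): an outcome is drawn with the stated probabilities and the description then ‘collapses’ onto it.

```python
import random

# The card analogy from above: before "measurement" the description is a
# list of outcomes with probabilities; after opening the pack it collapses
# to a single outcome with probability 1.
description = {"fire": 0.2, "water": 0.5, "electric": 0.3}

outcome = random.choices(list(description), weights=description.values())[0]
collapsed = {card_type: (1.0 if card_type == outcome else 0.0)
             for card_type in description}

print("before opening:", description)
print("after opening: ", collapsed)
```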

The many worlds interpretation of quantum mechanics is an alternative interpretation which proposes a different view of the ‘collapse’ of the description of quantum systems. It states that every possible result of an observation is realised in its own universe. Using this point of view, at the moment the card is opened we can imagine the arrow of time branching into three distinct paths, with a different type of card obtained in each new path.

This interpretation was formulated by Hugh Everett in the 1950s and developed and popularised by physicists such as Bryce DeWitt in the 1970s, and it aimed to resolve some of the paradoxes of quantum mechanics, the most famous of which being Schrödinger’s cat. While sounding like nothing more than a purely science fiction concept, it remains a very real and respected academic hypothesis to this day.

At this stage Albert Einstein is a household name across the globe, his name being synonymous with the word ‘genius’. His theories and thought experiments have had an immense impact on our understanding of physics, and he seemed able to imagine ideas that no one else possibly could. This post tells the story of how, in 1929, Einstein retracted one of his theories – calling it the “biggest blunder” of his life.

Einstein had included in his equations of gravity what he called ‘the cosmological constant’, a constant represented by a capital lambda (Λ), which allowed him to describe a static universe. This model of the universe complied with the generally accepted view at the time, in 1917, that the universe was indeed stationary.

Then, in 1929, Edwin Hubble (for whom the Hubble telescope is named) presented convincing evidence that the universe is in fact expanding. This caused Einstein to abandon his cosmological constant (i.e. to presume its value to be zero), believing it to be a mistake.

But that wasn’t the end of the story. Years went by and physicists repeatedly inserted, removed and reinserted lambda into the equations describing the universe, unable to decide whether or not it was necessary. Finally, in 1997/8, two teams of astronomers, one led by Saul Perlmutter, published papers outlining the need for Einstein’s cosmological constant.

Through their analysis of the most distant supernovae ever observed – one of which was SN1997ap – and their redshifts, they had reached the conclusion that the distant supernovae were roughly fifteen percent farther away than where the prior models placed them. This could only mean that the expansion of the universe is accelerating. The only known thing that ‘naturally’ accounts for this acceleration was Einstein’s lambda, and so it was reinserted into Einstein’s equations one last time. The equations now matched the observed state of the universe.

So while Einstein’s initial use for the cosmological constant was incorrect, it proved vital to forming an accurate picture of our world. The great theorist had once again foreseen a factor no one else could – this time a good 70 years before anyone, including himself, was able to prove it.

As a rule, the universe tends towards disorder. It can seem like a rather depressing fact to some, but no matter how concerted and deliberate you try to be, physics guarantees that your actions will always act to increase the overall amount of disorder in the world. Want to have a spoon of sugar in your tea? You’ve just ruined your sweetener’s fine crystal structure by letting it dissolve. Take it without sugar? In boiling the kettle you’ve already set the water molecules in your drink into ever faster and disordered motion just by heating them up. There’s no stopping it. This universal law is codified physically in the second law of thermodynamics, which dictates that after carrying out any irreversible process (irreversible in the sense that you cannot stir the sugar out of your tea), entropy, a measure of disorder, must necessarily have increased. 

 

Beyond the depression, at first this principle can seem somewhat elusive. Why does Nature decide things must be messied? The answer lies in probability. Take again the example of our cup of tea and sugar. Each sugar molecule, given the chance, can move relatively freely through the tea. They’ll bump into a water molecule here or there, another sugar molecule, or potentially a caffeine molecule (should you not take decaf), but on average, over time, they get around the entire cup. If you consider the probability of different arrangements of the sugar molecules, you can see that an unmixing of a spoon of sugar is incredibly unlikely. For this to happen, we’d need every sugar molecule from all around the cup to conspire to stick back to our spoon all at once, meanwhile enough of the water molecules would have to decide to get out of the way to make room for our spoonful (presuming your sugar was dry to begin with). The odds of this happening are staggeringly small. They’re so astronomically small, in fact, that in principle we can say it’ll essentially never happen, even if we stood and stared diligently at our cup for a few billion years. The second law of thermodynamics, seen in this light, is simply a statement that Nature does the most probable: there are just far more ways for things to be disordered than ordered.

 

Now, in 1867, James Clerk Maxwell, feeling rather devious, proposed a simple thought experiment regarding entropy that went without a complete solution by physicists for some 115 years. What Maxwell tried to do was concoct a system whose entropy would decrease rather than increase over time, thus becoming more ordered and flying in the face of the second law of thermodynamics. To do this he imagined a container split in two. One side of the container, say the right, is completely empty to start, the other is full of gas, and the two are separated by an impenetrable wall with a small tap to let gas flow between the containers. First, we open the tap and let things progress according to the laws of physics. Predictably, the gas spreads out between the two containers, diffusing out from the left portion into the right until everything settles down into equilibrium, the gas pressure now having gone down due to its expansion. This so far is completely ordinary. If we run and check that nothing has broken yet, we’ll be assured that everything is in order. The system has undergone an irreversible process (the gas won’t magically saunter back into its original place on one side of the container), and so it should be more disordered than before, with a higher entropy. This we find to be true, since in having spread out the positions of all the molecules making up the gas are now inherently more random. The second law remains law and we have no problems so far.
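To get a feel for just how lopsided those probabilities are, here is a rough estimate under the simplifying assumption that each molecule is independently equally likely to sit in either half of the container:

```python
import math

# Rough estimate: if each molecule is equally likely to be found in either
# half of the container, the chance that all N of them happen to be in the
# left half at once is (1/2)^N.
for N in [10, 100, 6.022e23]:          # a handful, a hundred, a mole of molecules
    log10_probability = N * math.log10(0.5)
    print(f"N = {N:g}: probability ~ 10^{log10_probability:.3g}")
# Even for 100 molecules the odds are about 1 in 10^30; for a mole of gas
# they are beyond astronomically small.
```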

 

Now comes the demon. Maxwell allows for a tiny creature, you can imagine them with horns if you like, to open and close the tap at will. This demon, conspiring to annoy physicists, decides it rather liked things the way they were before. It decides that should it see a particle moving to the left from the right side of the container it will open the tap and let it into the left side, whereas if it sees a particle moving to the right from the left side it’ll refuse to budge the tap. Over time, this leads to a gradual return to our original situation; all the gas is back on the left, whereas there’s nothing but vacuum on the right. Thus our demon, through a slight bit of trickery wiggling a tap open and closed, seems to have restored order and reversed entropy! Even more startling still is that if we had placed a small turbine at the tap that spins as gas flows between the containers, we could have harvested some of the energy from our experiment and, since we’ve returned everything back to its starting position, there’s no stopping us doing that again, and again, and again. We’d get more energy out of the system each time, and a free lunch.

 

There’s no such thing of course. The solution to the problem of Maxwell’s demon lies not in trying to manufacture a replica in a lab and generate infinite power using little devilish creatures and boxes of gas; rather it lies in trying to prove that either no such demon could exist, or that in as much as one could, it doesn’t break the laws of physics. One idea first tried out on the problem was that the demon would necessarily have to actually be able to see the gas molecules in order to know when to open the tap, and to do this it’d have to shoot light at them, using up energy and thus squashing our hopes of a sustainable future powered by Satan. Equally, this light must reflect off the molecules, jostling them about and thus increasing the randomness of their motion. This solution falls through, however, since the demon, outwitting us, can always decide to use lower and lower intensity light to sense the particles, and thus use an absolute minimum of energy, eking out an advantage and still breaking physics (this might mean they’re worse at seeing the particles, but they’ve got time and can miss a few so long as overall gas only ever travels one way through the tap).

 

The beginnings of a solution to the problem interestingly came from the work of pioneering computer scientist Claude Shannon. In 1948, Shannon showed that the information content of any message could be directly quantified mathematically by what he called information entropy. The more information a message holds, the higher its information entropy. On the face of it Shannon’s notion of information entropy seems quite some distance from what physicists mean when they talk about entropy. What’s the connection between the amount of information stored in a text message and how disordered my cup of tea is? The answer lies in a distinction between the measurements we physicists normally take of systems and measurements of their exact states. Normally, we don’t measure where every single gas molecule is in a system; in fact normally we can’t possibly do this. To store all the position data for just one gram of hydrogen gas at one instant in time would require something like a quadrillion gigabytes of storage. Every bit of data storage on the planet currently amounts to less than 3% of that, and that’s just for a snapshot of the particles at an instant. We’d be in an even worse state trying to measure how the particles move over time. What we normally measure in a lab are things like pressure, temperature, or volume, which all come from the behaviour of large numbers of particles rather than individual ones, and so require much less data to store. Physicists say that what we typically measure is the macrostate of systems, or how they appear on a macroscopic scale, rather than their microstate, or how they appear down to the level of atoms and molecules. In general, many different microstates can correspond to the same macrostate, since they give the same pressure, temperature, etc., just with the particles moved around a bit. Information entropy, when applied to a gas in a box, is a measurement of how much additional information we’d need to figure out the exact microstate our gas is in (where all the molecules are, how fast they’re moving, etc.) once we know what macrostate it’s in. Thus, if we know all the particles are on the left side of the box, we’d need less information to figure out exactly where they are, and thus the system has lower entropy than if they were spread out over the entire box.
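As a tiny illustration of the idea (a sketch only; the conceptual bridge to thermodynamic entropy is the point above, not this formula on its own), Shannon’s entropy of a probability distribution is H = -Σ p·log₂(p), and for W equally likely possibilities it reduces to log₂(W) bits:

```python
import math

def shannon_entropy(probabilities):
    """Shannon entropy H = -sum(p * log2(p)), in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# For W equally likely microstates the entropy is log2(W) bits:
W = 8
uniform = [1 / W] * W
print(shannon_entropy(uniform), math.log2(W))   # both 3.0 bits

# A macrostate compatible with fewer microstates needs fewer bits to pin
# down the exact microstate, i.e. it has lower entropy:
print(shannon_entropy([1 / 4] * 4))             # 2.0 bits
```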

 

Information entropy alone doesn’t solve the problem, however. The second step was made by physicist Rolf Landauer who, in 1961, showed that theoretically there is a lower limit to the energy efficiency of any form of computation, in what’s known as Landauer’s principle. What Landauer proved is that if a computer is to perform a calculation, whether that be adding two numbers, running a line of code, or even something as simple as storing or deleting data, there is a quantifiable minimal energy cost in doing so, and a corresponding entropy increase for having gone to the bother. This entropy increase comes from the fact that, in using up some power, whatever circuit we run our computations on must heat up, and thus either it or its surroundings must become more disordered, since this heat causes its molecules to jiggle about in more random motion.
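To put a number on that lower limit: erasing a single bit at temperature T costs at least k_B·T·ln 2 of energy. A quick sketch at room temperature:

```python
import math

# Landauer's bound: the minimum energy cost of erasing one bit at temperature T.
k_B = 1.381e-23       # Boltzmann constant in J/K
T = 300               # room temperature in kelvin

landauer_limit = k_B * T * math.log(2)
print(f"{landauer_limit:.2e} J per erased bit")   # ~2.9e-21 J
```

The number is minuscule, but crucially it is not zero, and that is exactly the loophole-closer the demon argument below relies on.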

 

With all of this at hand, it still took another 21 years for physicists to finally come up with a solution to Maxwell’s demon when, in 1982, Charles Bennett finally figured it out. Bennett’s stroke of genius was to reduce the demon down to its core function: an information-processing machine connected to a tap. The tap connection isn’t too important, since we can always in principle minimise its energy consumption and corresponding entropy increase, and so the solution comes directly from considering the information processing (the problem isn’t in the demon’s body, just whatever they have between the ears). Bennett reasoned that the demon is going to need to store some data, since they need to record both where the particles near them are and how they’re moving, alongside when they have to open and close the tap to let the right ones through. He equally reasoned that the demon doesn’t have infinite storage space, and so will eventually run out of room and have to start overwriting their storage to record when next to open the tap. This overwriting, however, must take at least the theoretical minimum of energy given by Landauer’s principle, and this must come with a corresponding minimum increase in entropy. This entropy increase, when calculated, will always exceed the decrease the demon gets out of playing with the tap, and thus the demon is thwarted and the second law persists.

 

Maxwell’s demon is one of the many striking examples in physics of the power of a simple thought experiment, and particularly of how their solutions, sometimes taking years, can draw results from seemingly unrelated fields of study into a unified whole. Physicists from Galileo to Newton to Einstein have all pondered mock setups like this in their minds, daydreaming about lobbing cannonballs into orbit, jumping in elevators, or poisoning cats, and from these imaginings have sprung some of the greatest developments in the history of science and physics, alongside ever deeper insights into the minutiae of the Universe. The whole process of carrying out thought experiments, or Gedankenexperimente as Einstein liked to call them, runs down to the core of what physicists do: simplify the world to its most fundamental parts, play with them, learn something, then build it back up again.

When it comes to science and the supernatural, the common consensus is that the two areas are polar opposites and will never meet. Yet the field of quantum mechanics, which is approaching the ripe old age of 100 years, continues to provoke a great deal of confusion within the scientific community as to what it is that makes such an utterly baffling theory possible.

Quantum in a Nutshell

Quantum mechanics is, at its core, an explanation of why particles act the way they do. In classical mechanics a wave will always act like a wave; that is, it will propagate with some frequency and corresponding wavelength (light, for instance, travelling at the speed of light) and will experience effects such as refraction, reflection and diffraction. Similarly, a particle in classical mechanics will always act like a particle: a solid mass with a momentum. Quantum mechanics is used to explain the motion of objects that are so small they do not act purely like particles or waves, but rather as both.

Now you may be thinking “big deal, particle go brrr”, but I can assure you it gets weirder. In order to illustrate why the scientific community was and remains so perplexed by this field, we must first observe the results of the double slit experiment, the classic illustration of wave-particle duality.

The Double-Slit Experiment

First performed by Thomas Young in 1802, the double slit experiment, as the name suggests, uses two slits in the surface of a solid material to create an interference pattern from an incident beam of light. This experiment was revolutionary in observing the physical properties of wave motion.

Figure 1: Young’s Double Slit Experiment. Credit: [1] eiu.edu
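For the curious, the idealised pattern behind Figure 1 can be written down directly: for two very narrow slits a distance d apart, the relative brightness at an angle θ from the centre goes as cos²(πd·sinθ/λ). A small sketch follows; the wavelength and slit spacing below are made-up illustrative values:

```python
import math

wavelength = 600e-9    # illustrative wavelength of 600 nm (orange light)
d = 10e-6              # illustrative slit separation of 10 micrometres

def relative_intensity(theta):
    """Idealised two-slit pattern for very narrow slits: I ∝ cos^2(pi*d*sin(theta)/lambda)."""
    return math.cos(math.pi * d * math.sin(theta) / wavelength) ** 2

# Bright fringes appear where d*sin(theta) is a whole number of wavelengths:
for m in range(3):
    theta = math.asin(m * wavelength / d)
    print(f"order {m}: theta = {math.degrees(theta):.2f} deg, relative intensity = {relative_intensity(theta):.2f}")
```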

In 1927, in an experiment performed at Western Electric by Clinton Davisson and Lester Germer, it was shown that electrons could undergo diffraction and produce a diffraction pattern, supporting the hypothesis of wave-particle duality, a fundamental building block of modern-day quantum mechanics. In 1961, the double slit experiment was carried out using a beam of electrons instead of light in order to see if they would produce the same result. Sure enough, the electrons produced an interference pattern on the screen.

So the double slit experiment shows us that electrons behave like waves. But we know that electrons interact with other particles in the same way a particle would. This wave-particle duality is what gives rise to the quantum-mechanical theory. An electron may act as either a particle or a wave at any given time. The real question is: does an electron know when it has to act like one or the other? A common thought experiment details a detector being placed at the two slits so that the electron is observed going through the slit. Hypothetically, the interference pattern would not appear on the other side as before, since the electron is observed passing through the slit in the form of a particle and therefore must continue to act like a particle. Richard Feynman used this thought experiment to argue that an electron must always act like a wave in this circumstance, since the thought experiment could not possibly be performed at the impossibly small scales involved ([2] Harrison, 2006).

Varying Interpretations

Feynman’s thought experiment can cause some confusion, as it gives the impression that an electron could choose to act like a particle at the slit because it somehow knows it is being observed. As with other natural phenomena, if physics leaves something unexplained, many will jump to the belief that a higher power controls such behaviour. Science has always been used to explain natural phenomena that were previously considered to be acts of witches, gods or the supernatural.

The idea that a particle could be granted some form of consciousness by a higher power is something that would very much excite those who believe in or are searching for a god. Simply put, however, that’s not how it works. The particle does not “choose” its state; it simply is. If placed in a condition where a wave would experience a certain effect (as in the double slit experiment), it will do so and will undergo the same process as anything else with wave motion – much like a non-Newtonian fluid put under stress will alter its form to adapt to its surroundings. An electron is still a particle but simply acts like a wave sometimes.

The probabilistic nature of quantum mechanics does give rise to some theories about alternate worlds, in which fundamental particles act like particles where they would act like waves in our world. This “Many Worlds” theory is more the stuff of science fiction, as there is currently no way for it to be proven. The fact that it cannot be disproven, however, is an interesting notion that finds itself appearing more in cinemas than in the lab.

The Final Message

The scientific community’s interpretations of quantum mechanics vary from the perfectly normal to the apparently bizarre. The notion that this field of study could prove the existence of parallel dimensions seems to be pulled directly out of science fiction. The idea that a particle can choose its state of being gives rise to a plethora of philosophical and potentially religious questions.

Quantum mechanics is clearly the most vital area of physics today and the sooner we can come to an explanation that can be understood by all, the better.

 

References

[1] Dr. Doug Davis, Adventures in Physics, 20.2 Young’s Double Slit, East Illinois University. https://ux1.eiu.edu/~cfadd/3050/Adventures/chapter_20/ch20_2.htm

[2] David M Harrison, 2006, The Feynman Double Slit, Dept. of Physics, University of Toronto. https://faraday.physics.utoronto.ca/PVB/Harrison/DoubleSlit/DoubleSlit.html

A quark is defined as any member of a set of primary subatomic particles that interact via the strong force and are thought to be among the fundamental components of matter. Protons and neutrons are formed by quarks binding to one another via this strong force, much as atomic nuclei are formed by protons and neutrons combining in various proportions. Quarks are classified into six kinds, known as flavours, based on their mass and charge properties. The flavours come in three pairings: up and down, charm and strange, and top and bottom. Quarks appear to be true elementary particles, with no discernible structure and no ability to be resolved into smaller particles. However, quarks appear to invariably combine with other quarks or antiquarks (their antiparticles) to generate all hadrons – the so-called strongly interacting particles that include both baryons and mesons.

Quark (Particle)

James Joyce isn’t generally the first name that comes to mind when we think about particle physics. In 1963, when physicist Murray Gell-Mann offered a term for his hypothesised fundamental particle of matter smaller than a proton or a neutron, Joyce was not on his mind. There was no spelling for the word he pronounced “quork”, since it had never been written down.

James Joyce

According to Gell-Mann’s own account, he had a propensity for calling strange items “squeak” and “squork”, and “quork” was one of them. He stumbled upon a phrase from Joyce’s ‘Finnegans Wake‘ a few months later:

“Three quarks for Muster Mark!

Sure he has not got much of a bark

And sure any he has it’s all beside the mark.”

Joyce clearly intended quark to rhyme with Mark, bark etc. However, this didn’t sound anything like the “kwork” in Gell-Mann’s thoughts. The physicist used some imagination and recreated the statement as a request for drinks at the bar:

“Muster Mark gets three quarts!”

Pronouncing the term like “kwork” “might not be wholly irrational” with this change, notes Gell-Mann in his 1994 book ‘The Quark and the Jaguar’. “The recipe for producing a neutron or proton out of quarks is, roughly speaking, ‘Take three quarks’,” so the allusion to three seemed appropriate.

Murray Gell-Mann

The name quark comes from an old English word meaning “to croak”, and the above-quoted verses are about a bird choir mocking King Mark of Cornwall in the tale of Tristan and Iseult. However, there is a persistent myth, particularly in German-speaking parts of the world, that Joyce took it from Quark, a German word of Slavic origin that is usually translated as “cottage cheese” but is also a slang term for “trivial nonsense”. According to this folklore, he heard it at a peasant market in Freiburg during a visit to Germany.

Quark (Dairy Product)

What even is a Conductor?

A conductor is any material which allows electric charge carriers (e.g. electrons) to flow through it once a voltage is applied to it. This flow of charge is known as electric current. Commonly known conductors include metals such as gold, silver, iron and copper – and even sea water!

 

What makes it so ‘Super’?

Superconductivity is a phenomenon which occurs in certain materials when, upon being cooled below their critical temperature, their electrical resistance drops to zero and they begin to expel magnetic flux. Materials which can do this are known as superconductors.

For anyone unfamiliar with these terms, electrical resistance is the opposition to the flow of electric current through a material. One way electrical resistance arises is when a metal heats up and the atoms start vibrating a lot; this results in the nuclei getting in the path of the electrons that are trying to move past them, ultimately hindering their flow. The aim for most conductors is to have as little resistance as possible for maximum efficiency. The ejection of the magnetic field lines is a bit more abstract. In the diagram below, on the left hand side is a superconductor above its critical temperature, and on the right is the same material after it has fallen below it and taken on its superconducting properties:

As we can see, the magnetic field will no longer pass through the superconductor once it is cooled below the critical temperature. This is known as the Meissner effect, and is the reason why a superconductor will levitate when placed above a magnet.

 

The First Breakthroughs

Firstly, there is a common misconception about superconductors: that they can be accurately described classically, using the likes of Lenz’s law for a conductor with no resistance. This is not the case. Superconductivity is what is known as a ‘quantum mechanical’ property, which basically means that it’s just a few million times more difficult to understand.

Superconductivity was first observed by Heike Kamerlingh Onnes, a Dutch physicist, in 1911, but it took another fifty years before much headway was made in describing the phenomenon.

The first big breakthrough in understanding them didn’t come until 1950, with the Ginzburg-Landau theory. This theory was developed from Landau’s earlier theory of second-order phase transitions, in which superconductivity is seen as a state of matter of a material. The crux of the theory is that there exists an order parameter which can be described using what’s called a ‘wave-function’. The main property of this parameter is that it is zero for any temperature above the critical temperature but takes a positive value when the temperature decreases below it; this explains the sudden appearance of superconductivity, as opposed to it gradually becoming visible. The next breakthrough would be the biggest one yet, and would also provide the best description for most superconductors to this day.

 

The Bardeen-Cooper-Schrieffer Theory

The Bardeen-Cooper-Schrieffer (BCS) theory was conjured up by the three physicists it is named after in 1957, and was the first solid microscopic theory of superconductivity, which also won them the Nobel Prize. The premise of this theory lies within another phenomenon, the Bose-Einstein condensate (BEC).

Before getting straight into superconductors, it is important to first outline the key difference between two types of particles: bosons and fermions. Fermions obey the Pauli exclusion principle, which means that no two identical fermions can occupy the same quantum state – for example, at most two electrons in each orbital, one spin up and one spin down. Bosons, on the other hand, face no such restriction: any number of them can occupy the same energy level. As a result, at very low temperatures bosonic materials will clump together in the lowest energy level, which leads to the aforementioned BEC. This is a state in which bosons form a kind of condensate which is quite different to your standard solids, liquids and gases but is still considered a state of matter.

Now, you may be asking “Ok, so bosons can do all that fancy shmancy stuff, but aren’t we looking at electrons, which are fermions?”, and you would be absolutely right! But keeping the conductor at such low temperatures lowers the heat energy in the system, so that the positively charged nuclei of the metal (blue circles in the diagram below) won’t be vibrating much on their own, and are therefore much less likely to get in the electrons’ way. It also means that as an electron passes through, it will slightly pull the nuclei towards it, as they have opposing charges; this brings the nuclei closer together, but also closer to the next electron in line. The result is a slipstream-like effect, similar to cars on the motorway, in which neighbouring electrons flow as a pair – called a Cooper pair.

“So what’s so important about this Cooper pair nonsense?”, I can already hear you ask. Well, it turns out that when the critical temperature is reached and the Cooper pairs are formed, each pair actually acts as a boson! This allows them to clump together in the ground-state energy level and form a condensate. The main reason this is important is that, as the electrons in each Cooper pair are bound over a relatively large distance, all the new ‘bosons’ become entangled and move together as a whole. This reduces the chance of any electron colliding with any nucleus (even if one did collide, the second electron in the pair would pair up with the nearest free one and the entangled body would be almost instantaneously remade), and hence removes any resistance to the flow of the electrons! And thus we arrive at the fundamentals of the BCS theory of superconductors.

 

Types of Superconductors

Type I

A Type I superconductor is one with a single critical temperature. Here the critical temperature acts as an “on/off” switch for the superconducting properties: all magnetic flux lines within the conductor are expelled as soon as it is cooled below the critical temperature, but are present as usual above it. So once the temperature rises above it, the material becomes a regular ole’ conductor again.

Type II

A Type II superconductor can be described as having two critical temperatures, Tc1 and Tc2. When the conductor’s temperature lies between these limits it acts as a mixture of a normal conductor and a superconductor: within this band the magnetic flux lines partially penetrate the conductor, while below the lower limit they are expelled completely, and above the upper limit they pass through as if it were a normal conductor.
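To summarise the two behaviours exactly as described above (this is just the text restated as a toy lookup, not a physical model, and the temperatures passed in below are arbitrary example values):

```python
def type_one_state(T, Tc):
    """Type I: superconducting below the single critical temperature, normal above it."""
    return "superconducting (all flux expelled)" if T < Tc else "normal conductor"

def type_two_state(T, Tc1, Tc2):
    """Type II: fully superconducting below Tc1, mixed between Tc1 and Tc2, normal above Tc2."""
    if T < Tc1:
        return "superconducting (all flux expelled)"
    elif T < Tc2:
        return "mixed state (flux partially penetrates)"
    return "normal conductor"

print(type_one_state(3.0, Tc=4.2))            # example: below the single threshold
print(type_two_state(5.0, Tc1=4.0, Tc2=9.0))  # example: inside the mixed band
```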

 

What is the Importance of Superconductors Today?

First and foremost, the main reason why superconductors are an area of such high intrigue is the benefit of zero electrical resistance. Energy companies have to transport their product over great distances, and with every joule they transmit they inevitably lose money on electrical energy dissipated by resistance, with estimates of around 7.5% of energy being lost this way, according to sta.ie.
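Just to put that percentage in concrete terms (the generation figure below is an arbitrary illustrative number, not a real grid statistic):

```python
# Simple arithmetic with the ~7.5% transmission-loss figure quoted above.
generated_energy = 1000.0          # illustrative figure, in MWh
loss_fraction = 0.075              # ~7.5% lost to resistance in transmission

delivered = generated_energy * (1 - loss_fraction)
lost = generated_energy * loss_fraction
print(f"delivered: {delivered:.1f} MWh, lost to resistance: {lost:.1f} MWh")
# A superconducting transmission line would, in principle, bring that loss to zero.
```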

Secondly, superconductors act as a mechanism for representing quantum phenomena at a macroscopic level, making them an area of utmost importance in research in both experimental and theoretical physics.

At the end of the day, I would have to say there is something quite super about these conductors after all.

The interpretation of quantum mechanics has been a subject of rich debate since the theory’s infancy. Among the proposed interpretations are (and this is certainly non-exhaustive) the so-called Copenhagen interpretation and the hidden variables interpretation.

The former interpretation is based on the probabilistic understanding of quantum mechanics. What puts it at odds with our usual intuition is that it is not deterministic. Take the position of a particle (this can be an electron, proton, even an atom, anything sufficiently small) as an example. This is what is known as an observable, a physical quantity which can be measured through experiment. According to the Copenhagen interpretation, the most we can know about a particle before a measurement is made is the probability of measuring its position in a certain region of space. This is a clear contradiction with classical determinism, which dictates that we should have some means of predicting where the particle will be when we make a measurement.

The hidden variables theory, championed most notably by Einstein, suggests that quantum mechanics is incomplete. An example of this argument is the Einstein-Podolsky-Rosen (EPR) paradox, introduced by the three authors in their 1935 paper.

They make the claim that quantum mechanics can only be complete if a physical observable has no well-defined value before it is measured, since if it had one, a complete theory would allow us to predict what this would be. The usual example given which leads to the paradox is the physical observable known as spin. The original EPR argument does not deal with spin, but rather is general, it can however be applied to spin. I will avoid a detailed description of what spin is and focus on the fundamental fact that if the spin of an electron is measured in some direction, only two outcomes are possible. These two possibilities are referred to as up and down respectively. Furthermore, spin is a conserved quantity, so if we have two electrons, and we measure the spin in some direction (say the z direction, for simplicity) of electron 1 to be up, the spin in the z direction of electron 2 must be down. This is where the paradox is contained. In this example we could deduce what the spin of the second electron should be without measuring it. The claim made by Einstein, Rosen, and Podolsky is thus that quantum mechanics cannot be a complete theory and there should be some hidden variable. The physicist John Bell killed this idea in his famous 1964 paper.

To summarise Bell’s argument, consider the two-electron example from before. It is necessary to introduce the notion of an expectation value here. It is essentially a mean. More precisely, it is the mean result one gets upon performing a measurement of some observable on a collection of identical systems, provided a sufficiently large number of systems is used. Bell investigates whether the usual quantum mechanical calculation of the expectation value of the product of the spins in two different directions is compatible with the equivalent calculation in the hidden variable model. The expectation value is incorporated into the hidden variable model by assuming there exists a probability distribution associated with the hidden variable. Bell then compares the mean computed this way to the way it is computed in quantum mechanics. Indeed, the two formulations lead to different expectation values, and therefore different physics. He shows this by deriving Bell’s inequalities, a necessary condition for compatibility of the quantum mechanical and hidden variable results which is not always satisfied.
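To see the flavour of the argument in numbers, here is a small sketch using the CHSH form of the inequality (a later, commonly quoted variant of Bell’s 1964 result rather than the original inequality). For two electrons in a singlet state, quantum mechanics predicts that the expectation value of the product of the spins measured along directions a and b is -cos(a - b); any local hidden variable model must keep the CHSH combination of four such expectation values between -2 and 2, whereas the quantum prediction can reach 2√2:

```python
import math

def E(a, b):
    """Quantum expectation value of the product of spins measured along
    directions a and b (angles in radians) for two electrons in a singlet state."""
    return -math.cos(a - b)

# CHSH combination of four measurement settings. Any local hidden-variable
# model must satisfy |S| <= 2; quantum mechanics can reach 2*sqrt(2).
a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
print(abs(S), 2 * math.sqrt(2))   # ~2.828 > 2, so no local hidden-variable model reproduces it
```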

Bell’s result leads us to a conclusion which is either frustrating or fascinating depending on your perspective. In essence, we must discard our classical notion of determinism, in favour of the stranger, more wonderful ideas that quantum mechanics provides us.

 

References:

  1. A. Einstein, B. Podolsky, N. Rosen, Phys. Rev. 47, 777 (1935).
  2. J.S. Bell, Physics. 1, 195 (1964).

When looking at physics, one of the scarier and more misunderstood concepts is that of particle spin. I too didn’t get the general idea of the concept until it was derived in front of me and, more importantly, until I was shown its effects, most notably the Zeeman effect. Before delving into the Zeeman effect, a general recap is in order so that everybody is up to speed and nobody is left behind. It should be noted that the model used to explain the atom is the Bohr model, which isn’t 100% correct but gives a good baseline understanding of what occurs in the Zeeman effect.

Figure 1 – An image of the Bohr model, as seen on academickids.com, where E is the energy of the photon, h is Planck’s constant and f is the frequency of the photon. [1]

 

Bohr Model: The definition of a neutral atom is that it doesn’t have an overall charge. This means the protons (the positively charged particles in the centre of the atom, known as the nucleus) and the electrons (the negatively charged particles that orbit it) are equal in number. There are also neutral particles, known as neutrons, that act as glue for the protons. An increased radius away from the nucleus indicates a bigger orbit and therefore a bigger energy for that electron. The distances aren’t to scale but are drawn that way for clarity’s sake. An electron can absorb or emit a certain quantity (a quantum) of electromagnetic radiation energy, known as a photon, which allows the electron to move from one energy level (labelled by the quantised/discrete number n) to another. This emitted photon will have a certain frequency (E = hf) or wavelength (c = λf, where c is the speed of light and λ is the wavelength), as seen in figure 2.


Figure 2 – An illustration showing the range of electromagnetic radiation and how it is the same thing but of different frequencies/wavelengths, as seen on britannica.com.[2]
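As a worked example of those two relations (using the standard Bohr-model energy levels for hydrogen, E_n = -13.6 eV / n²; the choice of the n = 3 to n = 2 jump is just an illustration), here is the photon emitted in that transition, which is the familiar red Balmer line:

```python
# Bohr-model example: the photon emitted when a hydrogen electron drops
# from the n = 3 level to the n = 2 level, using E = hf and c = lambda*f.
h = 6.626e-34        # Planck's constant in J*s
c = 2.998e8          # speed of light in m/s
eV = 1.602e-19       # joules per electronvolt

def bohr_energy(n):
    """Energy of level n in hydrogen (Bohr model), in joules."""
    return -13.6 * eV / n**2

photon_energy = bohr_energy(3) - bohr_energy(2)   # energy carried off by the photon
frequency = photon_energy / h                      # E = hf
wavelength = c / frequency                         # c = lambda * f

print(f"{photon_energy / eV:.2f} eV, {wavelength * 1e9:.0f} nm")  # ~1.89 eV, ~656 nm
```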

 

Getting to grips with the infamous spin: The (simple) definition of momentum is a mass moving with a certain velocity (a speed in a certain direction). This can be a simple linear movement, like walking down a straight street, or a rotational one, which is about spinning in a circle. The total angular momentum can be broken up into two parts, which can be pictured using planets. The first part is the orbital angular momentum, which is the momentum the planet (or particle) has as it goes around a centre point (the Sun in figure 3’s case, or the nucleus in the electron’s case). The other is the rotational (spin) angular momentum, which comes from the planet itself spinning, as the Earth does every day.


Figure 3 – An illustration of the Earth orbiting the Sun as it rotates (symbolised by the red arrow), as seen on researchgate.net. [3]

 

The same can be said for particles like electrons: there is the orbital-like momentum and the ‘spin’ momentum. It’s much better to think of spin as the ‘mathsy’ counterpart that takes the place of the rotational momentum of a planet. In case you were wondering how we know it’s not the particle actually spinning, calculations were performed: to get the same effects from particle spin as from actual rotation, the particle would have to spin faster than the speed of light, which is impossible. It should be noted that electrons can have positive or negative spin, so if an electron were a planet, it would be as though the planet could spin either clockwise or anticlockwise.

 

Bohr Model: If we make the situation a bit more difficult and imagine that figure 1 is now not a circle but a sphere, with different onion-like layers around the nucleus (figure 4), then an ‘issue’ occurs: symmetry begins to develop. Instead of the single quantum number n needed to describe the energy of an electron, more than one is now needed, because we are now working with a 3-D problem (so not just n, but something more like n_x, n_y and n_z, which signify the three main directions possible in a 3-D problem – intuitively, the three lines that meet in every corner of a room). It is found that different arrangements of the quantum numbers can lead to the same electron energy, which in quantum physics is called degeneracy, and such levels are called degenerate levels.


Figure 4 – An illustration of how certain energy levels would look in a Bohr-like model, as seen on istockphoto.com. [4]

 

Zeeman effect: Because the spin of an electron can be either positive or negative, a magnetic field can be used to separate, or ‘lift’, the degenerate states of the electrons. In figure 5 the normal degenerate state can be seen as a single red ring, one wavelength from the laser. When a magnetic field is applied, the states separate from each other, taking on slightly different wavelengths and moving away from each other in the fancy optical setup that was used, giving figure 6.
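To give a rough sense of scale for that splitting (a sketch of the simplest, "normal" Zeeman case, where adjacent split levels differ by ΔE = μ_B·B; the 1 T field below is an illustrative choice, not the value used in the pictured experiment):

```python
# Rough size of the (normal) Zeeman splitting: adjacent split levels differ
# by Delta_E = mu_B * B, where mu_B is the Bohr magneton.
mu_B = 9.274e-24     # Bohr magneton in J/T
eV = 1.602e-19       # joules per electronvolt
B = 1.0              # illustrative magnetic field of 1 tesla (not the value in the photos)

delta_E = mu_B * B
print(f"{delta_E:.2e} J  =  {delta_E / eV * 1e6:.0f} micro-eV")   # ~9.3e-24 J, ~58 micro-eV
# Tiny compared with the ~eV spacing of the Bohr levels themselves, which is
# why a sensitive optical setup is needed to see the split rings move apart.
```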


Figures 5 and 6 – figure 5 (left) is the laser with no magnetic field applied; figure 6 (right) is the laser when a magnetic field is applied.

 

 

References:

  1. Bohr model – Academic Kids. (2022). Retrieved 9 May 2022, from https://academickids.com/encyclopedia/index.php/Bohr_model
  2. electromagnetic spectrum | Definition, Diagram, & Uses. (2022). Retrieved 9 May 2022, from https://www.britannica.com/science/electromagnetic-spectrum
  3. Padgett, M. (2022). Retrieved 9 May 2022, from https://www.researchgate.net/figure/The-motion-of-the-Earth-which-spins-on-its-axis-as-it-orbits-the-Sun-is-analogous-to-that_fig2_301577086
  4. Sphere layers – iStock. (2022). Retrieved 9 May 2022, from https://www.istockphoto.com/photos/sphere-layers