Posts

Fig 1: The LHC (Large Hadron Collider) at CERN, near Geneva, Switzerland. Source: [1] Brice, Maximilien, CERN Accelerating Science, 2019-04-30.

Introduction

You may ask yourself a very natural question when you wake up: what am I? What am I made of on the smallest scale? How different is the stuff that makes up me from the bed I wake up on? At the smallest scale, everything around us is made of the same stuff, atoms, and these are really only differentiated by their atomic number, the number of protons they contain, as can be seen on any periodic table of the elements. Even the protons and neutrons inside these atoms are made up of constituent particles called quarks, so you may wonder: what other particles are there, and how do they all come together to form the building blocks of our universe?

The First Discovered Fundamental Particle:

The year was 1897, and the first elementary subatomic particle, the electron, had been discovered by J.J. Thomson through his cathode ray experiments. The discovery of the first fundamental particle marked the beginning of the journey to uncover the fundamental nature of reality. Experiments began to tackle questions such as: What is our universe made of? Is matter indivisible, or can we go smaller? How do these subatomic particles interact? The universe can be viewed analogously to a chess board, with the particles as the pieces and the rules of how they move and act as the laws that govern the universe, which we try to discover through verifiable experiments. Can we discern the rules of how these particles interact with each other and with our universe? Our current most successful theory of fundamental particles and their interactions, the Standard Model, attempts to explain this.

Fig 2: The Standard Model of particle physics. Source: [2] Wikipedia contributors, Wikimedia Foundation, 19 May 2021. Author: MissMJ

The Gauge Bosons and the Forces of the Universe:

Currently there are over 150 discovered particles, and this extensive list is colloquially called “The Particle Zoo”. Just as the animals in a zoo can be grouped by their natural habitat, we can group particles with similar properties and categorise them. Particles interact with each other via other particles that act as force mediators, transferring forces between them. According to the Standard Model there are four fundamental forces in our universe. In order of increasing strength they are: gravity, the weak nuclear force, electromagnetism and the strong nuclear force. The range of gravity and electromagnetism is infinite, while the nuclear forces have a very short range: the weak nuclear force acts over distances of order 10^(-18) m and the strong nuclear force over distances of order 10^(-15) m, which is why we don’t see their effects in everyday life but only on the quantum scale. The forces act on particles through force mediators called gauge bosons. Bosons are categorised as any particle with integer spin, such as 0 or 1. Just think of spin as an intrinsic property, either spin up or spin down, that we can measure just like the charge of an electron. Trying to conceptualise what spin exactly is runs into the fallacy of picturing these particles as spinning balls of charge, except they are not balls, being point-like, and they are not spinning. The force mediators are: the photon, the constituent particle of light, for electromagnetism; the gluon for the strong nuclear force; the W and Z bosons for the weak nuclear force; and the hypothetical graviton for gravity. There is also a Higgs field that permeates all of space, and the excitation of this field gives us the Higgs boson (a scalar boson rather than a gauge boson). Interactions with this field are what give these particles their mass, and the Higgs boson itself was discovered in 2012 at the LHC (Large Hadron Collider) in Switzerland.

Fig 3: The fundamental forces of our universe. Source: [3] Particle Zoo, Weebly.

Leptons, Fermions, Quarks and Hadrons:

The electron, along with the muon and the tau and their corresponding neutrinos, makes up what are known as leptons. They are categorised this way because of certain intrinsic characteristics, such as not interacting via the strong nuclear force. The electron, muon and tau carry charge -1 while the neutrinos are uncharged, and all of them are spin-1/2 particles, unlike the spin-1 bosons, so they are also categorised as fermions. Leptons are fundamental particles and cannot be broken down into anything smaller. There is a more complex family called hadrons, which are strongly interacting particles made up of quarks. Hadrons are divided into baryons, made up of three quarks (the proton and neutron are the most familiar examples), and mesons, made up of a quark and an antiquark. All baryons are fermions and all mesons are bosons. Quarks, like leptons, are fundamental spin-1/2 fermions, and they come in six types, or flavours, named up, down, top, bottom, strange and charm. The proton consists of two up quarks and a down quark (uud) and the neutron of two down quarks and an up quark (ddu). Up quarks have charge +2/3 and down quarks have charge -1/3, which is why protons have charge +1 and neutrons have no charge. The quarks that form protons and neutrons are bound together via the strong nuclear force, that is, by exchanging the gluons mentioned earlier. Another intrinsic quantity worth noting is strangeness, which can be thought of as a property of a particle just like charge.
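As a quick check of that charge arithmetic, here is a minimal sketch in Python (the quark charges are hard-coded in units of the elementary charge e, and the π⁺ line uses the fact that an antiquark carries the opposite charge of its quark):

```python
from fractions import Fraction

# Electric charge of each quark flavour, in units of the proton charge e
QUARK_CHARGE = {
    "u": Fraction(2, 3),   # up
    "d": Fraction(-1, 3),  # down
    "s": Fraction(-1, 3),  # strange
}

def hadron_charge(quarks):
    """Total charge of a hadron given its quark content, e.g. 'uud'."""
    return sum(QUARK_CHARGE[q] for q in quarks)

print("proton  (uud):", hadron_charge("uud"))   # 1
print("neutron (ddu):", hadron_charge("ddu"))   # 0
print("pi+ (u + anti-d):", QUARK_CHARGE["u"] - QUARK_CHARGE["d"])  # 1
```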

 

Fig 4: Particle Classifications.

Particle Accelerators and Detectors

How are physicists able to probe inside atomic nuclei at the smallest scales and discover all of these particles? The answer lies in linear and circular accelerators. In the early days linear accelerators were used, just as J.J. Thomson used cathode-ray tubes to discover the electron. These take advantage of electric fields to accelerate particles from rest to very high speeds before colliding them with an object called a target. Think of a linear accelerator like a slingshot with a ball: the elastic potential energy stored when you pull back is like the electrical potential energy stored in the accelerator, and letting go converts this potential energy into kinetic energy, so the ball, or the particle, reaches high speed as it passes through a series of electrodes before hitting the target. It is linear because the path of the particle is a straight line. Hitting the target can cause the constituent particles to split apart, and these can be detected in various ways, such as on a zinc sulphide screen. Circular accelerators come in the form of cyclotrons and synchrotrons. These use magnets with a uniform magnetic field to deflect the particles into circular paths. The particles are accelerated to higher and higher speeds, and eventually particles circulating in opposite directions cross paths and collide. Circular accelerators such as the LHC are particularly useful as they can be confined to a smaller area and reach higher energies than comparable linear accelerators. Many detectors track the movement of charged particles through a gas, liquid or solid; in a cloud chamber, for example, the tracks show up as trails of droplets. At the LHC, large solenoid magnets are used to help identify the new particles produced.
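To get a feel for the numbers behind bending charged particles into circles, here is a rough, non-relativistic sketch; the field strength and speed below are illustrative assumptions, not LHC values:

```python
import math

# Bend radius and revolution frequency of a charged particle in a uniform
# magnetic field (non-relativistic): r = m*v / (q*B), f = q*B / (2*pi*m)

q = 1.602e-19      # proton charge (C)
m = 1.673e-27      # proton mass (kg)
B = 1.5            # assumed magnetic field strength (T), illustrative only
v = 1.0e7          # assumed proton speed (m/s), well below the speed of light

r = m * v / (q * B)              # radius of the circular path (m)
f = q * B / (2 * math.pi * m)    # cyclotron frequency (Hz)

print(f"bend radius: {r:.3f} m")             # ~0.07 m
print(f"cyclotron frequency: {f/1e6:.1f} MHz")  # ~23 MHz
```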

Eightfold Way and Symmetry:

All of this discussion lays the groundwork for particle theory. Symmetry is of huge importance in all of physics: symmetries lead to conservation laws such as the conservation of energy or momentum. For particles we have conservation of charge and of other quantities in their interactions. If we take eight spin-1/2 baryons and plot their strangeness against their charge, the result is a hexagonal pattern. We can likewise do the same for nine spin-0 mesons. These symmetry patterns together form the eightfold way. Current research in this field is trying to unify all four fundamental interactions, which is the basis of a theory of everything. Electromagnetism and the weak force have already been unified into the electroweak interaction. The strong force has also been proposed to join them in grand unified theories, though this is still speculative. However, there is trouble incorporating gravity into the mix, and that is what current research is trying to accomplish, with one example being string theory.

Fig 5: Plot of strangeness (S) against charge (Q) for spin-1/2 baryons, showing the symmetry pattern of the eightfold way.

 

References:

[1]https://home.cern/science/accelerators

[2]https://en.wikipedia.org/wiki/File:Standard_Model_of_Elementary_Particles.svg

[3]Particle Zoo – Introduction (weebly.com)

 

The camera, as an optical instrument, plays a vital role in modern life. Its applications in science, entertainment and communication have brought great benefits to our lives. With the digital cameras in our smartphones, we can easily capture and share moments of our lives. The simplest model of a camera is the pinhole camera, which operates on some fascinating physics and optics.

The word camera originates from camera obscūra, Latin for “dark chamber”. It refers to a device that creates the optical phenomenon in which an inverted image forms when light is projected through a small hole. The oldest known description of the camera obscūra is found in the work of Mozi, a Chinese philosopher of the Warring States period (476-221 BC). The original statement, in classical Chinese, reads:

“景。光之人,煦若射,下者之人也高;高者之人也下。足蔽下光,故成景于上;首蔽上光,故成景于下。在远近有端,与于光,故景库内也。”([1]) (A rough paraphrase: light travels to a person in straight lines, like arrows; the lower part of the person appears high in the image and the higher part appears low, because the feet block the light from below, so their image forms above, while the head blocks the light from above, so its image forms below; the rays meet at a small opening, so the image forms inside the dark chamber.)

This statement has two parts. First, he described light as arrows and how an image forms under reflection. Second, he described the process of pinhole imaging, which results in an inverted image. I found this very interesting, as the arrow picture not only correctly states that a light ray travels in a straight line within the same medium, but also anticipates the much later idea of using vectors to formulate many optical phenomena. The phenomenon is demonstrated in Figure 1, where we can see how the image is inverted.

Figure 1: Pinhole imaging. The Boys Scientists, Boston: Lothrop, Lee & Shepard Co., ©1925.

The pinhole camera can be considered a small, portable “dark chamber” with no lens but a small aperture, the pinhole. It operates solely on the principle of the camera obscūra. In general, a pinhole camera is a box with a pinhole located at the centre of one of its sides. Figure 2 shows a modern pinhole camera that I have used before, the Harman Obscura camera by Ilford. It is a PVC box made of two locking sections, has a 0.35 mm pinhole with a magnetic shutter, and can be attached to a tripod. You can also build your own pinhole camera from a cereal box; with some coloured tape you will be able to observe the inverted image for yourself. The photographic paper is usually placed at the back of the box.

Figure 2: The Harman Obscura camera by Ilford

One may ask why anyone would use a pinhole camera today, when we can just use the camera on our phone or any number of advanced modern cameras with powerful lenses. In fact, the pinhole camera has several advantages over other cameras.

Since there is no lens in the camera, you will never experience lens distortion. Lens distortion is one form of optical aberration, which occurs when the lens fails to keep lines that are straight in the scene straight in the image. The distortions are often irregular; the two most common are barrel distortion and pincushion distortion, and you can see how they got their names from Figure 3. Distortions make photos look different from what our naked eye sees, something you can often notice with a phone camera. It is simply not a problem for a pinhole camera.

Figure 3: The two most common distortions, with the objects they are named after.

Another advantage is that an ideal pinhole camera has a nearly infinite depth of field (DOF), meaning everything in a photo taken with it is in focus. DOF is the distance between the nearest and farthest objects that appear in focus in an image. Using the fact that light travels in straight lines, we can immediately see that the smaller the pinhole, the sharper the image gets. Ideally, a pinhole so tiny that only a single ray of light reaches each point inside the camera would give the best resolution. Such a camera is physically impossible to build, not least because diffraction blurs the image as the hole gets smaller. However, we can find the optimal size for the pinhole.

A method for finding the optimal pinhole diameter was worked out by the physicist Joseph Petzval ([2]), giving the smallest pinhole that still yields the highest resolution. The distance from the pinhole to the image plane (where the photographic paper sits) can be taken as the focal length of the camera, f. The optimal diameter of the pinhole, d, for light of wavelength λ is then given by d = √(2fλ).

The Harman Obscura camera mentioned above has a pinhole 0.35 mm across and a focal length of 87 mm. Taking the longest wavelength of visible light to be 700 nm and using the formula, we indeed end up with d equal to 0.35 mm! You can apply the same formula when building your own camera for whatever photo size you want.
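To check this yourself, here is a minimal Python sketch of the same calculation; the only inputs are the camera's quoted 87 mm focal length and a 700 nm red wavelength:

```python
import math

f_length = 87e-3      # focal length: pinhole-to-paper distance (m)
wavelength = 700e-9   # longest visible wavelength, red light (m)

d = math.sqrt(2 * f_length * wavelength)    # Petzval's optimal diameter
print(f"optimal pinhole diameter: {d*1e3:.2f} mm")   # ~0.35 mm
```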

Lastly, the pinhole camera has a long exposure time, and surprisingly this has become one of its advantages. The exposure length depends on the temperature and humidity of the location, which can be difficult to work with in Irish weather, and it takes time to master it to perfection. However, the long exposure gives a unique motion blur to moving objects with different velocities, often referred to as “ghosts” in these photos. Digital cameras to this day still struggle to recreate a similar effect, which makes this a distinctive feature of pinhole photographs.

References:

[1] Ben She.Yi Ming(2011),English Mojing(Chinese Edition),  Shanghai Foreign Language Education Press, ISBN 10: 7544619443  ISBN 13: 9787544619448.

[2] Petzval, Joseph Maximilian (1857), “Bericht über dioptrische Untersuchungen”. Vortrag vom 23. Juli 1857; in: Sitzungsberichte der kaiserlichen Akademie der Wissenschaften, mathematisch-naturwissenschaftliche Classe, vol. XXVI, S. 33-90

What even is a Conductor?

A conductor is any material that allows electric charge carriers (e.g. electrons) to flow through it when a voltage is applied across it. This flow of charge is known as electric current. Commonly known conductors include metals such as gold, silver, iron and copper, and even sea water!

 

What makes it so ‘Super’?

Superconductivity is a phenomenon that occurs in certain materials when, upon cooling below a critical temperature, their electrical resistance drops to zero and they begin to expel magnetic flux. Materials that do this are called superconductors.

For anyone unfamiliar with these terms, electrical resistance is the opposition to the flow of electric current through a material. One way resistance arises is when a metal heats up and its atoms start vibrating strongly; the nuclei then get in the way of the electrons trying to move past them, hindering their flow. The aim for most conductors is to have as little resistance as possible for maximum efficiency. The expulsion of the magnetic field lines is a bit more abstract. In the diagram below, on the left-hand side is a superconductor above its critical temperature, and on the right is the same material after it has been cooled below it and has gained its superconducting properties:

As we can see, the magnetic field no longer passes through the superconductor once it is below the critical temperature. This is known as the Meissner effect, and it is the reason a superconductor will levitate when placed above a magnet.

 

The First Breakthroughs

Firstly, there is a common misconception about superconductors: that they can be accurately described classically, using the likes of Lenz’s law for a conductor with no resistance. This is not the case. Superconductivity is what is known as a ‘quantum mechanical’ property, which basically means that it’s just a few million times more difficult to understand.

Superconductivity was first observed by Heike Kamerlingh Onnes, a Dutch physicist, in 1911, but it took nearly another half-century before much headway was made in describing the phenomenon.

The first big breakthrough in understanding them did not come until 1950, with the Ginzburg-Landau theory. It was developed from Landau’s earlier theory of second-order phase transitions, in which superconductivity is viewed as a state of matter of the material. The crux of the theory is that there exists an order parameter, which can be described using what’s called a ‘wave-function’. Its main property is that it is zero at any temperature above the critical temperature but takes a non-zero value as the temperature drops below it, which explains the sudden appearance of superconductivity as opposed to it gradually becoming visible. The next breakthrough would be the biggest one yet, and it remains the best description of most superconductors to this day.

 

The Bardeen-Cooper-Schrieffer Theory

The Bardeen-Cooper-Schrieffer (BCS) theory was conjured up by the three physicists it is named after in 1957, and it was the first solid microscopic theory of superconductivity, later earning them the Nobel Prize. The premise of the theory lies within another phenomenon, the Bose-Einstein condensate (BEC).

Before getting straight into superconductors, it is important to first outline the key difference between two types of particles: bosons and fermions. Fermions obey the Pauli exclusion principle, which means no two identical fermions can occupy the same quantum state; this is why each atomic orbital holds at most two electrons (one spin up and one spin down). Bosons, on the other hand, face no such restriction: any number of them can pile into the same energy level. As a result, at very low temperatures bosonic particles will clump together in the lowest energy level, which leads to the aforementioned BEC. This is a state in which the bosons form a kind of condensate quite different from your standard solids, liquids and gases, but still considered a state of matter.
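To make the contrast concrete, here is a toy sketch (not real quantum statistics, just the counting rule) of how six particles settle into equally spaced energy levels as the temperature goes to zero:

```python
# Ground-state (T -> 0) occupation of energy levels 0, 1, 2, ...
# Toy model: each level holds at most `per_level` fermions
# (e.g. 2 electrons per orbital: spin up + spin down),
# but any number of bosons.

def fill_levels(n_particles, per_level, bosonic):
    occupation = {}          # level index -> number of particles
    level = 0
    remaining = n_particles
    while remaining > 0:
        if bosonic:
            occupation[level] = remaining   # bosons all condense into level 0
            remaining = 0
        else:
            occupation[level] = min(per_level, remaining)
            remaining -= occupation[level]
            level += 1
    return occupation

print("fermions:", fill_levels(6, per_level=2, bosonic=False))  # {0: 2, 1: 2, 2: 2}
print("bosons:  ", fill_levels(6, per_level=2, bosonic=True))   # {0: 6}
```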

Now, you may be asking, “OK, so bosons can do all that fancy shmancy stuff, but aren’t we looking at electrons, which are fermions?”, and you would be absolutely right! But holding the conductor at such low temperatures reduces the thermal energy in the system, so the positively charged nuclei of the metal (blue circles below) are barely vibrating on their own and are much less likely to get in an electron’s way. It also means that as an electron passes by, it slightly pulls the nuclei towards it, since they have opposite charges; this draws the nuclei closer together, and also closer to the next electron in line. The result is a slipstream-like effect, similar to cars on the motorway, in which neighbouring electrons flow as a pair. This is called a Cooper pair.

“So what’s so important about this Cooper pair nonsense?”, I can already hear you ask. Well, it turns out that when the critical temperature is reached and the Cooper pairs form, each pair actually behaves as a boson! This allows them to clump together in the ground-state energy level and form a condensate. The main reason this matters is that, since the Cooper pairs are bound over a relatively large distance, all of these new ‘bosons’ become entangled and move together as a whole. This greatly reduces the chance of any electron colliding with a nucleus (and even if one did, the second electron in the pair would simply pair up with the nearest free one, so the collective state is almost instantaneously remade), and hence removes any resistance to the flow of the electrons! And thus we arrive at the fundamentals of the BCS theory of superconductivity.

 

Types of Superconductors

Type I

A Type I superconductor is one with a single critical temperature, which acts as an “on/off” switch for its superconducting properties. The main property is that all magnetic flux lines within the conductor are expelled as soon as it is cooled below the critical temperature, but pass through as usual above it. So once the temperature climbs back above it, it becomes a regular ole’ conductor again.

Type II

A Type II superconductor is best described as having two critical magnetic field strengths, Hc1 and Hc2, rather than a single on/off threshold. When the applied field lies between these limits the material behaves as a mixture of a normal conductor and a superconductor: the magnetic flux lines partially penetrate it. Below the lower limit the flux is expelled completely, and above the upper limit it passes through as if the material were a normal conductor.

 

What is the Importance of Superconductors Today?

First and foremost, the main reason superconductors attract such intrigue is the benefit of zero electrical resistance. Energy companies have to transport their product over great distances, and they inevitably lose money on electrical energy dissipated by resistance with every joule they try to transmit; according to sta.ie, around 7.5% of the energy is lost to resistance.

Secondly, superconductors act as a mechanism for displaying quantum phenomena at a macroscopic level, making them an area of utmost importance in both experimental and theoretical physics research.

At the end of the day, I would have to say there is something quite super about these conductors after all.

From the very moment their existence was theorised, black holes have inspired an almost-unparalleled curiosity in both career scientists and the public at large. They have a mystique to them that appeals to the same senses of awe and wonderment that fuel our love of legends and tall-tales, in that their existence is counterintuitive to our immediate experiences. Naturally, this provides science-fiction with its two eponymous components, but for a long time the emphasis was more heavily placed on the fiction.

However, in the run-up to two of the most major black hole-related breakthroughs of the 2010s, the detection of gravitational waves at LIGO and the first direct image of the black hole at the centre of Messier 87, two films attempted to create the most accurate depictions of black holes ever seen on film: Interstellar (2014) and High Life (2018). While both films are inflected with varying degrees of overly self-serious, fatuous, and tiresome “philosophy” (especially High Life), a more interesting comparison can be drawn from the real-life physics concepts they choose to incorporate into their stories and, of course, from how they represent black holes.

We’ll begin with Interstellar. The film brings up two main ideas related to black holes: time dilation (Miller’s Planet) and causal loops (inside the black hole). Gravitational time dilation is the process by which clocks at different depths in a gravitational potential well tick at different rates. Put simply, the deeper you sit in a gravitational well, the slower time passes for you relative to someone far away. This is actually one of the more tangible aspects of general relativity, given that astronauts aboard the ISS and run-of-the-mill satellites are affected by it [1]. However, the effect there is absolutely minuscule, unlike on Miller’s Planet (“One hour there is seven years back on Earth”), a rate so extreme that Kip Thorne, the scientific advisor on Interstellar, later said he struggled to come up with a plausible way it could be achieved [2].
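To put a rough number on this, here is a small sketch of my own using the simpler non-rotating (Schwarzschild) time-dilation formula for a clock held static at radius r. Gargantua is a rapidly spinning black hole, so this is only an order-of-magnitude illustration, not the calculation Thorne actually did:

```python
import math

# Gravitational time dilation for a clock held static at radius r outside a
# non-rotating (Schwarzschild) black hole:  dtau/dt = sqrt(1 - r_s / r),
# where r_s is the Schwarzschild radius.

def dilation_factor(r_over_rs):
    """Ratio of local proper time to far-away time for a static clock."""
    return math.sqrt(1.0 - 1.0 / r_over_rs)

# "One hour there is seven years back on Earth":
target = 1.0 / (7 * 365.25 * 24)            # desired dtau/dt, about 1/61000
r_over_rs = 1.0 / (1.0 - target**2)         # invert the formula
print(f"required r/r_s: {r_over_rs:.12f}")  # ~1.000000000266, i.e. just above the horizon
print(f"check: {dilation_factor(r_over_rs):.2e}")
```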

Regarding causal loops, this is where things get rather hand-wavy and unphysical. While general relativity permits some exact solutions that contain “closed timelike curves” (i.e. time travel), the topic quickly devolves into the various time paradoxes that are fun the first time you hear them but soon grow old (unlike that grandfather you just killed). Interstellar avoids this by accepting what is known as “Novikov’s self-consistency principle”, which, in spite of the name, is little more than the stipulation that only “self-consistent trips back in time would be permitted”; that is to say, you can’t have any fun with the past by changing it. Also central to the idea of a causal loop is that its origin in time cannot be determined. This is precisely what happens in the black hole in Interstellar: the infinite bookshelves cover every point in time in Murph’s room, while the information Cooper sends to his past self is exactly what brought him to the black hole in the first place.

 

Gargantua, the black hole from Interstellar.

 

Moving on to High Life, the film is (apparently, though that’s not how I remember it) about a crew of death-row prisoners who are sent into space to extract energy from a black hole. Though the method the crew will use is not specified in the film, it could be one of two processes: the Blandford-Znajek process or the Penrose process [3]. The former is based on the idea that a rotating black hole can behave like a conductor: the material it pulls into orbit around it becomes magnetised, and a voltage difference develops between the equator and the poles. Hence, by placing electrons nearby, they would be accelerated to the point of radiating gamma-rays, which could then be harvested for energy.

The latter concept uses the conservation of energy and momentum to transfer energy to an object. By sending an object into the ergosphere (the region around a rotating black hole whose inner boundary is the event horizon) and having it break apart, with one piece falling into the black hole, the other piece shoots out of the ergosphere with more energy than the whole object had going in. Similarly to the Blandford-Znajek process, this also slows the black hole’s spin.

 

The black hole from High Life.

 

Finally, we come to the part we’ve all been waiting for: how the two films represent their black holes. In Interstellar, Gargantua’s enormous spin feeds a large and vibrant accretion disk which, as detailed in the film, acts as the “sun” for the various planets orbiting it. Meanwhile, in High Life, the absence of nearby material leaves a much smaller accretion disk. In both cases, light from the part of the disk on the far side of the black hole can be seen as if it lies on top of the hole. This is due to gravitational lensing, whereby light that isn’t captured by the black hole is bent around it by its gravitational pull. It is quite fun to compare these predictions with the actual image obtained in 2019, as they are both remarkably accurate.

 

The black hole at the centre of M87, the first black hole to be photographed.

 

REFERENCES:

[1]: https://www.wired.com/2014/11/time-dilation/#:~:text=Why%3F-,Well%2C%20according%20to%20the%20theory%20of%20relativity%2C%20astronauts%20on%20the,ve%20traveled%20into%20the%20future.

[2]: https://www.space.com/28077-science-of-interstellar-book-excerpt.html

[3]: http://large.stanford.edu/courses/2011/ph240/nagasawa2/

The interpretation of quantum mechanics has been a subject of rich debate since the theory’s infancy. Among the proposed interpretations are (and this is certainly non-exhaustive) the so-called Copenhagen interpretation and the hidden variables interpretation.

The former interpretation is based on the probabilistic understanding of quantum mechanics. What puts it at odds with our usual intuition is that it is not deterministic. Take the position of a particle (this can be an electron, proton, even an atom, anything sufficiently small) as an example. This is what is known as an observable, a physical quantity which can be measured through experiment. According to the Copenhagen interpretation, the most we can know about a particle before a measurement is made is the probability of measuring its position in a certain region of space. This is a clear contradiction with classical determinism, which dictates that we should have some means of predicting where the particle will be when we make a measurement.

The hidden variables theory, championed most notably by Einstein, suggests that quantum mechanics is incomplete. An example of this argument is the Einstein-Podolsky-Rosen (EPR) paradox, introduced by the three authors in their 1935 paper.

They claim that quantum mechanics can only be complete if a physical observable has no well-defined value before it is measured, since if it did have one, a complete theory would allow us to predict it. The usual example that leads to the paradox involves the physical observable known as spin. The original EPR argument does not deal with spin but is more general; it can, however, be applied to spin. I will avoid a detailed description of what spin is and focus on the fundamental fact that if the spin of an electron is measured along some direction, only two outcomes are possible, referred to as up and down respectively. Furthermore, spin is a conserved quantity, so if we have two electrons in a combined state of zero total spin and we measure the spin of electron 1 along some direction (say the z direction, for simplicity) to be up, the spin of electron 2 along z must be down. This is where the paradox is contained: in this example we can deduce what the spin of the second electron must be without ever measuring it. The claim made by Einstein, Podolsky and Rosen is thus that quantum mechanics cannot be a complete theory and that there should be some hidden variable. The physicist John Bell killed this idea in his famous 1964 paper.

To summarise Bell’s argument, consider the two-electron example from before. It is necessary to introduce the notion of an expectation value here. It is essentially a mean: more precisely, it is the mean result one gets upon measuring some observable on a collection of identically prepared systems, provided a sufficiently large number of systems is used. Bell investigates whether the usual quantum mechanical calculation of the expectation value of the product of the spins along two different directions is compatible with the equivalent calculation in a hidden variable model. The expectation value is incorporated into the hidden variable model by assuming there exists a probability distribution associated with the hidden variable. Bell then compares the mean computed this way with the way it is computed in quantum mechanics. Indeed, the two formulations lead to different expectation values, and therefore different physics. He shows this by deriving Bell’s inequalities: conditions that any hidden variable model must satisfy, but which the quantum mechanical predictions do not always satisfy.
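As a concrete illustration of the incompatibility, here is a minimal numerical sketch using the 1964 form of the inequality and the standard quantum prediction E(a,b) = -cos θ for the product of spin outcomes on a pair of electrons in the zero-total-spin (singlet) state, measured along directions separated by angle θ:

```python
import math

# Quantum prediction for the mean product of spin outcomes (+1/-1) measured
# along directions separated by angle theta, for a singlet pair of electrons.
def E(theta_deg):
    return -math.cos(math.radians(theta_deg))

# Bell's 1964 inequality, which any local hidden-variable model must satisfy:
#   |E(a, b) - E(a, c)| <= 1 + E(b, c)
# Take a, b, c in a plane at 0, 60 and 120 degrees.
lhs = abs(E(60) - E(120))      # |(-0.5) - (+0.5)| = 1.0
rhs = 1 + E(60)                # 1 + (-0.5)       = 0.5

print(f"LHS = {lhs:.2f}, RHS = {rhs:.2f}")
print("inequality violated by quantum mechanics:", lhs > rhs)   # True
```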

Bell’s result leads us to a conclusion which is either frustrating or fascinating depending on your perspective. In essence, we must discard our classical notion of determinism, in favour of the stranger, more wonderful ideas that quantum mechanics provides us.

 

References:

  1. A. Einstein, B. Podolsky, N. Rosen, Phys. Rev. 47, 777 (1935).
  2. J.S. Bell, Physics. 1, 195 (1964).

You get out of the car after a long drive and go to push the door closed. Zap! You feel a nasty little sting as the car door handle shocks you. You know that it was an “electric shock”, but why did it happen? What’s electricity doing in the door handle? There’s actually no “electricity” in the door handle, not exactly.

You might have heard of electric charge; this is what makes the balloon attract your hair after you rub it on your head for a while. There is positive and negative electric charge, and opposites attract. How does this relate to the balloon? Well, when you rub the balloon on your head, something called the triboelectric effect comes into play. The triboelectric effect happens when there is contact between two suitable materials and they trade charges; electrons from your hair are transferred to the balloon, leaving your hair positively charged (your hairs repel each other, standing on end, because they all have the same charge) and the balloon negatively charged. Then, moving the balloon close to your head, you notice that your hair is attracted to the balloon, which matches up with what we said about opposites attracting.

As you might have guessed, this “static charge” is what is on the surface of your car just before you get shocked. Your car accumulated this charge as you drove, but you, inside the car, didn’t. To introduce new language, there is now a voltage difference between you and the car; put simply, this means you have different concentrations of charge. Now we’re getting to the idea of electricity. Remember that the car now has some charge. The fact that like charges repel means that all of this charge desperately wants to spread out away from itself; this causes it to spread to the surface of the car body. But these charges would love to spread out even more, if given the chance; such a chance presents itself when you bring your hand close to the handle of the door. If given the choice between the rest of the car body (charged) and your skin (uncharged), the charges in the door handle seize the opportunity to spread out further once your hand comes close enough. Ouch! Charge jumps the gap and flows between you and the car, bringing you to the same potential (or in other words, to a point where charges are shared between you and the car such that both are equally undesirable places for charge to flow to). This flow of charge from one place to another is called electricity, and you’ve just experienced a brief electric shock.

Everyone knows that you don’t die from a static shock from your car; that happens when you stick a fork in a power outlet. What if I then told you that your car shock measured in at around 5,000 V, whilst your outlet only supplies 120 V? This only seems counterintuitive because of the way it’s been presented. There is another important quantity when it comes to electricity which I’ve neglected to mention: electric current. Measured in amps, current is a measure of how much charge flows over time; if twice as many electrons flow in a second, there are twice as many amps. Thinking back to voltage, recall that it’s essentially a measure of the charge imbalance between place A and place B, which tells you how badly the electrons want to move from A to B. It does not tell you how many electrons are actually moving.

The number of amps, the current, is what really does damage to your body. Simply put, the body does not like having electrons quickly and forcibly moved through it; this damages cells and interrupts vital bodily functions. Now let’s properly compare the car door to the power outlet.
The car door sits at a voltage of about 5,000 V, but when charge flows to your hand it flows at roughly 0.01 A, and only very briefly. The high voltage is what lets it jump the small gap from the handle to your finger, but the low amperage means it only stings a little.
The outlet has a voltage of 120 V but can supply around 15 A. This is likely to kill, especially since the shock is likely to last much longer than a static discharge. The lower voltage means your hand can get much closer to the metal contact before any charge flows, but when it does flow it is far more damaging.
For an extra reference point, a lightning strike involves a voltage of around 300 million V (the charge arcs a long gap through the sky) and a current of about 30,000 A. The only reason anyone ever survives being struck by lightning is that it is such a brief event, and even then death is very likely.
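A rough way to see why the "weak" 120 V outlet is the dangerous one is to compare the energy each shock can deliver. This is only a back-of-the-envelope sketch: the capacitance, current and contact time below are assumed ballpark figures, not measurements.

```python
# Energy delivered by a static "zap" versus a brief mains shock.
# All values below are assumed, order-of-magnitude figures for illustration.

static_V = 5000        # volts built up between you and the car
static_C = 200e-12     # assumed body/car capacitance, ~200 picofarads
static_energy = 0.5 * static_C * static_V**2        # E = (1/2) C V^2

mains_V = 120          # outlet voltage
mains_I = 0.1          # assumed current through the body, amps
mains_t = 0.5          # assumed contact time, seconds
mains_energy = mains_V * mains_I * mains_t           # E = V * I * t

print(f"static shock : {static_energy*1000:.1f} mJ")  # ~2.5 mJ
print(f"mains shock  : {mains_energy:.1f} J")         # ~6 J, thousands of times more
```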

So there you have it! Voltage tells you how much the electricity wants to flow, while amperage tells you how much is actually flowing. The volts tell you how far the charges are willing to arc, but it’s the amps that do the damage. Don’t stick forks in power outlets!

Joshua Cooney-Mercadal, JS TP

Sources:

[1]

[2] weather.gov/safety/lightning-power#:~:text=A%20typical%20lightning%20flash%20is,120%20Volts%20and%2015%20Amps.

[3] https://commons.wikimedia.org/wiki/File:Attractive-and-repulsive-electric-force-demonstrations-with-charges.jpg

[4] https://commons.wikimedia.org/wiki/Mains_socket#/media/File:Steckdose.jpg

Gravity is one of the four fundamental forces of the universe. There are even theories suggesting that gravity was the force holding all the matter in the universe in one “singularity” before the Big Bang occurred. For this reason it is very easy for humanity as a whole to take gravity for granted.

Back when I was in sixth year of secondary school we would always take the acceleration due to gravity on Earth, g, to be 10 m/s². When I got to college we began to use g = 9.8 m/s² for more accurate calculations, but I always wondered how much of a difference that 0.2 m/s² would make to our everyday lives, and even more so what would happen if we just let g = 0.

We can calculate g using g = GM/r², where M is the mass of the Earth, G is the gravitational constant, 6.6743 × 10⁻¹¹ m³ kg⁻¹ s⁻², and r is the radius of the Earth.
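Plugging in standard rounded values for the Earth's mass and radius, a quick sketch reproduces the familiar value:

```python
G = 6.6743e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24      # mass of the Earth, kg
r = 6.371e6       # mean radius of the Earth, m

g = G * M / r**2
print(f"g = {g:.2f} m/s^2")   # ~9.82 m/s^2
```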

To investigate this we first have to lay out what we understand the gravitational force to be. In simple terms, gravity is just an attractive force between any two objects; in that sense, the force of attraction between you and the Earth is what keeps you sitting in your seat as you read this. The greater the mass of an object, the greater the gravitational pull it exerts. For visualisation:

No matter where on Earth you are standing you will be pulled towards its gravitational centre.

Gravity can in turn be thought of as a curvature of space and time. The easiest way to picture this is to think about how a bowling ball in the middle of a trampoline curves the surface around it. This picture lets us investigate our first no-gravity scenario. The gravitational constant G can be treated as a field in space and time: if we let G be a scalar field (a function that assigns a number to each point of space) and then allow its strength to drop to zero, we get a system with g = 0.

In a case like this there would be huge repercussions for the planet and everything on Earth. All landscapes would be flattened, as there would be no force of attraction between objects. Everything held down by gravity would fly off in whatever direction it happened to be moving. While this is terrible, it is definitely the least of our problems: our atmosphere would no longer be held together, and of course no atmosphere means no air to breathe. All the water on the planet would disperse as well. Earth would be utterly uninhabitable.

In another wild scenario for making gravity disappear, we have to consider an elementary particle, the Higgs boson. In short, the Higgs boson is the excitation of a field aptly named the Higgs field. This field is essentially what gives mass to other particles, like electrons; to acquire their masses, the particles have to interact with the Higgs field. Let’s say this field is set to zero. Now the particles have no mass and, in this picture, they cannot curve spacetime, so no gravity is created. The particles would now move freely at the speed of light. In this case the very atoms that make up everything on the planet would break apart. Forget about floating away and being deprived of air; humans would disintegrate at a molecular level.

Researching scenarios like these, it becomes clear how important gravity is in our world and how, like G, it is a constant in our lives.

 

 

 

Many academics and industry experts consider us to be living through a second quantum revolution. One of the largest undertakings of this revolution is the creation of the quantum computer. To understand the significance and advantages of building a quantum computer, we need to look at the quantum physics that allows it to operate.

Many people who have not studied quantum physics hear the word “quantum” and take it to imply that something strange and unexplainable is happening. For those of us who have studied quantum mechanics, this viewpoint can sometimes be dangerously close to the truth. Put simply, quantum mechanics describes the physics of very small objects such as electrons, protons and the atoms they make up. Among the behaviours quantum physics presents us with, quantum superposition and quantum entanglement are two of the most important.

Quantum superposition means a quantum object can exist in a superposition of possible states, each with a given probability. An example is the spin of an electron. Spin is a quantum property loosely analogous to the rotation of the Earth about its own axis, and we can take it to be pointing upwards (spin up) or downwards (spin down). Until we measure it, the electron exists in a superposition of spin up and spin down, with us (the observers) unable to say precisely which state the electron is in.

Quantum entanglement is a concept that confused great minds such as Einstein. If two quantum objects are linked with each other (entangled), the measurement outcomes on one are correlated with those on the other, no matter how far apart they are. Returning to the electron spin example, suppose two entangled electrons each have a 50-50 chance of being measured spin up or spin down. First, the electrons are separated by a distance so vast that even light would take a long time to cross it. Next, a measurement is performed on the first electron and the outcome is spin up. This guarantees that the second electron is in the spin down state, and the same holds for the opposite outcome. Through entanglement we were able to characterise the state of the entire system by measuring just one piece of it, and to know the state of its far-away partner the instant the measurement is made.

Looking at the behaviour we see in our everyday macroscopic world, it’s clear that these sorts of phenomena do not fit in and appear illogical. However, as illogical and counter-intuitive as they may be, they have turned out to be quite practical and useful. In this era of the second quantum revolution we have progressed from discovering these behaviours to studying how to make use of them. One such use is in the development of quantum technology, with a large focus on quantum computers.

A generic image of computation is a large block of 0s and 1s, the bits of a computer, each of which exists as either 0 or 1. In a quantum computer we have qubits (quantum bits, or two-level quantum systems), which can use quantum superposition to exist in both the 0 and 1 states at the same time. This property, along with entanglement between qubits, allows for faster information processing in quantum computers than in classical computers. Today’s research, carried out by university academics and in industry, focuses on how to get more qubits into quantum computers so they are prepared to tackle large-scale problems where they will provide an advantage over what is possible with classical supercomputers. This is being done by studying decoherence effects in qubits as well as by creating quantum algorithms which show clearly how quantum computers can outperform classical computers on particular problems.
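As a toy illustration of superposition and entanglement in the qubit picture described above, here is a minimal sketch using plain NumPy (no quantum-computing library assumed): a Hadamard gate puts one qubit into an equal superposition, and a CNOT gate then entangles it with a second qubit.

```python
import numpy as np

# Single-qubit basis states |0> and |1> as column vectors.
zero = np.array([1, 0], dtype=complex)

# Hadamard gate: puts |0> into the equal superposition (|0> + |1>)/sqrt(2).
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# CNOT gate on two qubits: flips the second qubit when the first is |1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Start both qubits in |0>, apply H to the first, then entangle with CNOT.
state = np.kron(H @ zero, zero)      # (|00> + |10>)/sqrt(2)
state = CNOT @ state                 # (|00> + |11>)/sqrt(2): a Bell state

probs = np.abs(state) ** 2
for label, p in zip(["00", "01", "10", "11"], probs):
    print(f"P({label}) = {p:.2f}")
# Only 00 and 11 appear, each with probability 0.5: measuring one qubit
# immediately tells you the state of the other.
```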

We now know the basic operating principles of a quantum computer and where their development currently stands, but the question of where they will be used in the future still remains.

Any quantum problem: Quantum computers are, naturally, better suited to simulating quantum behaviour. This has clear benefits for researchers studying quantum physics and quantum chemistry, but has benefits beyond those fields as well. Molecular structure and configuration rely on quantum mechanics and can be modelled better using quantum simulation techniques. This ability to unlock more complex properties of molecules means quantum computers should accelerate progress in materials synthesis and in drug development by bioengineers.

Cryptography: While quantum computers pose a threat to modern encryption techniques through ideas such as Shor’s algorithm, the sensitivity and inherent uncertainty of quantum systems could allow new quantum-based encryption schemes to be developed.

Database Searching: Many operations today require processing large amounts of data and searching the databases that contain it. This is a task known to be more efficient on quantum computers through the application of Grover’s algorithm (see the sketch below). There are benefits to be found in manufacturing operations and in data processing for the development of artificial intelligence through this advantage of quantum computers.
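To make the scale of Grover's advantage concrete: a classical unstructured search over N items needs on the order of N checks (about N/2 on average), while Grover's algorithm needs roughly (π/4)√N quantum queries. A quick sketch of the scaling, with arbitrary database sizes:

```python
import math

# Rough query counts for unstructured search: classical ~ N/2 on average,
# Grover ~ (pi/4) * sqrt(N) quantum queries.
for N in (10**4, 10**6, 10**9):
    classical = N / 2
    grover = (math.pi / 4) * math.sqrt(N)
    print(f"N = {N:>13,}: classical ~ {classical:,.0f} checks, "
          f"Grover ~ {grover:,.0f} queries")
```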

These are only examples of the tasks which larger-scale quantum computers are expected to tackle in the future. The second quantum revolution is also not solely focused on the development of quantum computers; other technologies, such as quantum sensors, are a very topical focus of research as well. It will be interesting to see how this technology is improved and applied in the coming years.

References:

(1) The second quantum revolution | symmetry magazine