What is wind energy?

Wind energy is a form of renewable energy, and it is by far Ireland's largest contributor to renewable electricity. In 2021 it provided 29.5% of Ireland's total electricity; for context, all other renewable energy sources combined produced a 5.6% share that year.

So how does it work?

A wind turbine is a device that takes in kinetic energy from the wind and converts it into electricity. Kinetic energy is the energy that particles have due to their motion. A group of wind turbines together is called a wind farm. The most common type of wind turbine is the "horizontal-axis turbine", so named because the blades spin around a horizontal axis.

The main factors that influence how much electricity a wind turbine can produce are wind speed, blade radius, and air density. The stronger the wind, the more energy is produced. The larger the "swept area" of the blades, the more energy can be produced: doubling the radius can result in 4 times more power. The denser, or "heavier", the air, the more lift is exerted on the rotor and hence the more energy is produced. This is why farms at sea level are favoured over farms at altitude; the air is denser at sea level. Other factors that can influence wind farm production are the layout of the turbines and the grid connection of the farm. The turbines cannot be too close to one another, as the turbulence caused by one turbine will affect the others around it. Wind turbines are generally more effective when hit with laminar flow as opposed to turbulent flow, i.e., a smooth and orderly flow is generally better than a chaotic motion of fluid particles when it comes to wind energy production.
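
To make the scaling concrete, here is a minimal sketch (not a model of any particular turbine, and ignoring the efficiency limit discussed further down) of the kinetic power the wind carries through a rotor's swept area. It shows why doubling the blade radius roughly quadruples the available power.

```python
import math

def wind_power(radius_m, wind_speed_ms, air_density=1.225):
    """Kinetic power (in watts) carried by the wind through the rotor's swept area.

    P = 0.5 * rho * A * v^3, with swept area A = pi * r^2.
    A real turbine extracts only a fraction of this.
    """
    swept_area = math.pi * radius_m ** 2
    return 0.5 * air_density * swept_area * wind_speed_ms ** 3

print(wind_power(40, 10))   # ~3.1 MW available with 40 m blades in a 10 m/s wind
print(wind_power(80, 10))   # ~12.3 MW: doubling the radius quadruples the power
```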

The drawbacks to wind energy

The main drawback is the obvious one: when there is no wind, there is no wind energy. Although we live on a windy island, we do (occasionally) get periods of high-pressure weather systems when the weather is calm. Wind turbines have a range of wind speeds within which they are safe and efficient to operate. If the wind speed falls outside this range, the turbines must be switched off. This also means that when we get stormy weather with lots of gusts, a time when there is plenty of wind energy to be harvested, we actually get none.

Other challenges include the installation difficulties associated with building wind farms in remote areas, sometimes even at sea. Upgrading national grid networks to reliably connect these isolated wind farms to urban areas is an ongoing project in many countries, and one which could significantly reduce the cost of expanding wind energy capacity. Turbine noise and interference with wildlife are also drawbacks, although these problems are not unique to wind energy production, and wind energy has a relatively low impact on wildlife. Public perception and fears about the impact that wind turbines will have on the landscape are further challenges which wind energy faces.

Another (more physics-related) point to note is Betz's Law, which states that no turbine can theoretically extract more than 16/27 (about 59.3%) of the kinetic energy of the wind, irrespective of turbine design. This limit puts a bound on the amount of energy which can be extracted from a site.
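
Applied to the earlier sketch (same illustrative blade radius and wind speed), the Betz limit caps the extractable power like so:

```python
import math

BETZ_LIMIT = 16 / 27   # ~0.593: maximum fraction of the wind's kinetic power any turbine can extract

# Power available with 40 m blades in a 10 m/s wind (same numbers as the sketch above):
available_w = 0.5 * 1.225 * math.pi * 40 ** 2 * 10 ** 3   # ~3.1 MW
print(BETZ_LIMIT * available_w)                            # ~1.8 MW even for a "perfect" turbine
```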

The way forward

Is there a way around the drawbacks to wind energy or is it doomed to be an intermittent power supply, ultimately incapable of meeting a nation’s energy needs?

The answer is that wind energy is part of the solution. The other part is energy storage. Within the last decade, novel ideas have emerged for storing excess wind power. The main idea is that the turbines generate electricity when it is windy, charge a battery when the power grid does not need that electricity, and the battery then discharges power when the wind stops blowing. One such solution was built in Tullahennel, Co. Kerry, where each of the turbines was fitted with a lithium-ion battery roughly the size of a car.

Another, more daring, solution was designed in Germany. It involved the integration of wind and hydro-power: wind turbines, fitted with large water reservoirs around their bases, would be placed on a hill above a hydro-power plant. The water reservoirs are the "batteries", analogous to the lithium-ion batteries in Kerry. Water would be pumped uphill to the reservoirs when energy demand was low and released back downhill to power the hydro-plant when needed.

These collaborative ideas might be the future of renewable energy. Such ideas allow for renewable energy to overtake and eliminate fossil fuel power, hopefully leading to a less volatile energy market and a greener future.

 

Here are the sources I used for this blog:

https://windeurope.org/about-wind/wind-basics/

https://windenergyireland.com/

https://www.gov.ie/en/policy-information/7498e-renewable-electricity/

https://www.seai.ie/publications/Energy-in-Ireland-2022.pdf

https://www.energy.gov/eere/wind/advantages-and-challenges-wind-energy

https://www.ge.com/news/reports/unique-combo-wind-hydro-power-revolutionize-renewable-energy

https://www.ge.com/news/reports/shades-green-wind-battery-hybrid-system-debuts-ireland

https://theroundup.org/betz-limit/

Featured Image Source:

https://commons.wikimedia.org/wiki/File:Whitelee_Wind_Farm_turbines.jpg

 

 

 

As the global pandemic has largely left the news cycle, I think it is wise to reflect on how people interact with science in the media on a daily basis, and how ineffective science communication can shape public discourse. Masks, lockdowns, and vaccines were the largest topics of discourse over the course of the last two years; arguments were had in the media on a near-daily basis about how effective, necessary, and safe all of these were. But debates about governments controlling the movement of people, and about anti-vaxxers, have been in and out of the news cycle for years. Masks are a relatively new topic for a lot of people, and I believe that the science of masks didn't really get into the collective psyche. Here I will be debunking some of the myths surrounding masks and exploring how different masks work.

“I’m getting less oxygen!”

A common complaint about masks is that they restrict breathing, and this is somewhat true. Wearing a mask can cause someone to put more effort into breathing, and for some people with certain medical conditions this can be a real issue. It was also recommended that children under 13 shouldn't be made to wear masks. Unfortunately, this leads to some incorrect conclusions. One of the most pervasive myths is that masks trap carbon dioxide or stop you getting as much oxygen. Many people report feeling out of breath wearing masks, so it is a reasonable conclusion to come to that these masks are somehow giving you less oxygen.

This proposed ability to select for simple gases with a 25 cent surgical mask is, however, not possible. The range of pore diameters in a standard surgical mask is 10 to 50 micrometers¹, while the molecular sizes of carbon dioxide and oxygen are 0.33 nanometers and 0.30 nanometers respectively. This makes the holes in the mask tens of thousands of times larger than an oxygen molecule, while a carbon dioxide molecule is only about 1.1 times as big as an oxygen molecule. But if masks don't block oxygen, why are people feeling bad after wearing them? Unfortunately, a big part of it is mental health. Pandemics are scary, and putting on a mask, especially at the beginning, made it more real. Anxiety and panic attacks can look like a lot of different things, and the high levels of anxiety surrounding masks² can cause people to connect breathlessness, tiredness, and other symptoms of anxiety with the physical properties of the mask, rather than with how they feel about it.
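
As a quick back-of-the-envelope check on those numbers (using the pore and molecule sizes quoted above), the scales are simply too far apart for a mask to act as a filter for individual gas molecules:

```python
# Rough scale comparison using the figures quoted above (all sizes in metres).
pore_min, pore_max = 10e-6, 50e-6   # surgical mask pore diameter: 10-50 micrometers
o2_size = 0.30e-9                   # oxygen molecule: ~0.30 nanometers
co2_size = 0.33e-9                  # carbon dioxide molecule: ~0.33 nanometers

print(pore_min / o2_size)   # ~33,000: even the smallest pores dwarf an O2 molecule
print(pore_max / o2_size)   # ~167,000 for the largest pores
print(co2_size / o2_size)   # ~1.1: CO2 is barely bigger than O2
```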

“How can a piece of cloth stop a virus, but my underpants don’t stop a fart?”

This is a surprisingly good question, though it is usually brought up by someone arguing in bad faith. The explanation is relatively simple: as above, the things we are talking about filtering are orders of magnitude apart in size, so they cannot be compared. However, in this case, they are largely right. Masks don't outright filter and trap virus particles, and a lot makes it through into the environment.

So why then do we wear masks? For one, it's a numbers game. When fighting a new disease, the number of virus particles in your body matters and will determine how sick you get and how much you spread it. Masks stop some virus, and that can prevent infections. Myths like this are particularly pervasive because they contain partial truths, but this question misses the point. The phrasing implies a binary: masks either stop covid, or they don't, and if they don't, they're useless. The issue is that when people bring this up, they are largely arguing with well-intentioned "pro-maskers" who often imply the opposite side of the binary, that masks do stop covid. This makes it very easy for the anti-masker to "win" the debate, because all they have to prove is that masks don't stop all covid particles, which is of course true.

People don't have a very good intuition for the physics of masks. One can think of a mask as just a tiny sieve, but matter interacts differently at that small a scale. An N95 mask, for example, has very tightly knit nanofibres that can capture a wide range of particle sizes, but for some particles, like water droplets, it utilises electrostatic charges within the fibres to induce a slight polarity in the droplets and adsorb them.

“Even the scientists admit masks don’t work.”

A Danish paper found that recommending people wear surgical masks did not produce a "statistically significant result" and concluded that, in their data, masks were comparable to lesser forms of protection. This was touted by many as irrefutable evidence that face masks are effectively useless. There are, however, many limitations to this study. It was not blinded, data was collected via self-reporting, and the trial only looked at the mask wearers themselves testing positive, when it is known from other studies that masks are better at protecting other people from the wearer.³ Possibly most importantly, the study was done where other preventative measures were already in place. Absence of proof is not proof of absence, and this study is far from conclusive, which the scientists involved are all too eager to point out.

Science communication

There is quite a large disconnect between what scientists publish and what gets disseminated to the general public. It is hard to blame scientists for this issue, as scientists are, by and large, not writing for lay people when they publish an article in an academic journal. They are writing for other scientists, but in the age of information, it's not only scientists who have access to those articles. Explaining new science is often left up to reporters, or sometimes science communicators acting as a middle man, making mask discourse (or any science discourse) a bizarre game of telephone.

If someone didn't have a good understanding of the science of masks, or of scientific methods in general, then the first time they heard facts about masks was through fairly strict mask mandates, made by decidedly non-science-educated politicians. In some cases this allowed people to conflate the (poorly represented) scientific facts with the politics of the government espousing them.

There will of course always be fringe groups with outlandish claims and a disregard for science, but facts and figures can be presented better than a ream of figures listed off on the 9 o’clock news every night. Science can be more accessible.

 

 

References

¹ Du, W., Iacoviello, F., Fernandez, T. et al. Microstructure analysis and image-based modelling of face masks for COVID-19 virus protection. Commun Mater 2, 69 (2021). https://doi.org/10.1038/s43246-021-00160-z

² Szczesniak D, Ciulkowicz M, Maciaszek J, Misiak B, Luc D, Wieczorek T, Witecka KF, Rymaszewska J. Psychopathological responses and face mask restrictions during the COVID-19 outbreak: Results from a nationwide survey. Brain Behav Immun. 2020 Jul;87:161-162. doi: 10.1016/j.bbi.2020.05.027. Epub 2020 May 7. PMID: 32389696; PMCID: PMC7204728.

³ Efficiency of surgical masks as a means of source control of SARS-CoV-2 and protection against COVID-19. Int. Res. J. Pub. Environ. Health 7(5):179-189.

For almost 70 years, humanity has been launching rockets, satellites, people and animals into space. While most of the people and some of the animals came back, much less of the equipment does. For most satellites there are no plans to return them to sender: when their lifetimes run out, they simply stay orbiting Earth. This build-up of space junk and debris is a growing problem that requires constant surveillance. The US Department of Defense even has a global space surveillance network whose sensors constantly monitor more than 27,000 pieces of space junk. However, the estimated number of dangerous pieces of debris is much higher, since a lot of it is too small to be detected, even though at the speeds it travels while orbiting the Earth it is still a big source of danger. The orbits of functioning satellites are carefully monitored, along with that of the International Space Station. If it is calculated that a collision may occur between the space station and some orbital debris, an emergency manoeuvre may have to take place. However, even with this surveillance and tracking, costly accidents can still occur. So, what can be done to prevent this from continuing to be a growing problem?

The one saving grace we have so far is that some space junk drops down low enough into the Earth's atmosphere to begin to feel its presence in the form of atmospheric drag. This causes the debris to burn up and disintegrate. This will eventually happen to all of the Earth's orbital debris, but it may take decades, and the rate at which the amount of debris is increasing vastly outpaces the amount that is getting burnt up.

The first planned mission to remove orbital debris is slated for 2025, with the European Space Agency funding a company to send an experimental four-armed robot into space to collect a large payload adapter and drag it into the Earth's atmosphere, where it and the robot will burn up. While this is a positive step, it is only one piece of large debris being removed, at quite a high cost. A better, more economical solution is required.

A newer idea is to add drag sails to future satellites. Rather than a light sail, which is used to propel spacecraft further from the sun using the force of the photons radiating from it, these will act more like parachutes. They will be very thin and extremely sensitive to any force, so even by just grazing the top of the Earth's atmosphere and barely encountering any air, the satellite will be slowed and forced down towards the Earth, causing a much quicker burn-up.

So hopefully with more creative ideas and solutions, the amount of junk in Earth’s orbit will soon begin to decrease.

 

Have you heard of the Irish Woodstock1? It isn't a strange retro festival coming this summer, but an important historical event that has affected Irish politics and energy generation to this day, despite happening in the long-distant days of 1978. 'Get to the Point', the event's proper title, was the culmination of five years of effort by citizens concerned about government plans to build four nuclear power plants at Carnsore Point in Co. Wexford: a protest concert headlined by none other than Irish musical luminary Christy Moore. In every way, the protestors succeeded. The government discreetly scuttled the plans and, in 1999, made it illegal to use nuclear fission for the purposes of electricity generation. While it is an admirable example of non-violent action by citizens leading to change in an unpopular government policy, in light of the current climate crisis and Ireland's commitments under the Paris Agreement to cutting emissions, was this decision beneficial for the country in the long run?

The main fears1 around the proposed Carnsore Point power plants, ignoring the political dimensions of the Cold War era, were the safety risks posed by the by-products of the fission process (radioactive nuclear waste) and the danger posed by a failure of the reactor, which could render the land all around the plants deeply irradiated. It is likely that similar concerns are first and foremost on the minds of those who continue to oppose nuclear power in Ireland today, though the most recent surveys indicate that the percentages of the population in either the pro- or anti-nuclear camp are roughly even.2

Are these concerns well founded? Dealing with the latter concern first, there is no doubt that the consequences of an accident like Chernobyl can be catastrophic and should not be taken lightly. On the other hand, the majority of nuclear accidents are caused by a mixture of human and equipment error, which is exceedingly rare, especially since nuclear power plants are held to high safety standards in light of the danger an unmitigated accident poses. Consider France3, which opened its first nuclear power plant in 1962 and now has over 50 nuclear fission reactors, supplying about 70% of the nation's electricity. Its power plants have not had a serious accident since 1980, and that incident was resolved without the endangerment or loss of human life. France's success runs counter to the common anti-nuclear narrative that each power plant is a disaster waiting to happen. As such, it is unlikely that the fears around a nuclear disaster in Ireland would have come to pass, though for some people any chance of such a disaster might be too much of a risk.

The concern about nuclear waste is more insidious. The physics cannot be denied: the by-products of Uranium-235 fission can remain dangerous to humans for up to thousands of years and need to be carefully managed to avoid endangering human life4. The best solution is deep geological disposal, either in natural caves or in purpose-built boreholes created using repurposed oil-drilling technology, with hundreds of meters of bedrock protecting civilization from the harmful radiation5. Attempts to create large-scale waste disposal facilities, however, are often kneecapped by the protests of citizens within the selected locality, who are unwilling to take what they perceive to be an unfair risk. It is only in Finland that serious work has been done on a facility purpose-built for nuclear waste disposal. It can be said, then, that the concerns of the protestors at Carnsore Point were perhaps well founded here: if the Irish Government could not safely dispose of nuclear waste, sooner or later someone would be hurt. If one analyses the situation, however, a certain cycle emerges: without a way to dispose of nuclear waste safely, people oppose the construction of nuclear power plants; yet, equally, when nuclear waste disposal facilities are to be constructed, these too are opposed because people don't view them as safe.

This latter point dovetails neatly into the final point I wish to make in this blog post. It may be that the concerns of the protestors at Carnsore continue to be valid in the modern day, but only because such concerns made taking actions to alleviate the worries difficult, if not impossible. The risk of an accident can never be made zero. Nuclear waste cannot magically disappear. At the same time, we cannot generate energy out of nothing. It may be that accepting the risks of nuclear power plants at Carnsore would have been better for the island of Ireland in the long run. There is no single silver bullet to solve the current energy crisis, but rather a chain of connected solutions that require us to consider every option fairly, and not fall into the trap of 20th-century hysteria.

 

References:

1: Ireland’s Woodstock: the anti-nuclear protests at Carnsore Point – HeadStuff

2: https://www.thejournal.ie/climate-policies-nuclear-power-cars-economy-poll-5586969-Nov2021/

3: https://www.world-nuclear.org/information-library/country-profiles/countries-a-f/france.aspx

4: https://www.nrc.gov/reading-rm/doc-collections/fact-sheets/radwaste.html

5: https://world-nuclear.org/information-library/nuclear-fuel-cycle/nuclear-waste/storage-and-disposal-of-radioactive-waste.aspx#:~:text=Disposal%20of%20low%2Dlevel%20waste,the%20most%20radioactive%20waste%20produced.

 

 

Atoms, the building blocks which make up our universe, are made up of 3 important particles: the proton, the neutron and the electron. You may recognize this image from The Big Bang Theory.

The protons are the particles in red and have a positive electric charge; the neutrons are in blue and have no charge. Together they comprise the nucleus. The grey particles are the electrons, which have negative charge and are attracted to the protons, much like the moon is attracted to the Earth. They execute an orbital motion around the nucleus.

When a very high energy ray of light comes near an atom, it has a chance to spontaneously disappear and leave behind an electron and a positron (the antimatter counterpart of the electron: like an electron, except with a positive charge). This is called pair production. At first glance, this is a very strange prospect, that a ray of light can spontaneously turn into matter and antimatter. But this process obeys all of the laws of physics (obviously, because it happens), i.e. conservation of momentum, conservation of charge and conservation of energy. Conservation of energy is satisfied when you take into account the rest energies of the electron and the positron.

Einstein figured out that matter is a form of bundled-up energy, described by the famous equation E = mc², which says that the rest energy of an electron or a positron is equal to its mass multiplied by the speed of light squared. So if the energy of the light ray is more than twice mc², with m being the mass of the electron or positron (both have the same mass), then conservation of energy is satisfied.
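
Putting a number on that threshold (a quick illustrative calculation using the standard values for the electron mass and the speed of light):

```python
# Minimum photon energy for pair production: E >= 2 * m_e * c^2
m_e = 9.109e-31   # electron (and positron) rest mass, kg
c = 2.998e8       # speed of light, m/s
eV = 1.602e-19    # joules per electronvolt

rest_energy = m_e * c ** 2
print(rest_energy / eV / 1e6)       # ~0.511 MeV: rest energy of one electron or positron
print(2 * rest_energy / eV / 1e6)   # ~1.022 MeV: minimum light-ray energy for pair production
```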

The figure above shows the light ray coming in from the left, passing near the atom, and transforming into the electron-positron pair on the right. So it obeys the laws of physics, but why does it happen? And why does it have to happen near an atom?

Well, it has to happen near an atom due to a subtle interaction between the atom and the light ray. Every atom produces electromagnetic fields, which the ray interacts with. The result of this interaction is probabilistic, and the probability is determined by taking into account the total number of possible outcomes. It turns out, through the study of a field called quantum electrodynamics, that the probability that pair production occurs increases with the energy of the light ray (the higher the energy, the more likely) and with the square of the number of protons in the nucleus (again, the more there are, the more likely). This interaction with the atom through its electromagnetic field includes the atom in the equations for the conservation of energy and momentum, which is why the atom is seen in the figure to carry extra momentum after the interaction.

Pair production can also happen backwards, in a process called pair annihilation. Imagine running the process in the figure above backwards in time: the electron and the positron run into each other near an atom and annihilate each other to create a gamma ray, with all of the various conservation laws again being obeyed.

References:

Image 1: Big bang theory: Why Leonard & Sheldon Spent exactly 139.5 hours rebuilding the model, https://screenrant.com/big-bang-theory-leonard-sheldon-139-hours-model-why/ , Accessed May 2020

Image 2: Conversion of energy into mass, https://www.jick.net/~jess/hr/skept/EMC2/node9.html , accessed May 2020

I think the process of learning about science can create unique experiences which students studying physics can relate to: the actual process of comprehending new mathematical concepts, of understanding old information in new frameworks, the process of continuously leaving behind past beliefs about the world and updating them until you don't describe your knowledge of the world as a belief anymore, the inherent surety and yet also doubt in the scientific worldview. These experiences are so relatable to scientists yet so distant from people who haven't travelled down those paths. The purpose of this piece is that a reader may relate to one of these experiences without ever having to learn physics. I started thinking on this topic today while listening to a public lecture given by Feynman (who is arguably one of the best educators to have ever existed). I thought he was able to explain what the process of learning these facts and theories about the world feels like. The video was in some ways cathartic and comforting. I thought that if I could get my parents or grandparents to listen to it, maybe they could better relate to some of the emotions that are brought on by studying physics.

To this end I want to talk about the framework in which you understand a concept, and what it's like to constantly relearn something using a different framework. Firstly, to understand anything you need to be existing in a space where some things are taken as given. If you asked "Why did the chicken cross the road?", a satisfactory answer could be because there was chicken feed on the other side. Here we accept this as a reasonable answer because the concept of being motivated by food is taken as given. However, a sentient plant would be confused by this explanation and would need to be told that animal life requires the consumption of food. The process of physics often requires you to list what you take about the world as given and then to show what this would mean for phenomena. Similarly, you spend time relearning things you thought you knew, but in a different framework of assumptions. For example, in Newtonian mechanics we take time as an absolute entity that exists separately from the particles and the locations of any events. This proved to be wrong, but it is approximately true at small enough speeds, which is what allows Newtonian mechanics to remain useful. In the modern day we have pushed the basic axioms of this world back further and further. One assumption made today is that the laws of physics are the same now as they were yesterday or a hundred years ago. Another would be that the speed of light is the same in every reference frame. Yet at no stage do we have any idea as to why these givens are the case, just that the implications of these assumptions match up with nature as we have observed her.

To illustrate this process, I will describe my understanding of the concept of energy. Firstly, I thought energy was to do with motion and light and explosions and food: something objects could have, or something that could be passed between forms, but never destroyed or created. Further study had me thinking of energy as just a quantity that could be calculated in any scenario. What made it special was that after some time the same calculations would give you the same result. As such it was useful to physics because the calculation involved the speeds and so on of the objects, so that if one term in the sum took a smaller value, another would have to take a larger value. I took the existence of such a constant quantity as given; I had no idea why that would be the case. Now I understand energy as a quantity that must stay the same if the laws of physics are to be the same today as they were yesterday. If the value of the energy were different, the motion of the particles would be different for different values of time. As such, energy is now a consequence of the assumption that the laws of nature are unchanged in time. This assumption is the new given, and I understand energy in this framework. I hope this illustrates that learning physics can sometimes feel like a constant relearning of old ideas in a new framework with new givens.

God particle and physicists' grail: what is the Higgs boson and why is it so important?

 

What is it?

The Higgs boson is an elementary particle postulated in 1964 by three researchers: two Belgians, François Englert and Robert Brout, and then independently by the Scotsman Peter Higgs, even if history has only retained the name of the latter.

To summarise its function, it has often been said that "the Higgs boson is responsible for the mass of everything around us", a shortcut that needs to be deciphered.

Why do we need the Higgs boson?

The Standard Model of physics postulates that all the phenomena that surround us in the Universe are the work of four elementary interactions, or fundamental forces: the electromagnetic interaction (the origin of light and magnetism), the gravitational interaction, the strong nuclear interaction (which explains the cohesion of the atomic nucleus) and the weak nuclear interaction (which explains the radioactivity of certain atomic nuclei).

Even though there are four different forces, physicists are trying to unify them in order to arrive at a Theory of everything.

The electromagnetic interaction and the weak nuclear force, for example, are unified in the form of a single force that links them: the electroweak force. Except that to bring them together in this way, the Standard Model 'theory of everything' faces a big problem: each force has its own type of elementary particle.

The electromagnetic force is associated with the photon, while the weak force is associated with the W and Z bosons. This is where unification gets stuck: the expected symmetry is broken, because the photon has zero mass (hence it travels at the speed of light), which is not the case for the very massive W and Z bosons. How can the theory of unification be sustained in the face of forces that are so different with respect to such an essential ingredient as mass? This leads to several questions, such as how to take mass into account, or how to explain the fact that different particles have different masses.

The Standard Model already included all types of elementary particles, such as quarks, leptons and bosons.

Figure 1: Ordinary matter content of the Standard Model of Particle Physics.

 

The protons and neutrons inside the nucleus of the atom are made up of quarks. The more familiar leptons include the negatively charged electrons outside the nucleus. Bosons are responsible for various fields, such as those of electricity and light. But none of these particles explains mass.

In 1964, the theory of the Higgs field was postulated to solve this problem.

 

Higgs field

To understand the mechanism of the Higgs field, we can compare it to a group of people who initially fill a room in a uniform way. When a celebrity enters the room, she draws people around her, giving her a large 'mass'. This gathering corresponds to the Higgs mechanism, and it is the Higgs mechanism that assigns mass to particles.

It is not the boson that directly gives mass to the particles: the boson is a manifestation of the Higgs field and the Higgs mechanism, and it is the mechanism which gives mass to the particles. In this metaphor, the boson is comparable to the following phenomenon: an outsider, from the corridor, spreads a rumour to the people near the door. A crowd of people forms in the same way and spreads, like a wave, across the room to pass on the information: this crowd corresponds to the Higgs boson.

In more physical terms, the Higgs field is everywhere, and it has an effect even in the most total vacuum of space: it gives mass. Since the quantum vacuum is full of the Higgs field, aligned along a particular direction of the abstract space mentioned above, particles "parallel" to this direction will be able to propagate without constraint, but those "perpendicular" to it will suffer a slowdown due to incessant interactions with the Higgs field.

 

Origin of the universe:

According to this theory, the Universe is filled with a specific field that gives mass to elementary particles. This field was present at the Big Bang, but its value was zero, so the force bosons were also massless, including the W and Z bosons. As the Universe cooled, the field spontaneously acquired a non-zero value. All the elementary particles that interacted with the Higgs field acquired mass, and the longer the interaction lasted, the higher the mass turned out to be. The photon did not interact with the field because of its nature, so its mass is zero; but the W and Z bosons interacted so much that they acquired their large masses.

 

The theory looks good on paper, but to be proven, it must be observed. The fields of the Universe are all manifested, when they exist, by an observable particle. In the case of the Higgs field, this particle is called the Higgs boson. Only one thing was missing: to detect it.

 

Some words about the LHC:

Figure 2: The tunnel and part of the particle accelerator at the Large Hadron Collider

The lifetime of the scalar boson is too short for it to be detected directly: we can only hope to observe its decay products, or even the decay products of these. Events involving ordinary particles can also produce a signal similar to that produced by a Higgs boson.

This is where the Large Hadron Collider (LHC) comes in, the particle accelerator that started operating in 2008 near Geneva. In this 27-kilometre-long circular tunnel, protons are accelerated to very high energies and made to collide, an ideal tool in particle physics that makes it possible to recreate conditions similar to the primordial environment of the universe.

Two parallel LHC experiments, the ATLAS and CMS detectors, detected a boson in a mass region of the order of 126 GeV, exactly where the Higgs boson was expected to be. It could be nothing other than the long-sought particle that proved, fifty years after the theory emerged, the existence of the Higgs field.

The experimental proof of the Higgs Boson led to the award of the Nobel Prize in Physics to François Englert and Peter Higgs in 2013.

 

To conclude, the impact of this discovery is huge as it clarified the Standard Model of physics and further confirmed the idea of a unification of forces. Knowledge of the Higgs boson’s properties can also guide research beyond the Standard Model and pave the way for the discovery of new physics, such as supersymmetry or dark matter.

 

References:

Le boson de Higgs. (n.d.). CERN. https://home.cern/fr/science/physics/higgs-boson

Universalis‎, E. (n.d.). BOSON DE HIGGS. Encyclopædia Universalis. https://www.universalis.fr/encyclopedie/boson-de-higgs/3-champ-de-higgs-et-masse-des-particules/

Pourquoi se préoccuper du boson de Higgs? (n.d.). Parlons Sciences. https://parlonssciences.ca/ressources-pedagogiques/les-stim-en-contexte/pourquoi-se-preoccuper-du-boson-de-higgs

Images:

Introduction to Physical Astronomy – Elementary Particles. (n.d.). Kias.dyndns.org. http://kias.dyndns.org/astrophys/particles.html

Pourquoi se préoccuper du boson de Higgs? (n.d.). Parlons Sciences. Retrieved May 13, 2022, from https://parlonssciences.ca/ressources-pedagogiques/les-stim-en-contexte/pourquoi-se-preoccuper-du-boson-de-higgs

Have you ever wondered how those water fountains at airports have perfectly smooth jets of water that look like rods of glass jumping around in all directions? Well, that is because of a phenomenon called laminar flow! In fluid dynamics, laminar flow is achieved when the fluid particles follow smooth paths, all parallel to each other with no mixing between the layers. The fluid is so streamlined that when it exits a nozzle or pipe, it looks exactly like a rod of glass.

In contrast to laminar flow, turbulent flow is when cross currents, swirls, and mixing between the paths of the fluid cause a chaotic flow. One example of laminar flow that occurs in nature is in waterfalls. At the very edge of the waterfall, the currents of water follow laminar flow because of the velocity of the water as it reaches the edge. Soon after the water falls over the edge, mixing of the different currents, together with acceleration due to gravity, causes turbulence in the flow, breaking away from the smooth laminar flow.

Osborne Reynolds studied and theorised the distinction between the laminar and turbulent regimes of flow. His research yielded the Reynolds number (Re), a dimensionless number that acts as a parameter to identify the transition between laminar and turbulent flow. Laminar flow is generally governed by the geometry of the tube, the velocity of the fluid, and the viscosity of the fluid. Accordingly, Re is directly proportional to the velocity of the fluid and the diameter of the tube, and inversely proportional to the viscosity of the fluid. At lower values of Re the flow is laminar, and when Re exceeds a certain transition threshold, the flow becomes turbulent.
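
As a rough sketch of how this is used in practice (taking the standard pipe-flow form Re = ρvD/μ, and noting that for flow in a pipe the transition is commonly quoted at around Re ≈ 2,300):

```python
def reynolds_number(density, velocity, diameter, viscosity):
    """Re = rho * v * D / mu for flow through a pipe or nozzle of diameter D."""
    return density * velocity * diameter / viscosity

# Water at room temperature: density ~1000 kg/m^3, viscosity ~0.001 Pa.s.
slow_narrow_jet = reynolds_number(1000, 0.1, 0.01, 0.001)   # Re = 1,000 -> laminar
fast_wide_jet = reynolds_number(1000, 2.0, 0.05, 0.001)     # Re = 100,000 -> turbulent

print(slow_narrow_jet, fast_wide_jet)
```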

Therefore, using water, with a viscosity of approximately 1 millipascal-second, at low velocities and with small nozzle diameters, the water fountains at airports can be engineered to create majestic, glass-rod-like jets of water. The jets can be fitted with LEDs to create even more spectacular formations. Another important application of laminar flow is seen in the wings of an aircraft. The wings are specially designed to cut the air in order to create laminar flow of air around them, so as to minimise drag effects and help produce lift.

There are three main ways to get faster on a bike: get stronger, get a lighter bike, or become more aerodynamic.

 

Since getting stronger is hard and a lighter bike is expensive, the easiest way to go faster on a bike is by optimising aerodynamics.

 

One component of drag is the difference in air pressure in front of and behind you. Aerodynamic shapes are developed using computational fluid dynamics (CFD) and wind tunnels. The most aerodynamic shapes are often teardrop-shaped "aerofoils", and these shapes influence the design of helmets and the tubes that make up the bike frame.

 

A popular joke in CFD is the aerodynamic cow; here the colours indicate the intensity of the wind drag.

 

Some designers have even used the CFD simulation heatmap to inspire the bike's livery.

 

The force of wind drag can be calculated from the following equation:

F_drag = ½ ρ C_D A v²

where ρ is the air density, A is the frontal area, C_D is the drag coefficient and v is the velocity. This means that the drag force increases rapidly at higher speeds; since we're trying to go fast anyway, that part is unavoidable. The most important variable here, and the easiest to manipulate, is the frontal area. A cyclist wants to make their frontal area as small as possible to decrease drag. They can do this by using a bike that is designed to be more aerodynamic, with complicated carbon layups that incorporate aerofoil shapes into the leading edges of the frame and with deep-section wheels that decrease the turbulence of the wind as it passes by the wheels. But the easiest, and cheapest, way to decrease frontal area is to bring your elbows in or lean forward more.
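
A small sketch of that relationship (the drag coefficient and frontal area below are illustrative guesses for a rider, not measured values):

```python
def drag_force(air_density, drag_coefficient, frontal_area, velocity):
    """Aerodynamic drag in newtons: F = 0.5 * rho * C_D * A * v^2."""
    return 0.5 * air_density * drag_coefficient * frontal_area * velocity ** 2

rho = 1.225   # air density at sea level, kg/m^3
cd = 0.9      # illustrative drag coefficient for a rider on the hoods
area = 0.40   # illustrative frontal area, m^2

print(drag_force(rho, cd, area, 10))    # ~22 N at 10 m/s (36 km/h)
print(drag_force(rho, cd, area, 20))    # ~88 N at 20 m/s: double the speed, four times the drag
print(drag_force(rho, cd, 0.32, 20))    # ~71 N: tucking to cut frontal area by 20% at the same speed
```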

 

 

The third cyclist in the figure is the most aerodynamic but is also the least equipped to deliver power to the pedals to propel themselves. Hence, this position is usually only adopted when the cyclist is going so fast that they can no longer pedal at a cadence that actually makes them go faster, and must rely solely on aerodynamics to gain more speed. Because of this, many different variations of the "supertuck" have been developed by prominent cyclists.

 

 

In 2021, the Union Cycliste Internationale (UCI), the governing body of professional cycling, banned the "supertuck" position (sitting on the top tube, as seen in the "Froome" and "Top tube safe" positions) from races, deeming it too dangerous. An even more extreme and dangerous method of reducing frontal area can be seen in the following video.

The world is full of forces that are important for our everyday life: without friction we would be unable to drive cars on the road safely. Did you know that all forces can be classified into one of the following four categories: gravitational force, electromagnetic force, strong force and weak force? These make up the fundamental forces of nature and can also be thought of as interactions. These forces all have a particle associated with them. In this article, we will discuss the four different types.

 

Gravitational Force

This is the force that people are probably most familiar with and is commonly referred to as gravity. Out of all of the forces it is the weakest, with only about 6×10⁻³⁹ times the strength of the strong force. However, it has an infinite range. Gravity is the force of attraction between two objects, which depends on the distance between them. This force exists between every single particle in the universe and every other particle, however weak it may be.

The gravitational force felt by each object due to its attraction to the other, based on Newton’s law of gravitation [1]

The figure shows that the strength of the force depends on the masses of the two objects (m1 and m2) and the distance between them, d (sometimes referred to as r instead). The further away the objects are, the weaker the force is: because the force falls off with the square of the distance, it can become very weak indeed, even though its range is infinite.
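
To put rough numbers on this (a small illustration using the standard value of the gravitational constant and approximate masses):

```python
G = 6.674e-11   # gravitational constant, N m^2 / kg^2

def gravitational_force(m1, m2, d):
    """Newton's law of gravitation: F = G * m1 * m2 / d^2 (newtons)."""
    return G * m1 * m2 / d ** 2

# Two 70 kg people standing 1 m apart attract each other with a tiny force...
print(gravitational_force(70, 70, 1.0))                 # ~3e-7 N

# ...while the Earth and the Moon, despite the huge distance between them,
# pull strongly on each other because their masses are so large.
print(gravitational_force(5.97e24, 7.35e22, 3.84e8))    # ~2e20 N
```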

 

Electromagnetic Force

This is also sometimes referred to as electromagnetism and is the category that electric and magnetic forces fall under. This is likely the type of force that people will understand and encounter the most after gravity: taking the example of driving again, friction is due to electrical interactions between the atoms of the car tyres and the atoms of the road. Magnetic forces, for example the force between two magnets, occur due to electric charges in motion. For a long time, it was thought that electric and magnetic forces were completely separate from each other. However, it was discovered that they are both just components of a single electromagnetic interaction; the combined force that electric and magnetic fields exert on a charge is known as the Lorentz force. The electromagnetic force also has infinite range, but it is much stronger than the gravitational force.

 

Strong Force

One is probably less acquainted with the last two forces. The strong nuclear force, often shortened to the strong force, is the strongest of the four, as the name would suggest. It is the force that allows larger particles to exist. It is responsible for binding clusters of quarks together, therefore allowing protons and neutrons to exist. Neutrons and protons then form the nucleus of an atom and are also held together by the strong force. Protons are positive particles and neutrons are neutral particles; as you may know from magnets, like tends to repel like. The positive protons experience an electric force that repels them from each other; if not for the strong force, the nucleus would break apart.

The strong force overcomes the electric repulsion experienced by the protons, keeping the nucleus together [2]

The strong force, however, has a very limited range in comparison to the gravitational and electromagnetic forces, roughly the subatomic scale. This force therefore only acts when particles are no further apart than about the width of a proton.

 

Weak Force

Finally, the weak force has the smallest range of all the forces. Despite what the name suggests, it is not actually the weakest (that is gravity); it is the second weakest. The weak force is the cause of beta decay, a form of radioactivity. In the nucleus of a radioactive atom, a neutron is converted to a proton, and two particles are released from the nucleus: an electron (a small negatively charged particle) and an antineutrino (a nearly massless antiparticle). This is one form of beta decay, and it occurs due to the weak force. The weak force is also the reason why carbon dating exists, allowing archaeologists to determine the age of many organic objects that came from living organisms. Carbon-14 is an unstable nucleus due to the weak force and will decay into nitrogen over time. By measuring the fraction of carbon-14 remaining in the object, its age can be determined.
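
As an illustration of how that measurement is turned into an age (a simplified sketch assuming the commonly quoted carbon-14 half-life of about 5,730 years):

```python
import math

HALF_LIFE_C14 = 5730.0   # years: commonly quoted half-life of carbon-14

def age_from_c14_fraction(remaining_fraction):
    """Estimate an object's age from the fraction of its original carbon-14 that remains."""
    return HALF_LIFE_C14 * math.log(1.0 / remaining_fraction, 2)

print(age_from_c14_fraction(0.5))    # 5,730 years: half the carbon-14 is gone
print(age_from_c14_fraction(0.25))   # 11,460 years: two half-lives have passed
```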

 

Comparing the forces

Now that we have described each force briefly, let us compare their strengths and ranges, and discuss the particles that are responsible for the different types of interactions.

The four fundamental forces and their properties, in order of decreasing strength [3]

The comparative strengths can easily be seen, as well as the varying ranges and what those sizes actually compare to for the strong and weak forces. The mediating particle is the one that is exchanged to cause a certain interaction to occur. For the strong force, this is the gluon, a massless boson with a spin of 1. All the forces in fact have bosons as their mediating particles, particles with integer spin (0, 1, 2, …). The electromagnetic force is mediated by photons, particles of light. The weak force has the W and Z bosons as its mediating particles. It is believed that the gravitational force also has a mediating particle, the spin-2 graviton; however, this is currently theoretical, as this boson has yet to be observed experimentally.

 

Combining all four?

As previously stated, electric and magnetic forces were originally thought to be separate, until it was discovered that they can be described by the electromagnetic force. Since the 20th century, the electroweak theory has been developed, which describes both the electromagnetic and weak forces as a single electroweak force. This theory has been tested rigorously and so far has passed all experimental tests. There has been a desire to create a grand unified theory that would combine the electroweak force and the strong force. This 'electronuclear' force has yet to be observed; however, models have been created that predict its existence. There has also been great interest in a theory of everything that would combine all the forces. There is still much about the universe we are discovering, and it will be interesting to see if such theories are ever proven to be true.

 

Image sources

Featured image: https://commons.wikimedia.org/wiki/File:FOUR_FUNDAMENTAL_FORCES.png

[1] https://www.sciencefacts.net/gravitational-force.html

[2] https://profmattstrassler.com/articles-and-posts/particle-physics-basics/the-structure-of-matter/the-nuclei-of-atoms-at-the-heart-of-matter/what-holds-nuclei-together/

[3] http://hyperphysics.phy-astr.gsu.edu/hbase/Forces/funfor.html