Posts

As the global pandemic has largely left the news cycle, I think it is wise to reflect on how people interact with science in the media on a daily basis, and how ineffective science communication can shape public discourse. Masks, lockdowns, and vaccines were the largest topics of discourse over the last two years, with near-daily arguments in the media about how effective, necessary, and safe each of them was. But governments controlling the movement of people, and anti-vaxxers, have been in and out of the news cycle for years. Masks are a relatively new topic for a lot of people, and I believe the science of masks never really entered the collective psyche. Here I will debunk some of the myths surrounding masks and explore how different masks work.

“I’m getting less oxygen!”

A common complaint about masks is that they restrict breathing, and this is somewhat true. Wearing one can make breathing take more effort, and for some people with certain medical conditions this can be a real issue. It was also recommended that children under 13 shouldn’t be made to wear masks. Unfortunately this leads to some incorrect conclusions. One of the most pervasive myths is that masks trap carbon dioxide or stop you getting as much oxygen. Many people report feeling out of breath wearing masks, so it is a reasonable conclusion to come to that the mask is somehow giving you less oxygen.

This proposed ability to select for simple gases with a 25 cent surgical mask is, however, not possible. The range of pore diameter in a standard surgical mask is 10 to 50 micrometers¹, while the molecular sizes of carbon dioxide and oxygen are 0.33 nanometers and 0.30 nanometers respectively. This makes the holes in the mask tens of thousands of times larger than an oxygen molecule (over 30,000 times, even for the smallest pores), while a carbon dioxide molecule is only 1.1 times as big as oxygen. But if masks don’t block oxygen, why are people feeling bad after wearing them? Unfortunately a big part of it is mental health. Pandemics are scary, and putting on a mask, especially at the beginning, made it more real. Anxiety and panic attacks can look like a lot of different things, and the high levels of anxiety surrounding masks² can cause people to connect breathlessness, tiredness, and other symptoms of anxiety with the physical properties of the mask, rather than with how they feel about it.
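
To put those scales in perspective, here is a quick back-of-the-envelope check in Python (a minimal sketch using only the sizes cited above):

```python
# Back-of-the-envelope comparison: surgical-mask pores vs gas molecules.
# Sizes are the figures cited above (pore range from Du et al., 2021).
pore_diameters_nm = (10_000, 50_000)   # 10-50 micrometers, in nanometers
o2_nm = 0.30                           # diameter of an oxygen molecule
co2_nm = 0.33                          # diameter of a carbon dioxide molecule

for pore_nm in pore_diameters_nm:
    print(f"{pore_nm/1000:.0f} um pore is {pore_nm / o2_nm:,.0f}x an O2 molecule")
print(f"CO2 is only {co2_nm / o2_nm:.1f}x the size of O2")
```

Even the smallest pores come out tens of thousands of times wider than either molecule, so a mask has no way to tell the two gases apart.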

“How can a piece of cloth stop a virus, but my underpants don’t stop a fart?”

This is a surprisingly good question, though it is usually brought up by someone arguing in bad faith. The explanation is relatively simple: as above, the things we are talking about filtering are orders of magnitude apart in size, so they cannot be compared. However, in this case, the questioner is largely right. Masks don’t outright filter and trap every virus particle, and a lot makes it through into the environment.

So why then do we wear masks? For one, it’s a numbers game. When fighting a new disease, the number of virus particles in your body matters and will determine how sick you get and how much you spread it. Masks stop some virus, and that can prevent infections. Myths like this are particularly pervasive because they contain partial truths, but the question misses the point. Implicit in the phrasing is a binary: masks either stop covid or they don’t, and if they don’t, they’re useless. The issue is that the people bringing this up are largely arguing with well-intentioned “pro-maskers” who often imply the opposite side of the binary, that masks do stop covid. This makes it very easy for the anti-masker to “win” the debate, because all they have to prove is that masks don’t stop all covid particles, which is of course true.

People don’t have a very good intuition for the physics of masks. One can think of a mask as just a tiny sieve, but matter interacts differently at that small a scale. An N95 mask, for example, has very tightly knit nanofibres that can capture a wide range of particle sizes, but for some particles, like water droplets, it utilises electrostatic charges within the fibres to induce a slight polarity in the droplets and then adsorb them.

“Even the scientists admit masks don’t work.”

A Danish paper found that recommending people wear surgical masks did not produce a “statistically significant result” and concluded that in their data, masks were comparable to lesser forms of protection. This was touted by many as irrefutable evidence that face masks are effectively useless. There are, however, many limitations to this study. It was not blinded, data was collected via self-reporting, and the trial only looked at the mask wearers themselves testing positive, when it is known from other studies that masks are better at protecting people from the wearer.³ Possibly most importantly, the study was done where other preventative measures were already in place. Absence of proof is not proof of absence, and this study is far from conclusive, which the scientists involved are all too eager to point out.

Science communication

There is quite a large disconnect between what scientists publish and what gets disseminated to the general public. It is hard to blame scientists for this issue, as scientists are, by and large, not writing for lay people when they publish an article in an academic journal. They are writing for other scientists, but in the age of information it’s not only scientists who have access to journals. Explaining new science is often left up to reporters, or sometimes science communicators acting as a middleman, making mask discourse (or any science discourse) a bizarre game of telephone.

If someone didn’t have a good understanding of the science of masks, or of scientific methods in general, then the first time they heard facts about masks was through fairly strict mask mandates, made by decidedly not science-educated politicians. In some cases this allowed people to conflate the (poorly represented) scientific facts with the politics of the government espousing them.

There will of course always be fringe groups with outlandish claims and a disregard for science, but facts can be presented better than a ream of figures listed off on the 9 o’clock news every night. Science can be more accessible.

 

 

References

¹ Du, W., Iacoviello, F., Fernandez, T. et al. Microstructure analysis and image-based modelling of face masks for COVID-19 virus protection. Commun Mater 2, 69 (2021). https://doi.org/10.1038/s43246-021-00160-z

² Szczesniak D, Ciulkowicz M, Maciaszek J, Misiak B, Luc D, Wieczorek T, Witecka KF, Rymaszewska J. Psychopathological responses and face mask restrictions during the COVID-19 outbreak: Results from a nationwide survey. Brain Behav Immun. 2020 Jul;87:161-162. doi: 10.1016/j.bbi.2020.05.027. Epub 2020 May 7. PMID: 32389696; PMCID: PMC7204728.

³ Efficiency of surgical masks as a means of source control of SARS-CoV-2 and protection against COVID-19. Int. Res. J. Pub. Environ. Health 7(5):179-189.

Dutch physicist Heike Kamerlingh Onnes discovered superconductivity in 1911. Onnes was studying the electrical properties of mercury when he found that its electrical resistance completely vanished at a temperature of 4.2 K (close to absolute zero). When an electric current was applied to this supercooled mercury and the battery was then removed, the current kept flowing at the same value. This discovery had massive implications for the energy industry and, if utilised, could help solve the looming energy crisis.

Although superconductors were discovered in 1911, an understanding of how they work was not put forward until 1957. Physicists John Bardeen, Leon N. Cooper and Robert Schrieffer developed the theory that to create electrical resistance, the electrons in a metal need to be free to move and bounce around. In supercooled conditions, below the material’s critical temperature, the electrons inside the metal become less mobile, allowing them to pair up. This prevents them from moving around. These electron pairs are called Cooper pairs and are very stable at low temperatures. As there are no free, mobile electrons, the electrical resistance disappears completely.

Superconductors experience many phenomena, one being the Meissner effect. When a superconductor is below its critical temperature, it expels magnetic fields from its interior. When the temperature is above the critical value, a magnetic field is able to pass through the material; below it, the field cannot pass through and must instead go around. Surface currents that flow without resistance then develop, creating a magnetisation within the superconducting material that is equal and opposite to the applied magnetic field, cancelling out the field everywhere within the superconductor. This means the superconductor has a magnetic susceptibility of -1 and exhibits perfect diamagnetism. Diamagnetic materials are repelled by a magnetic field, hence the superconductor is repelled by the magnetic field. One way to display this phenomenon is to place a magnet above a superconductor: the magnet is observed to ‘float’ above the superconducting material. This is because the repelling force can be stronger than gravity, allowing the magnet to levitate above the superconductor.

Fig 1: The Meissner effect
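
In symbols, the cancellation described above can be written out directly (a minimal sketch in SI units):

```latex
% Inside the superconductor (SI units):
% B: flux density, H: applied field, M: magnetisation, \chi: susceptibility.
B = \mu_0 (H + M), \qquad M = \chi H
% Perfect diamagnetism means \chi = -1, so M = -H, and therefore:
B = \mu_0 (H - H) = 0
```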

The implementation of superconductors has not been so straightforward. Superconductors only operate at temperatures close to absolute zero, and the energy costs of cooling these materials to such temperatures are enormous. Despite this, you are quite likely to encounter a superconductor in everyday life. Many MRI machines use superconducting magnets, as normal electromagnets would melt from the heat of even a little bit of resistance. As superconductors have no electrical resistance, no melting occurs, and the electromagnets can generate the magnetic fields necessary for the safe operation of MRIs.

Have you ever wondered how those water fountains at airports have perfectly smooth jets of water that look like rods of glass jumping around in all directions? Well, that is because of a phenomenon called laminar flow! In fluid dynamics, laminar flow is achieved when fluid particles follow smooth paths, all parallel to each other with no mixing between the layers; the flow is so streamlined that when it exits a nozzle or pipe, it looks exactly like a rod of glass.

In contrast to laminar flow, turbulent flow is when cross-currents, swirls, and mixing between the paths of the fluid cause chaotic flow. One example of laminar flow that occurs in nature is in waterfalls. At the very edge of the waterfall, the currents of water follow laminar flow because of the velocity of the water as it reaches the edge. Soon after the water falls over the edge, mixing of the different currents under acceleration due to gravity causes turbulence, breaking away from the smooth laminar flow.

Osborne Reynolds studied and theorised the distinction between the laminar and turbulent regimes of flow. His research yielded the Reynolds number (Re), a dimensionless number that acts as a parameter to identify the transition between laminar and turbulent flow. Whether flow is laminar is generally governed by the geometry of the tube, the velocity of the fluid, and the viscosity of the fluid. Hence, Re is directly proportional to the velocity of the fluid and the diameter of the tube, and inversely proportional to the viscosity of the fluid. At lower values of Re the flow is laminar, and when Re exceeds a certain transition threshold, the flow becomes turbulent.
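
As a rough sketch of how these quantities combine, here is the common pipe-flow form Re = ρvD/μ with illustrative numbers (the velocity and nozzle diameter are assumptions; 2300 is the transition threshold usually quoted for pipe flow):

```python
# Reynolds number for water in a nozzle, pipe-flow form: Re = rho*v*D/mu.
rho = 1000.0   # density of water, kg/m^3
mu = 0.001     # dynamic viscosity of water, Pa*s (about 1 mPa*s)
v = 0.1        # flow velocity, m/s (assumed)
D = 0.005      # nozzle diameter, m (assumed, 5 mm)

Re = rho * v * D / mu
regime = "laminar" if Re < 2300 else "turbulent"
print(f"Re = {Re:.0f} -> {regime}")   # Re = 500 -> laminar
```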

Therefore, using water, with a viscosity of approximately 1 mPa·s, at low velocities and with small nozzle diameters, the water fountains at airports can be engineered to create majestic glass-rod-like jets of water. The jets can be fitted with LEDs to create even more spectacular formations. Another important application of laminar flow is seen in the wings of an aircraft. The wings are specially designed to cut the air so as to create laminar flow around them, minimising drag effects and helping produce lift.

The world is full of forces that are important for our everyday life – without friction we would be unable to drive cars on the road safely. Did you know that all forces can be classified into one of the following four categories: gravitational force, electromagnetic force, strong force and weak force? These make up the fundamental forces of nature and can also be thought of as interactions. Each of these forces has a particle associated with it. In this article, we will discuss the four different types.

 

Gravitational Force

This is the force that people are probably most familiar with and is commonly referred to as gravity. Of all the forces it is the weakest, with only 6×10⁻³⁹ times the strength of the strong force. However, it has an infinite range. Gravity is the force of attraction between two objects, and it depends on the distance between them. This force exists between every single particle in the universe and every other particle, however weak it may be.

The gravitational force felt by each object due to its attraction to the other, based on Newton’s law of gravitation [1]

The figure shows that the strength of the force depends on the masses of the two objects (m1 and m2) and the distance between them, d (sometimes referred to as r instead). The further apart the objects are, the weaker the force. It is understandable then that, despite its infinite range, gravity can correspond to quite weak forces.
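
As a quick worked example of the formula in the figure (F = G·m1·m2/d²; the masses and distance here are illustrative everyday values):

```python
# Gravitational attraction between two 1 kg masses held 1 m apart.
G = 6.674e-11      # gravitational constant, N*m^2/kg^2
m1, m2 = 1.0, 1.0  # masses, kg (illustrative everyday values)
d = 1.0            # separation, m

F = G * m1 * m2 / d**2
print(f"F = {F:.3e} N")  # ~6.7e-11 N: far too weak to ever notice
```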

 

Electromagnetic Force

This is also sometimes referred to as electromagnetism and is the category that electric and magnetic forces fall under. This is likely the type of force that people will understand and encounter the most after gravity: taking the example of driving again, friction is due to electrical interactions between the atoms of the car tyres and the atoms on the road. Magnetic forces, for example, the force between two magnets, occur due to electric charges in motion. For a long time, it was thought that electric and magnetic forces were completely separate from each other. However, it was discovered that they were both just components of the electromagnetic force, the Lorentz force. It also has infinite range but is much stronger than the gravitational force.

 

Strong Force

One is probably less acquainted with the last two forces. The strong nuclear force, often shortened to the strong force, is the strongest of the four, as the name would suggest. It is the force that allows larger particles to exist: it is responsible for binding clusters of quarks together, allowing protons and neutrons to exist. Neutrons and protons then form the nucleus of an atom and are also held together by the strong force. Protons are positive particles and neutrons are neutral particles; as you may know from magnets, like tends to repel like. The positive protons experience an electric force that repels them from each other; if not for the strong force, the nucleus would break apart.

The strong force overcomes the electric repulsion experienced by the protons, keeping the nucleus together [2]

The strong force, however, has a very limited range in comparison to the gravitational and electromagnetic forces, roughly in the subatomic region. This force therefore only acts when particles are no further apart than the width of a proton.

 

Weak Force

Finally, the weak force has the smallest range of all the forces. Despite its name, it is not actually the weakest (that is gravity); it is the second weakest. The weak force is the cause of beta decay, a form of radioactivity. In the nucleus of a radioactive atom, a neutron is converted to a proton, and two particles are released from the nucleus: an electron (a small negatively charged particle) and an antineutrino (a nearly massless antiparticle). This is one form of beta decay, and it occurs due to the weak force. The weak force is the reason why carbon dating exists, allowing archaeologists to determine the age of many organic-based objects that once came from living organisms. Carbon-14 is an unstable nucleus due to the weak force and will decay into nitrogen over time. By measuring the fraction of carbon-14 remaining in the object, its age can be determined.
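
The dating step mentioned above is just the exponential decay law rearranged; a minimal sketch, assuming an example measured fraction (carbon-14’s half-life is about 5,730 years):

```python
import math

# Radiocarbon age from the remaining fraction: N/N0 = (1/2)**(t / t_half).
T_HALF = 5730.0            # half-life of carbon-14, years
fraction_remaining = 0.25  # assumed measurement, for illustration only

age = T_HALF * math.log(1.0 / fraction_remaining) / math.log(2.0)
print(f"age = {age:.0f} years")  # 0.25 left -> two half-lives, about 11,460 years
```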

 

Comparing the forces

Now that we have described each force briefly, let us compare their strengths and ranges, and discuss the particles responsible for the different types of interactions.

The four fundamental forces and their properties, in order of decreasing strength [3]

The comparative strengths can easily be seen, as well as the varying ranges and what these sizes actually compare to for the strong and weak forces. The mediating particle is the one exchanged to cause a certain interaction to occur. The strong force is mediated by gluons, massless bosons with a spin of 1. In fact, all the forces have bosons, particles with integer spin (0, 1, 2, …), as their mediating particles. The electromagnetic force is mediated by photons, particles of light. The weak force has the W and Z bosons as its mediating particles. It is believed that the gravitational force has a mediating particle, the spin-2 graviton; however, this is currently all theoretical, as this boson has yet to be observed experimentally.

 

Combining all four?

As previously stated, electric and magnetic forces were originally thought to be separate, until it was discovered that they can be described by the electromagnetic force. Since the 20th century, the electroweak theory has been developed, which describes both the electromagnetic and weak forces as a single electroweak force. This theory has been tested rigorously and so far has passed all experimental tests. There has been a desire to create a grand unified theory that would combine the electroweak force and the strong force. This ‘electronuclear’ force has yet to be observed, though models have been created that predict its existence. There has also been great interest in a theory of everything that would combine all the forces. There is still much about the universe we are discovering, and it will be interesting to see if such theories are ever proven true.

 

Image sources

Featured image: https://commons.wikimedia.org/wiki/File:FOUR_FUNDAMENTAL_FORCES.png

[1] https://www.sciencefacts.net/gravitational-force.html

[2] https://profmattstrassler.com/articles-and-posts/particle-physics-basics/the-structure-of-matter/the-nuclei-of-atoms-at-the-heart-of-matter/what-holds-nuclei-together/

[3] http://hyperphysics.phy-astr.gsu.edu/hbase/Forces/funfor.html

 

Crystals are something we come across every day. From the salt on your table to the ice in your cup, from your phone screen to the crystals sold for good luck, it’s unlikely that you can make it through the day without encountering at least a crystal or two. When all’s said and done though, what makes crystal structures so different and special from any other materials, and how do we know all that we do about them?

Just as crystals are part of our everyday life, the study of crystal structure is an integral part of physics and chemistry, used in identifying the chemical composition of a structure. This cannot be done by observation alone; instead a technique called x-ray diffraction is used. Crystals are solids made up of building blocks such as atoms or molecules. These building blocks come together in a repeating pattern with an ordered arrangement to form a crystal. It is because of this long-range order that crystalline solids can be studied using x-ray diffraction: the uniform spacing between the atoms and molecules in a crystal is of a similar size to the wavelength of the x-rays used. Solids that are not crystalline (amorphous solids) cannot be studied using x-ray diffraction, as they do not have this uniform spacing.

To understand x-ray diffraction, we must first understand how the periodicity (repetitiveness) of a crystalline solid can be described. Each of the repeating building blocks, or motifs, in a crystal is associated with a lattice point, and as these repeat in the same pattern, i.e. are periodic, they come together to form something called a lattice, which is used to describe the periodicity of the crystal. Crystals can be viewed as structures made up of “unit cells”, the smallest group of atoms or molecules that, when repeated on the lattice, makes up the entire crystal structure.

The unit cell of a crystal can come in many different shapes and forms, such as cubic and tetragonal, and within those crystal systems there are four different arrangements the atoms can take: primitive, body-centred, face-centred and base-centred. In order to define the lattice, the locations of the atoms in the unit cell, the symmetry of the crystal structure and the lattice parameters must be known. The lattice parameters are the lengths of each side of the unit cell, and they are related to Miller indices and Miller planes. Miller planes are sets of parallel planes that intersect the axes of unit cells, and each can be described by a set of three numbers called its Miller indices.

Bragg’s Law, the Principle Used in X-Ray Diffraction (1)

This is where x-ray diffraction comes in, as the Miller indices can be found from the results of this technique. In x-ray diffraction, x-rays are emitted and passed through a crystal. When this happens, the planes of the crystal reflect some of the x-rays at a scattering angle theta, and each such angle corresponds to a set of Miller planes. This relationship is given by Bragg’s law, as shown in the image above. X-ray diffraction provides a diffraction pattern of these different values of theta and so gives the different Miller indices of the crystal. These can be used to find the unit cell parameters of the crystal and the type of unit cell the crystal has. The pattern can also be compared against a database of diffraction patterns of known substances to identify the likely chemical composition of the crystalline solid or powder being analysed.
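
As a sketch of the calculation Bragg’s law enables (nλ = 2d·sin θ, solved for the plane spacing d; the wavelength is the common Cu Kα lab value and the angle is an assumed example reading):

```python
import math

# Bragg's law: n * wavelength = 2 * d * sin(theta) -> solve for spacing d.
wavelength = 1.5406  # Cu K-alpha X-ray wavelength, angstroms (typical lab source)
theta_deg = 19.0     # scattering angle read off the pattern (assumed example)
n = 1                # diffraction order

d = n * wavelength / (2.0 * math.sin(math.radians(theta_deg)))
print(f"d = {d:.3f} angstroms")  # spacing of the reflecting Miller planes
```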

With that, we see how x-ray diffraction can be used to find the lattice parameters of the unit cell of a crystal, and from this its crystal structure. It can also be used to obtain the chemical composition of an unknown crystalline substance, and is thus an incredibly important tool in the study of crystal structures.

(1) Image credit: X-Ray Diffraction, Veqter: https://www.veqter.co.uk/residual-stress-measurement/x-ray-diffraction

Figure 1: Ternary phase diagram for the C, H system. [1]

Diamond-like carbons (DLCs) are a distinct set of amorphous carbon materials which share structural similarity with diamond, as both possess sp3-hybridised carbon atoms. The irregularity in a DLC’s arrangement comes from the presence of filler atoms such as sp2-hybridised carbons, metals and even C-H bonds. The presence and arrangement of these filler atoms in the sp3 system cause the material to exhibit unique and/or improved properties not observed in simple diamond. In fact there are seven classified types of DLC in common use (demonstrated in figure 1), the most abundant of which is tetrahedral amorphous carbon (ta-C), consisting of an even blend of sp3 and sp2 carbons. This makes the material stronger and smoother than diamond, with higher gas-barrier performance and better biocompatibility, and some research suggests it can even scratch diamond.

 

Why and What are they used for?

The unique properties of DLC materials have made them among the most sought-after and researched materials across a large range of fields. The amazing thing about DLCs is that in most applications they are only needed as coating materials, usually formed as thin films (about 5 µm thick). This means that not a lot of material is needed, which cuts costs for manufacturers. These coatings can be applied to several materials to improve their hardness. They are so successful at improving toughness that when steel is coated with DLC and exposed to rough wear, its lifetime improves from a few weeks to 85 years[1]. Because of this, DLCs have been found to be of great importance to the durability of materials, ranging from scratch-resistant car windows to coatings on space shuttles to prevent wear during launch, given the high temperatures of their environment. DLCs can be made with an incredibly smooth surface, giving them extremely low friction. This technology has found its way into the locomotive industry, where coated gears and equipment make a perfect replacement for lubricants, as well as increasing the longevity of the vehicle.

DLCs have long been researched and developed for biomedical use due to their superior mechanical properties and biocompatibility, and they have already found applications in medicine. Implants made with DLCs have shown great success, since tissue can easily adhere to the surface, and when blood is present a layer of protein forms around the surface, making the body less prone to blood clots and less likely to reject the implant. Because of this, DLC coating has been of great importance to the development of stents, devices used to expand veins and arteries. DLCs, much like diamond, are very inert; research shows they are very resistant to acidic substances, making them ideal for the storage of highly corrosive and dangerous chemicals which would otherwise seep through uncoated glass, as well as for protecting sensitive equipment[1].

Diamond is well known for being extremely electrically insulating, whilst its graphite allotrope is very conducting along its planes. Since DLCs have an internal structure consisting of a mixture of diamond-like and graphite-like carbons, they have been observed to have conducting properties. The extent of this is directly proportional to the amount of conducting sp2 carbon and doping in the material; the conductivity is achieved via quantum mechanical tunnelling between sp2 sites (pockets of electron density). Because of this, DLCs can easily be manufactured to have a range of different conductivities, from conducting to insulating, and at a key sp2 percentage semiconducting properties are observed. Moreover, the ease with which DLCs’ properties can be modified means they can be fine-tuned to have a desired band gap for a particular job, making them a very useful and promising technology in the semiconductor industry[2]. Unfortunately, the market is currently dominated by silicon semiconductors, which are cheaper and have more investment behind them, meaning that current DLCs are used to coat and improve the properties of already-developed silicon-based semiconductors. Because of the conductivity features DLCs can be manufactured to have, they are regularly used in the electronics community for both passive and active materials.

 

How are they manufactured?

Figure 2: Five types of ion beam deposition methods. [2]

Since the invention of DLCs in the early 70s, a multitude of ways to produce them have been developed, all based on deposition methods that grow thin films. The first DLCs were made via ion beam deposition, which has expanded into the several beam-type depositions shown in figure 2, all sharing the same general feature: carbon ions are created by plasma sputtering of a graphite surface. Sputtering is the process whereby accelerated ions are targeted at the surface of a material (in this case graphite) to eject particles of the material. The ejected carbon ions can then be guided, using a forward bias, to a substrate target where the thin film grows. Unfortunately, this process requires immense temperatures and a high-vacuum environment, which reduces the number of materials it can grow on, as many would decompose in the process. The other major deposition technique is called chemical vapour deposition (CVD), where a solid material is vaporised via a chemical reaction and then deposited on the surface of a substrate. This technique is widely used in the formation of thin films for semiconductor materials. [2]

 

Their future?

As mentioned before, the properties of DLCs can be highly tuned by controlling their manufacturing process, so most current research targets the production process of this material. Recent work in the field of DLCs focuses on developing deposition methods which do not require a high vacuum and temperature; researchers at Keio University in Japan have been studying these novel deposition methods, growing thin films at ambient conditions and observing any changes in structure and properties.[3]

 

References:

[1] Rajak, D., Kumar, A., Behera, A. and Menezes, P., 2021. Diamond-Like Carbon (DLC) Coatings: Classification, Properties, and Applications. Applied Sciences, 11(10), p.4445.

[2] Robertson, J., 2002. Diamond-like amorphous carbon. Materials Science and Engineering: R: Reports, 37(4-6), pp.129-281.

[3] Hasebe, T., Ishimaru, T., Kamijo, A., Yoshimoto, Y., Yoshimura, T., Yohena, S., Kodama, H., Hotta, A., Takahashi, K. and Suzuki, T., 2007. Effects of surface roughness on anti-thrombogenicity of diamond-like carbon films. Diamond and Related Materials, 16(4-7), pp.1343-1348.

Carbon is the villain. It is the reason for the trapped heat that is causing Climate Change. The two main gases that cause the greenhouse effect are carbon dioxide and methane, CO2 and CH4, and C is for carbon. We burn oil and gas and coal and turf, all laden with carbon. So, carbon is the villain, right?

Wouldn’t it be poetic if it was also the hero?

Carbon can save us by being used in more sustainable and cheaper batteries, solar panels, supercapacitors (read: fast batteries) and fuel cells.

To be fair, this isn’t all carbon does. It makes most of you, you. If you are mostly protein (and you are), then you are mostly carbon, because proteins are mostly carbon. This is what makes it both a problem and a solution. It’s everywhere.

Carbon, just by itself, can be wildly different. The number to keep in mind: four. Carbon wants to have four bonds.

What you want in a material is: 1) strong bonds and 2) lots of bonds.

For strong bonds you need to be small. This is because the larger the atom, the less the bonds to it care. They are so far away, they don’t feel as impacted by the nucleus, like a voter in Kerry not caring about the problems in Dublin.

For lots of bonds, you need a Goldilocks number of electrons: not too many and not too few. The reason 4 is the magic number is that it’s as far from 0 and 8 as you can get. Carbon has these 4 electrons, and it either wants to gain 4 more or get rid of them, and to do this it needs bonds. With more electrons or fewer electrons, it could achieve this with fewer bonds.

So to explain why carbon is just so good, really just look at it on the periodic table: it’s at the intersection of Strong Bond Street and Many Bond Avenue.

 

How can we use this? There are two ways this can go: diamond or graphene. Diamond puts these four strong bonds to good use by being bonded to four other carbons at equal distances, in a pattern that looks like this:

Periodic Table of the Elements and location of carbon

Graphene, on the other hand, breaks our rule that four bonds are better. It makes a hexagonal pattern, bonding to three others. The last electron is allowed to float freely between the others, strengthening the bonds. This free electron also gives graphene its great electrical properties, as loose electrons are necessary for the movement of electricity in a material.

The fact that each carbon is only bonded to three others is what makes graphene a “2-dimensional” material, as it forms sheets one atom thick.

Structure of graphene from hexagonal benzene

Like rolling paper into a straw, the graphene sheets can be rolled up into a tube, called a carbon nanotube. Scientists find these extremely exciting materials because they act like wires, with a similar level of conductivity as graphene but in a more useful form.

Carbon nanotubes rolled from sheets of graphene.

Now that the properties are explained, we can move on to why this is important: carbon is the hero.

The first step is producing energy. Solar panels now are made with silicon. These are fairly efficient, about 20%. But for almost any application, efficiency can be traded against savings in cost and weight, and the limit for silicon is getting sheets thin enough to use as little material and be as light as possible. Graphene has the advantage of being already thin, as well as atomically lighter.

Graphene can be incorporated into the solar panel in multiple ways, as a cell needs materials to create and receive the electrons that make the electricity, and a way to transport them afterwards. Graphene can play a part in almost all of these steps, making the cells more efficient or enabling complete redesigns, like flexible panels. Expect to see graphene in your solar panels in the future.

A flexible graphene based solar panel

Next is storing that energy. The most traditional way to do this is batteries, and the big players here are lithium-ion batteries. These are currently limited in charging speed and lifespan by the electrodes, rather than by the actual battery material. The electrodes are currently graphite, which is many disorganised stacks of graphene. They are being improved by replacing them with carbon nanotubes, which can have a higher contact area with the battery, speeding up the process, and last longer.

If this isn’t a radical enough change, lithium-ion batteries could be superseded by graphene ones completely. These operate by letting charged atoms drift from one sheet to another, releasing electrical energy. Lithium is actually quite rare, so a more abundant replacement is of great benefit. It also needs to be light, because even an amazing but heavy material still won’t be good per kilo. Graphene can solve both these problems, and is a candidate for the batteries of the future.

Current use of graphite in lithium-ion batteries

A more experimental option is the much-hyped supercapacitor. Capacitors operate by physically separating positive and negative charges on parallel sheets. They are better if they have a larger surface area or a smaller gap between the sheets. You can probably see where this is going. Graphene sheets have an enormous surface area and are thin enough that capacitors made from them would have enormous energy density.

There must be a separation between the graphene sheets, achieved by “intercalation”, the insertion of charged atoms between layers. With current technology, these supercapacitors are at the level of other battery technologies such as nickel-metal hydride.

Graphene sheets in a supercapacitor
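
The “larger area, smaller gap” point is just the parallel-plate relation C = ε₀εᵣA/d; here is a minimal sketch with illustrative numbers (not a real device design):

```python
# Ideal parallel-plate capacitor: C = eps0 * eps_r * A / d, energy = 0.5*C*V^2.
EPS0 = 8.854e-12  # permittivity of free space, F/m

def capacitance(area_m2: float, gap_m: float, eps_r: float = 1.0) -> float:
    """Capacitance in farads of an ideal parallel-plate capacitor."""
    return EPS0 * eps_r * area_m2 / gap_m

# Same plate area, two gaps: shrinking the gap 1000x grows C (and energy) 1000x.
for gap in (1e-3, 1e-6):
    C = capacitance(area_m2=1.0, gap_m=gap)
    energy = 0.5 * C * 2.7**2  # 2.7 V, a typical supercapacitor cell voltage
    print(f"gap {gap:.0e} m -> C = {C:.3e} F, E = {energy:.3e} J")
```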

The final energy storage method these materials open up is hydrogen fuel. Hydrogen can be made by using electricity to break apart water, H2O, into oxygen, O2, and hydrogen, H2. The same fuel cell can be used in reverse to recombine oxygen and hydrogen into water, generating an electric current. This means it is, in a sense, a battery, but with a fuel that can be extracted and stored like a traditional fuel, and its energy per kilo is higher than petroleum’s.
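
The “energy per kilo” claim is easy to sanity-check with round textbook heating values (approximate figures; exact numbers vary by source):

```python
# Approximate specific energies (lower heating values), MJ per kg.
specific_energy_mj_per_kg = {
    "hydrogen": 120.0,  # roughly 120 MJ/kg
    "petrol": 44.0,     # roughly 44 MJ/kg
}

ratio = specific_energy_mj_per_kg["hydrogen"] / specific_energy_mj_per_kg["petrol"]
print(f"hydrogen carries about {ratio:.1f}x the energy of petrol per kg")
```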

Graphene has potential as both the anode and cathode (the positive and negative electrodes), which gives it an advantage over the competition, as it is lighter and less expensive than the rare metals it would replace. It is also a candidate for the storage of hydrogen. This is useful because hydrogen is so small it tends to leak out of any container you put it in. The graphene is “buckled”, like a crumpled sheet of paper, and the high points attract and bind hydrogen, meaning it won’t leak out of the container.

Hydrogen fuel cell

So, carbon has been both the problem and the solution. While creating the problem in the form of carbon dioxide and methane, it has the potential to be the solution. Graphene can be made into solar panels, batteries, supercapacitors and hydrogen fuel cells and storage while carbon nanotubes can improve the batteries we already have.

Maybe carbon isn’t the hero we deserve, but it’s the one we need.

“When will we use this in the real world?”

A question all secondary school math teachers have heard at some point in their career. Not the worst thing they’ve heard students say, but I’m sure it stings nonetheless.

At face value, it’s a fair enough question. Most students won’t find themselves in a situation where they need to use the minus b formula, or imaginary numbers, or integration (blasphemy, I know). In reality, a base knowledge of arithmetic and percentages is all most people need to survive in the big bad world (if they stay away from STEM careers). Knowing all this, I began to wonder why math is a core subject comparable to English, something everyone uses every day. After much (some) pondering, I came to the conclusion that math education is as much about teaching students logical thinking as it is about teaching them math.

English, Irish, and math are core subjects because they each give students important skills. English teaches us how to communicate effectively, reading comprehension, and how to write. Irish teaches us another language; it is our culture; it improves our communication skills. Math is the only one of the three that instils logical problem-solving skills. Solving problems is a crucial part of everyday life, and the teaching of math is a very convenient way to strengthen this ability. Dr Arlene Gallagher, the director of the Walton Club, has this to say about the subject:

“Mathematics is all around us, yet many people’s perception and experience with this subject is to learn a set of steps ‘to get the right answer’. In teaching mathematics, if we reduce this subject to a set of procedures, we are missing out on critical opportunities to develop mathematical habits of mind and advanced cognitive skills that are highly transferrable and outlast any exam setting.”

Dr Gallagher brings up something which many teachers can be prone to do: the reduction of math to a set of procedures. Of course, to solve problems students must first develop the tools needed to do so, but there must always be an emphasis on the bigger picture. The quickest way to bore students is to give them a handful of abstract math problems that only require one skill! I’m a maths educator at the Walton Club, and in my lessons I always relate the mathematical concepts to real-world applications. My lessons are often structured so that a new concept is introduced, and then the students (known as alphas, after Ernest Walton’s Nobel-prize-winning experiment) apply that concept to solve a real-world problem. For example, a lesson on mean, median, mode, and expectation values began with example calculations of each. Then the alphas were tasked with determining which character has the best dice in Nintendo’s Super Mario Party for the Switch. The alphas had to figure out which tools to apply to the problem; all I showed them was how. I believe this is how maths should be taught; that way, the students will never have to ask that question again.
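
Here is a sketch of the kind of comparison the alphas made; the dice faces below are made up for illustration (the real Super Mario Party dice differ):

```python
from statistics import mean, pstdev

# Hypothetical character dice (invented values, not the game's real dice).
dice = {
    "Character A": [1, 2, 3, 4, 5, 6],  # steady, low-risk die
    "Character B": [0, 0, 0, 8, 8, 8],  # boom-or-bust die
}

# Expected value of one roll = mean of the faces (each face equally likely).
for name, faces in dice.items():
    print(f"{name}: expected roll = {mean(faces):.2f}, spread = {pstdev(faces):.2f}")
# "Best" then depends on whether you value the higher mean or the lower risk.
```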

Give a man a fish…

 

With the EU’s ambitious goal to be carbon neutral by 2050, it’s increasingly important to examine all aspects of our energy production, storage and usage. At present, buildings account for 40% of the EU’s energy consumption, and approximately 75% of the EU building stock is energy inefficient (1). Increasing the energy efficiency of buildings could therefore be one of the most effective methods of reaching our climate goals.

Although in countries with climates similar to Ireland’s we mostly focus on retaining heat in buildings, in hotter climates cooling is the most important feature. Air conditioning is one way to cool these buildings, but it is horribly energy inefficient. With many developing countries having hot or mixed climates, it is predicted that the demand for air conditioning will spiral out of control in the coming decades unless we can find a better solution. Smart glass may be this solution.

Smart glass is a glass whose transmission changes when voltage, light or heat is applied. It can be used in smart windows to change the fraction of visible light transmitted. It thus minimises the need for cooling and keeps rooms at a comfortable temperature. One of the technologies used for smart windows is chromic materials; electrochromic and photochromic smart glass are of particular interest.

Electrochromic smart glass changes its transmittance when stimulated by an electrical signal (2). It can change to any state between and including transparent and opaque. An electrochromic glass panel consists of layers making up a glass stack, usually only a few microns thick. On the top and bottom a transparent conductive oxide (TCO) layer is present, while the centre of the structure contains the electrochromic layer and an electrolytic layer. When a voltage is applied to the TCO layers, charged particles move from the electrolyte to the electrochromic layer (2). The electrochromic layer is normally transparent in its inactive state, but this increase in charge results in an increase in the absorption of light. The change in transmission is due to ions from the electrolyte inserting themselves into the electrochromic layer.

             Electrochromic smart glass layer (3)

This results in a reduction of the band gap, which means photons with lower energy can now be absorbed by the glass. Electrochromic glass allows the user to control the amount of light at the flick of a switch, meaning they can regulate temperature or privacy. However, due to electrochromic glass’s need for electricity, it is not quite as environmentally favourable as photochromic glass.
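
The band-gap point can be made concrete with the photon-energy relation E = hc/λ (roughly 1240 nm·eV); the gap values below are placeholders for illustration, not measured properties of any particular glass stack:

```python
# Photon cutoff wavelength for a given band gap: lambda = h*c / E_g.
HC_EV_NM = 1239.84  # h*c expressed in eV*nm

def cutoff_nm(band_gap_ev: float) -> float:
    """Longest wavelength (nm) a material with this band gap can absorb."""
    return HC_EV_NM / band_gap_ev

# Illustrative gaps: a smaller gap lets longer-wavelength (visible) light absorb.
for gap_ev in (3.0, 2.0):
    print(f"E_g = {gap_ev} eV -> absorbs wavelengths below ~{cutoff_nm(gap_ev):.0f} nm")
```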

Photochromic smart glass changes colour with exposure to light (4). It does not need any electricity to work, and is usually manufactured in the form of a thin film on the window. The thin films used are often photochromic rare-earth oxyhydrides. As well as the benefit of not needing electrical power, the decrease in transmittance of rare-earth thin films extends from the UV up to the mid-IR (5). This means that photochromic smart glass can reduce solar thermal gain, which can often overheat a room, in addition to reducing visible light. The main problems with photochromic smart windows are their slow switching speeds and their limited long-term stability. Research is currently underway to address these problems.

If the problem of switching speed could be addressed (at present it can take 5 minutes for the glass to return to a clear state), photochromic glass could have major benefits for the transportation industry. It would reduce glare for drivers and also reduce the need for air conditioning for passenger comfort. Electrochromic glass can also take on the order of minutes to change state, which again is a disadvantage for transport applications. However, it has been suggested that for architectural applications the slow switching speed of electrochromic and photochromic glass is an advantage, as it gives our eyes a chance to adjust to the change in light level (2).

                Boeing electrochromic window (6)

Electrochromic and photochromic glass are already utilised in industry. For example, the Boeing 787 Dreamliner uses electrochromic windows to replace window shades in the aeroplane, while photochromic sunglasses are becoming more commonplace in everyday life. I expect that in the coming years smart glass will become even more prevalent, as we shift our attention to climate-adaptive structures which can reduce our energy usage.

References

  1. European Commission. In focus: Energy efficiency in buildings. European Commission website. [Online] February 17, 2020. [Cited: May 11, 2022.] https://ec.europa.eu/info/news/focus-energy-efficiency-buildings-2020-lut-17_en#:~:text=Collectively%2C%20buildings%20in%20the%20EU%20are%20responsible%20for,mainly%20stem%20from%20construction%2C%20usage%2C%20renovation%20and%20demolition..
  2. Smart Glass World. What is electrochromic smart glass? Smart Glass World Web site. [Online] 2022. [Cited: May 13, 2022.] https://www.smartglassworld.net/what-is-electrochromic-glass.
  3. Infinity SAV. Smart Glass. Infinity SAV Web site. [Online] 2022. [Cited: May 13, 2022.]
  4. Durr, Heinz and Bouas-Laurent, Henri. Photochromism Molecules and Systems. s.l. : Elsevier Science, 2003. ISBN: 9780080538839.
  5. Colombi, Giorgio. Photochromism and Photoconductivity in Rare-Earth Oxyhydride Thin Films. Delft : TU Delft, 2022.
  6. Coxworth, Ben. Electronically-dimmable windows on offer for Boeing 777X. New Atlas Web site. [Online] January 07, 2020. [Cited: May 13, 2022.]

 

 

Nanotechnology is still an emerging science, and alongside this it has countless opportunities, discoveries, and advancements ready to be uncovered. It is predicted to contribute significantly to economic growth in the upcoming decades. Scientists have predicted that the field will pass through four distinct generations of advancement. Currently nanotechnology is slowly emerging into the second of these generations: we have thoroughly examined materials science incorporating nanostructures (coatings and carbon nanotubes, for example) for the strengthening or improvement of a material, and the second generation incorporates active nanostructures, such as those used in drug development. Beyond that it is unclear exactly what the next advancements will be; perhaps combining these individually researched structures into systems. This could be where things become tricky, intertwining with other sciences, such as combining with nanorobotics or the regrowth of organs or potentially limbs. The possibilities are endless, with new ideas prompted regularly, and with years to come up with them the field does appear limitless.

There are various barriers to progress in this field, however, such as whether the research can benefit industry. As modern technology is already seen as very advanced, new research must carry high expectations and promises to meet demand, and be economical enough to be adopted into the system. This scaling from lab to industry is difficult on its own, even before the various rounds of trial and error that come into play. Yet nanotechnology has already earned widespread interest as the subject of much scientific speculation, including attention in non-scientific pop culture, leading to dramatised takes on the unknowns of the new discoveries. This trend recurs in the science world with most new-age advances, such as the renowned Terminator franchise surfacing from the robotics advancements of the 1940s and 1950s. By this comparison, it can perhaps be said that because nanotechnology is seen as having large potential, it has sparked interest in pop culture, possibly sparking interest among the larger public. Of course pop culture is not a valid source of truthful facts and expectations of the science, but it does make one wonder where the limitations lie in these newer sciences. This helps, in a way, to escape the barrier: if progression is seen as limitless, and ideas can stem from over-the-top fiction, it will always be worthwhile investigating.

Futuristic nanotech has been envisioned using nanoparticles within the body for both diagnostic and medical purposes (and of course pop culture finds a way to turn this into fiction with highly implausible outcomes like growing scales or mind control). There are extremes to what people envision, both realistic and unrealistic, but the current uses are much more mundane and focus instead on improving life one step at a time: refining the plastics of our bikes, stain-resistant clothes, drug delivery, cosmetics and so much more. Nanotechnology is helping us improve inventions we already possess but were stumped on how to fix and improve further. It truly is an evolutionary science, and people have tried for a while to define its true purpose.

Many roles have been set forward for nanotechnology; the distinctions imposed by certain authors describe nanotech as either incremental, evolutionary or radical. Incremental nanotechnology, for example, is the reinforcement of previously known structures and materials, essentially improvements of what we have. Evolutionary nanotechnology takes it a step further, using nanostructures to explore the world and see how their interactions, possibly by chance, can be adapted into useful roles. Finally, radical nanotechnology, the most far-reaching version, is the construction of machines whose mechanical components are molecule-sized, or rather <100 nm in diameter.

 

The far-stretched versions of nanotech can be traced to Eric Drexler, who wrote in anticipation of this great science that we would end up with nanoscale robots and vehicles (whether vehicles for the robots or for people is uncertain) as an everyday normality. Unfortunately, this rather cool idea forgets the scaling factors in how this would actually work, so although the idea of robots the size of atoms sounds rather cool, it will probably always remain a part of pop culture and the centre of fiction rather than of science. The laws of physics must still be obeyed. Drexler discussed many more possibilities of nanotechnology in his book, Engines of Creation: The Coming Era of Nanotechnology, and it is amusing that an idea people once thought plausible is now regarded as unrealistic. Yet modern science has surprised us before: it was once seen as absurd to fly or to land on the moon, yet both were done. I am not saying that we will have tiny robots that build all our phones and chips for us, but we still do not know the extent of nanotechnology and where it will lead us in 10, 20 or 100 years.

References 

[1] The Royal Society, ‘Nanoscience and nanotechnologies: opportunities and uncertainties’, Nanomanufacturing and the industrial application of nanotechnologies, Chapter 4, 4.6 Barriers to progress, pages 32-33

[2] O’Reilly, Introduction to Nanoscience and Nanotechnology, Chapter 7, Radical Nanotechnology