
Have you ever wondered how the water fountains at airports produce perfectly smooth jets of water that look like rods of glass jumping around in all directions? That is because of a phenomenon called laminar flow! In fluid dynamics, laminar flow is achieved when fluid particles follow smooth paths, all parallel to each other with no mixing between the layers; the fluid is so streamlined that when it exits a nozzle or pipe, it looks exactly like a rod of glass.

In contrast to laminar flow, turbulent flow occurs when cross currents, swirls, and mixing between the paths of the fluid cause chaotic motion. One example of laminar flow that occurs in nature is in waterfalls. At the very edge of a waterfall, the currents of water remain laminar because of the velocity of the water as it reaches the edge. Soon after the water falls over the edge, mixing of the different currents, together with acceleration due to gravity, causes turbulence, breaking away from the smooth laminar flow.

Osborne Reynolds studied and theorised the distinction between the laminar and turbulent regimes of flow. His research yielded the Reynolds number (Re), a dimensionless number that acts as a parameter identifying the transition between laminar and turbulent flow. The flow regime is generally governed by the geometry of the tube, the velocity of the fluid, and the viscosity of the fluid. Hence, Re is directly proportional to the velocity of the fluid and the diameter of the tube, and inversely proportional to the viscosity of the fluid. At lower values of Re the flow is laminar, and when Re exceeds a certain transition threshold, the flow becomes turbulent.
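This relationship can be sketched numerically. Below is a minimal Python example, assuming the common pipe-flow form Re = ρvD/μ and the frequently quoted transition threshold of roughly 2300; the fluid values are illustrative.

```python
# Reynolds number for pipe flow: Re = rho * v * D / mu
def reynolds_number(density, velocity, diameter, viscosity):
    """Dimensionless Reynolds number (all inputs in SI units)."""
    return density * velocity * diameter / viscosity

# Water at ~20 C: density ~1000 kg/m^3, viscosity ~0.001 Pa.s
re = reynolds_number(1000.0, 0.1, 0.005, 0.001)  # slow jet through a 5 mm nozzle
# Pipe flow is typically laminar below Re ~ 2300
print(re, "laminar" if re < 2300 else "turbulent")
```

With these values Re is well below the threshold, which is exactly the regime the fountain designers aim for.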

Therefore, using water with a viscosity of approximately 1 mPa·s, at low velocities and with small nozzle diameters, the water fountains at airports can be engineered to create majestic glass-rod-like jets. The jets can be fitted with LEDs to create even more spectacular formations. Another important application of laminar flow is in the wings of an aircraft. The wings are specially designed to cut the air so as to create laminar flow around them, minimising drag and helping to produce lift.

The world is full of forces that are important for our everyday life: without friction we would be unable to drive cars on the road safely. Did you know that all forces can be classified into one of the following four categories: gravitational force, electromagnetic force, strong force and weak force? These make up the fundamental forces of nature and can also be thought of as interactions. Each of these forces has a particle associated with it. In this article, we will discuss the four different types.

 

Gravitational Force

This is the force that people are probably most familiar with and is commonly referred to as gravity. Of all the forces it is the weakest, with only about 6×10⁻³⁹ times the strength of the strong force. However, it has an infinite range. Gravity is the force of attraction between two objects, which depends on the distance between them. This force exists between every single particle in the universe and every other particle, however weak it may be.

The gravitational force felt by each object due to its attraction to the other, based on Newton’s law of gravitation [1]

The figure shows that the strength of the force depends on the masses of the two objects (m1 & m2) and the distance between them, d (sometimes referred to as r instead). The further apart the objects are, the weaker the force. Although its range is infinite, at large separations the force becomes vanishingly weak.
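As a rough sketch of Newton's law in code: the masses and distance below are illustrative values, and G is the standard gravitational constant.

```python
# Newton's law of gravitation: F = G * m1 * m2 / d^2
G = 6.674e-11  # gravitational constant, N m^2 / kg^2

def gravitational_force(m1, m2, d):
    """Attractive force in newtons between masses m1, m2 (kg) a distance d (m) apart."""
    return G * m1 * m2 / d**2

# Two 1 kg masses 1 m apart feel a tiny force...
print(gravitational_force(1.0, 1.0, 1.0))          # ~6.7e-11 N
# ...but the Earth (~5.97e24 kg, radius ~6.37e6 m) pulls a 70 kg person with
print(gravitational_force(5.97e24, 70.0, 6.37e6))  # roughly 690 N, i.e. their weight
```

This illustrates the point above: the same law gives both a negligible force between small objects and the familiar weight of a person, purely because of the masses involved.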

 

Electromagnetic Force

This is also sometimes referred to as electromagnetism and is the category that electric and magnetic forces fall under. This is likely the type of force that people will understand and encounter the most after gravity: taking the example of driving again, friction is due to electrical interactions between the atoms of the car tyres and the atoms of the road. Magnetic forces, for example the force between two magnets, occur due to electric charges in motion. For a long time, it was thought that electric and magnetic forces were completely separate from each other. However, it was discovered that they are both components of a single electromagnetic force; the combined electric and magnetic force on a moving charge is known as the Lorentz force. It also has infinite range but is much stronger than the gravitational force.

 

Strong Force

One is probably less acquainted with the last two forces. The strong nuclear force, often shortened to the strong force, is the strongest of the four, as the name suggests. It is the force that allows larger particles to exist. It is responsible for binding clusters of quarks together, thereby allowing protons and neutrons to exist. Neutrons and protons in turn form the nucleus of an atom and are also held together by the strong force. Protons are positive particles and neutrons are neutral particles; as you may know from magnets, like tends to repel like. The positive protons experience an electric force that repels them from each other; if not for the strong force, the nucleus would break apart.

The strong force overcomes the electric repulsion experienced by the protons, keeping the nucleus together [2]

The strong force, however, has a very limited range in comparison to the gravitational and electromagnetic forces, roughly the size of a subatomic particle. This force therefore only acts when particles are no further apart than about the width of a proton.

 

Weak Force

Finally, the weak force has the smallest range of all the forces. Despite what the name suggests, it is not actually the weakest (that is gravity); it is the second weakest. The weak force is the cause of beta decay, a form of radioactivity. In the nucleus of a radioactive atom, a neutron is converted to a proton, and two particles are released from the nucleus: an electron (a small negatively charged particle) and an antineutrino (a nearly massless antiparticle). This is one form of beta decay, and it occurs due to the weak force. The weak force is the reason carbon dating exists, allowing archaeologists to determine the age of many organic objects that once came from living organisms. Carbon-14 is an unstable nucleus due to the weak force and will decay into nitrogen over time. By measuring the fraction of carbon-14 remaining in an object, its age can be determined.
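The carbon-dating arithmetic can be sketched in a few lines of Python, assuming the commonly quoted half-life of about 5730 years for carbon-14.

```python
import math

C14_HALF_LIFE = 5730  # years (approximate)

def age_from_fraction(fraction_remaining):
    """Age in years from the fraction of carbon-14 still present in the sample."""
    # Each half-life halves the remaining amount, so age = t_half * log2(1/fraction)
    return C14_HALF_LIFE * math.log(1 / fraction_remaining, 2)

print(age_from_fraction(0.5))   # one half-life: 5730 years
print(age_from_fraction(0.25))  # two half-lives: 11460 years
```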

 

Comparing the forces

Now that we have described each force briefly, let us compare their strengths and ranges and discuss the particles that are responsible for the different types of interactions.

The four fundamental forces and their properties, in order of decreasing strength [3]

The comparative strengths can easily be seen, as well as the varying ranges and what these sizes actually correspond to for the strong and weak forces. The mediating particle is the one exchanged to produce a given interaction. The strong force is mediated by gluons, massless bosons with a spin of 1. In fact, all the forces have bosons, particles with integer spin (0, 1, 2, …), as their mediating particles. The electromagnetic force is mediated by photons, particles of light. The weak force has the W and Z bosons as its mediating particles. It is believed that the gravitational force also has a mediating particle, the spin-2 graviton; however, this is currently theoretical, as this boson has yet to be observed experimentally.

 

Combining all four?

As previously stated, electric and magnetic forces were originally thought to be separate, until it was discovered that they can be described by the electromagnetic force. Since the 20th century, the electroweak theory has been developed, describing both the electromagnetic and weak forces as a single electroweak force. This theory has been tested rigorously and so far has passed all experimental tests. There has long been a desire to create a grand unified theory that would combine the electroweak force and the strong force. This 'electronuclear' force has yet to be observed; however, models have been created that predict its existence. There has also been great interest in a theory of everything that would combine all the forces. There is still much about the universe we are discovering, and it will be interesting to see if such theories are ever proven true.

 

Image sources

Featured image: https://commons.wikimedia.org/wiki/File:FOUR_FUNDAMENTAL_FORCES.png

[1] https://www.sciencefacts.net/gravitational-force.html

[2] https://profmattstrassler.com/articles-and-posts/particle-physics-basics/the-structure-of-matter/the-nuclei-of-atoms-at-the-heart-of-matter/what-holds-nuclei-together/

[3] http://hyperphysics.phy-astr.gsu.edu/hbase/Forces/funfor.html

 

Crystals are something we come across every day. From the salt on your table to the ice in your cup, from your phone screen to the crystals sold for good luck, it's unlikely that you can make it through the day without encountering at least a crystal or two. When all's said and done, though, what makes crystal structures so different and special compared to other materials, and how do we know all that we do about them?

Just as crystals are part of our everyday life, the study of crystal structure is an integral part of physics and chemistry, used to identify the chemical composition of a material. This, however, cannot be done by observation alone; instead a technique called x-ray diffraction is used. Crystals are solids made up of building blocks such as atoms or molecules. These building blocks come together in a repeating pattern with an ordered arrangement to form a crystal. It is because of this long-range order that crystalline solids can be studied using x-ray diffraction: the uniform spacing between the atoms and molecules in a crystal has a similar size to the wavelength of the x-rays used. Solids that are not crystalline (amorphous solids) cannot be studied using x-ray diffraction, as they do not have this uniform spacing.

To understand x-ray diffraction, we must first understand how the periodicity (repetitiveness) of a crystalline solid can be described. Each of the repeating building blocks or motifs in a crystal is associated with a lattice point, and as they repeat in the same pattern, i.e. are periodic, these points come together to form a lattice, which describes the periodicity of the crystal. Crystals can be viewed as structures made up of "unit cells", the smallest group of atoms or molecules that, when combined with the lattice, makes up the entire crystal structure.

The unit cell of a crystal can come in many different shapes and forms, such as cubic and tetragonal, and within those crystal systems there are four different arrangements the atoms can take: primitive, body-centred, face-centred and end-face (base) centred. In order to define the lattice, the locations of the atoms in the unit cell, the symmetry of the crystal structure and the lattice parameters must be known. The lattice parameters are the lengths of each side of the unit cell, and they are related to Miller indices and Miller planes. Miller planes are sets of parallel planes that intersect the axes of the unit cell and can be described by a set of three numbers called the Miller indices of the crystal.

Bragg’s Law, the Principle Used in X-Ray Diffraction (1)

This is where x-ray diffraction comes in, as the Miller indices can be found from the results of this technique. In x-ray diffraction, x-rays are emitted and passed through a crystal. When this happens, the planes of the crystal reflect some of the x-rays at a scattering angle theta. Each angle theta corresponds to a Miller index, a relationship given by Bragg's law, as shown in the image above. X-ray diffraction provides a diffraction pattern of these different values of theta and so gives the different Miller indices of the crystal. These can be used to find the unit cell parameters of the crystal and the type of unit cell it has. The pattern can also be compared to a database of diffraction patterns of known substances to identify the likely chemical composition of the crystalline solid or powder being analysed.
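A minimal sketch of Bragg's law (nλ = 2d sinθ) in Python: the wavelength and angle below are illustrative, with Cu K-alpha at about 1.5406 Å being a commonly used laboratory x-ray source.

```python
import math

def plane_spacing(wavelength, theta_deg, n=1):
    """Interplanar spacing d from Bragg's law n*lambda = 2*d*sin(theta)."""
    return n * wavelength / (2 * math.sin(math.radians(theta_deg)))

# Cu K-alpha x-rays (~1.5406 angstroms) reflected at theta = 22.5 degrees
d = plane_spacing(1.5406, 22.5)
print(round(d, 3), "angstroms")  # spacing of the reflecting set of planes
```

In practice each peak in the diffraction pattern gives one such d-spacing, and the set of spacings is what gets matched against the Miller indices and unit-cell parameters.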

With that, we see how x-ray diffraction can be used to find the lattice parameters of a crystal's unit cell, and from this its crystal structure. It can also be used to obtain the chemical composition of an unknown crystalline substance, and is thus an incredibly important tool in the study of crystal structures.

(1) Image credit: X-Ray Diffraction, Veqter: https://www.veqter.co.uk/residual-stress-measurement/x-ray-diffraction

Figure 1: Ternary phase diagram for the C-H system [1]

Diamond-like carbons (DLCs) are a distinct set of amorphous carbon materials which share structural similarity with diamond, as both possess sp3-hybridised carbon atoms. The irregularity in their arrangement comes from the presence of filler atoms such as sp2-hybridised carbons, metals and even C-H bonds. The presence and arrangement of these filler atoms in the sp3 network causes the material to exhibit unique and/or improved properties not observed in simple diamond. In fact, there are seven classified types of DLC commonly used (shown in Figure 1), the most abundant of which is tetrahedral amorphous carbon (ta-C), consisting of an even blend of sp3 and sp2 carbons. This blend makes the material stronger, smoother, a better gas barrier and more biocompatible than diamond, with some research suggesting it can even scratch diamond.

 

Why and What are they used for?

The unique properties of DLC materials have made them among the most sought-after and researched materials in a wide range of fields. The amazing thing about DLCs is that in most applications they are only needed as coating materials, mostly formed as thin films (about 5 μm thick); this means that not a lot of material is needed, which cuts costs for manufacturers. These coatings can be applied to several materials to improve their hardness. They are so successful at improving toughness that when steel coated with DLC was exposed to rough wear, its lifetime improved from a few weeks to 85 years [1]. Because of this, DLCs have been found to be of great importance to the durability of materials, ranging from scratch-resistant car windows to coatings on space shuttles to prevent wear during launch due to the extreme environmental temperatures. DLCs can also be made to have an incredibly smooth surface, giving them extremely low friction. This technology has found its way into the locomotive industry, where coated gears and equipment make a perfect replacement for lubricants, as well as increasing the longevity of the vehicle.

DLCs have long been researched and developed for biomedical use due to their superior mechanical properties and biocompatibility, and they have already been used in biomedical applications. Implants made with DLCs have shown great success: tissue can easily adhere to the surface, and when blood is present a layer of protein forms around the surface, making the body less prone to blood clots and less likely to reject the implant. Because of this, DLC coating has been of great importance to the development of stents, devices able to expand veins and arteries. DLCs, much like diamond, are very inert; research shows they are very resistant to acidic substances, making them ideal for the storage of highly corrosive and dangerous chemicals which would otherwise seep through uncoated glass, as well as for protecting sensitive equipment [1].

The structure of diamond is well known for being extremely electrically insulating, whilst its graphite allotrope is very conducting along its planes. Since DLCs have an internal structure consisting of a mixture of diamond-like and graphite-like carbons, they have been observed to have conducting properties, the extent of which is directly proportional to the amount of conducting sp2 carbon and doping the material contains. This conductivity is achieved via quantum mechanical tunnelling between sp2 sites (pockets of electron density). Because of this, DLCs can easily be manufactured with a range of different conductivities, from superconducting to insulating; it also means that at a key sp2 percentage, semiconducting properties are observed. Moreover, the ease with which DLC properties can be modified means they can be fine-tuned to have a desired band gap for a particular job, making them a very useful and prominent technology in the semiconductor industry [2]. Unfortunately, the market is currently dominated by silicon semiconductors, which are cheaper and have more investment, meaning that current DLCs are used to coat and improve the properties of already-developed silicon-based semiconductors. Because DLCs can be manufactured with such a range of conductivity features, they are regularly used in the electronics community for both passive and active materials.

 

How are they manufactured?

Figure 2: Five types of ion beam deposition methods [2]

Since the invention of DLCs in the early 1970s, a multitude of ways to produce them have been developed, all based on deposition methods that grow thin films. The first DLCs were developed via ion beam deposition, which has expanded into several beam-type depositions, as shown in Figure 2, all sharing the same general feature: carbon ions are created by plasma sputtering of a graphite surface. Sputtering is the process where accelerated ions are targeted onto the surface of a material (in this case graphite) to eject particles of that material. The ejected carbon ions can then be guided using a forward bias to a target substrate on which the thin film grows. Unfortunately, this process requires immense temperatures and a high-vacuum environment, which reduces the number of substrate materials that can be used, as many would decompose in the process. The other major technique is chemical vapour deposition (CVD), where a solid material is vaporised via a chemical reaction and then deposited onto the surface of a substrate. This technique is widely used in the formation of thin films for semiconductor materials [2].

 

Their future?

As mentioned before, the properties of DLCs can be highly tuned by controlling their manufacturing process; because of this, most current research targets the production process of the material. Recent work in the field of DLCs focuses on developing deposition methods that do not require a high vacuum or high temperature. Keio University in Japan has been studying these novel deposition methods, growing thin films at ambient conditions and observing any changes in structure and properties [3].

 

References:

[1] Rajak, D., Kumar, A., Behera, A. and Menezes, P., 2021. Diamond-Like Carbon (DLC) Coatings: Classification, Properties, and Applications. Applied Sciences, 11(10), p.4445.

[2] Robertson, J., 2002. Diamond-like amorphous carbon. Materials Science and Engineering: R: Reports, 37(4-6), pp.129-281.

[3] Hasebe, T., Ishimaru, T., Kamijo, A., Yoshimoto, Y., Yoshimura, T., Yohena, S., Kodama, H., Hotta, A., Takahashi, K. and Suzuki, T., 2007. Effects of surface roughness on anti-thrombogenicity of diamond-like carbon films. Diamond and Related Materials, 16(4-7), pp.1343-1348.

Carbon is the villain. It is the reason for the trapped heat that is causing Climate Change. The two main gases that cause the greenhouse effect are carbon dioxide and methane, CO2 and CH4, and C is for carbon. We burn oil and gas and coal and turf, all laden with carbon. So, carbon is the villain, right?

Wouldn’t it be poetic if it was also the hero?

Carbon can save us by being used in more sustainable and cheaper batteries, solar panels, supercapacitors (read: fast batteries) and fuel cells.

To be fair, this isn't all carbon does. It makes most of you, you. If you are mostly protein (and you are), proteins are mostly carbon. This is what makes it both a problem and a solution. It's everywhere.

Carbon, just by itself, can be wildly different. The number to keep in mind: four. Carbon wants to have four bonds.

What you want in a material is: 1) strong bonds and 2) lots of bonds.

For strong bonds you need to be small. This is because the larger the atom, the less its bonding electrons care: they are so far from the nucleus that they don't feel its pull as strongly, like a voter in Kerry not caring about the problems in Dublin.

For lots of bonds, you need a Goldilocks number of electrons: not too many and not too few. The reason 4 is the magic number is that it's as far from 0 and 8 as you can get. Carbon has these 4 outer electrons, and it either wants to gain 4 more or get rid of them, and to do this it needs bonds. With more electrons or fewer, it could achieve this with fewer bonds.

So to explain why carbon is just so good, really just look at it on the periodic table: it’s at the intersection of Strong Bond Street and Many Bond Avenue.

 

How can we use this? There are two ways this can go: diamond or graphene. Diamond puts these four strong bonds to good use by bonding to four other carbons at equal distances, in a repeating tetrahedral pattern.

Periodic Table of the Elements and location of carbon

Graphene, on the other hand, breaks our rule that four bonds is better. It does this by making a hexagonal pattern, bonding to three others. The last electron is allowed to float freely between the others, strengthening the bonds. This free electron also gives graphene its great electrical properties, as loose electrons are necessary for the movement of electricity in a material.

The fact that each carbon is only bonded to three others is what makes graphene a "2-dimensional" material, as it forms sheets one atom thick.

Structure of graphene from hexagonal benzene

Like rolling paper into a straw, graphene sheets can be rolled up into a tube, called a carbon nanotube. Scientists find these materials extremely exciting because they act like wires, with a similar level of conductivity to graphene but in a more useful form.

Carbon nanotubes rolled from sheets of graphene.

Now that the properties are explained, we can move on to why this is important: carbon is the hero.

The first step is producing energy. Solar panels today are made with silicon. These are fairly efficient, about 20%. But for almost any application, efficiency can be traded for savings in cost and weight, and the limit for silicon is getting sheets thin enough to use as little material as possible and be as light as possible. Graphene has the advantage of being already thin, as well as atomically lighter.

Graphene can be incorporated into a solar panel in multiple ways, as there need to be materials creating and receiving the electrons to make the electricity, and a way to transport them afterwards. Graphene can be incorporated into almost all parts of this process, to make the cells more efficient or to completely redesign them, for example making them flexible. Expect to see graphene in your solar panels in the future.

A flexible graphene based solar panel

Next is storing that energy. The most traditional way to do this is batteries. The big players here are lithium-ion batteries. These are currently limited in charging speed and lifespan by the electrodes, rather than by the actual battery material. The electrodes are currently graphite, which is many disorganised stacks of graphene. They are being improved by replacing them with carbon nanotubes, which can have a higher contact area with the battery material, speeding up the process, and last longer.

If this isn’t a radical enough change, lithium ion batteries could be superseded by graphene ones completely. These operate by letting charged atoms drift from one sheet to another, releasing electrical energy. Lithium is actually quite rare, so a more abundant replacement is of great benefit. It also needs to be light, because even if you have and amazing but heavy material, it still won’t be good per kilo. Graphene can solve both these problems, and is a candidate for the batteries of the future.

Current use of graphite in lithium-ion batteries

A more experimental option is the much-hyped supercapacitor. Capacitors operate by physically separating positive and negative charges on parallel sheets. They are better if they have a larger surface area or a smaller gap between the sheets. You can probably see where this is going. Graphene sheets have an enormous surface area and are thin enough that capacitors made from them would have enormous energy density.
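The area-and-gap relation behind this claim can be sketched with the parallel-plate formulas; the areas, gaps and voltages below are illustrative numbers, not measurements of any real device.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(area, gap, rel_permittivity=1.0):
    """Parallel-plate capacitance C = eps0 * eps_r * A / d (SI units)."""
    return EPS0 * rel_permittivity * area / gap

def stored_energy(cap, voltage):
    """Energy E = 0.5 * C * V^2 stored in a capacitor."""
    return 0.5 * cap * voltage**2

# Shrinking the gap from 1 mm to 1 nm raises capacitance a million-fold
print(capacitance(1.0, 1e-3))  # ~8.9 nF for a 1 m^2 plate with a 1 mm gap
print(capacitance(1.0, 1e-9))  # ~8.9 mF for the same plate with a 1 nm gap
```

The stored energy grows in direct proportion to C, which is why atomically thin, high-surface-area sheets are so attractive.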

There must be a separation between the graphene sheets, achieved by "intercalation", the insertion of charged atoms between the layers. With current technology, these supercapacitors are at the level of other battery technologies such as nickel-metal hydride.

Graphene sheets in a supercapacitor

The final energy storage method that these materials open up is hydrogen fuel. Hydrogen can be made by using electricity to break apart water, H2O, into oxygen, O2, and hydrogen, H2. The same fuel cell can be used in reverse to recombine oxygen and hydrogen into water, generating an electric current. This means it is, in a sense, a battery, but with a fuel that can be extracted and stored like a traditional fuel, and its energy per kilo is higher than petroleum's.
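As a rough sketch of that energy-per-kilo comparison: the specific energies below are approximate textbook figures (lower heating values), not precise measurements.

```python
# Approximate lower heating values in MJ per kg (assumed typical figures)
SPECIFIC_ENERGY = {
    "hydrogen": 120.0,
    "petrol": 44.0,
}

ratio = SPECIFIC_ENERGY["hydrogen"] / SPECIFIC_ENERGY["petrol"]
print(f"Hydrogen carries roughly {ratio:.1f}x more energy per kg than petrol")
```

The catch, as the next paragraph notes, is that hydrogen is hard to contain, so the per-kilo advantage only matters if the storage problem is solved.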

Graphene has potential as both the anode and cathode (the positive and negative electrodes), which gives it an advantage over the competition, as it is lighter and less expensive than the rare metals it would replace. It is also a candidate for the storage of hydrogen. This is useful because hydrogen is so small it tends to leak out of any container you put it in. The graphene is "buckled", like a crumpled sheet of paper, and the high points attract and bind hydrogen, meaning it won't leak out of the container.

Hydrogen fuel cell

So, carbon has been both the problem and the solution. While creating the problem in the form of carbon dioxide and methane, it has the potential to be the solution. Graphene can be made into solar panels, batteries, supercapacitors and hydrogen fuel cells and storage while carbon nanotubes can improve the batteries we already have.

Maybe carbon isn’t the hero we deserve, but it’s the one we need.

“When will we use this in the real world?”

A question all secondary school math teachers have heard at some point in their career. Not the worst thing they’ve heard students say, but I’m sure it stings nonetheless.

At face value, it's a fair enough question. Most students won't find themselves in a situation where they need to use the minus-b formula, or imaginary numbers, or integration (blasphemy, I know). In reality, a base knowledge of arithmetic and percentages is all most people need to survive in the big bad world (if they stay away from STEM careers). Knowing all this, I began to wonder why math is a core subject comparable to English, something everyone uses every day. After much (some) pondering, I came to the conclusion that math education is as much about teaching students logical thinking as it is about teaching them math.

English, Irish, and math are core subjects because they each give students important skills. English teaches us how to communicate effectively, how to comprehend what we read, and how to write. Irish teaches us another language; it is our culture, and it improves our communication skills. Math is the only one of the three that instills logical problem-solving skills. Solving problems is a crucial part of everyday life, and the teaching of math is a very convenient way to strengthen this ability. Dr Arlene Gallagher, the director of the Walton Club, has this to say about the subject:

“Mathematics is all around us, yet many people’s perception and experience with this subject is to learn a set of steps ‘to get the right answer’. In teaching mathematics, if we reduce this subject to a set of procedures, we are missing out on critical opportunities to develop mathematical habits of mind and advanced cognitive skills that are highly transferrable and outlast any exam setting.”

Dr Gallagher brings up something which many teachers can be prone to do, that is, the reduction of math to a set of procedures. Of course, to solve problems students must first develop the tools needed to do so, but there must always be an emphasis on the bigger picture. The quickest way to bore students is to give them a handful of abstract math problems that only require one skill! I'm a maths educator at the Walton Club, and in my lessons I always relate mathematical concepts to real-world applications. My lessons are often structured such that a new concept is introduced, and then the students (known as alphas, after Ernest Walton's Nobel-prize-winning experiment) apply that concept to solve a real-world problem. For example, a lesson on mean, median, mode, and expectation values began with example calculations of each. Then the alphas were tasked with determining which character has the best dice in Nintendo's Super Mario Party for the Switch. The alphas had to figure out which tools to apply to the problem; all I showed them was how. I believe this is how maths should be taught; that way, the students will never have to ask that question again.
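An exercise like that dice comparison can be sketched in a few lines of Python; the face values below are made up for illustration and are not the actual Super Mario Party dice.

```python
from statistics import mean, median

def expectation(faces):
    """Expected value of one roll, assuming each face is equally likely."""
    return sum(faces) / len(faces)

# Hypothetical character dice (face values invented for this example)
standard = [1, 2, 3, 4, 5, 6]
risky    = [0, 0, 0, 0, 10, 10]  # swingy: usually nothing, sometimes a lot

for name, faces in (("standard", standard), ("risky", risky)):
    print(name, expectation(faces), mean(faces), median(faces))
```

The interesting discussion is in the comparison: the standard die has the higher expectation here, but a student might still argue for the risky die in a game where only big moves matter.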

Give a man a fish…

 

With the EU’s ambitious goal to be carbon neutral by 2050, it’s increasingly important to examine all aspects of our energy production, storage and usage. At present, buildings account for 40% of the EU’s energy consumption, and approximately 75% of the EU building stock is energy inefficient (1). Therefore, increasing the energy efficiency of buildings could be one of the most effective methods of reaching our climate goals.

Although in countries with climates similar to Ireland's we mostly focus on retaining heat in buildings, in hotter climates cooling is the most important feature of buildings. Air conditioning is one way to cool these buildings, but it is horribly energy inefficient. With many developing countries having hot or mixed climates, it is predicted that the demand for air conditioning will spiral out of control in the coming decades unless we can find a better solution. Smart glass may be this solution.

Smart glass is glass whose transmission changes when voltage, light or heat is applied. It can be used in smart windows to change the fraction of visible light transmitted. It thus minimises the need for cooling and keeps rooms at a comfortable temperature. One class of technologies used for smart windows is chromic materials; electrochromic and photochromic smart glass are of particular interest.

Electrochromic smart glass changes its transmittance when it is stimulated by an electrical signal (2). It can change to any state between, and including, transparent and opaque. An electrochromic glass panel consists of layers making up a glass stack, usually only a few microns thick. On the top and bottom a transparent conductive oxide (TCO) layer is present, while the centre of the structure contains the electrochromic layer and an electrolytic layer. When a voltage is applied to the TCO layers, charged particles move from the electrolyte to the electrochromic layer (2). The electrochromic layer is normally transparent in its inactive state, but this increase in charge results in an increase in the absorption of light. The change in transmission is due to ions from the electrolyte inserting themselves into the electrochromic layer.

             Electrochromic smart glass layer (3)

This results in a reduction of the band gap, which means photons with lower energy can now be absorbed by the glass. Electrochromic glass allows the user to control the amount of light at the flick of a switch, regulating temperature or privacy. However, due to electrochromic glass's need for electricity, it is not quite as environmentally favourable as photochromic glass.
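The band-gap argument can be sketched with the familiar approximation E(eV) ≈ 1240 / λ(nm), which follows from E = hc/λ; the band-gap values below are illustrative, not measured properties of any particular electrochromic layer.

```python
# A photon is absorbed only if its energy is at least the band gap.
# E (eV) ~ 1240 / wavelength (nm), from E = h*c/lambda.

def cutoff_wavelength_nm(band_gap_ev):
    """Longest wavelength a material with this band gap can absorb."""
    return 1240.0 / band_gap_ev

print(cutoff_wavelength_nm(3.1))  # ~400 nm: only UV/violet absorbed, so the glass looks clear
print(cutoff_wavelength_nm(2.0))  # ~620 nm: much of the visible range is now absorbed too
```

Shrinking the band gap therefore pushes the absorption edge from the UV into the visible, which is exactly the darkening effect described above.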

Photochromic smart glass changes colour with exposure to light (4). It does not need any electricity to work and is usually manufactured as a thin film on the window. The thin films used are often photochromic rare-earth oxyhydrides. As well as the benefit of not needing electrical power, the decrease in transmittance of rare-earth thin films extends from the UV up to the mid-IR (5). This means that photochromic smart glass can reduce solar thermal gain, which can often overheat a room, in addition to reducing visible light. The main problems with photochromic smart windows are their slow switching speeds and their stability over long periods; research is currently underway to address these problems.

If the problem of the switching speed could be addressed (at present it can take five minutes for the glass to return to a clear state), photochromic glass could bring major benefits to the transportation industry. It would reduce glare for drivers and also reduce the need for air conditioning for passenger comfort. Electrochromic glass can also take on the order of minutes to change state, which again is a disadvantage for transport applications. However, it has been suggested that for architectural applications the slow switching speed of electrochromic and photochromic glass is an advantage, as it gives our eyes a chance to adjust to the change in light level (2).

[Figure: Boeing electrochromic window (6)]

Electrochromic and photochromic glass are already utilised in industry. For example, the Boeing 787 Dreamliner uses electrochromic windows in place of window shades, while photochromic sunglasses are becoming more commonplace in everyday life. I expect that in the coming years smart glass will become even more prevalent, as we shift our attention to climate-adaptive structures which can reduce our energy usage.

References

  1. European Commission. In focus: Energy efficiency in buildings. European Commission website. [Online] February 17, 2020. [Cited: May 11, 2022.] https://ec.europa.eu/info/news/focus-energy-efficiency-buildings-2020-lut-17_en#:~:text=Collectively%2C%20buildings%20in%20the%20EU%20are%20responsible%20for,mainly%20stem%20from%20construction%2C%20usage%2C%20renovation%20and%20demolition..
  2. Smart Glass World. What is electrochromic smart glass? Smart Glass World website. [Online] 2022. [Cited: May 13, 2022.] https://www.smartglassworld.net/what-is-electrochromic-glass.
  3. Infinity SAV. Smart Glass. Infinity SAV website. [Online] 2022. [Cited: May 13, 2022.]
  4. Dürr, Heinz and Bouas-Laurent, Henri. Photochromism: Molecules and Systems. s.l.: Elsevier Science, 2003. ISBN: 9780080538839.
  5. Colombi, Giorgio. Photochromism and Photoconductivity in Rare-Earth Oxyhydride Thin Films. Delft: TU Delft, 2022.
  6. Coxworth, Ben. Electronically-dimmable windows on offer for Boeing 777X. New Atlas website. [Online] January 07, 2020. [Cited: May 13, 2022.]

 

 

Nanotechnology is still an emerging science and, as such, has countless opportunities, discoveries, and advancements ready to be uncovered. It is predicted to contribute significantly to economic growth in the coming decades. Scientists have predicted four distinct generations of advancement for the field. Currently, nanotechnology is slowly emerging into the second of these generations, as the first has thoroughly examined materials science, incorporating nanostructures (coatings and carbon nanotubes, for example) to strengthen or improve a material. The second generation incorporates active nanostructures, such as those used in drug development. Beyond this it is unclear exactly what the next advancements will be; perhaps combining these individually researched structures into systems. This could be where things become tricky, intertwining with other sciences in combinations such as nanorobotics or the regrowth of organs, or potentially limbs. The possibilities seem endless, with new ideas proposed regularly, and with years still to come the field does appear limitless.

There are, however, various barriers to progress in this field, such as the difficulty of translating research into benefits for industry. As modern technology is already seen as very advanced, new research must carry high expectations and promises, and must be economically viable, to be adopted. The scaling from lab to industry is difficult on its own, even before the various rounds of trial and error that come into play. Yet nanotechnology seems to have earned wide interest already, as the subject of much scientific speculation, including a focus in non-scientific pop culture that dramatises the unknowns of new discoveries. This trend recurs in the science world with most new advances; the renowned Terminator franchise, for example, surfaced from the robotics advancements of the 1940s and 1950s. By this comparison it can be said that because nanotechnology is seen as something with large potential, it has sparked interest in pop culture, and possibly in the wider public. Of course, pop culture is not a valid source of facts or realistic expectations of the science, but it does make one wonder where the limitations of these newer sciences lie. In a way this helps to overcome the barrier: if progress is seen as limitless, and ideas can stem from over-the-top fiction, the field will always be worth investigating.

Futuristic visions of nanotech place nanoparticles inside the body for both diagnostic and medical uses (and of course pop culture finds a way to turn this into fiction, with highly implausible outcomes like growing scales or mind control). There are extremes to what people envision, both realistic and unrealistic, but the current uses are much more mundane and focus instead on improving life one step at a time: refining the plastics of our bikes, stain-resistant clothes, drug delivery, cosmetics and much more. Nanotechnology is helping us improve inventions that we already possess but were stumped on how to fix and develop further. It is truly an evolutionary science whose true purpose people have been trying to define for a while.

Many roles have been set forward for nanotechnology; the distinctions imposed by certain authors describe nanotech as incremental, evolutionary or radical. Incremental nanotechnology is the reinforcement of previously known structures and materials, essentially improvements of what we already have. Evolutionary nanotechnology takes this a step further, using nanostructures to explore the world and seeing how their interactions, possibly by chance, can be adapted into useful roles. Finally, radical nanotechnology, the most far-reaching version, is the construction of machines whose mechanical components are molecule-sized, or rather less than 100 nm in diameter.

 

The far-fetched versions of nanotech can be traced to Eric Drexler, who writes in anticipation of this great science ending with nanoscale robots and vehicles (whether the vehicles are for the robots or for people is uncertain) being an everyday normality. Unfortunately, this rather cool idea forgets the scaling factors involved in how such machines would actually work, so although robots the size of atoms sound appealing, this is one thing that will probably always remain a part of pop culture and the centre of fiction rather than of science: the laws of physics must still be obeyed. Drexler discussed many more possibilities of nanotechnology in his book, Engines of Creation: The Coming Era of Nanotechnology, and it is still amusing that an idea people once thought plausible is now known to be unrealistic. Yet modern science has surprised us before: it was once seen as absurd to fly, or to land on the moon, and both were accomplished. I am not saying that we will have tiny robots building all our phones and chips for us, but we still do not know the extent of nanotechnology and where it will lead us in 10, 20 or 100 years.

References 

[1] The Royal Society, 'Nanoscience and nanotechnologies: opportunities and uncertainties', Chapter 4: Nanomanufacturing and the industrial application of nanotechnologies, Section 4.6: Barriers to progress, pages 32-33.

[2] O'Reilly, Introduction to Nanoscience and Nanotechnology, Chapter 7: Radical Nanotechnology.

Back to the Future Part II made many predictions about what the world would be like in 2015, from flying cars to a holographic Jaws 19. While many of these predictions seem outlandish now, there are several technologies that the movie did get right, at least partially. While fingerprint technology is not generally used to secure people's homes, it is used as a security feature in one way or another on most smart devices that we use almost every day. This, however, is not the most outlandish prediction the movie got right; that honour goes, at least in theory, to the hoverboard that Marty McFly uses throughout the movie.

Many attempts have been made to create a functional hoverboard, including designs borrowed from hovercraft that place fans on the bottom of the board, but the design that most resembles the hoverboard in the movie uses something called quantum locking. While the word "quantum" preceding anything is enough to make some people apprehensive, in practice this term simply refers to how a superconductor will hover in place above a source of magnetic field.

A superconductor is a material that allows an electric current to pass through it with no electrical resistance whatsoever. Resistance in a conductor such as a metal arises from collisions between the electrons carrying the current and the atoms of the metal. If the temperature of the conductor is lowered, these collisions happen less frequently, and the resistance of the conductor drops. In certain materials, at a low enough temperature the collisions stop altogether, and the electrons can carry the current through the material without any resistance. This temperature, at which the material changes from a conductor to a superconductor, is called the critical temperature.
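As a minimal sketch of the critical-temperature idea, the snippet below checks which materials would superconduct in a bath of liquid nitrogen at 77 K. The critical temperatures are approximate textbook figures, used here purely for illustration.

```python
# Sketch: a material superconducts only below its critical temperature Tc.
# Tc values are approximate textbook figures, in kelvin.
CRITICAL_TEMPS_K = {
    "mercury": 4.2,   # the first superconductor discovered (1911)
    "lead": 7.2,
    "niobium": 9.3,
    "YBCO": 92.0,     # a "high-temperature" cuprate superconductor
}

def is_superconducting(material, temperature_k):
    """True if the material is below its critical temperature."""
    return temperature_k < CRITICAL_TEMPS_K[material]

# Liquid nitrogen (77 K) is cheap and easy to handle; liquid helium is not.
for material in CRITICAL_TEMPS_K:
    print(material, "superconducts at 77 K:", is_superconducting(material, 77))
```

Of the listed materials, only YBCO stays superconducting at liquid-nitrogen temperature, which is why "high-temperature" superconductors matter so much for practical levitation demonstrations.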

Superconductors display several interesting properties, but the one most relevant here is the Meissner effect. When a superconductor is placed in a magnetic field, it expels the field from within it, so that the field effectively bends around the superconductor. Due to this repulsion, the superconductor will float above the magnet at the exact height where the repulsion balances gravity. The levitation is not very stable, however, as the superconductor will repel the magnetic field no matter which orientation it is in. This is where quantum locking comes into play.

When the superconductor is thin enough, the magnetic field is able to pass through it at certain points, called flux tubes. The superconductor still repels the magnetic field around these tubes, trapping the field inside them. This causes the superconductor to be locked in place, as it opposes any movement of the trapped field lines. The superconductor will then hover above the magnet in whatever orientation it was placed in the magnetic field, and it will hold this orientation if it is moved along a magnetic track.

The main problems with using this type of levitation in hoverboards, or even flying cars, are, firstly, that the superconductor has to be above a magnet to levitate. To make this a viable way to travel anywhere, magnetic tracks would first have to be built, which would be costly and time-consuming. The second problem is that the critical temperature of most known superconducting materials is very low, close to absolute zero in some cases. Work is being done to find materials whose critical temperature is relatively high, but until such a material is discovered, attaching a cooling system capable of maintaining these low temperatures to a hoverboard would be extremely difficult to do efficiently. Unfortunately this means it is highly unlikely that we will see hoverboards in public anytime soon, but it remains a possibility in the future.

 

Sources:

  1. https://www.thoughtco.com/quantum-levitation-and-how-does-it-work-2699356#toc-quantum-locking, accessed 12/05/2022
  2. https://www.britannica.com/science/Meissner-effect, accessed 12/05/2022
  3. https://www.livescience.com/superconductor, accessed 12/05/2022

The standard light microscope is ubiquitous, from children's science kits to industry labs. Light microscopes are very useful instruments, but they have their limitations. Standard objective lenses typically magnify by 5, 10 and 20 times, and combinations of these lenses give larger magnifications. The world's most powerful light microscope can see objects down to about 500 nm [1], but because of the wavelength of visible light, this is close to the fundamental limit: diffraction prevents a conventional light microscope from resolving features much smaller than roughly half the wavelength of the light used.

This is where electron microscopes come in: instead of using light reflecting off the sample, electron microscopes fire a beam of electrons at it. The wavelength of the electrons is significantly smaller than the wavelengths of visible light, which gives the microscope a much higher resolving power. There are different types of electron microscope; the ones I will discuss are the scanning electron microscope (SEM) and the transmission electron microscope (TEM). These differ in that electrons reflect from the sample in the SEM, while they transmit through the sample in the TEM. Both are used in a number of different industries, for both biological and non-biological samples.
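Just how much smaller the electron wavelength is can be sketched with the de Broglie relation. The snippet below uses the non-relativistic approximation λ = h/√(2mₑeV); at typical microscope voltages of 100-300 kV a relativistic correction of a few percent applies, which is ignored here for simplicity.

```python
import math

# Non-relativistic de Broglie wavelength of an electron accelerated
# through a potential V: lambda = h / sqrt(2 * m_e * e * V).
H = 6.626e-34         # Planck constant, J*s
M_E = 9.109e-31       # electron rest mass, kg
E_CHARGE = 1.602e-19  # elementary charge, C

def electron_wavelength_m(volts):
    """De Broglie wavelength (m) of an electron, non-relativistic."""
    return H / math.sqrt(2 * M_E * E_CHARGE * volts)

lam = electron_wavelength_m(100e3)  # a typical 100 kV instrument
print(f"100 kV electron wavelength: {lam * 1e12:.2f} pm")
print(f"green light (550 nm) is {550e-9 / lam:,.0f} times longer")
```

A 100 kV electron has a wavelength of only a few picometres, over a hundred thousand times shorter than green light, which is why the resolving power improves so dramatically.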

On their own, electron microscopes only provide visual information about the sample, although different components can be added for increased functionality. Electron energy loss spectroscopy (EELS) and energy-dispersive X-ray spectroscopy (EDX) are good examples of this, as both provide elemental analysis of the sample.

Electron microscopes are incredibly sensitive pieces of equipment, and a number of different factors can warp and distort the results; fluctuating magnetic fields and vibrations are the main issues. Since the objects being viewed can be about an angstrom in length, even the smallest fields and vibrations can be seen. Electron microscopes are therefore typically surrounded by a Faraday cage, which acts like noise cancelling, but for electromagnetic fields. Fluctuating electromagnetic fields (from overhead wires, for example) can cause large disturbances and ruin the imaging. Similarly, any vibrations will distort the images, so it is important that the electron microscope is situated away from sources such as heavy traffic.
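One reason mains-frequency interference is so troublesome can be sketched with the electromagnetic skin depth, δ = √(2ρ/μω): the lower the frequency, the deeper a field penetrates a conducting shield. This is an illustrative calculation for copper, not a description of any particular microscope enclosure.

```python
import math

# Skin depth of a conductor: delta = sqrt(2 * rho / (mu * omega)).
# Fields decay by a factor of 1/e over each skin depth, so shields much
# thinner than delta do little against a fluctuating magnetic field.
MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m
RHO_COPPER = 1.68e-8       # resistivity of copper, ohm*m

def skin_depth_m(frequency_hz, resistivity=RHO_COPPER, mu_r=1.0):
    """Depth (m) at which a field falls to 1/e inside a conductor."""
    omega = 2 * math.pi * frequency_hz
    return math.sqrt(2 * resistivity / (MU_0 * mu_r * omega))

print(f"copper skin depth at 50 Hz: {skin_depth_m(50) * 1000:.1f} mm")
print(f"copper skin depth at 1 MHz: {skin_depth_m(1e6) * 1e6:.0f} um")
```

At 50 Hz the skin depth in copper is nearly a centimetre, so an ordinary thin metal enclosure does little against mains-frequency magnetic fields; this is part of why siting the instrument away from overhead wiring matters as much as shielding it.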

It is much simpler to situate electron microscopes in quiet areas than to create equipment to compensate, so smart planning is required. Examples of the issues above can be seen within Trinity College: there are two areas with electron microscopes, the Advanced Microscopy Laboratory (AML) and the CRANN research institute. The CRANN building is located in a busy area of Dublin, with the DART constantly crossing overhead, plenty of traffic and electrical wiring. This causes a lot of interference. To reduce it, the microscopes were built on a foundation separate from the rest of the building, which travelled all the way down to bedrock, in an attempt to reduce the impact of vibrations. There are also Faraday cages around the microscopes, but despite this there are still issues using the instruments as the DART passes overhead. Compared to CRANN, many more electron microscopes are situated at the AML, which is in a quieter area much better suited to housing them.

Electron microscopes are probably some of the most sensitive instruments there are, and as we look at smaller and smaller objects, more and more effects that were once negligible become significant issues. There will never be a perfect place to escape all of these issues, so all that can be done is to come up with new ideas to compensate for and dampen these external influences.

 

References

[1] Lynn Charles Rathbun (2013); "World's most powerful microscope" [online]. Accessed from: https://www.nanooze.org/worlds-most-powerful-microcope/#:~:text=The%20smallest%20thing%20that%20we,about%201000%20nanometers%20in%20size.