It is estimated that around 80% of the mass in our universe is dark matter. Yet despite making up so much of the cosmos, it cannot be directly observed: it emits no light and no energy, and scientists have yet to find solid, direct evidence of its detection. This all begs the question: why do scientists believe it makes up the majority of the universe?

Physicists have long hypothesised that there’s more matter out there in space than meets the eye. The concept of dark matter can be traced back as early as the late 1800s, when Lord Kelvin estimated the mass of our galaxy to be different from the mass of the visible stars: “perhaps a great majority of them, may be dark bodies”. In 1906, Henri Poincaré coined the term ‘matière obscure’, or ‘dark matter’, when discussing Kelvin’s work. However, it was the American astronomer Fritz Zwicky who gave us the first solid prediction of the existence of dark matter. In 1933, his observations of a cluster of galaxies showed that only a tiny fraction of the mass needed to keep the galaxies from escaping the cluster was visible: there appeared to be around 400 times more mass than could be observed, leading Zwicky to infer that unseen matter was responsible for the extra gravitational pull. Over 100 years on from Kelvin’s postulation, scientific support for the existence of dark matter continues to grow; but the direct data needed to prove it still eludes us.

The above figures show the role of dark matter in galaxy clusters.


The fabric of the universe that we know and can observe is made of three principal particles: protons, neutrons, and electrons – and is known as ‘baryonic’ matter. Dark matter could be made of this ordinary matter or of ‘non-baryonic’ matter. For the contents of the universe to stay held together, around 80% of it must be dark matter. The problem is that if dark matter is baryonic, it could be much more difficult to detect. There are a number of baryonic candidates, ranging from neutron stars to faint white and brown dwarf stars, or even black holes (Kashlinsky, 2016). These stealthy objects, however, would need to play a far bigger role than observations allow in order to make up the missing mass.

There is some consensus that dark matter is indeed made of non-baryonic matter. One of the leading candidates is the class of ‘weakly interacting massive particles’, or WIMPs (Bernabei et al., 1997). Compared to a proton, they would have 10 to 100 times the mass, but their weak interactions with regular matter make them much harder to detect. Among the most favoured WIMP candidates are ‘neutralinos’: theoretical particles which, compared to neutrinos, are slower and heavier. The Sun beams neutrinos towards Earth, which interact only very rarely with regular matter; a suggested type of neutrino known as ‘sterile’ is also thought to be a candidate for dark matter, interacting with normal matter only via gravity. The Gran Sasso National Laboratory (LNGS) in Italy has previously stated the following in relation to the search for dark matter: “Several astronomical measurements have corroborated the existence of dark matter, leading to a world-wide effort to observe directly dark matter particle interactions with ordinary matter in extremely sensitive detectors, which would confirm its existence and shed light on its properties. However, these interactions are so feeble that they have escaped direct detection up to this point, forcing scientists to build detectors that are more and more sensitive.”


So, a question still remains: how do scientists actually know dark matter exists? The answer lies in how we find the mass of large objects: we look at their motion. Flash back to the 1970s, when scientists studying spiral galaxies assumed they would observe the centre rotate faster than the outer parts. There was some surprise, then, when they found that stars had much the same velocity regardless of their position in the galaxy, showing that there was more mass present than could be seen. And, as previously touched on, clusters of galaxies would simply fly apart if they only had the mass that is visible to us. Einstein added an important tool here when he showed that extremely massive objects bend light, meaning they can be used as gravitational lenses. By studying how light behaves around galaxy clusters, astronomers have been able to map dark matter, which is crucial in showing that the majority of matter cannot be seen.
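To see why flat rotation curves point to unseen mass, here is a back-of-the-envelope sketch in Python (the galaxy mass and orbital speed below are illustrative round numbers, not fits to any particular galaxy):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
KPC = 3.086e19         # one kiloparsec in metres

def keplerian_speed(r_m, enclosed_mass_kg):
    """Circular orbital speed if all the mass sits inside radius r."""
    return math.sqrt(G * enclosed_mass_kg / r_m)

# If a galaxy's visible mass (~1e11 solar masses) were all concentrated
# well inside 5 kpc, stars further out should slow down:
M_visible = 1e11 * M_SUN
for r_kpc in (5, 10, 20, 40):
    v = keplerian_speed(r_kpc * KPC, M_visible)
    print(f"r = {r_kpc:>2} kpc  ->  v = {v/1000:6.1f} km/s")

# A flat curve (v roughly constant, as observed) instead requires the
# enclosed mass to keep growing with radius:
v_flat = 220e3  # m/s, a typical observed speed in a spiral galaxy
for r_kpc in (5, 10, 20, 40):
    M_needed = v_flat**2 * (r_kpc * KPC) / G
    print(f"r = {r_kpc:>2} kpc  ->  M(<r) = {M_needed/M_SUN:.1e} solar masses")
```

The first loop shows the expected Keplerian fall-off; the second shows that a flat curve demands several times the visible mass at large radii, which is exactly the discrepancy attributed to dark matter.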

Core of a merging galaxy cluster, showing the distribution of galaxies, hot gas, and dark matter.

Research in the area of dark matter is thriving, with several different experiments aiming to directly detect it, many with very interesting setups. One such experiment is taking place in Italy, under a mountain: the Gran Sasso National Laboratory is part of this search with its ‘XENON1T’ detector, which looks for interactions between xenon atoms and WIMPs. This is not the only massive, low-background detector. The LUX (Large Underground Xenon) experiment in North America is another example; housed in a former gold mine, it searches in a similar way to XENON1T, using xenon to catch WIMPs. There has yet to be the significant breakthrough needed to confirm direct detection of dark matter. Another underground experiment can be found beneath the ice of Antarctica: the IceCube Observatory is on the hunt for the aforementioned sterile neutrinos.

Above-ground part of the IceCube experiment facility; the underground portion is situated 1.6 km under the ice (IceCube Neutrino Observatory)

Despite no breakthrough yet, there has been significant progress. The Fermi Gamma-ray Space Telescope has mapped the centre of our galaxy and shown a huge excess of gamma-ray emissions coming from this area. This could be interpreted as coming from the annihilation of dark matter particles; however, more data is still needed from other experiments to confirm this possible explanation. Dan Hooper (lead author, physicist at Fermilab) says the “signal cannot be explained by current proposed alternatives and is in close agreement with predictions of very simple dark matter models” (Daylan et al., 2014). It is amazing to think how tantalisingly close we have come in our efforts to understand dark matter and directly detect it; a journey that began long ago with scientists like Kelvin and Poincaré, and continues to this day.

Blog written by Alexander Fay, Eoin McMahon, Adam Watt, Cian Harley, Matt Kavanagh

Academic references:

Bernabei, R., et al. (1997). Searching for WIMPs by the annual modulation signature. Retrieved from

Daylan, T., Finkbeiner, D. P., Hooper, D., Linden, T., Portillo, S. K., Rodd, N. L., & Slatyer, T. R. (2014). The Characterization of the Gamma-Ray Signal from the Central Milky Way: A Compelling Case for Annihilating Dark Matter. Retrieved from



By Kate Britton, Ciaran Furey, Aoife O’Kane Hackett and Sophie Thomson

Everything around you – your laptop screen, your clothes, even your own body – has one thing in common: it is composed of tiny building blocks called atoms. It was long thought that these were the most fundamental particles in the universe; the name ‘atom’ is derived from the Greek word ‘atomos’, meaning ‘indivisible’. However, it was later realised that atoms are composed of even smaller particles: a dense nucleus containing neutrons and protons, with electrons surrounding this nucleus. Experiments in 1968 at the Stanford Linear Accelerator, which scattered electrons off protons at extremely high energies, found by examining how the electrons ‘bounced’ off the protons that the proton is itself composed of even smaller particles called quarks. It is believed that quarks, and other particles like them, represent the smallest scale of the universe, and the Standard Model of particle physics was devised to describe them. These elementary particles represent the fundamental LEGO blocks of the universe, if you like. However, the smallest, strangest, and most elusive of these is the neutrino, which will be the topic of this blog.

What are neutrinos?

Neutrinos are elementary particles produced by the radioactive decay of larger, unstable particles. The nature of neutrinos is described poetically by John Updike in his poem “Cosmic Gall” [1]:

Neutrinos, they are very small.

They have no charge and have no mass

And do not interact at all.

The earth is just a silly ball

To them, through which they simply pass,

Like dustmaids down a drafty hall

Or photons through a sheet of glass.


Figure 1 – The Standard Model

Although well described, Updike was using a bit of poetic licence here. Neutrinos are not completely massless; their mass is just very small, with the upper limit for the electron neutrino being about 2 eV/c² (note ‘upper limit’: neutrinos are so difficult to detect that only an upper limit can be placed on their mass). For comparison, the mass of the up quark, found in protons and neutrons, is 2.2 MeV/c², roughly one million times heavier than the electron neutrino. Saying they do not interact at all is not true either, although interactions with regular matter are incredibly rare. This is because they have no electric charge, and so do not feel the strong nuclear force or the electromagnetic force; they interact with matter only via the weak nuclear force and gravity. In fact, with an impressively small cross-section of around 10⁻⁴⁸ m², a neutrino would have to pass through hundreds of parsecs of lead before it even had a 50% chance of interacting with an atom!
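Taking the quoted cross-section at face value, the thickness of lead involved can be estimated with a quick back-of-the-envelope Python calculation (standard lead density assumed; this is an illustrative order-of-magnitude check, not a precise figure):

```python
import math

N_A = 6.022e23            # Avogadro's number, per mole
RHO_PB = 11340.0          # density of lead, kg/m^3
M_PB = 0.2072             # molar mass of lead, kg/mol
SIGMA = 1e-48             # quoted neutrino cross-section, m^2
PARSEC = 3.086e16         # metres

# Number density of lead atoms
n = RHO_PB / M_PB * N_A               # atoms per m^3

# Mean free path, and the distance for a 50% interaction probability
mfp = 1.0 / (n * SIGMA)               # metres
d_half = mfp * math.log(2)

print(f"lead atoms per m^3 : {n:.2e}")
print(f"mean free path     : {mfp/PARSEC:.0f} parsecs")
print(f"50% interaction at : {d_half/PARSEC:.0f} parsecs")
```

With these inputs the 50% distance works out to several hundred parsecs of lead – an absurd thickness either way, which is the whole point.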

As well as this, neutrinos come in three different ‘flavours’: the electron neutrino, which was just discussed, the muon neutrino, and the tau neutrino. These are shown in the bottom row of the Standard Model, which is shown in Figure 1.

So how are they detected?

Neutrinos are rightfully referred to as among the most elusive of the fundamental particles. In the space of one second, around 100 trillion neutrinos pass through each of us. This goes unnoticed because neutrinos’ lack of electric charge and minuscule mass mean they barely interact with matter at all: only about one in every ten billion neutrinos crossing the Earth’s path interacts with an atom. This makes direct neutrino detection extremely difficult with our current level of knowledge about them.

Figure 2 – The Super-Kamiokande Detector

However, the few neutrinos that do interact with atoms can be detected indirectly. This is the operating principle of the Super-Kamiokande detector in Japan, shown in Figure 2. Neutrinos in Super-Kamiokande are observed in a 40 m tall tank containing 50,000 tonnes of ultra-pure water. When a neutrino interacts with matter, a charged particle is generated, and this particle can be detected. The vast body of water acts as a huge target, increasing the number of interactions between neutrinos and nucleons or electrons. When the generated charged particle travels faster than the speed of light in water (225,000 kilometres per second), cone-shaped Cherenkov light is emitted along the charged particle’s direction. This light is picked up by some of the roughly 13,000 light detectors, called photomultiplier tubes, located in the walls of the detector. From the quantity of light detected and the timing of the detection, useful information can be extracted, such as the energy of the particle, the direction it was travelling, and the location of the interaction.
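The numbers behind the Cherenkov condition can be checked with a few lines of Python (the refractive index of water is taken as 1.33, and the threshold shown is for an electron):

```python
import math

N_WATER = 1.33            # refractive index of water (assumed value)
M_E = 0.511               # electron rest energy, MeV
C = 299792.458            # speed of light in vacuum, km/s

# Light travels slower in water; a charged particle beating this speed
# emits Cherenkov light.
v_light_water = C / N_WATER
print(f"speed of light in water: {v_light_water:.0f} km/s")  # ~225,000 km/s

# Minimum speed (as a fraction of c) and total energy for an electron
# to emit Cherenkov light:
beta_min = 1.0 / N_WATER
gamma_min = 1.0 / math.sqrt(1.0 - beta_min**2)
e_threshold = gamma_min * M_E
print(f"threshold total energy : {e_threshold:.2f} MeV")

# Cherenkov cone half-angle for an ultra-relativistic particle (beta ~ 1):
theta = math.degrees(math.acos(1.0 / N_WATER))
print(f"cone half-angle        : {theta:.1f} degrees")
```

This reproduces the 225,000 km/s figure quoted above, and shows why only sufficiently energetic charged particles light up the photomultiplier tubes at all.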

Figure 3 – The IceCube Detector

‘IceCube’ is another neutrino detector located in Antarctica. IceCube detects neutrinos based on the same principles as Super-Kamiokande, by detecting the by-products of the interactions between neutrinos and matter. IceCube measures the light produced by secondary particles when neutrinos interact with the South Pole ice. The amount of light and the pattern it produces allows scientists to estimate the energy, direction, and identity of the original neutrino.




What have neutrino detections told us?

Neutrino detection has shown us that the Standard Model of particle physics in Figure 1 is incomplete. Until recently, the Standard Model had been predicated on massless neutrinos. However, experiments carried out in the Super-Kamiokande detector showed that neutrinos oscillate between their three flavours – that is, a neutrino of one flavour has some probability of changing, or oscillating, into another flavour. This is only possible if neutrinos have mass, so new theories must now be formed to explain it.
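In the simplified two-flavour picture, the oscillation probability can be sketched in a few lines (the mixing-angle and mass-splitting values below are illustrative atmospheric-sector numbers, not the measured parameters of any particular experiment):

```python
import math

def survival_probability(l_km, e_gev, sin2_2theta=0.85, dm2_ev2=2.5e-3):
    """Two-flavour approximation: probability that a muon neutrino is
    still a muon neutrino after travelling l_km with energy e_gev.
    Uses the standard formula P = 1 - sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E),
    with dm^2 in eV^2, L in km and E in GeV."""
    phase = 1.27 * dm2_ev2 * l_km / e_gev
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# A 1 GeV atmospheric neutrino arriving from directly above (~15 km of
# atmosphere) versus one that crossed the whole Earth (~12,700 km):
print(f"from above   : {survival_probability(15, 1.0):.3f}")
print(f"through Earth: {survival_probability(12700, 1.0):.3f}")
```

Downward-going neutrinos barely oscillate while upward-going ones do; this up/down asymmetry is essentially what Super-Kamiokande measured.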

This discovery of neutrino oscillation also solved a mystery that had lingered in the astronomy community for quite some time: the Solar Neutrino Problem. In 1970, astrophysicist Raymond Davis devised an experiment to detect neutrinos produced by the Sun; however, only one third of the expected number were detected. These results went unexplained until the mechanism of neutrino oscillation was discovered: the detector was only sensitive to electron neutrinos, so by the time the solar neutrinos reached it, two thirds of them had transformed into the other flavours.

In 1987, the detection of neutrinos from the supernova SN1987A allowed the astrophysics community to watch the formation of a neutron star, which had not been done before. This detection represents the first time neutrinos were observed from an astronomical source other than the sun, and also verified the basic theory of core-collapse supernovae.

What next for neutrino physics?

So the detection of neutrinos has already verified some of our current theories, and even forced us to modify others. What next?

Figure 4 – The CMB

Scientists have already detected and documented the Cosmic Microwave Background (CMB): the oldest detectable electromagnetic radiation, produced roughly 380,000 years after the Big Bang, when matter in the universe cooled enough to become transparent to photons. This detection strongly supports our theory of the Big Bang. Going further back in time, we can imagine a period when the universe cooled enough to become transparent to neutrinos, producing a Cosmic Neutrino Background (CNB). Using our current knowledge of cosmology, and assuming the existence of the CNB, calculations [1] reveal that the total number density of CNB neutrinos is 9/11 times the number density of CMB photons. Detecting these neutrinos would further strengthen the Big Bang theory and reveal what conditions were like shortly after the beginning of the universe.
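A quick sanity check of what the 9/11 ratio implies, using the standard cosmology figure of about 411 CMB photons per cubic centimetre at today's 2.725 K:

```python
# CMB photon number density today is roughly 411 per cm^3 (standard figure).
# Applying the 9/11 ratio from the text gives the relic neutrino density:
n_cmb = 411                   # CMB photons per cm^3
n_cnb = 9 / 11 * n_cmb        # relic neutrinos (all flavours) per cm^3
print(f"CNB neutrinos: ~{n_cnb:.0f} per cubic centimetre")
```

So every cubic centimetre of space, including inside you, should contain a few hundred Big Bang relic neutrinos – which makes their continued non-detection all the more striking.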

Future neutrino detections may alter the Standard Model even further. In 2018, the MiniBooNE Short-Baseline Neutrino Experiment at Fermilab produced compelling results indicating that this may be about to happen [5]. The experiment directed a beam of muon neutrinos into a 12.2 m diameter sphere filled with 818 tonnes of pure mineral oil, located 541 m from the neutrino source, and a large excess of electron neutrinos was observed – the result of muon neutrinos oscillating into the electron neutrino state. To our current understanding, however, this oscillation is extremely unlikely over such a short distance, so at least four neutrino types would be required to explain it, indicating physics “beyond the three neutrino paradigm”. Even though the excess is reported to extremely high significance (6σ), these results are currently being further investigated [6].

In order to realise these future discoveries and to address the more fundamental questions of the universe, a new generation of neutrino detectors is being proposed, one of the main ones being the Deep Underground Neutrino Experiment (DUNE), with the first detector planned for installation in 2024.

This will consist of two detectors: one “near detector” underground at Fermilab, where the neutrinos will be produced, and one “far detector” 1.5 km underground at the Sanford Underground Research Facility (SURF) in South Dakota, USA, 1,300 km from Fermilab. The far detector will detect neutrinos by means of a liquid argon time projection chamber, making it possible to document neutrino interactions with image-like precision by producing a bubble-chamber-like image. The project has three primary goals [7]. The first is to carry out comprehensive oscillation measurements, which may help explain the matter–antimatter asymmetry in the universe. The second is to observe proton decay; with a half-life constrained to be on the order of at least 10³³ years, observation of this decay would satisfy a key requirement for the grand unification of the fundamental forces. And the third is to measure the electron neutrino flux from core-collapse supernovae, which would provide new information about the early stages of core collapse and could even indicate the formation of a black hole.

Hopefully this blog has given you some insight into the elusive nature of the neutrino, as well as an appreciation of the extreme lengths physicists have to go to in order to detect these ghost-like particles. You should also now know that understanding neutrinos is key to understanding the universe at the most fundamental level, so the next time you hear of a groundbreaking theory surfacing, don’t be surprised to learn that it came from a neutrino detection!



[1] Ryden, B., 2017. Introduction to cosmology. Cambridge University Press.

[2] IceCube. 2021. IceCube and Neutrinos. [online] Available at: <> [Accessed April 2021].

[3] 2021. Experimental Technique | Super-Kamiokande Official Website. [online] Available at: <> [Accessed April 2021].

[4] Bartels, M., 2018. Here’s Why IceCube’s Neutrino Discovery Is a Big Deal. [online] Available at: <> [Accessed April 2021].

[5] Aguilar-Arevalo, A.A., Brown, B.C., Bugel, L., Cheng, G., Conrad, J.M., Cooper, R.L., Dharmapalan, R., Diaz, A., Djurcic, Z., Finley, D.A. and Ford, R., 2018. Significant excess of electronlike events in the MiniBooNE short-baseline neutrino experiment. Physical review letters, 121(22), p.221801.

[6] Aguilar-Arevalo, A.A., Brown, B.C., Conrad, J.M., Dharmapalan, R., Diaz, A., Djurcic, Z., Finley, D.A., Ford, R., Garvey, G.T., Gollapinni, S. and Hourlier, A., 2021. Updated MiniBooNE neutrino oscillation results with increased data and new background studies. Physical Review D, 103(5), p.052002.

[7] Abi, B., Acciarri, R., Acero, M.A., Adamov, G., Adams, D., Adinolfi, M., Ahmad, Z., Ahmed, J., Alion, T., Monsalve, S.A. and Alt, C., 2020. Deep Underground Neutrino Experiment (DUNE), far detector technical design report, Volume II DUNE physics.

Can Electrons Cure Cancer?


What are Electrons?

Electrons are the tiny particles in atoms which orbit around the nucleus. Electrons can, however, also be free, meaning they are not attached to any atom. These subatomic particles are negatively charged and fundamental, meaning they cannot be broken down into anything smaller. In fact, electrons are so small that if the proton in a hydrogen atom – that is, the nucleus – were the size of a basketball, the electron would be the size of a golf ball. Moreover, it would be orbiting the basketball at a distance of 8 km! It might be strange to think that something so tiny, which surrounds us everywhere without being visible to us, could help us fight cancer. It is, after all, many times smaller than a cell. However, thanks to the creativity of scientists, the idea might not be such an alien one after all.
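The basketball analogy can be checked with a quick calculation (the proton radius, basketball radius and electron orbital distance used here are rough assumed values):

```python
# Rough check of the basketball analogy. Assumed figures: proton charge
# radius ~0.84 fm, basketball radius ~0.12 m, Bohr radius ~5.29e-11 m.
proton_radius = 0.84e-15      # m
basketball_radius = 0.12      # m
bohr_radius = 5.29e-11        # m, most probable electron distance in hydrogen

scale = basketball_radius / proton_radius
orbit = bohr_radius * scale   # the scaled-up electron orbit, in metres
print(f"scale factor  : {scale:.1e}")
print(f"electron orbit: {orbit/1000:.1f} km")
```

The scaled orbit comes out at roughly 8 km, consistent with the figure quoted above.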



Why this Strange Idea?

Cancer is a global burden: about 30–40% of people will develop cancer during their lifetimes. In 2020 alone, an estimated 19.3 million new cancer cases occurred, with 10.0 million cancer deaths. By 2040 the global cancer burden is expected to have risen 47% from 2020, to 28.4 million cases that year. It is therefore not surprising that the race to improve the treatment of cancer – and, where possible, to find a cure – is more pressing than ever. The battle against cancer engages people and scientists from every field, including physicists, whose approaches include radiotherapy, hadron therapy and proton therapy. Here, we will briefly present a relatively novel treatment strategy physicists are developing: the use of electrons. This approach itself divides into methods using high-energy and low-energy electrons, and both types will be outlined here.


What is External Beam Radiation Therapy? 

External beam radiation therapy generally involves focusing a source of ionising radiation at a malignant part of the body. The goal is to impart maximum damage to the cancer while sparing the surrounding healthy tissue. Treatments involving external beam radiation typically use photons, protons or electrons. External beam electron radiation therapy uses beams of the free electrons mentioned earlier. Low-energy electron beams are used specifically for superficial tumours, since they cannot travel deep into the skin; cancers whose treatment may utilise electron beam therapy include skin, lip and neck cancers. This differs from conventional photon beam therapy, because photon radiation can penetrate deep into the body while sparing surface tissue, whereas low-energy electron beams cannot. Electron beam therapy damages tumour cells by causing DNA double-strand breaks or cell membrane damage. High-energy electron beams can penetrate deeper, but they become less accurate in targeting their treatment area when doing so; therefore, mainly low-energy electrons have been used in medical treatment so far.


Low Energy Electrons

Low-energy electrons appear in the majority of the physical and chemical phenomena underlying radiation, playing a central role in determining the effects of ionising radiation. Their potential use in the battle against cancer is therefore evidently worth investigating, seeing as one of the main forms of treatment to the present day involves radiation therapy (in addition to chemotherapy, immunotherapy and surgery). Looking at the production of low-energy electrons in ionising radiation events is essential to understanding their potential. Although initially studied in water, the research carried out thus far on low-energy electrons in liquid is nonetheless seen as relevant, because tumour-bearing tissues often have a significantly higher water content than the normal tissues from which they are derived.

How Are Electrons with Low Energy Produced?
Ionising radiation is radiation which carries enough energy to knock an electron out of an atom, in a sense “freeing” it. Ionising radiation passing through molecular media transfers energy, through discrete collisions, to the molecular electrons present. This leads either to an excited state of the molecule or to complete ionisation, meaning the liberation of the electron. This transfer of energy from the radiation particle to a molecular electron is a stochastic event, which can be described using collision cross-sections, with the radiation track structure modelled using Monte Carlo methods. Put simply, scientists have found a way to mathematically model the ionisation process leading to the creation of secondary, or “daughter”, electrons. These are the low-energy electrons which are potentially useful in treating cancer tumours. The energy of the primary particle is reduced in each collision, and any daughter electron can be followed between collisions thanks to the modelling.
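As a loose illustration of the Monte Carlo idea (not one of the real track-structure codes, and with made-up sampling distributions), one can follow a primary electron through random discrete collisions until every daughter electron falls below the 25 eV sub-excitation threshold:

```python
import random

SUB_EXCITATION = 25.0     # eV, threshold below which electrons are "thermalised"
IONISATION_COST = 15.0    # eV, illustrative mean energy cost per ionising collision

def degrade(primary_ev, rng):
    """Toy Monte Carlo: follow one primary electron through discrete
    collisions, spawning daughter electrons, until everything is below
    the sub-excitation threshold. Returns the daughters' final energies.
    The sampling distributions here are illustrative, not physical."""
    daughters = []
    stack = [primary_ev]          # electrons still energetic enough to collide
    while stack:
        e = stack.pop()
        while e > SUB_EXCITATION:
            # energy transferred in this collision (illustrative distribution)
            transfer = min(rng.uniform(IONISATION_COST, 3 * IONISATION_COST), e)
            e -= transfer
            # part of the transfer liberates a low-energy daughter electron
            daughter = max(transfer - IONISATION_COST, 0.0)
            if daughter > SUB_EXCITATION:
                stack.append(daughter)   # energetic enough to ionise again
            else:
                daughters.append(daughter)
        daughters.append(e)
    return daughters

rng = random.Random(42)
energies = degrade(1_000_000.0, rng)   # 1 MeV primary, as in the figure below
print(f"daughter electrons produced: {len(energies)}")
print(f"mean final energy          : {sum(energies)/len(energies):.1f} eV")
```

A single 1 MeV primary spawns tens of thousands of sub-excitation daughters in this sketch, which is the qualitative picture the real simulations quantify.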

Amongst other things, the models and simulations developed by scientists can determine the energy distribution of daughter electrons once attenuated to sub-excitation level, defined as less than 25 eV (“eV” stands for “electron volt”, a unit of energy). It is found that the ionisation of liquid water occurs at approximately 6.6 eV, yielding a most probable secondary electron energy of less than 15 eV.

After liberation, these secondary electrons undergo their own attenuation: they transfer energy to molecular electrons, leading to further ionisation and excitation, until an energy of approximately 25 eV or below is reached. The figure below shows the energies of the sub-excitation electrons produced by a primary electron with an energy of 1,000,000 eV (also written 1 MeV, or one mega-electron-volt). A sizeable fraction of the resulting low-energy electrons (27%) are produced with an energy between 0 eV and 1 eV; the remainder have energies in the range 1 eV to 25 eV, with an average of around 9 eV. Hence approximately 9 eV is the typical secondary electron energy resulting from ionising radiation in liquid water.


Low-energy electrons can also be produced via the Auger effect. This occurs when a vacancy in the K shell is filled by an electron from a higher shell with lower binding energy. The energy difference of this transition is either emitted as a characteristic X-ray or transferred to another electron, which is then ejected, leaving two electron vacancies in a shell; this in turn induces further electron ejections.



The number of low-energy “daughter” electrons at each energy, produced in liquid water by 1,000,000 eV primary electrons.


Why this is Only Possible in Recent Times


In the early and mid 20th century, physicists had studied electrons in such a way that their movements in an electric field could be predicted. However, a lack of computing power made it difficult to apply this research in a medical setting. Since 1975, there have been groundbreaking advances in the area of electron transport, such as the application of the Monte Carlo method: an approach which allows a computer to accurately predict the motions of electrons during electron beam therapy. These advances have allowed us to employ electrons in medical treatments.


Intra-operative Electron Therapy

People first had the idea of using electron beams to treat cancer in Japan in the 1960s. Before that, doctors could only use X-ray beams, which were comparatively weak and imprecise. The technique has since really taken off, because when electrons are used to remove a cancer, it very often stays gone. ‘Intra-operative’ means the process takes place during an operation: once the surgeons have opened up the patient, they apply the electron beam directly to the cancer, killing the cancer cells, and the physician gets to see exactly where the problem area is and how to fix it. Using the beam also makes the surgery easier for the doctor. The treatment is popular with patients, too, because it is very good at harming only the cancerous cells while leaving healthy tissue safe and sound, and doctors regularly praise its precision. Doctors and scientists agree that the intra-operative method is one of the best ways of using electrons to treat cancer, and its high success rate means that patients can go back to living a normal life without having to worry so much about their cancer returning.


One of the leading causes of death in the United States is pancreatic cancer. A study of medical records from Massachusetts General Hospital from 1978 to 2010 found that when doctors manage to catch cancer early on, intra-operative radiotherapy can eliminate the cancer quite quickly, without too much invasive surgery.


Similarly, scientists in Italy have found that, over the course of 7 years, intra-operative radiotherapy helped 99 out of 100 people beat their breast cancer for good. Such strong results have meant that, in the past 10 years, many more doctors have started using this method early on, to catch cancer fast and help people return to their normal lives.


Two of the most difficult cancers to treat are cancers in the head and the neck. Head and neck cancers often also have a very high rate of recurrence. However, using the electron beam on this kind of cancer during surgery really helps the patient heal much quicker than they normally would, because the electrons are really good at stopping any remaining cancer cells from growing back.



Using Electrons to Cure Cancer in the Real World

This all sounds great, but is any of this science actually used in practice? Surprisingly, yes! Admittedly, low-energy electron radiotherapy is not the most popular form of cancer treatment in use, but it has been around for quite some time. Low-energy electron beams have featured in radiotherapy wards since as early as the 1940s, and really came to fruition in the 1970s when linear accelerators became more commercially available. Today, low-energy electron beams are used in some hospitals around the world, and they are also the subject of many clinical trials which aim to improve the technology and make it more patient-friendly. To prove the point, the University of Texas MD Anderson Cancer Center (MDACC) estimates that 15% of its cancer patients receive low-energy electron radiotherapy as part of their treatment. At the MDACC, low-energy electrons are used to treat cancer sites near the surface of the skin, such as the scalp, breast and tongue. The technology is most useful in a procedure known as total skin electron beam therapy, in which the electron beam treats the entire surface of the skin; this is currently in use in NHS hospitals.


As with any medical technology, there are ongoing studies and trials to improve its effectiveness, and low-energy electron beam radiotherapy is no different. A clinical trial currently taking place at University Hospital Heidelberg explores the effects of intra-operative electron therapy on low-risk early breast cancer patients. One of the problems with electron radiotherapy – and indeed all radiotherapy – is that after treatment, up to 80% of patients are left with symptoms of fatigue. This trial applies electron therapy during the tumour-removal operation, and only to small parts of the breast, in the hope that the procedure will remain effective while leaving the patient less fatigued. As of May 2021, 48 patients have been included in the trial, and the estimated primary completion date is September 2021.



Using Auger Electrons in Clinical and Preclinical Trials

As discussed above, it is also possible to produce electron beams via the Auger effect. These are very low-energy electrons, and could be particularly effective for the treatment of cancer cells: they can damage the DNA of cancer cells through “water radiolysis”, the decomposition of water molecules due to radiation, and they can also damage the cancer cell’s membrane. There are many interesting clinical and preclinical trials involving these electrons, a few of which are described below.

The most famous preclinical trial using Auger electrons was conducted by Kassis and Adelstein. In this trial, Auger-electron emitters were incorporated directly into the DNA of the cancer cell. This was found to cause DNA double-strand breaks, leading to the conclusion that Auger electrons are very cytotoxic (harmful to cells). This is an encouraging result, as it means that if Auger electrons can be targeted at the correct cells, they are likely to destroy them. Another preclinical trial, conducted by Chan et al., investigated how Chinese hamster cells reacted when exposed to Auger electrons; the electrons proved extremely cytotoxic and most of the hamster cells were destroyed. These early trials of DNA-targeted Auger electron radiotherapy in mammalian cells show that Auger electrons are highly cytotoxic when emitted close to DNA – encouraging results for the future of this form of radiotherapy!

Compared with preclinical work, there are relatively few clinical trials involving the use of Auger electrons, although clinical studies were performed over 20 years ago. For example, a trial by Macapinlac et al. investigated dose size and safety in four patients with colorectal cancer using extremely low doses. No tumour responses were detected (none were expected, given the low dose), but, more importantly, no side effects or toxicity were detected either. Another study, by Krenning et al., involved 30 patients, each administered 14 doses of 6-7 GBq. The unit GBq, the gigabecquerel, is commonly used to describe the strength of a dose. Among the 21 patients who completed the trial and received a cumulative activity greater than 21 GBq, 6 showed a reduction in tumour size, and all patients reported no noticeable side effects.


Future Changes in the Practice of Electron Therapy Resulting from Challenges to its Utilisation & from Potential Future Technology

As mentioned above, electron therapy is unfortunately not the most popular form of cancer treatment available. Sufficient technology is not in place to take advantage of electron therapy, since manufacturers see little demand for it. Another possible reason for its lack of popularity is that radiographers and oncologists are not extensively educated in the topic. Additionally, a treatment called tomotherapy has similar capabilities to electron beam therapy, since it uses intensity modulated radiation therapy (IMRT), which allows normal tissue to be avoided. This is achieved by taking a 3D image of the tumour prior to irradiation and then delivering the highest dose of radiation to the target areas while causing minimal damage to healthy nearby tissue. Although electron beam therapy is in use in some hospitals, its use is likely to become more widespread only after improvements to dose homogeneity in tissue or the creation of a more efficient beam delivery system.


While there are currently no clinical trials taking place using Auger electrons as a form of treatment, trials from the past give reason to be optimistic about the future deployment of this technology in the field. Further preclinical testing, combined with better strategies for delivering Auger electrons to the nucleus, could see an uptake in this form of radiotherapy.


High Energy Electrons

At the other end of the spectrum, technologies from high-energy physics have historically contributed to great advances in the field of medicine. Treatment of tumours is complicated by the need to limit, or preferably avoid, doses to the surrounding normal tissue. Using high-energy electrons to improve the differentiation between malignant and normal tissue is a technique scientists only began to explore in 2014, and it is still under development. Electrons with an energy of around 100 MeV penetrate many tens of centimetres of tissue and can thus reach deep-seated tumours.


How Are Electrons with High Energy Produced?
The accelerator technology developed for the CLIC electron-positron collider is useful for creating such energetic electrons. CLIC stands for the Compact Linear Collider, a machine being developed at CERN, the research centre in Geneva. It is a large piece of machinery designed to collide electrons with their antiparticles, positrons, at energies ranging from a few hundred GeV (gigaelectronvolts, 10⁹ eV) to a few TeV (teraelectronvolts, 10¹² eV). CLIC, whose mechanism is illustrated in the figure below, provides high levels of electron-beam polarisation across a range of high energies. In fact, the CLIC project is designed to reach collision energies as high as 3 TeV!
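To get a feel for these units, converting electronvolts to joules is a one-line calculation; the conversion factor below is the standard SI value.

```python
# 1 eV = 1.602176634e-19 J (exact under the 2019 SI definition).
EV_TO_J = 1.602176634e-19

def ev_to_joules(energy_ev):
    """Convert a particle energy from electronvolts to joules."""
    return energy_ev * EV_TO_J

giga = 1e9
tera = 1e12

print(ev_to_joules(100 * giga))  # a 100 GeV particle carries ~1.6e-8 J
print(ev_to_joules(3 * tera))    # CLIC's top design energy, ~4.8e-7 J
```

Tiny numbers in everyday terms, but enormous when concentrated in a single electron.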


Simplified schematic of the “drive beam” and “main beam” particles travelling in the
CLIC accelerator complex at CERN in Geneva.


In a linear accelerator such as CLIC the full energy from collisions must be delivered to the particles in a single passage through the accelerator. Hence, the accelerator must be equipped with several acceleration structures distributed along its length. CLIC is built to be operated in three stages with increasing collision energy and luminosity (brightness). Two-beam acceleration technology achieves a high acceleration of the particles per metre. It is radio-frequency pulses which accelerate the main beam, and these pulses are generated by decelerating a second, high-intensity electron drive beam in dedicated structures. Synchronisation of the arrival time of the two beam bunches is very important to generate electrons with the desired energy.


A scientist working on CLIC at CERN, with the hope of generating high-energy
electrons that are effective in the treatment of cancer.


Using High Energy Electrons to Cure Cancer in the Real World

While low energy electron radiotherapy is useful for treating sites within a few centimetres of the surface, high energy electrons have potential for treating deeper tumours. The difficulty is that the deep tumour must be treated without irradiating the surrounding healthy tissue, which is no easy task when dealing with electrons whose energy exceeds 100 MeV.


That being said, a paper published by Kokurewicz et al. in 2019 details how the increased inertia of high energy electrons, due to relativistic effects, reduces scattering and enables their use in treating deep tumours. The paper focuses on Monte Carlo simulations of high energy electron beams in water and concludes that the dose can be concentrated into a small volumetric element, while the surrounding tissue receives a dose spread over a much larger volume. This is a promising result for high energy electron radiotherapy, as it means the technology could be used to treat tumours without irradiating the surrounding healthy tissue (Kokurewicz et al., 2019). In addition, studies are being conducted to determine the dosimetry properties of high energy electron beams using Monte Carlo methods.
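The simulations in the paper are full particle-transport Monte Carlo calculations; the snippet below is only a toy illustration of the underlying idea, with a made-up scaling of lateral scatter against beam energy, not the authors’ model.

```python
import math
import random

def dose_fraction_in_core(energy_mev, n_particles=100_000,
                          core_radius_cm=1.0, seed=0):
    """Toy model: each electron's lateral scatter is drawn from a
    Gaussian whose width shrinks as the beam energy grows (a crude,
    assumed stand-in for the reduced scattering of relativistic
    electrons). Returns the fraction of particles landing within
    core_radius_cm of the beam axis."""
    rng = random.Random(seed)
    sigma = 10.0 / energy_mev  # assumed scaling, for illustration only
    inside = 0
    for _ in range(n_particles):
        x = rng.gauss(0.0, sigma)
        y = rng.gauss(0.0, sigma)
        if math.hypot(x, y) <= core_radius_cm:
            inside += 1
    return inside / n_particles

# Higher beam energy -> more of the dose concentrated in the core.
print(dose_fraction_in_core(10))   # lower-energy beam, dose spread out
print(dose_fraction_in_core(250))  # very-high-energy beam, tightly focused
```

Even this crude sketch reproduces the qualitative conclusion: raising the beam energy concentrates the dose into a small central volume.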


The current extent of this technology is only in the form of research and optimisation with no preclinical or clinical trials presently underway. In contrast to low energy electron radiotherapy however, this is very much an emerging technology and given more research we could very well see preclinical and even clinical high energy electron treatment trials in the future.



Doctors and scientists are always looking for innovative new ways to treat cancer. Radiotherapy has long been established as a common way to treat cancer patients, and recently the use of electrons has gained a foothold in the medical community. With current methods, electrons cannot travel very far through body tissue, and the side effects have been severe enough to limit their use to tumours on the skin or near the surface of the body. However, the use of electron radiotherapy during surgery is currently being studied, which would enable physicians to apply electrons to cancerous cells with surgical precision. Electron radiation works by making small breaks in the DNA inside cells; these breaks stop cancer cells from growing and dividing and cause them to die. Nearby normal cells may also be affected by the radiation, but most recover and go back to working the way they should.


Certain cancers are more sensitive to radiation than others. In such cases radiation may be used by itself to shrink or eradicate the cancer, while in many other cases chemotherapy or anti-cancer drugs are also required. Electron radiotherapy has emerged as one of the most promising cancer treatment methods, and current and future research into the practice is a prominent topic within medical physics. Potentially less invasive treatments using precise applications of electron radiation would provide better outcomes for patients, and could offer effective treatments against cancer that avoid the negative physiological effects of chemotherapy. Further discoveries and research in both physics and medicine are highly anticipated, as breakthrough cancer treatments could help humanity turn the tide in the battle against cancer.

Authors: Pratheek Kishore, Octavian Stoicescu, Nicollas S.M. Borges, Abhijith Jyothis P.

Nuclear energy has long been suggested as a possible solution to climate change, but how effective is it? What benefits could nuclear energy bring to the current climate predicament? The purpose of this blog is to explore nuclear energy as a viable alternative source of cleaner energy and how it could help alleviate climate change. We will consider the current state of the climate, the theory behind nuclear energy, a comparison of different energy sources and their risks, and finally the future of nuclear energy (nuclear fusion?).

We often hear about global warming as a current issue; what we hear less often is that it has been affecting our world for quite some time. The average combined land and ocean surface temperature rose by 0.78 °C between 1850 and 2012. Glaciers around the world (excluding those on the periphery of ice sheets) lost an average of 226 Gt of mass per year between 1971 and 2009. The Global Mean Sea Level (GMSL) rose by 0.19 m between 1901 and 2010, as estimated from a linear trend, and permafrost temperatures have increased in most regions worldwide since observations began in the early 1980s. Humans have clearly been affecting the planet’s climate for a very long time, but what does this all mean? It is easy to look at statistics and forget that these numbers have real-world implications. For example, in some terrestrial systems “spring advancement” can clearly be observed: the premature occurrence of spring-related events such as the end of hibernation, breeding, flowering, and migration. Ocean acidification due to climate change decreases calcification, which favours the dissolution of calcium carbonate and bioerosion, ultimately yielding coral reefs that are poorly cemented. The distribution and abundance of rocky-shore animals and algae have been observed to change with the climate, along with a significant decline in mussel-bed biodiversity along the Californian coast. The body sizes of several marine species, such as the Atlantic cod, have been clearly and negatively linked to temperature changes. The effects of climate change can be observed all around us, from the forests to the oceans and from the desert to the tundra, but perhaps the most relatable and personal effect is its impact on humans. Climate change can affect many ecosystems at once, which can lead to food security issues globally.
Climate extremes may also increase the spread, and ultimately the likelihood, of contracting certain diseases and infections. Studies of heat waves show the effect that increased heat has on health, leading to increases in cardiovascular and respiratory disease and generally increased mortality. During 2004-2018, in the United States alone, an average of 702 deaths annually were heat related (287 with heat as a contributing cause and 415 with heat as an underlying cause). So, as you can see, climate change is an issue with clear and direct effects on our ecosystems, our environment, and ourselves. If it is not addressed with the utmost care and research, it may lead to dire consequences for us and our future on this planet. This is why there must be a global effort to deploy tried and tested solutions to counter the problem. Currently our energy is supplied primarily by fossil fuels, which are not only limited in supply and depleting quickly, but are also major contributors to climate change through the emission of greenhouse gases. Staying on this trajectory is not a viable option for our future. But what could the solution be? It turns out we have had one answer since the mid-20th century: nuclear energy.

The atom, with masses as small as 1.6735575 × 10⁻²⁷ kg and length scales of around 10⁻¹⁰ m, is our current best bet for tackling climate change? As small as atoms are, one must appreciate the immense amount of energy they hold in the form of the binding energy of the nucleus, i.e. the energy required to hold the nucleus together. Einstein’s famous formula E = mc² is what is in fact used to calculate the energy released: the difference between the masses of the reactants and products of a nuclear reaction, called the mass defect, is multiplied by c², which, as we know, is an enormous number. This explains the vast amounts of energy released in such reactions. This energy can be extracted methodically in nuclear reactors using fissile material as fuel. The energy released by an average-sized nuclear reactor in a day is of the order of 10¹³ J! A measure of the efficiency of such power plants is the so-called capacity factor, defined as the ratio of the actual energy output over a given period to the maximum energy that could theoretically be produced in the same period. Nuclear energy has a capacity factor of around 93%, roughly twice that of coal; no other source currently produces this much energy at such rates of efficiency. Moreover, the carbon footprint associated with actually running a nuclear power plant is almost nil. This, coupled with the high energy density of the fuel, makes nuclear power a leading contender in providing a potential solution to the current climate crisis. Current estimates of nuclear fuel reserves suggest there is enough to power all the world’s reactors for another 200 years! To get a sense of this energy density: a golf-ball-sized volume of ideally enriched uranium fuel is more than sufficient to meet an average person’s energy needs for their entire lifetime.
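The back-of-the-envelope numbers above follow directly from E = mc². The sketch below uses an approximate, textbook-order per-fission mass defect (about 3.2 × 10⁻²⁸ kg, roughly 0.1% of the U-235 atom’s mass); the exact value depends on the fission products.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def fission_energy_joules(mass_defect_kg):
    """E = m c^2 for the mass converted to energy in a nuclear reaction."""
    return mass_defect_kg * C**2

# A single U-235 fission converts roughly 3.2e-28 kg of mass into energy,
# which works out to about 180-200 MeV per fission.
print(fission_energy_joules(3.2e-28))  # ~2.9e-11 J per fission

# Scale up: converting just one gram of mass defect releases ~9e13 J,
# matching the order of magnitude quoted above for a reactor's daily output.
print(fission_energy_joules(1e-3))
```

The point of the exercise: c² ≈ 9 × 10¹⁶ m²/s², so even vanishingly small mass defects translate into industrially significant amounts of energy.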
Despite its unpopularity due to high initial setup costs, nuclear energy is currently responsible for around 10% of global electricity generation, even though it is used as a dominant source in only around 30 countries.

There are multiple categories of fuel, such as oxide fuels, metal fuels, non-oxide ceramic fuels, and liquid fuels, but the dominant fuel used in most nuclear reactors is uranium-235 oxide. To achieve the efficiencies quoted above, mined uranium must be enriched to around 3-5% U-235 to sustain the neutron chain reaction; the exact percentage varies with the reactor type and other requirements. As a result, less than 1% of mined uranium can actually be used as fuel. The principal enrichment methods in wide use are gaseous diffusion, gas centrifuges, and laser separation.
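The 3-5% enrichment figure implies a simple mass balance between feed, product, and tails. The sketch below uses the standard feed/product relation; the natural-uranium assay is the usual 0.711%, and the 0.25% tails assay is an assumed, typical value.

```python
def feed_required(product_kg, x_product, x_feed=0.00711, x_tails=0.0025):
    """Uranium mass balance for enrichment: how many kg of natural
    uranium feed are needed to yield `product_kg` of fuel enriched to
    a U-235 fraction of x_product.

        F = P * (x_p - x_t) / (x_f - x_t)
    """
    return product_kg * (x_product - x_tails) / (x_feed - x_tails)

# To make 1 kg of 4%-enriched reactor fuel from natural uranium:
print(feed_required(1.0, 0.04))  # roughly 8 kg of natural uranium feed
```

This is why only a small fraction of mined uranium ends up in the reactor: most of the mass leaves the enrichment plant as depleted tails.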

As with everything, there is no light without the dark, and so it is with uranium fuel. Extracting and mining uranium is an energy-intensive process with a large carbon footprint! There are also safety risks associated with its transportation, its treatment, and the back-end management of spent nuclear fuel. However, strategies such as twice-through fuel cycles and advanced fuel cycles aim to mitigate or avoid such issues.

A further possibility is to use alternative fuels such as thorium, which offers a better alternative! Unlike uranium, almost all mined thorium can be used as fuel, and it produces far less nuclear waste and is much safer. But almost every current reactor runs only on uranium; there is always a catch, isn’t there? The problem is that while thorium is extremely fertile, meaning that in theory all of it can be transmuted into fissile material (the actual fuel that works in a nuclear reactor), it is not itself fissile and so cannot be used directly as nuclear fuel. However, a great deal of research is under way to develop efficient thorium fuel cycles for use in reactors, which certainly seems like a step in the right direction.

Another disadvantage of nuclear energy is the high upfront investment it requires. However, one can say with certainty that we are more capable of handling nuclear energy today than we have ever been, and it is a worthwhile investment: its advantages can surely be weighed over its disadvantages. Further research into reactor designs, alternative fuel cycles, and safety protocols could immensely boost the role nuclear energy plays in pulling back the adverse effects of climate change. To quote the late Indian physicist Dr Homi Bhabha: “No energy is more expensive than no energy.”

One way of measuring how “effective” a process (in this case, a type of energy production) is, is a life-cycle assessment (LCA), which accounts for fuel extraction, processing, distribution, and waste disposal. An LCA can therefore measure the overall impact of producing nuclear energy. An LCA from a 2017 study predicts life-cycle emissions from fossil fuel plants with carbon capture and sequestration of between 78 and 110 g CO₂-eq/kWh by 2050, while the same study gives between 3.5 and 11.5 g CO₂-eq/kWh for a combination of nuclear, wind, and solar. Another study on nuclear power estimates that, as of 2013, nuclear power had already prevented 64 gigatonnes of CO₂-equivalent greenhouse gas emissions and 1.84 million air-pollution-related deaths, and could prevent another 80-240 gigatonnes of CO₂ and 420,000 to 7.04 million deaths by 2050.
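To see what those per-kWh figures mean at grid scale, here is a small illustrative calculation; the 1 TWh generation figure is an assumption chosen for illustration, not a value from the studies.

```python
def emissions_tonnes(kwh, intensity_g_per_kwh):
    """Total emissions in tonnes of CO2-eq:
    grams per kWh times kWh generated, divided by 1e6 g/tonne."""
    return kwh * intensity_g_per_kwh / 1e6

ONE_TWH = 1e9  # 1 TWh expressed in kWh

# Upper bounds of the 2050 life-cycle ranges quoted above:
fossil_ccs = emissions_tonnes(ONE_TWH, 110)   # fossil fuels with CCS
clean_mix  = emissions_tonnes(ONE_TWH, 11.5)  # nuclear/wind/solar mix

print(fossil_ccs)  # 110,000 tonnes CO2-eq per TWh
print(clean_mix)   # 11,500 tonnes CO2-eq per TWh
```

Per terawatt-hour, the difference between the two upper bounds is roughly a factor of ten, which is the heart of the LCA argument for the low-carbon mix.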

Fusion energy has been in development for decades, and it is a common joke that fusion is always thirty years away. The push towards fusion is driven by its minimal environmental impact: unlike fission reactors, fusion produces no long-lived radioactive waste, and even in the event of a meltdown the radioactive material involved (tritium) is present only in small amounts in the reactor. The half-life of tritium is around 12.3 years, so it would not render the facility radioactive for centuries. Moreover, if the reactor vessel were breached, the plasma would simply be extinguished, making the reactor intrinsically safe. The only byproduct is helium, which is not a greenhouse gas and, being the second lightest element, would rise through the atmosphere and drift off into space. Fusion occurs when two light nuclei are smashed together to create a heavier nucleus; the heavier nucleus has slightly less mass than the sum of the two originals, and the missing mass is converted to energy carried away by the reaction products. Several types of fusion reactor are currently being researched and developed, including magnetic confinement fusion (MCF), inertial confinement fusion (ICF), the stellarator, and magnetized target fusion (MTF). MCF, as the name suggests, confines a plasma within a volume and heats it until fusion becomes possible. However, the temperature required for the average particle to have enough energy to overcome the Coulomb barrier is around 170 million kelvin, roughly 11 times the temperature of the core of the Sun! So you can imagine why physicists are having such a hard time developing this technology.
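To put that 170-million-kelvin figure into the units plasma physicists actually use, we can convert the mean thermal kinetic energy per particle into kiloelectronvolts; the constants below are standard SI values.

```python
K_B = 1.380649e-23      # Boltzmann constant, J/K (exact SI value)
EV  = 1.602176634e-19   # 1 eV in joules (exact SI value)

T = 170e6  # plasma temperature in kelvin

# Mean kinetic energy of a particle in a thermal gas: (3/2) k_B T
mean_ke_joules = 1.5 * K_B * T
mean_ke_kev = mean_ke_joules / EV / 1e3

print(mean_ke_kev)  # roughly 22 keV per particle
```

A few tens of keV per particle is the regime where deuterium-tritium fusion rates become practical, which is exactly why reactor designs target temperatures of this order.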
However, MCF currently holds the most hope for a fusion future: an experimental fusion reactor, heavily invested in by India, the EU, the USA, Korea, Russia, and Japan, is designed to show the scientific feasibility of fusion as an energy source. It is set to have its first plasma test at the end of 2025 but will only be fully operational by 2035. It is designed to achieve Q = 10, meaning it will produce ten times the amount of energy it ingests. If the reactor succeeds in achieving its Q value and demonstrating the feasibility of fusion, a new fusion reactor called DEMO will be built to produce electricity, scheduled for completion by the 2050s. Even though fusion shows great promise, it is a technology of the future and should not be relied upon to deal with the climate crisis, because for the moment reducing carbon emissions using fusion is just a fantasy. We cannot rely on the possible technology of the future to solve today’s problems.



by Sean O’Neill, Darragh Brennan and Jordan Kenny.




Example of an aerodynamic heat map.



Basic Definition:

Aerodynamics is a subsection of fluid dynamics that focuses on the interaction between moving objects and the air around them. Air is a gas, which means it is a continually deformable medium: it will change its shape in order to completely fill the container it occupies. When a shear force is applied to a gas, it deforms until the net force is reduced to zero.

The maths required to simulate this model is incredibly complex but there are some fundamental physical laws that can be used to describe the dynamics of air.

  1. Conservation of mass: This law states that matter cannot be created or destroyed. For our purposes, this means that the mass of the air encountered by a body must be the same as the mass left behind once the body has moved through it.
  2. Newton’s second law: This law states that when a body at rest or in constant motion experiences a force, the body gains an acceleration proportional to, and in the direction of, that force, i.e. F = ma.
  3. The First law of Thermodynamics: This law states that the change in internal energy of a system is equal to the difference between the heat absorbed by the system and the work done by the system. This is essentially the law of conservation of energy expressed in thermodynamic quantities: ΔU = Q – W.
  4. The Second law of Thermodynamics: This law states that changes of entropy in the universe will never be negative. This essentially means that all processes will increase the disorder of something in the universe. In this case, a car moving through the air pushes the air around it, therefore increasing the entropy of the universe. The law can be expressed as ΔS = ΔQ/T where T is the temperature.

Another helpful quantity for analysing the behaviour of fluids like air is their density. Density can substitute for mass in some equations, as it is an intrinsic property of the fluid that does not change with the amount of fluid present. We can express the pressure of a fluid in terms of its density (ρ) as P = ρRT, where R here is the specific gas constant for the fluid (the universal gas constant divided by the fluid’s molar mass).
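For example, rearranging P = ρRT gives the density of air directly. The sketch below uses the standard sea-level atmosphere values (assumed here as illustrative inputs, not taken from the text).

```python
R_SPECIFIC_AIR = 287.05  # specific gas constant for dry air, J/(kg*K)

def air_density(pressure_pa, temp_k):
    """Rearranged ideal-gas law: rho = P / (R T)."""
    return pressure_pa / (R_SPECIFIC_AIR * temp_k)

# Sea-level standard atmosphere: 101325 Pa and 15 degrees C (288.15 K)
print(air_density(101_325, 288.15))  # ~1.225 kg/m^3
```

That 1.225 kg/m³ figure is the baseline density used throughout automotive and aeronautical aerodynamics; at altitude or on a hot day the density, and hence drag and downforce, drop noticeably.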

2018 Spec F1 Car

Importance of Aerodynamics:

Aerodynamics plays an essential role in the design of all modes of transport today. One need only compare the design of cars from 100 years ago with those of today to see the influence aerodynamics has had. Sharp edges have given way to smooth curves to reduce drag, and windshields slant back towards the driver rather than sitting almost perpendicular to the bonnet. High-speed trains have a distinct cone shape at the front, in contrast to the flat cylindrical noses of the 19th and 20th centuries. Aeroplanes, as the name would imply, rely almost entirely on aerodynamics to take off and maintain flight.

When focusing on cars, aerodynamics impact their performance in 3 key ways.

  1. Fuel economy: the more aerodynamically efficient a car is, the less work the engine must do to propel it through the air.
  2. Top speed: the lower the drag coefficient, the higher the top speed for cars with the same power output.
  3. Stability at speed: the aerodynamics of the body shape affect how the car will behave in crosswinds and at higher speeds. 
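All three points above revolve around the standard drag equation, F = ½ρC_dAv². The values below (drag coefficient, frontal area, speed) are assumed, typical-order figures for a family saloon, chosen purely for illustration.

```python
def drag_force(cd, frontal_area_m2, speed_ms, air_density=1.225):
    """Aerodynamic drag: F = 1/2 * rho * Cd * A * v^2 (newtons)."""
    return 0.5 * air_density * cd * frontal_area_m2 * speed_ms**2

# Assumed illustrative figures: Cd = 0.30, frontal area 2.2 m^2,
# travelling at 120 km/h (about 33.3 m/s).
print(drag_force(0.30, 2.2, 33.3))  # ~450 N of drag

# Doubling the speed quadruples the drag, since F scales with v^2:
print(drag_force(0.30, 2.2, 66.6))
```

The v² scaling is why drag dominates fuel economy at motorway speeds and why lowering C_d is the main lever on a car’s top speed.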

The aerodynamics of a car also plays an important role in many other aspects of automobile design such as engine cooling, airflow in the passenger compartments and using air to carry water droplets and dirt off side mirrors and headlights.

While aerodynamics does not wholly govern the design of consumer cars, as aesthetic choices need to be factored in as well, it is the main influence in the design of performance vehicles. This is very clearly demonstrated in the differences between Formula 1 cars and Land Speed Record vehicles.


The Aerodynamics of Formula 1:

Formula 1 cars are amongst the most advanced automobiles on the planet in terms of technology and aerodynamic development. It is these aerodynamic capabilities that see F1 cars speed through corners well in excess of 200 km/h; the 130R corner at the Suzuka race track in Japan, for example, regularly sees drivers take the corner at speeds of about 300 km/h. The car is able to do this because of the aerodynamic concept of downforce: much as an aeroplane generates lift in order to fly, an F1 car generates downforce that pushes it down against the ground, generating grip.

There are three main components of the car that allow this to happen.

  1. The front wing
  2. The rear wing
  3. The underbody of the car, namely the diffuser.

Both the front and rear wings work in a similar manner: they direct the incoming air around them, creating areas of low pressure on and around their undersides. This produces an almost suction-like force that pulls the car towards the ground. Together, the two wings generate up to 50% of the car’s overall downforce.

The remaining 50% or so of the downforce is generated by the underside of the car and the diffuser. Incoming air is redirected underneath the car by the front wing and other aerofoils on the sides of the car, such as the bargeboards. Much as with the wings, a large area of low pressure forms and sucks the car to the ground. This is, in essence, how all of the car’s downforce is generated. Under today’s regulations, F1 cars weigh upwards of approximately 700 kg, and at speeds of 150 km/h and above the downforce generated can exceed 2000 kg of force. This is what allows the cars to navigate race tracks while rarely dropping below 150 km/h at any point.

The high forces generated have raised the question within the scientific community of whether an F1 car could drive upside down. Since the forces generated greatly exceed the weight of the car, in theory it should be possible; however, due to safety concerns and the technological difficulty of operating an engine and its components upside down, such a test has never actually been carried out.
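The arithmetic behind the upside-down question is straightforward: compare the downforce with the car’s weight. A minimal sketch using the figures quoted above (~700 kg car mass, >2000 kg-force of downforce at speed):

```python
def can_drive_inverted(downforce_kgf, car_mass_kg):
    """The car could, in principle, stick to the ceiling if the
    aerodynamic downforce (in kg-force) exceeds its own weight."""
    return downforce_kgf > car_mass_kg

# Figures quoted above for a modern F1 car at 150+ km/h:
print(can_drive_inverted(2000, 700))  # True: downforce ~3x the weight

# A typical road car generates negligible downforce:
print(can_drive_inverted(0, 1500))    # False
```

With roughly three times its own weight in downforce, the physics clearly permits inverted driving; the obstacles are engineering ones (fuel, oil, and cooling systems designed to work the right way up).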

Aerodynamics of the Land Speed Record Vehicle (Bloodhound SSC/LSR):

Bloodhound is a vehicle created with the aim of breaking the current land speed record of 1,228 km/h. That record is held by ThrustSSC, the first land-based vehicle to break the sound barrier.

Initially, computational fluid dynamics (CFD) was used to calculate the pressure distribution that would occur across the surface of the car. These models were used to improve the design of the car before eventually, real world tests were used to ensure that the simulated model was accurate.

Since the vehicle aims to travel at supersonic speeds, there are several challenges that the engineers must overcome. Arguably the biggest challenge to overcome in terms of aerodynamics for Bloodhound is the inevitable shockwave it will experience when the vehicle reaches the speed of sound. The engineers responsible for the design of the car had to accurately predict the points at which the shockwaves will occur and the potential damage done by shock reflections. 



Katz, Joseph (2016). Automotive Aerodynamics. John Wiley & Sons.

Schuetz, Thomas (2016). Aerodynamics of Road Vehicles (5th Edition). SAE International.

Ravelli, Umberto & Savini, Marco (2018). Aerodynamic Simulation of a 2017 F1 Car with Open-Source CFD Code. J. Traffic Transp. Eng. 6, 155–163.

Larsson, T. (2009). 2009 Formula One Aerodynamics: BMW Sauber F1.09 – Fundamentally Different. 4th Eur. Automot. Simul. Conf., 1–7.

Evans, B. & Rose, C. (2014). Simulating the Aerodynamic Characteristics of the Land Speed Record Vehicle BLOODHOUND SSC. Proc. Inst. Mech. Eng. Part D: J. Automob. Eng. 228, 1127–1141.

It should come as no surprise that the demand for renewable energy is on the rise, along with the cost of fossil fuels, global temperatures, and sea levels. Sunlight is a form of renewable energy that is abundant and universally available, so there is growing interest in its use for energy applications. Nowadays everyone knows what a solar panel is; indeed, you may be reading this beneath one on the roof of your home or office. Photovoltaic (PV) solar energy offers an economically viable source of energy, paving the way to a sustainable, environmentally friendly, and decarbonised world. To fully utilise solar power, it is imperative to understand the operation of solar cells and the factors that affect their efficiency. At the heart of the solar cell lies a layer of semiconducting material, such as silicon, gallium arsenide (GaAs), or a polymer; this layer is undoubtedly the most important part of the cell. With a unique combination of the properties of conductors and insulators, these materials are especially adept at converting sunlight to electricity.

How do Solar Cells Work?

Solar cells are electronic devices, also known as PV cells, which are made of two types of semiconductors (an n-type and a p-type) and convert light into electricity in a process known as the photovoltaic effect. The n- and p-type layers are created by a process called ‘doping’ which changes the electrical and structural properties of the semiconductor through the intentional addition of impurities. Without getting into the specifics of doping, the n-type layer has an overall negative charge due to the addition of electrons and the p-type has an overall positive charge due to the removal of electrons and thus the formation of electron vacancies, referred to as holes.

When these two types of semiconductor are brought into contact, some of the electrons near the edge of the n-type material fill the holes in the p-type material across what is known as the depletion zone, until equilibrium is reached. The edge of the p-type side is then left with negatively charged ions, and the adjacent edge of the n-type side with positively charged ions. This charge separation causes the formation of an internal electric field.

Light incident on the depletion zone is absorbed as energy. The energy from the photons allows the electrons (which carry charge within the semiconductor) to essentially “break free” from their previously bound state and, driven by the electric field set up across the depletion zone, to form a flow of electrical current. An array of solar cells can convert solar energy into direct current (DC) electricity, or an inverter can convert the power to alternating current (AC).
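A quick way to see which photons can “break electrons free” is to compare their energy, E = hc/λ, with the semiconductor’s band gap; silicon’s gap of roughly 1.12 eV is a standard textbook value, and the constants below are exact SI values.

```python
H  = 6.62607015e-34   # Planck constant, J*s
C  = 299_792_458.0    # speed of light, m/s
EV = 1.602176634e-19  # 1 eV in joules

SI_BAND_GAP_EV = 1.12  # approximate band gap of silicon at room temperature

def photon_energy_ev(wavelength_nm):
    """Photon energy E = h c / lambda, converted to electronvolts."""
    return H * C / (wavelength_nm * 1e-9) / EV

# Only photons with energy above the band gap can promote an electron
# and contribute to the photocurrent.
for wavelength in (500, 1000, 1300):  # green, near-IR, and IR light (nm)
    energy = photon_energy_ev(wavelength)
    print(wavelength, round(energy, 2), energy > SI_BAND_GAP_EV)
```

Visible and near-infrared light clears silicon’s band gap, but longer-wavelength infrared does not; this cutoff is one of the fundamental limits on single-junction silicon cell efficiency.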

The diagram below shows the operation of the solar cell with a silicon semiconductor.

Operation of a Silicon solar cell

Types of Solar Cells

When it comes to materials, we must first understand what characteristics are desired for optimum performance. Some cells are designed to handle sunlight that reaches the Earth’s surface, while others are optimized for use in space. Solar cells can be made of only one single layer of light-absorbing material or use multiple physical configurations to take advantage of various absorption and charge separation mechanisms.

Some widely used semiconductors include monocrystalline silicon (mono c-Si), multicrystalline silicon (poly-Si), gallium arsenide multijunction (GaAs) and polymer solar cells (PSCs).

Monocrystalline silicon is the common choice for high-performance PV devices. Since the demands on structural imperfections are less stringent than in microelectronics applications, a lower-quality solar-grade silicon is often preferred. Mono c-Si is a photovoltaic, light-absorbing material used in the manufacture of solar cells. Its crystal lattice is continuous and unbroken to the edges of the solid, free of any grain boundaries. Mono c-Si can be prepared intrinsic, consisting only of exceedingly pure silicon, or doped, containing very small quantities of other elements added to change its semiconducting properties. This variety of silicon has been described as the most important technological material of the last few decades, in what has become known as “the silicon era”: its availability at an affordable cost has been essential for the development of the electronic devices on which the present-day electronics and information revolution is based. Mono c-Si differs from poly-Si, which consists of small crystals known as crystallites.

Poly-Si is a high-purity, polycrystalline form of silicon, used as a raw material by the solar photovoltaic and electronics industries. Poly-Si cells are the most common type of solar cell in the fast-growing PV market. However, the material quality of poly-Si is lower than that of mono c-Si due to the presence of grain boundaries. Grain boundaries introduce highly localised regions of recombination, because they add extra defect energy levels into the band gap; this reduces the carrier lifetime in the material. In addition, grain boundaries reduce solar cell performance by blocking carrier flow. Both materials are selected for solar cells because silicon is an abundant and durable element. Upon comparison, however, mono c-Si solar panels are generally regarded as the premium solar product. Their main advantage is higher efficiency: as the cell is composed of a single crystal, the electrons that generate the flow of electricity have more room to manoeuvre. As a result, mono c-Si panels are more efficient than their poly-Si counterparts, whose main selling point is their lower price.

Gallium arsenide is a semiconductor with a greater saturated electron velocity and electron mobility than silicon. It also has the useful characteristic of a direct band gap, which means it can emit light efficiently. GaAs is a crystalline compound of the elements gallium and arsenic. GaAs thin-film solar cells have reached nearly 30% efficiency in lab environments, but they are very expensive to make. Cost has been a major factor limiting the market for GaAs solar cells, with their main use being in spacecraft and satellites.

Polymer solar cells have long been unable to match the traditional solar cells mentioned above on both performance and stability. The discovery that polymers can behave as semiconductors earned Alan J. Heeger, Alan MacDiarmid and Hideki Shirakawa the Nobel Prize in Chemistry in 2000. Among the applications of polymers in solar cells, the emerging dye-sensitised solar cells, perovskite solar cells and organic solar cells have all been regarded as full of promise. This is due to the diverse properties of polymers, which allow them to be used to adjust device components and structures. In dye-sensitised solar cells, polymers can be used as flexible substrates; in perovskite solar cells, as additives to adjust the nucleation and crystallisation processes; and in organic solar cells, as donor layers or buffer layers.

Efficiency of Solar Cells

Solar cell efficiency is the fraction of the energy received from the sun that is converted into usable electrical energy. It is often the most important parameter of a solar cell and the most difficult to improve.
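
This definition can be sketched as output electrical power divided by incident solar power. The irradiance below is the standard test-condition value, while the panel area and output power are assumed round numbers for illustration.

```python
# Efficiency = electrical power out / solar power in.
irradiance = 1000.0   # W/m^2, standard test-condition sunlight
area = 1.6            # m^2, a typical panel size (assumed)
power_out = 320.0     # W, electrical output (assumed)

efficiency = power_out / (irradiance * area)
print(f"Efficiency: {efficiency:.0%}")  # → 20%
```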

Most solar cells utilise a single p-n junction, as described above, to convert solar energy into electrical energy. Conversion occurs when a photon of energy Eph is incident on the solar cell: if Eph is greater than the energy required to excite an electron, that is, the band gap Eg, an electron-hole pair is created. Energy losses arise from temperature increases caused by several different mechanisms:

  • If Eph is less than Eg, the photon does not have enough energy to ionise an electron, and Eph is simply converted to heat.
  • If Eph is somewhat greater than Eg, the electron-hole pair created will move off with kinetic energy KE = (Eph - Eg). This excess kinetic energy is dissipated via phonons into the solid as the carriers slow down, generating heat.
  • Electrons and holes can annihilate to produce a photon, in a process called recombination. This photon has a probability of being reabsorbed by the atoms in the solar cell, again giving rise to heat.
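
The first two cases above can be sketched as a simple classification of incident photons. The band gap Eg = 1.1 eV is the approximate value for crystalline silicon, and the photon energies in the loop are illustrative.

```python
Eg = 1.1  # band gap of crystalline silicon, eV (approximate)

def photon_fate(Eph, Eg=Eg):
    """Classify an incident photon of energy Eph (eV) against the band gap."""
    if Eph < Eg:
        return "not absorbed: energy ends up as heat"
    excess = Eph - Eg
    return f"electron-hole pair created; {excess:.2f} eV excess lost as heat"

for Eph in (0.8, 1.1, 2.0):
    print(f"Eph = {Eph} eV -> {photon_fate(Eph)}")
```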

The efficiency of an infinite stack of solar cells is bounded above by the efficiency of its equivalent Carnot engine, given by 1 - TC/TH, where TC and TH are the absolute temperatures of the working system (the solar cell) and the system supplying the energy respectively. If we take TC ≈ 300 K to be the ambient temperature of the Earth and TH ≈ 6000 K to be the surface temperature of the Sun, the maximum theoretical efficiency of a solar cell is about 95%. This value, however, assumes an infinite stack of cells, no reflectance, and that the stack emits no radiation, so real solar cells will never reach such efficiencies. If the infinite stack is instead exposed to blackbody radiation at 6000 K from all directions, the efficiency drops to 86.8%; if all the radiation is assumed to come only from the sun, it is further reduced to 68.7%.

The loss processes above increase the temperature of the solar cell above that of the surrounding atmosphere. Since the solar cell and the atmosphere are in thermal contact, the cell must lose heat to the atmosphere to establish thermal equilibrium, and the efficiency of solar cells is very temperature dependent. A single p-n junction photovoltaic can be designed with a band gap chosen so that the energy loss is minimised. William Shockley and Hans-Joachim Queisser determined the optimum band gap for sunlight to be 1.34 eV, giving an efficiency of 33.7%. This is the maximum efficiency for single p-n junction photovoltaics and is known as the Ultimate Efficiency or the Shockley-Queisser limit. For a multiple p-n junction solar cell the maximum efficiency can be much higher (up to 44%), achieved by choosing semiconducting materials, such as polymers, with different band gaps so as to harness the solar spectrum more efficiently.
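
The Carnot bound quoted above is a one-line calculation, using the round temperatures TC = 300 K for the Earth and TH = 6000 K for the surface of the Sun:

```python
# Carnot limit on photovoltaic conversion: eta = 1 - TC/TH.
TC = 300.0   # K, ambient temperature of the Earth
TH = 6000.0  # K, surface temperature of the Sun (round value)

eta_carnot = 1 - TC / TH
print(f"Carnot limit: {eta_carnot:.0%}")  # → 95%
```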

Quantum efficiency refers to the percentage of incident photons that give rise to electric current. A photon (with energy greater than the band gap) incident on the surface of the semiconductor has a certain probability of creating an electron-hole pair. This probability depends on many factors: some photons are reflected from the surface of the solar cell, some energise the nuclei of atoms in the lattice instead of the valence electrons, some electron-hole pairs recombine immediately, resulting in no net increase of current, and so on. Many of these processes depend on the photon’s wavelength (or frequency), so the quantum efficiency is usually expressed as a function of one or the other.
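
The wavelength dependence has a hard cutoff set by the band gap, since Eph(eV) ≈ 1240/λ(nm) and a photon can only create an electron-hole pair when Eph exceeds Eg. The sketch below uses the optimum band gap Eg = 1.34 eV quoted above; the test wavelengths are illustrative.

```python
Eg = 1.34  # band gap, eV (optimum value from the Shockley-Queisser analysis)

def photon_energy_eV(wavelength_nm):
    """Photon energy from wavelength, using hc ≈ 1239.84 eV·nm."""
    return 1239.84 / wavelength_nm

# Beyond the cutoff wavelength the quantum efficiency falls to zero.
cutoff_nm = 1239.84 / Eg
print(f"Cutoff wavelength: {cutoff_nm:.0f} nm")

for wl in (500, 900, 1100):
    usable = photon_energy_eV(wl) > Eg
    print(f"{wl} nm: {'can' if usable else 'cannot'} create an electron-hole pair")
```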

Methods to minimise losses include texturing the incident surface of the solar cell to reduce reflection, and giving the back side of the cell a mirror finish to increase the distance photons travel within the cell, thus increasing the probability of electron-hole pair creation.

An example of surface texturing on a solar cell

Taking information from various sources, an average efficiency can be deduced for each type of solar cell. Crystalline silicon PV cells are the most common solar cells used in commercially available solar panels, representing 87% of world PV cell market sales in 2011, yet they achieve efficiencies of only 18-24%. By far the greatest efficiencies have been attained by GaAs multi-junction cells, with a record of 43.5%. Polymer solar cells currently have the lowest efficiencies, at only 8.7%, but are only in the early stages of research and development.

In recent years, the development of the solar cell has accelerated, bringing exciting opportunities and new challenges, and it now has many space and terrestrial applications. According to Energy Ireland, solar power will play an important role in the future of decarbonised energy, diversifying Ireland’s renewable energy portfolio over the next decade. It is estimated that installation of solar PV to a capacity of 1,500 MW is achievable by 2022, representing 5% of Ireland’s electricity demand. If the road map towards a decarbonised world is to be made possible, it is imperative to understand the operation of solar cells, the factors affecting their efficiency, and the trade-off between this efficiency and the cost of materials.


Chen F.L., Yang D.J., Yin H.M. (2016) Applications. In: Paranthaman M., Wong-Ng W., Bhattacharya R. (eds) Semiconductor Materials for Solar Photovoltaic Cells. Springer Series in Materials Science, vol 218. Springer, Cham.

“How a Solar Cell Works.” American Chemical Society,

“Solar Energy in Ireland.” Energy Ireland, 25 Nov. 2019,

“Eco-friendly perovskite solar cells made from peppermint oil and walnut aroma” Feb 28 2020,

Shockley, W. and Queisser, H. J., “Detailed Balance Limit of Efficiency of p-n Junction Solar Cells”, Journal of Applied Physics 32, 510 (1961).

Hou, W.; Xiao, Y.; Han, G.; Lin, J.-Y. The Applications of Polymers in Solar Cells: A Review. Polymers 2019, 11, 143.

Khanam, J. J., & Foo, S. Y. (2019). Modeling of High-Efficiency Multi-Junction Polymer and Hybrid Solar Cells to Absorb Infrared Light. Polymers, 11(2), 383.

De Vos, A. and Pauwels, H., “On the thermodynamic limit of photovoltaic energy conversion”, Applied Physics, 1981, doi:10.1007/BF00901283.

Paranthaman, M., Wong-Ng, W. and Bhattacharya, R., n.d. Semiconductor Materials for Solar Photovoltaic Cells.

Askari Mohammad Bagher, Mirzaei Mahmoud Abadi Vahid, Mirhabibi Mohsen. Types of Solar Cells and Application. American Journal of Optics and Photonics. Vol. 3, No. 5, 2015, pp. 94-113. doi: 10.11648/j.ajop.20150305.17