
  • Hidden Worlds: a peek into the nanoscale using helium ion microscopy | OmniSci Magazine

Hidden Worlds: a peek into the nanoscale using helium ion microscopy

How do scientists know what happens at scales smaller than you can see using an optical microscope? One exciting method is the helium ion microscope, which can be used to view cells, crystals and specially engineered materials in extreme detail, revealing the beauty that exists at scales too small to imagine!

by Erin Grant | 10 December 2021 | Edited by Jessica Nguy and Hamish Payne | Illustrated by Erin Grant

The room is white, with three smooth walls and a fourth containing a small sample prep bench and high shelves. In the centre is a desk with three monitors. Next to it, occupying most of the space, is the microscope. Eight feet tall, a few feet wide, resting on an isolated floor surrounded by caution tape; “NO STEP” written in big block letters. Wires protrude from its tiered shape in orderly chaos. It is a clean, technological space; we are ready to explore science. A colleague and I are at the Materials Characterisation and Fabrication Platform of the University of Melbourne to finish off the last steps of a scientific paper I’ve been working on for many years. What I need, as the icing on the cake, is an image. What does my sample look like way down there, at the nanometre scale? Objects that are only nanometres in size are very hard to imagine when we’re used to thinking about metres, centimetres, or maybe even millimetres. We can see those length scales; they are part of our everyday experience. So, if you’re told that proteins have a diameter of a few nanometres, what does that mean? Well, to be precise, a nanometre is one-billionth of a metre. A human hair, the go-to yardstick for describing small things, has a width between 0.05 and 0.1 millimetres, which means that if you wanted to slice a hair into nanometre-wide strands you’d end up with as many as 100,000 pieces.
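That slicing arithmetic is easy to check for yourself; a quick back-of-the-envelope sketch, using only the hair-width range quoted above:

```python
# Back-of-the-envelope check of the hair-slicing arithmetic above.
NANOMETRE = 1e-9  # one-billionth of a metre

for hair_width_mm in (0.05, 0.1):
    strands = (hair_width_mm * 1e-3) / NANOMETRE  # mm -> m, then divide
    print(f"{hair_width_mm} mm hair -> {strands:,.0f} nanometre-wide strands")
```

Either way, tens of thousands of strands per hair, each one far thinner than anything an optical microscope can resolve.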
Unfortunately, that’s still hard to visualise, but I’ve found that when working with and thinking about scales like this every day, you gain a sort of mental landscape that small things occupy, perhaps not entirely in context, but a space that contains an overall ‘vibe’ of smallness. I first noticed this when I worked in a laboratory that studies the tiny nematode worm C. elegans. These creatures are half a millimetre long, so although they are clearly visible to the naked eye, you need a microscope if you want to use them for science. After looking at these tiny creatures under magnification for many weeks, I came to recognise a feeling almost like being underwater. Upon putting my eyes to the lens, my focus would change from the macroscopic world around me to one of minutiae. This change in perspective was quite immersive; I almost felt like I was inhabiting that small petri dish too. Working with samples even smaller than that now, I have carried some of that mental landscape with me. It now feels commonplace to imagine tiny systems, such as crystals or molecules, which once felt foreign. Much of this ability to visualise small things comes from the fact that, in many cases, we can actually see them too. Physics has given us many tools with which we can peer into the smallest systems that exist. Helium ion microscopy, which I have come here to carry out, is one such technique. Dr Anders Barlow runs the helium ion microscope (HIM) at this facility. He warmly welcomes me and my colleague into the quiet room and jumps straight into an enthusiastic explanation of the machine – he can tell we’re not just here for some pictures; we want to know the inner workings of the microscope too. The HIM is a younger relative of that more mature surveyor of minuscule worlds: the electron microscope. While a regular optical microscope uses light to illuminate a sample, the electron microscope uses electrons.
When they collide with the sample, these electrons can bounce off or lose energy through several mechanisms. The lost energy can go into heat or light but, more usefully, it might be transferred to other electrons in the sample, called secondary electrons, ejecting them like a drill removing rocks from a quarry. The secondary electrons can be detected at each point across the sample as the beam is scanned over its surface. Where more electrons are detected, the pixel at that point is brighter than in areas where fewer are detected. This tells you about the topography or composition of the sample at that point on its surface and provides a greyscale image. The HIM works in the same way, but it can generate sharper images because helium ions are heavier than electrons. This matters because the increased resolution of electron and helium ion microscopes is enabled by a quantum mechanical property: the particle’s wavelength. You may have heard about the wave-like nature of light, which is a basic feature of quantum mechanics. Particles also have a wavelength, called the de Broglie wavelength, which is inversely proportional to their momentum, and so, at a given speed, to their mass: the heavier the particle, the shorter the wavelength. Having a shorter wavelength allows smaller details to be resolved because of a pesky phenomenon called diffraction. Diffraction occurs when a wave encounters a gap of width comparable to or smaller than its wavelength. When this happens, the wave that emerges on the other side will be spread out. You can think of the features that you want to image as being similar to gaps, so when light, or a particle, interacts with features that are very close together it will spread out, making those features blurry or even invisible. But if you can ensure that the wavelength is smaller than whatever feature you want to see, diffraction will no longer blur them away.
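The scaling argument can be made concrete with a few lines of arithmetic. The sketch below uses the standard non-relativistic formula λ = h/√(2mE); the 30 keV beam energy is an assumed, typical figure, not one stated in the article:

```python
import math

# Rough de Broglie wavelengths for an electron and a helium ion at the
# same beam energy. The 30 keV energy is an assumed, typical value, and
# the non-relativistic formula lambda = h / sqrt(2*m*E) is used.
H = 6.626e-34            # Planck's constant, J*s
EV = 1.602e-19           # one electronvolt, J
M_ELECTRON = 9.109e-31   # electron mass, kg
M_HELIUM_ION = 6.64e-27  # helium-4 ion mass, kg

def de_broglie_wavelength(mass_kg, energy_ev):
    """Wavelength (in metres) of a particle with the given kinetic energy."""
    return H / math.sqrt(2 * mass_kg * energy_ev * EV)

beam_energy_ev = 30e3  # assumed 30 keV beam
print(f"electron:   {de_broglie_wavelength(M_ELECTRON, beam_energy_ev):.2e} m")
print(f"helium ion: {de_broglie_wavelength(M_HELIUM_ION, beam_energy_ev):.2e} m")
print(f"mass ratio: {M_HELIUM_ION / M_ELECTRON:.0f}")
```

At equal energy, the helium ion’s wavelength comes out dozens of times shorter than the electron’s, which is the case for the HIM’s sharper images in miniature; the printed mass ratio also lands near the roughly 7,000-fold mass difference between a helium ion and an electron.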
Interestingly, physicists can actually take advantage of diffraction, and another phenomenon called interference, when they study periodic structures like crystals, but that’s a different article! So, because the de Broglie wavelength is very short for particles with mass, like electrons, an electron microscope can generate images of higher resolution than an optical microscope. Helium ions, composed of one electron, two protons, and two neutrons, are heavier still. This makes them about 7,000 times heavier than a single electron (electrons are very light compared to protons and neutrons!) and consequently the images they can make are very sharp. With our samples ready, lab manager Anders loads my sample into the microscope and begins lowering the pressure in its internal chamber. Having a high vacuum – approximately a billion times lower than atmospheric pressure – is essential because it prevents air from interfering with the helium beam. Making the beam is perhaps the most miraculous part of this technological feat. At the very top of the microscope’s column, there’s a tiny filament shaped like a needle. Not just like a needle; in fact, it is the sharpest needle we humans can make. To achieve this, the point is shaped first by extreme heat, and then by extreme voltages, until the very tip is composed of only three atoms, reverently referred to as the trimer. Once the trimer has been formed, a high voltage is applied to the needle, resulting in an extreme electric field around the tip. Next, helium gas is introduced into the chamber and individual helium atoms are attracted towards the region of high electric field. The field is so strong that it strips each helium atom of one electron, ionising it, and these now positively charged ions are repelled from each of the three atoms in the trimer as three corresponding beams.
Using sophisticated focusing fields down the length of the column allows Anders to choose only one of the beams for imaging; we are creating a picture using a beam only one atom wide! Generating such a precise beam requires constant maintenance, but once Anders is satisfied with how it looks today, he begins scanning over a large area for what we’ve come to find: tiny proteins stuck to a diamond. In an experimental PhD, you often find yourself answering small, incremental questions, and today I want to know how well I’ve attached these proteins to my diamond and what the coverage looks like. Other measures have told me that I probably have a lot of them, but the best way to know is to have a look! That’s what Anders does for researchers at the university; he helps us find out whether we have done a good job putting things together or coming up with new techniques. This is something he loves about his job. “I love the exposure I get to many areas of science,” he says. “Imaging of all forms is ubiquitous in research, and the HIM is applicable to most fields, so we see samples from materials science, polymers, nanomaterials, and biomaterials, through to medical technologies and devices, to cell and tissue biology of human, plant and animal origin. I never get tired of seeing what new specimens may come through the lab door.” Unfortunately, the first images we see are very dark and washed out, like a photograph taken in low light; not many secondary electrons are making it to the detector. To combat this, Anders uses a flood gun to stop charge build-up on the surface of the diamond. The secondary electrons created by the helium ions are ejected from the surface at low speeds. As electrons are negatively charged, the bombarded surface, which now lacks electrons, becomes positively charged, and the low-energy secondary electrons are attracted back to the surface instead of making it to the detector.
In an electron microscope this is avoided by coating insulators, such as my diamond, with a conductive material like gold. If the surface is conductive, the positive charge left behind by the departing secondary electrons is offset by electrons flowing in from the metallic coating. In this case, the ejected electrons can escape and be detected. However, a coating like this would reduce the resolution of the image; if you want to measure proteins that are twelve nanometres high, but you put a three-nanometre coating over them, you’ll lose a lot of the resolution! To get around this, the HIM uses the flood gun, which lightly sprays the surface with low-energy electrons as the helium beam passes over. This neutralises the surface and lets the secondary electrons escape, just as a conductive layer would. Once Anders turns on the flood gun, the contrast increases, allowing us to zoom in on a small region of the diamond, and there they are! Thousands of spherical proteins arranged neatly across the surface, only twelve nanometres in diameter. The sight is spectacular: only one try, and we got what we came for. I am three years into a PhD and I’ve become very used to the feeling of disappointment that can accompany new experimental techniques. Things rarely work out the first time around, so to see those little spheres straight away was magical. Dotted across the diamond surface is another, extra, gem. To keep protein nice and happy, you must prepare it in a salty solution. So, when the protein was deposited, some regular table salt, NaCl, came too. We can see this salt in our images as crystals in two distinctive and very beautiful patterns, which you can see in the images below.

Protein on the surface of my diamond. Each small pale circle is one of these spherical proteins.
The first image shows a large creeping pattern, reminiscent of snowflakes or tree roots, which spreads its soft fingers across several hundred nanometres. These crystals have taken on an amorphous pattern, where the crystal structure is broken up rather than being one continuous arrangement of the atoms. The second pattern, however, shown in the right image, is what a continuous NaCl crystal looks like. When large enough crystals can form without becoming amorphous, they look like precise cubes of various sizes all strewn about. One of my favourite aspects of looking at very small things is how the patterns you see often mirror those at much larger scales. Look at a fingerprint and you’ll find mountains and valleys, or the roots of a tree and you’ll see a river system.

Salt (NaCl) can take on an amorphous pattern similar in shape to tree roots (left) or a highly ordered structure shown by the cubic crystals (right).

The astonishing images we get from this single session are all in a day’s work for Anders. He has imaged numerous kinds of cells on all manner of interesting substrates, patterned surfaces covered in needle-like protrusions, and many kinds of man-made materials. Today, there are vials on his prep bench which, at first glance, look much like jars of hair. However, they are not hair; they are in fact strands of carbon fibre covered in various coatings, awaiting examination. ‘What are your favourite types of samples to look at?’ I want to know. “Cell biology is fascinating,” he says. “We’ve imaged red blood cells, pancreatic cells, stem cells, and various bacterial cells in this microscope. Most often researchers are interested in cell life and death, and the HIM assists by providing high resolution images of the structure and surface topography of the cell membrane.” Recently, however, Anders has been helping researchers look at polymer materials for water filtration.
“These are hierarchical porous structures, meaning they’re engineered to have pore sizes that vary through the membrane. It is stunning to see the materials at low magnification with large pores, and as we zoom in and in and in, to see new pore sizes become visible at each level, like a material engineered with a fractal quality.” One of the unique things about the HIM, Anders reminds me, is that it’s not just for imaging. Since helium ions are far heavier than electrons, they carry much more momentum at a given speed. “We leverage the momentum of the ions to actually modify structures too. We can create new surface properties, new devices, new technologies, on a scale that is often too small for any other fabrication technique. This is some of the most exciting work.” If you know anyone who needs some nanoscale drilling done, then the HIM is your instrument! Today’s excursion across the university campus has been thrilling. I got what I came for and I’m excited to find other projects that could benefit from the insight and beautiful images the HIM can provide. Imaging instruments have always fascinated me and I’m looking forward to witnessing how far we will be able to delve into the nanoscale world in the years to come, thanks to the fast pace of engineering and physics research.

  • Functional Neurological Disorder | OmniSci Magazine

Functional Neurological Disorder

by Esme MacGillivray | 3 June 2025 | Edited by Steph Liang | Illustrated by Esme MacGillivray

Content warning: Please be aware that this article includes discussion of mental illness, medical malpractice, and ableism.

Functional Neurological Disorder (FND) is very simple to explain. It is a problem with how the brain functions. More specifically, it is a problem with how the brain sends and receives messages, resulting in diverse motor, sensory, and cognitive symptoms. But unlike other neurological conditions, FND does not appear to be caused by any identifiable structural damage to the nervous system. As a catchy metaphor: the brain is a computer, and FND is a ‘software’ problem as opposed to a ‘hardware’ problem. If that all feels frustratingly vague, I’m afraid you are out of luck — but in good company. Since developing FND a year and a half ago, I’ve become closely acquainted with confusion. My own body has felt alien sometimes, and the way others have reacted to my disability has been equally disorientating. Instead of accepting that neuroscience is yet to make sense of FND, many people — including medical professionals — rush to dismiss symptoms, or question their very existence. Understanding this condition is not just a matter of advancing scientific knowledge. Judgement and shame must be replaced with compassion. Turns out FND is far from simple to explain. Symptoms often develop rapidly and ‘out of nowhere’, most typically in adolescence or adulthood (1). These can include functional tics, non-epileptic seizures, limb weakness, paralysis, gait disorders, and speech difficulties (2). The list goes on. From the array of possible symptoms alone, it is clear that FND encompasses a broad range of presentations. Fluctuation and inconsistency can exist even within an individual’s experience. Most days, I appear completely ‘normal’. Sometimes, my disability is glaringly obvious.
My FND is confusing and isolating; because there is so little information available, it is difficult to get the support I need. It doesn’t help that myths about this condition are rife within both medical and everyday settings, despite it being one of the most common diagnoses made by neurologists (3). I would like to dispel the idea that FND is just a fancy way of saying that doctors have ruled out ‘real’ neurological conditions. Neurologists can observe positive signs, or patterns of sensation and movement, that indicate functional symptoms, such as Hoover’s sign for functional weakness (1). Therefore, although the cause of symptoms remains unknown, FND is a meaningful diagnosis. The very label itself represents progression away from the harmful beliefs that defined this condition in earlier centuries. Sometimes I joke about how I might have been treated if I was living in the past. Would people try to exorcise me, or burn me at the stake? Or would I perhaps be sent away to a charming seaside retreat? A mental asylum may have been more likely. Indeed, symptoms of FND once would have awarded me a diagnosis of ‘hysteria’. This label originates from ancient beliefs about the uterus punishing the female body with illness if left infertile, representing an ideological burden forced on suffering women for centuries (4). In the words of Eliot Slater in 1965, the term was “a disguise for ignorance and a fertile source of clinical error” (5). As theories of psychology and neurology were reworked, clinicians began using the term ‘Conversion Disorder’ (4). FND symptoms were misunderstood as manifestations of psychological trauma being ‘converted’ into physical distress (4). It’s an interesting idea, but an inaccurate one. Many people with FND have not experienced significant trauma prior to developing symptoms (6). It is now understood that mental and physical harm, such as a severe illness or injury, may increase the risk of an individual developing FND (1,7).
However, this is not a requirement, and certainly not the cause of this condition. Unfortunately, the medical field has not unanimously moved on from the misunderstandings of the past. Since my episodes of collapse, unresponsiveness, and uncontrollable movements were not typical of epilepsy, they didn’t seem to concern the first, second, or even third medical professional who saw me. I am glad that my condition is not inherently life-threatening — but declaring that there is nothing wrong with someone is a far cry from reassuring them that their brain isn’t in danger. The attitudes I encountered leant strongly towards the former. Doctors seemed eager to attribute my symptoms to ‘stress’, and to prove that I could directly control what was happening to me, while some even tried to convince my mum that I was faking everything for attention. These experiences are not an anomaly. In fact, being dismissed or disbelieved is an almost characteristic part of having FND (8,9). It often takes years for people to be correctly diagnosed (8), let alone be offered any semblance of support. After a month, I was privileged enough to receive a diagnosis — and compassion — from a neurologist who took me seriously. Despite this, a lingering impression from that first month without any understanding or guidance remains. It urges me to ignore what I know to be true about FND, and about my own body, and to entertain the idea that my thoughts are secretly orchestrating everything; that I am crazy, or too weak-minded to stop choosing the thoughts that make me have FND. Don’t ask me how one can subconsciously do something on purpose. I didn’t put this idea in my own head, just like I didn’t put FND in my own head. Nevertheless, these things exist. People with FND are tasked with navigating not only frightening symptoms, but also ignorance, stigma, and shame. Sometimes science doesn’t give us a satisfying answer.
Future research can hopefully offer people with FND more concrete answers, including ways of understanding ourselves and possibilities for symptom management and recovery. Health and disability are complex, and we can never fully understand what someone else is going through. When it comes to FND, I barely understand my own body half of the time. Fortunately, I now understand that I deserve to be treated with respect. Compassion doesn’t need to be confusing. It shouldn’t take a breakthrough in neuroscience for people with FND to be listened to and cared for.

References
1. Bennett K, Diamond C, Hoeritzauer I, et al. A practical review of functional neurological disorder (FND) for the general physician. Clinical Medicine. 2021;21(1):28-36. doi: 10.7861/clinmed.2020-0987
2. FND Hope. Symptoms. 2012. Accessed May 11, 2025. https://fndhope.org/fnd-guide/symptoms/
3. Stone J, Carson A, Duncan R, et al. Who is referred to neurology clinics? The diagnoses made in 3781 new patients. Clinical Neurology and Neurosurgery. 2010;112(9):747-51. doi: 10.1016/j.clineuro.2010.05.011
4. Raynor G, Baslet G. A historical review of functional neurological disorder and comparison to contemporary models. Epilepsy & Behavior Reports. 2021;16:100489. doi: 10.1016/j.ebr.2021.100489
5. Slater E. Diagnosis of “Hysteria”. Br Med J. 1965;1:1395–1399. doi: 10.1136/bmj.1.5447.1395
6. Ludwig L, Pasman JA, Nicholson T, et al. Stressful life events and maltreatment in conversion (functional neurological) disorder: systematic review and meta-analysis of case-control studies. Lancet Psychiatry. 2018;5(4):307-320. doi: 10.1016/S2215-0366(18)30051-8
7. Espay AJ, Aybek S, Carson A, et al. Current Concepts in Diagnosis and Treatment of Functional Neurological Disorders. JAMA Neurology. 2018;75(9):1132–1141. doi: 10.1001/jamaneurol.2018.1264
8. Robson C, Lian OS. “Blaming, shaming, humiliation”: Stigmatising medical interactions among people with non-epileptic seizures. Wellcome Open Research. 2017;2:55. doi: 10.12688/wellcomeopenres.12133.2
9. FND Australia Support Services Inc. Experiences of Functional Neurological Disorder - Summary Report. Canberra (AU): Australian Government National Mental Health Commission; 2019. 13p.

  • Cosmic Carbon Vs Artificial Intelligence | OmniSci Magazine

Cosmic Carbon Vs Artificial Intelligence

by Gaurika Loomba | 28 May 2024 | Edited by Rita Fortune | Illustrated by Semko van de Wolfshaar

“There are many peculiar aspects of the laws of nature that, had they been slightly different, would have precluded the existence of life” - Paul Davies, 2003

Almost 14 billion years ago, there was nothing but an incredibly hot, dense speck of matter. This speck exploded, and the universe was born. Within the first hundredth of a billionth of a trillionth of a trillionth of a second, the universe began expanding at an astronomical rate. For the next 400 million years, the universe was made of hydrogen, helium, and a dash of lithium – until I was born. And thus began all life as you know it. So how did I, the element of life, the fuel of industries, and the constituent of important materials, originate? Stars. Those shiny, mystical dots in the night sky are giant balls of hot hydrogen and helium gas. Only in their centres are temperatures high enough to facilitate the collision of three helium-4 nuclei within a tiny fraction of a second. I am carbon-12, the element born out of this extraordinary reaction. My astronomical powers come from my atomic structure; I have six electrons, six protons, and six neutrons. The electrons form teardrop-shaped clouds, spread tetrahedrally around my core, my nucleus, where the protons and neutrons reside. My petite size and my outer electrons allow my nucleus to exert a balanced force on other atoms that I bond with. This ability to make stable bonds makes me a major component of proteins, lipids, nucleic acids, and carbohydrates, the building blocks of life. The outer electrons also allow me to form chains, sheets, and blocks of matter, such as diamond, with other carbon-12 atoms. Over millions of years, organic matter buried in the Earth formed fossil fuels, so I am also the fuel that runs the modern world.
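For the numerically minded, the energy bookkeeping of that extraordinary reaction (the triple-alpha process) can be sketched from standard atomic masses; these figures are textbook values, not taken from the article:

```python
# Energy released when three helium-4 nuclei fuse into one carbon-12
# nucleus (the triple-alpha process), from standard atomic masses.
HE4_MASS_U = 4.002602  # helium-4 mass, atomic mass units (u)
C12_MASS_U = 12.0      # carbon-12 defines the unit, so exactly 12 u
U_TO_MEV = 931.494     # energy equivalent of one u, MeV

mass_defect_u = 3 * HE4_MASS_U - C12_MASS_U
energy_mev = mass_defect_u * U_TO_MEV
print(f"mass defect:     {mass_defect_u:.6f} u")
print(f"energy released: {energy_mev:.2f} MeV")
```

The missing mass works out to roughly 7.3 MeV of energy per carbon-12 nucleus formed, part of what keeps a stellar core shining.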
As if science wasn’t enough, my spiritual significance reiterates my importance for the existence of life. According to Hindu philosophy, the divine symbol ‘Aum’ is the primordial sound of the Cosmos and the ‘Swastika’ its visual embodiment. ‘Alpha’ and ‘Omega’, the first and last letters of the Greek alphabet, represent the beginning and the ending, the ‘Eternal’, according to Christian spirituality. When scientists photographed my atomic structure, spiritual leaders saw the ‘Aum’ in my three-dimensional view and the ‘Swastika’ in my two-dimensional view. Through other angles, the ‘Alpha’ and ‘Omega’ have also been visualised (Knowledge of Reality, 2001). I am the element of life, and within me is the divine consciousness. I am the beginning and I am the end. My greatness has been agreed upon by science and spirituality. In my absence, there would be no life, an idea humans call carbon chauvinism. This ideology and my greatness remained unquestioned for billions of years, until the birth of Artificial Intelligence. I shaped the course of evolution for humans to be self-conscious and intelligent life forms. With the awareness of self, I aspired for humans to connect back to the Cosmos. But now my intelligent toolmakers, aka humans, are building intelligent tools. Intelligence and self-consciousness, which took nature millions of years to generate, are losing their uniqueness. Unfortunately, if software can be intelligent, there is nothing to stop it becoming conscious in the future. Soon, the earth will be populated by silicon-based entities that can compete with my best creation. Does this possibility compromise my superiority? A lot of you may justifiably think so. The truth is that I am the beginning. Historically, visionaries foresaw asteroid attacks as the end to human life. These days, climate change, which is an imbalance of carbon in the environment, is another prospective end. Now, people believe that conscious AI will outlive humans.
This suggests that I will not be the end; that my powers and superiority will be snatched by AI. So the remaining question is: who will be the end? I could tell you the truth, but I want to see who is with me at the end. The choice is yours.

References
Davies, P. (2003). Is anyone out there? The Guardian. https://www.theguardian.com/education/2003/jan/22/highereducation.uk
Knowledge of Reality (2001). Spiritual Secrets in the Carbon Atom. https://www.sol.com.au/kor/11_02.htm

  • Friend or Foe?: The Mechanisms Behind Facial Recognition | OmniSci Magazine

Friend or Foe?: The Mechanisms Behind Facial Recognition

by Mishen De Silva | 3 June 2025 | Edited by Luci Ackland | Illustrated by Aisyah Mohammad Sulhanuddin

Among the many mysteries which encompass the world around us lies a complex interaction right under our nose, or perhaps… right above it. In the labyrinth of human consciousness, we rely on the seemingly arbitrary judgements made from the combination of two eyes, a nose, and a mouth to discern who might be a friend or foe. Facial recognition gives a snapshot into the intricate dance between our perception and cognition, which allows us to cultivate a more detailed understanding of those around us, and their thoughts, feelings and emotions. In those fleeting moments when you recognise your parents in a sea of unfamiliar faces, spot your friends ensconced among the rows of the lecture theatre, or simply bump into an old friend in a crowd of unacquainted strangers, your brain is able to identify faces in a fraction of a second, a remarkable feat of the human cognitive capacity. But what enables us to distinguish one face from another? How do the faces of those we know stand out from the countless other noses, eyes and mouths we see? To understand what makes these interactions so meaningful, we need to take a closer look at the mechanisms behind facial recognition and decoding within the brain.

The Brain’s Blueprint

To be human is to seek meaning, even when none may exist. The mind has transformed what is two eyes above a nose, and a nose above a mouth, into its own pattern for classifying the identities and expressions we see around us. Many studies have suggested facial processing to be holistic, where the featural patterns of the eyes, nose and mouth are perceived together, particularly when the face is upright (1,2). This mechanism of holistic facial processing explains the interesting phenomenon of pareidolia, where the brain adapts the characteristics of human faces onto everyday objects.
It’s the reason why a glance at a bowling ball may make it appear surprised (3), or why some have sworn to see a face on Mars (4)!

Figure 1. Bowling balls with surprised facial expressions! (3)

In pursuit of meaning for the patterns around us, the brain has developed specialised regions for processing the features of a face to help us recognise individual identities. Facial processing operates through a hierarchical mechanism where distinct aspects of the face are interpreted by different regions of the brain. The unchanging elements of the face, such as gender, age, ethnicity and features related to someone’s identity, are analysed by the Inferior Occipital Gyrus and Fusiform Face Area (FFA), while the changing aspects, such as eye gaze, lip movements and facial expressions, are analysed by the Superior Temporal Sulcus and Orbitofrontal Cortex (5,6). Of these face-selective regions, the FFA is particularly important for facial recognition as it helps us recognise who a person is (5). Through the activation of our FFA, simple patterns shift from meaningless shapes into familiar visages representing our friends, family, or even our own reflection. Studies have uncovered the importance of the FFA for facial recognition by examining what may happen when this brain region malfunctions (7,8). A striking example of this is prosopagnosia, an impairment of the ability to recognise faces that can result from damage to the FFA in the right hemisphere of the brain (9). Prosopagnosia affects about 1 in 50 people (9). Imagine if every face you observed looked the same or unfamiliar… even your own reflection! It is through the brain and its specialised regions for facial recognition that we can appreciate the essence of human connection as a result of our neural hardware. These mechanisms responsible for transforming patterns into faces are the reason we can recognise our neighbour from a stranger, a friend from a classmate, or our parents from a teacher.
Often overlooked amidst the fleeting and impermanent nature of our social interactions, this complex system guides us along the fragile line of human relationships, between familiarity and estrangement, a friend or foe. It highlights how deeply rooted our connection and sense of identity are to the faces we see.

The Brain’s Threat Detection

With each neuron, synapse and pathway, our brains are machines wired for connection, not just in how we think, but also in how we perceive and interact with our surroundings. From the brief exchange of smiles with a stranger, to the furtive glare from someone across the room, one of the hallmarks of our emotional understanding is the ability to decode the thoughts and intentions of others, even from the most subtle of expressions. In the vast and intricate web of neural connectivity, it can be difficult to isolate a singular brain region or connection to explain complex cognitive functions. Brain imaging studies have found a strong bidirectional link between the FFA and amygdala, making this a likely candidate for explaining our remarkable decoding ability (10,11). As the FFA picks up on who a person is or what facial expression is being made, it is the amygdala which then evaluates the emotional salience, or importance, of this face. The amygdala then signals back to the FFA to either increase or decrease facial processing activity accordingly (10,12). Consider how the visibility of teeth in a bared expression can signal anger, the whiteness of someone’s eyes can hint at fear or surprise, and the shape of a person’s eyebrows can indicate the intensity of their emotion, all of which guide the brain to prioritise and interpret socially and emotionally relevant cues – almost like a survival filter! (13,14,15).
From an evolutionary perspective, the FFA-amygdala feedback loop serves as an important tool for rapidly and accurately interpreting the intentions of others, a cornerstone of our physical and social survival (16). The ability to recognise whether someone is friend or foe has been a survival mechanism and evolutionary advantage for millennia. The role of our facial processing network, from the amygdala and FFA to the other brain regions discussed, offers a microcosm of our nature as social beings, and of the evolutionary changes that have enhanced our ability to sense, respond to, and connect with those around us (17). In this way, maybe the most profound mysteries lie not in distant galaxies or ancient ruins, but are hidden in plain sight, within the faces we walk past every day. Our brain’s ability to read them is not merely a mechanism for decoding emotion, but a mirror into the nature of what it means to be human, where connection, trust, and survival have long been written in the expressions of those around us.

References

1. Farah M, Wilson K, Drain M, Tanaka J. What is “special” about face perception?. Psychological Review [Internet]. 1998 Aug [cited 2025 May 14]; 105(3):482–98. Available from: https://pmc.ncbi.nlm.nih.gov/articles/PMC5561817/
2. Richler J, Gauthier I. A meta-analysis and review of holistic face processing. Psychological Bulletin [Internet]. 2014 Sep [cited 2025 May 14]; 140(5):1281–302. Available from: https://pubmed.ncbi.nlm.nih.gov/24956123/
3. What do you think these bowling balls saw to leave them so surprised & shocked?. Reddit [Internet]. 2022 [cited 2025 May 31]. Available from: https://www.reddit.com/r/Pareidolia/comments/zc12jo/what_do_you_think_these_bowling_balls_saw_to/#lightbox
4. Gilbert L. Why the brain is programmed to see faces in everyday objects. UNSW Sites [Internet]. 2020 Aug [cited 2025 May 14].
Available from: https://www.unsw.edu.au/newsroom/news/2020/08/why-brain-programmed-see-faces-everyday-objects
5. Kanwisher N, Yovel G. The fusiform face area: a cortical region specialized for the perception of faces. Philosophical Transactions of the Royal Society: Biological Sciences [Internet]. 2006 Dec 29 [cited 2025 May 14]; 361(1476):2109–28. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1857737/
6. Zhen Z, Fang H, Liu J. The Hierarchical Brain Network for Face Recognition. Ptito M, editor. PLoS ONE [Internet]. 2013 Mar [cited 2025 May 14]; 8(3):e59886. Available from: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0059886
7. Hadjikhani N, de Gelder B. Neural basis of prosopagnosia: An fMRI study. Human Brain Mapping [Internet]. 2002 [cited 2025 May 14]; 16(3):176–82. Available from: https://onlinelibrary.wiley.com/doi/abs/10.1002/hbm.10043
8. Sorger B, Goebel R, Schiltz C, Rossion B. Understanding the functional neuroanatomy of acquired prosopagnosia. NeuroImage [Internet]. 2007 Apr [cited 2025 May 14]; 35(2):836–52. Available from: https://www.sciencedirect.com/science/article/pii/S1053811906009906
9. Prosopagnosia | Psychology Today Australia [Internet]. www.psychologytoday.com. [cited 2025 May 14]. Available from: https://www.psychologytoday.com/au/basics/prosopagnosia
10. Herrington J, Taylor J, Grupe D, Curby K, Schultz R. Bidirectional communication between amygdala and fusiform gyrus during facial recognition. NeuroImage [Internet]. 2011 Jun [cited 2025 May 14]; 56(4):2348–55. Available from: https://pubmed.ncbi.nlm.nih.gov/21497657/
11. Said C, Dotsch R, Todorov A. The amygdala and FFA track both social and non-social face dimensions. Neuropsychologia [Internet]. 2010 Oct [cited 2025 May 14]; 48(12):3596–605. Available from: https://pubmed.ncbi.nlm.nih.gov/20727365/
12. Šimić G, Tkalčić M, Vukić V, Mulc D, Španić E, Šagud M, et al. Understanding Emotions: Origins and Roles of the Amygdala. Biomolecules [Internet].
2021 May [cited 2025 May 14]; 11(6):823. Available from: https://pmc.ncbi.nlm.nih.gov/articles/PMC8228195/
13. Jacobs R, Renken R, Aleman A, Cornelissen F. The amygdala, top-down effects, and selective attention to features. Neuroscience & Biobehavioral Reviews [Internet]. 2012 Oct [cited 2025 May 14]; 36(9):2069–84. Available from: https://pubmed.ncbi.nlm.nih.gov/22728112/
14. Horstmann G, Lipp O, Becker S. Of toothy grins and angry snarls – Open mouth displays contribute to efficiency gains in search for emotional faces. Journal of Vision [Internet]. 2012 May [cited 2025 May 14]; 12(5):7. Available from: https://jov.arvojournals.org/article.aspx?articleid=2192034
15. Hasegawa H, Unuma H. Facial Features in Perceived Intensity of Schematic Facial Expressions. Perceptual and Motor Skills [Internet]. 2010 Feb [cited 2025 May 14]; 110(1):129–49. Available from: https://pubmed.ncbi.nlm.nih.gov/20391879/
16. Schmidt K, Cohn J. Human facial expressions as adaptations: Evolutionary questions in facial expression research. American Journal of Physical Anthropology [Internet]. 2001 [cited 2025 May 14]; 116(S33):3–24. Available from: https://pubmed.ncbi.nlm.nih.gov/11786989/
17. Carter E, Pelphrey K. Friend or foe? Brain systems involved in the perception of dynamic signals of menacing and friendly social approaches. Social Neuroscience [Internet]. 2008 Jun [cited 2025 May 14]; 3(2):151–63. Available from: https://pubmed.ncbi.nlm.nih.gov/18633856/

  • From Fusion to Submarines: A Nuclear Year

From Fusion to Submarines: A Nuclear Year By Andrew Lim 23 March 2023 Edited by Tanya Kovacevic Illustrated by Quynh Anh Nguyen

A press conference in April, pledging millions of dollars to nuclear medicine. A university address in November, rethinking Australia’s nuclear attitudes. A fusion reaction in December, promising a clean energy revolution. No matter where you were or who you were listening to, the world of nuclear science was inescapable in 2022. It has been a year of great progress and, at times, even greater controversy – pairing milestone triumphs and landmark facilities with old fears and vast challenges. So, what has defined the year in nuclear science – and what comes next?

Powering the Future

Image 1: LLNL’s National Ignition Facility, where the successful fusion ignition experiment was conducted in December.

Perhaps the year’s most eye-catching discovery came near its end. On 13 December, scientists at the Lawrence Livermore National Laboratory (LLNL) in California announced that, for the first time, they had produced more energy from a nuclear fusion reaction than they had put in. It seemed to herald the beginnings of a new era – nuclear power without toxic nuclear waste. However, to report this as the USA’s civilian nuclear energy story of the year perhaps fails to capture the whole picture. It’s an important discovery, sure, but it stands on another development, far less well known: the congressional funding battles of the preceding months. Crafted from intense negotiations led by Majority Leader Chuck Schumer (D-NY) and Senators Todd Young (R-IN), Mark Warner (D-VA) and John Cornyn (R-TX), the bipartisan CHIPS and Science Act (1) authorized and appropriated funds for nuclear research en masse. It provided everything from a five-year $50 million p.a.
plan for “Foundational Nuclear Science” (2), to a $1.09 billion Electron Ion Collider (3) and a “National Nuclear University Research Infrastructure Reinvestment” scheme that included LLNL (4). Even private sector fission work received a boost in the form of the Inflation Reduction Act of 2022 (5), built on a compromise between Schumer and Senator Joe Manchin (D-WV), which allocated billions of dollars in tax credits and loan guarantees for the sector. These funding boosts (and their predecessors), the work of years of lobbying and negotiations across multiple political factions, helped create the environment necessary for this research to thrive – and the breakthrough is as much a reminder of their importance as a triumph of nuclear physics.

Health and Safety

Image 2: Prime Minister the Hon Scott Morrison MP, flanked by Health Minister the Hon Greg Hunt MP (L) and backbencher Gladys Liu MP (R), announces a $23 million APME grant in April.

The year’s nuclear focus extended into the medical sector, too. President Biden’s 2022 State of the Union address issued an appeal beyond partisan lines, one pillar of which was the use of the Advanced Research Projects Agency for Health (ARPA-H) to “drive breakthroughs in cancer” (6). His call was answered in budget appropriations bills, funding accelerators and reactors to research new radioisotopes, while also investigating safer handling methods for natural and artificial nuclear sources (7). Such emphases echoed as far away as our antipodean shores. While Australia may already produce 80% of the radioisotopes used in its own nuclear medical procedures (8), both major parties took 2022 to advance nuclear medicine production.
In April, the Coalition government launched new grants for the Australian Precision Medicine Enterprise (APME) in Melbourne, with the Hon Greg Hunt MP, then Minister for Health, declaring nuclear medicine “the next stage of precision medicine” (9). Mere months later, in the October Budget, his Labor successor the Hon Mark Butler MP pledged funds for medical supplies of Gallium-67 (10). Across party lines, nuclear innovation became key to funding in the health sector.

Securing Tomorrow

Image 3: Australian Deputy Prime Minister and Minister for Defence Richard Marles (L) meets with US Secretary of Defense Lloyd J Austin III (R) at the Pentagon to discuss AUKUS submarine arrangements in December.

All that said, no article about nuclear science, especially these days, would be complete without a discussion of AUKUS. In late October, The Australian published an interview with Australian Vice Admiral Jonathan Mead, in which he underscored the importance of building a nuclear workforce – that is, building the educational pathways required to produce all the crews, builders, architects, regulators and scientists a nuclear submarine capability would entail (11). With Australia’s first nuclear submarine captains likely still in high school, the infrastructure needed to train them simply doesn’t exist – and time is running out. This urgency was emphasised by academics at ANU, home of the country’s only postgraduate qualifications dedicated to nuclear science. In November, Vice-Chancellor Brian Schmidt AC spoke of an approaching “transformation in Australia’s cultural relationship” with nuclear science (12). In December, Dr AJ Mitchell, an ANU academic leading the development of a national program for nuclear science and education, reiterated Schmidt’s arguments.
In comments provided to The Sydney Morning Herald and The Age, he advocated for a “sovereign capability…start[ing] yesterday,” to ensure an Australian nuclear workforce capable of meeting requirements not only for defence but also for health, regulation, space exploration and much more (13). However, this attitude was not without controversy. In today’s world, where the word ‘nuclear’ carries connotations of Chernobyl, Fukushima, and the Cold War, increased nuclear funding (even if only to regulatory or medical bodies) often sparks fear in the public imagination. In response to Mitchell’s comments, A/Prof Peter Christoff, a University of Melbourne climate policy researcher, expressed worries about increased “anxiety in our region”. More than anything else, this perhaps underscores the biggest issue facing the nuclear sector: the long-held apprehensions from media, governments and beyond that can often lump anything vaguely nuclear – from medication to missiles – under the same roof.

What's Next?

Image 4: US President Joe Biden delivering his 2023 State of the Union Address, advocating for increased cancer research funding, flanked by Vice-President Kamala Harris (L) and Speaker Kevin McCarthy (R).

Over the first months of 2023, the tense balancing acts and decisions of the past year have only continued to grow. In the USA, President Biden’s 2023 State of the Union speech, delivered in early February, saw him reinvigorate his call to “end cancer as we know it” (14) – the same call that led to all that radioisotope funding last year. However, Biden faces a Republican House of Representatives seemingly hell-bent on blocking his legislation. With the resultant impasse threatening a wholesale government shutdown, the funding necessary for scientific leaps of the kind seen in 2022 remains in doubt.
On the Australian front, our lack of a ready nuclear workforce is causing jitters amongst our allies – with leaked letters from US Senators Jack Reed (D-RI) and James Inhofe (R-OK) expressing concern to the Biden administration about Australia relying on American production lines for stopgap submarines. Australian Defence Minister Richard Marles spent the December-January period allaying these concerns with the support of US Representatives Joe Courtney (D-CT-02) and Mike Gallagher (R-WI-08) while in the US and UK, but the issue is certain to remain a hot topic for this year. Even closer to home, Rio Tinto’s loss of a Caesium-137 capsule in Western Australia captured the imaginations of people across the nation and the world. At once it seemed to represent both the long-standing fear of nuclear research and its importance in fuelling the very regulatory efforts that helped track down the capsule.

Perhaps more than a story of scientific discoveries, of neutrons, protons and physics, the story of nuclear science in 2022 and beyond is the story of people. Of those legislators and politicians, balancing visions of the future with messy political compromises. Of those scientists and researchers, balancing plans and facilities with the capacity of their institutions. Of us, the ordinary public, balancing long-held phobias with exciting aspirations. Will we meet the challenges that lie before us? Are we ready to have a nuanced discussion about how we want to use our nuclear knowledge? Can we balance the possibilities of the future with the fears of the past? Well... that’s entirely up to us.

Andrew Lim is an Editor and Feature Writer with OmniSci Magazine and spent the summer as a Summer Research Scholar at the Australian National University’s Heavy Ion Accelerator Facility, studying nuclear structure through particle transfer reactions.
Image Credits (in order): Lawrence Livermore National Laboratory; Monash University; US Department of Defence; The White House

Author's Note

Between the submission of this article in late February and its publication in mid-March, a notable development took place, one that necessitated this additional note. On March 14, at an announcement held in San Diego, President Biden, Prime Minister Albanese and Prime Minister Sunak revealed plans for Australia to purchase three to five American Virginia-class submarines in the early 2030s. The Royal Navy and the Royal Australian Navy would then work out of their shipyards to develop and produce new SSN-AUKUS submarines (based on plans for successors to the British Astute-class models), coming into service in the late 2030s. If anything, this timeline accentuates the dramatic expansion required of Australia’s nuclear workforce, as presented in the original article. Meanwhile, the narrative that surrounded the announcement – one solely focussed on nuclear research’s military capabilities (and, at that, often conflating nuclear weaponry with nuclear power) – seems only to indicate the same throughlines of 2022 repeating themselves in the year to come…and nuanced and subtle discussion of nuclear research being left for another day.

References

CHIPS and Science Act, Pub L No 117-167, 136 Stat 1366 (2022).
See ibid, div B tit I § 10102(d), 136 Stat 1415-6.
See ibid, div B tit I § 10107, 136 Stat 1449-50, esp. sub-s (b)(4).
See ibid, div B subtitle L § 10741-5, 136 Stat 1718-21.
Inflation Reduction Act of 2022, Pub L No 117-169, 136 Stat 1818.
The White House Office of the Press Secretary, Remarks by President Biden in State of the Union Address. March 2, 2022. https://www.whitehouse.gov/briefing-room/speeches-remarks/2022/03/02/remarks-by-president-biden-in-state-of-the-union-address/
See House Committee on Appropriations, Report to Accompany H.R. 8295, H.R. Rep No 117-403 (2022), esp. at 65, 104, 235, 238.
Taylor A, Birmingham S and Hunt G, Safeguarding the future of critical medicine supply [Media Release]. September 30, 2021. https://www.minister.industry.gov.au/ministers/taylor/media-releases/safeguarding-future-critical-medicine-supply
“Precision medicine is the ‘future of medicine’: Greg Hunt”. The Australian. April 4, 2022. https://www.theaustralian.com.au/nation/politics/precision-medicine-is-the-future-of-medicine-greg-hunt/video/9ec9b0942bfb18757e3fbf4f3e95e0f4
Garvey, P. “Butler steps in to ease nuclear medicine crisis”. The Australian. October 27, 2022.
Nicholson, B. “Defence Special Report: Cultivating a Nuclear Mindset”. The Australian. October 27, 2022.
ANU Communications & Engagement, Building Australia’s AUKUS-ready nuclear workforce: Address by Professor Brian Schmidt AC. November 9, 2022.
Mannix, L. “‘Cherish’ the power: Physicists issue call to arms over nuclear skills gap”. The Sydney Morning Herald. December 28, 2022. https://www.smh.com.au/national/cherish-the-power-physicists-issue-call-to-arms-over-nuclear-skills-gap-20221228-p5c92s.html
The White House Office of the Press Secretary, Remarks by President Biden in State of the Union Address. February 7, 2023. https://www.whitehouse.gov/briefing-room/speeches-remarks/2023/02/07/remarks-by-president-biden-in-state-of-the-union-address-2/

  • In Your Dreams: Unpacking the Stories of Your Slumber | OmniSci Magazine

In Your Dreams: Unpacking the Stories of Your Slumber by Ciara Dahl 3 June 2025 Edited by Ingrid Sefton Illustrated by Saraf Ishmam

One minute you're flying through the sky; the next, you're naked in a room full of people. Except now, your teeth have started falling out? These surreal, and often illogical, experiences are what make dreams such a mystery. From ancient spiritual interpretations to modern neuroscience, people have long wondered not just what dreams mean, but why we have them at all. Are they cryptic messages from the unconscious? Perhaps a side effect of memory processing? Or maybe they are simply the brain’s way of entertaining itself while we sleep. Attempting to answer these questions is no easy feat. Despite being a universal human experience, dreams are inherently personal. Given that no one but ourselves experiences our dreams, how can the fragmented recollections we have upon waking be objectively studied?

Dream research was once steeped in spirituality and mysticism, with dreams often seen as divine messages from gods or whispered guidance from ancestors (1). Even Aristotle offered his own theory, suggesting dreams were the byproduct of internal bodily movements during sleep (1). It wasn’t until the early 20th century that dreams began to be studied through a psychological lens, most notably by Sigmund Freud, who proposed that dreams contained deeply personal and symbolic insights into the unconscious mind (2). Modern research, however, is beginning to uncover the connection between our dreams and complex cognitive processes such as memory consolidation. Techniques employed by oneirologists — that’s the fancy word for scientists who study dreams — include fMRI, PET scans and EEG. Such methods are used to study brain activity during sleep and dreaming, particularly during REM and non-REM sleep (3).
Using these technologies in tandem with qualitative descriptions gathered from individuals’ dream reports allows us to unpack the content and function of our dreams, whilst also considering questions such as why we seem to forget most of our dreams.

What dreams are made of: influences on the content of our dreams

There’s a growing body of evidence to suggest that our dream content is influenced by the consolidation of our memories as we sleep. Sleep provides an ideal neurological state for us to organise our recent memories into more permanent long-term memories (4). The reactivation and subsequent consolidation of memories in the sleeping brain appears to contribute to the content of the dreams we recall upon awakening. In one study examining this phenomenon, participants played extensive amounts of Tetris prior to sleeping. In the subsequent dream report collection, over 60% of participants cited seeing Tetris images in their dreams (5). This illustrates how the boundaries between waking and dreaming cognition are more porous than they appear, with dream content itself serving as a window into the neural mechanisms of memory consolidation. Not all dreaming can be directly tied to our most recent memories, but all dreams are built upon our prior experiences. For example, the appearance of recognisable friends or foes in our dreams relies on our ability to recall their features and mannerisms (6). The bizarre patchwork of familiar situations we encounter in our dreams is also likely a reflection of the adaptive process of memory consolidation, as fragments of our memories are integrated during sleep.

The Night Shift — what is the purpose of dreams?

We may be inching closer to understanding what influences the content of our dreams, but why do we dream in the first place? The Threat Simulation Theory (TST) argues that dreams act as an ancient biological defence mechanism, allowing us to simulate threatening events we may encounter in our waking life (7).
TST suggests that, on an evolutionary scale, being able to simulate threatening events in our sleep allows us to more efficiently perceive and avoid threats whilst awake, leading to greater survival and reproductive success. It is a bit hard to imagine, however, that dreaming about being naked in public is going to be the key to our survival. This is why some scientists suggest that dreams are simply the brain’s attempt to make sense of random neural activity during REM sleep. This Activation-Synthesis Theory proposes that rather than rehearsing for real-life threats, our brains may just be firing off chaotic signals which they then try to weave into bizarre and often disjointed stories (8). Whether dreams serve as a survival tool or are simply the byproduct of random brain activity, they offer a window into the complex workings of the sleeping mind.

Vanishing Visions and the Concept of Dream Amnesia

Have you ever woken up from such an absurd dream it seems impossible to forget, only to have forgotten the details by the end of breakfast? That’s what the experts call “dream amnesia”. It’s estimated that the average person dreams four to six times per night, yet you’d be lucky to remember even one of them by morning (6). At the molecular level, noradrenaline — a neurotransmitter associated with memory consolidation — is at its lowest concentrations while we sleep (9). This depletion could be a key factor contributing to dream amnesia, preventing the transfer of our dream experiences from short-term to long-term memory. Different sleep stages may also influence dream recall (6). It has been suggested that waking up during or just after REM sleep leads to more vivid dream recall. In contrast, dream activity is low during non-REM sleep, and hence waking up during this sleep phase may also contribute to our poor dream recall. Although it can be disappointing to forget these wild dream experiences, dream amnesia may also serve an adaptive purpose.
The “clean slate” hypothesis argues that forgetting dreams allows us to wake with a clear mind, free of the potentially disturbing content of our dreams (10). Alternatively, by maintaining a clear distinction between our dreaming and waking experiences, we are protected from confusing our dreams with reality, preventing anxiety that may otherwise ensue (11). Perhaps this forgetfulness is not a flaw in our memory but a feature of it, helping us to preserve our mental clarity and emotional balance as we transition from the surreal world of our dreams to the demands of our waking life.

In conclusion

We may never fully unlock the secrets of our nightly adventures, but one thing is clear: dreams are a fascinating blend of memory, biology, and mystery. Whether they're ancient survival simulations, emotional clean-ups, or just the brain’s quirky way of entertaining itself while the lights are off, dreams remind us how wonderfully weird and complex the human mind truly is. Next time you find yourself tap dancing with Beyoncé or riding a roller coaster made of spaghetti, just enjoy the ride. Your brain is simply doing what it does best — keeping things entertaining, even in your sleep.

References

Palagini L, Rosenlicht N. Sleep, dreaming, and mental health: A review of historical and neurobiological perspectives. Sleep Medicine Reviews. 2011 Jun;15(3):179–86.
Freud S. The Interpretation of Dreams [Internet]. 1900. Available from: https://psychclassics.yorku.ca/Freud/Dreams/dreams.pdf
Ruby PM. Experimental Research on Dreaming: State of the Art and Neuropsychoanalytic Perspectives. Frontiers in Psychology [Internet]. 2011 Nov 18;2(286). Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3220269/#B107
Wamsley EJ. Dreaming and offline memory consolidation. Current Neurology and Neuroscience Reports [Internet]. 2014 Jan 30;14(3). Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4704085/
Stickgold R.
Replaying the Game: Hypnagogic Images in Normals and Amnesics. Science. 2000 Oct 13;290(5490):350–3.
Nir Y, Tononi G. Dreaming and the brain: from phenomenology to neurophysiology. Trends in Cognitive Sciences [Internet]. 2010 Jan 14;14(2):88–100. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2814941/
Revonsuo A. The reinterpretation of dreams: An evolutionary hypothesis of the function of dreaming. Behavioral and Brain Sciences [Internet]. 2000 Dec;23(6):877–901. Available from: https://pubmed.ncbi.nlm.nih.gov/11515147/
Hobson JA, McCarley RW. The brain as a dream state generator: an activation-synthesis hypothesis of the dream process. The American Journal of Psychiatry [Internet]. 1977 [cited 2019 Nov 14];134(12):1335–48. Available from: https://www.ncbi.nlm.nih.gov/pubmed/21570
Mitchell HA, Weinshenker D. Good night and good luck: Norepinephrine in sleep pharmacology. Biochemical Pharmacology. 2010 Mar;79(6):801–9.
Eugene AR, Masiak J. The Neuroprotective Aspects of Sleep. MEDtube Science [Internet]. 2015 Mar;3(1):35. Available from: https://pmc.ncbi.nlm.nih.gov/articles/PMC4651462/
Zhao J, Schoch SF, Valli K, Dresler M. Dream function and dream amnesia: dissolution of an apparent paradox. Neuroscience and Biobehavioral Reviews. 2024 Nov 20;167.

  • The Lost Link: A Mystery in Evolution | OmniSci Magazine

The Lost Link: A Mystery in Evolution by Eymi Gladys Carcamo Rodriguez 3 June 2025 Edited by Ciara Dahl Illustrated by Anabelle Dewi Saraswati

The Enigma of Evolutionary Gaps

Few scientific mysteries have captured the public imagination as deeply as the search for the “missing link”, a hypothetical species that bridges the evolutionary gap between ancient primates and modern humans. For generations, scientists and the public alike imagined that a single fossil discovery would neatly connect our distant ancestors to Homo sapiens. Yet as our understanding of evolution has grown, it has become clear that the story is far more complex. Rather than a single missing puzzle piece, human evolution is now regarded as a tangled web of interconnected species, with many branches and dead ends (1).

The Myth of the Missing Link

Historical Context

The term “missing link” surged in popularity during the 19th century, following Charles Darwin’s ground-breaking work on the theory of evolution. Early evolutionary theorists envisioned a linear process: one species evolving directly into another, with the “missing link” as the crucial fossil that would clearly show how humans evolved from apes. This view persisted in popular culture, even as scientific evidence began to suggest otherwise. In Victorian England, the idea of a missing link became a cultural phenomenon. Fossil discoveries – like the first Neanderthal skulls – were hailed as evidence of humanity’s ascent from apes. However, modern evolutionary biology has revealed that evolution is not linear, but a branching tree, filled with dead ends and interwoven paths (2).

The Fossils: Pieces of a Complex Puzzle

Despite a shift in scientific thinking, fossil discoveries remain central to our understanding of human origins. Iconic finds such as Australopithecus afarensis (“Lucy”), Homo habilis, and Homo naledi have each provided snapshots of different stages in human evolution.
Yet none of these fossils fit the mould of the elusive “missing link” (3, 4). Australopithecus afarensis (c. 3.9–2.9 million years ago) walked upright and had both human-like and ape-like features. Lucy’s skeleton suggests a close connection to the human lineage, but her brain size and cranial features remain distinctly primitive. Homo habilis, one of the earliest members of our genus, shows evidence of tool use and increased brain size, but still differs significantly from modern humans. These fossils demonstrate that human evolution was not a simple progression from one species to the next. Many early hominins coexisted for millions of years, and some, like Homo habilis, may have lived alongside more primitive ancestors such as Australopithecus. The idea of a singular “missing link” is now viewed as a historical artifact, replaced by the recognition that human evolution is a mosaic, with branches and offshoots that defy easy classification.

The Persistent Gaps

Despite advances in palaeontology and genetics, many questions about human evolution remain unanswered. Why did early human brains grow so rapidly? Around 2 million years ago, our ancestors experienced a dramatic increase in brain size; the causes – whether tool use, diet, or social complexity – are still debated. How much did early humans interbreed with other hominins? Ancient DNA reveals that Homo sapiens interbred with Neanderthals and Denisovans, raising questions about the scale and impact of these interactions. Why did Homo sapiens spread so quickly across the globe? Our species began migrating out of Africa roughly 60,000 years ago, adapting rapidly to new environments; the role of culture, technology, and innovation in this expansion is still being explored (5). These questions highlight the complexity and dynamism of human evolution, suggesting that the process was shaped by a mix of biological and environmental factors.
DNA: The New Frontier in the Search for the Missing Link

While fossils have provided crucial insights, the latest breakthroughs come from genetic research. Advances in DNA sequencing allow scientists to peer into the ancient past in unprecedented ways. One of the most surprising findings is the discovery of a “ghost population” – an ancient group whose DNA is present in modern humans, but whose fossils have never been found. These genetic traces suggest that entire populations once co-existed and interbred with Homo sapiens, yet left no physical evidence behind. This challenges the traditional fossil-focused search for the missing link and highlights the importance of genetic inheritance in understanding our origins (6). “The idea that entire populations could have existed and disappeared without leaving any fossil evidence challenges our traditional search for the missing link. It suggests that the story of human evolution is not just about the fossils we find, but also about the genetic material we carry with us today” (7).

The Real Missing Link: A Paradigm Shift

The quest for a single missing link is now seen as outdated. Evolution is not a straight line but a complex web, with species branching, merging, and sometimes vanishing without a trace. Rather than a specific fossil, the “missing link” has become a symbol of our evolving understanding of what it means to be human. Each new discovery – whether in the fossil record or in our DNA – forces us to rethink our place in nature and the forces that shaped our evolution.

Conclusion: The Journey of Discovery Continues

The story of human evolution remains incomplete. Each new fossil and genetic breakthrough brings us closer to understanding our origins, but the mystery endures. The search for the missing link may never be resolved, and perhaps it is not meant to be. Instead, it is the ongoing process of discovery that enriches our understanding of who we are and where we came from.
References Veldhuis D, Kjærgaard PC, Maslin M. Human Evolution: Theory and Progress. In: Smith C, editor. Encyclopedia of Global Archaeology. Cham: Springer International Publishing; 2020. p. 5317-30. Kjaergaard PC. 'Hurrah for the missing link!': a history of apes, ancestors and a crucial piece of evidence. Notes Rec R Soc Lond. 2011;65(1):83-98. Martinón-Torres M, Garate D, Herries AIR, Petraglia MD. No scientific evidence that Homo naledi buried their dead and produced rock art. J Hum Evol. 2024;195:103464. Schrein CM. Lucy: A marvelous specimen. Nature Education Knowledge. 2015;6(2). Chagi S. The Mosaic of Human Evolution: Challenging the Concept of a Singular 'Missing Link'. World of Paleoanthropology; 2024. Available from: https://worldofpaleoanthropology.org/2024/08/27/the-mosaic-of-human-evolution-challenging-the-concept-of-a-singular-missing-link/ . Sample I. Scientists find evidence of 'ghost population' of ancient humans. The Guardian Australia; 2020. Available from: https://www.theguardian.com/science/2020/feb/12/scientists-find-evidence-of-ghost-population-of-ancient-humans . Banich MT. The Missing Link: The Role of Interhemispheric Interaction in Attentional Processing. Brain and Cognition. 1998;36(2):128-57.


< Back to Issue 7 Soaring Heights: An Ode to the Airliner by Aisyah Mohammad Sulhanuddin 22 October 2024 edited by Lauren Zhang illustrated by Esme MacGillivray A smile at your neighbour-to-be, a quick check and an awkward squeeze as you sidle into your seat: 18A. Window seat, a coveted treasure! A clatter. Whoops! As you fumble for your dropped phone, your feet–which jut out ungracefully onto the aisle–end up as a speed bump for the wheels of someone's carry-on. Yeowch! It isn't without more jostling that everyone finally settles into their seats, and with a scan at the window, the tarmac outside is looking busy. Hmm. It makes sense–this flight is just one of the 36.8 million trips around the world flown over the past year (International Air Transport Association, 2024). Commercial aviation has clocked many miles since its first official iteration in 1914: a 27-km long "airboat" route established around Tampa Bay, Florida (National Air and Space Museum, 2022). Proving successful, it catalysed an industry and led to the establishment of carriers like Qantas and the Netherlands' KLM. Mechanics of Ascent (and Staying Afloat) As said Qantas plane pulls up in the window view, its tail dipped red with the roo, it taxies ahead of you on the tarmac. Your plane is now at the front of the runway queue and the engines begin to roar. You're thrust backwards as acceleration moulds you to your seat. For a split second, as you look out the window, you can't help but wonder–how on earth did you even get up here? How is this heavy, huge plane not falling out of the sky? A plane's ability to stay aloft lies in its wings, which keep it flying by generating lift (NASA, 2022). Lift is one of the four forces acting on a plane, countering weight, the force of gravity acting in the opposite direction ( figure 1a ).
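How large must that lift force be? A rough feel for the numbers comes from the standard lift equation, L = ½ρv²SC_L, which ties lift to air density, speed and wing area. Below is a minimal, illustrative sketch in Python; the aircraft figures (speed, wing area, lift coefficient) are assumed ballpark values for a narrow-body airliner, not numbers taken from the sources cited here.

```python
def lift_force(rho, v, wing_area, c_l):
    """Lift in newtons, from the standard lift equation L = 0.5 * rho * v^2 * S * C_L.

    rho: air density (kg/m^3), v: airspeed (m/s),
    wing_area: wing planform area (m^2), c_l: lift coefficient (dimensionless).
    """
    return 0.5 * rho * v ** 2 * wing_area * c_l

# Assumed ballpark numbers for a narrow-body airliner at take-off:
rho = 1.225        # sea-level air density, kg/m^3
v = 80.0           # take-off speed, roughly 155 knots
wing_area = 122.0  # wing area, m^2
c_l = 1.5          # high-lift configuration (flaps extended)

lift = lift_force(rho, v, wing_area, c_l)
print(f"Lift ~ {lift / 1000:.0f} kN, supporting about {lift / 9.81 / 1000:.0f} tonnes")
```

With these assumed inputs the lift works out to roughly 700 kN, on the order of the weight of a fully loaded narrow-body jet; because lift grows with the square of speed, a plane that seems impossibly heavy at the gate has more than enough lift once the runway roll brings it up to speed.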
A plane's wings are constructed in a curved 'airfoil' shape with optimal aerodynamic properties: air deflected over the curved upper surface speeds up, and as its velocity increases, its pressure decreases, as per Bernoulli's principle. The pressure below the wing remains higher, and this pressure difference generates a lift force that pushes the plane upwards (NASA, 2022) ( figure 1b ). Figure 1a. Forces that act on a plane. Note. From Four Forces on an Airplane by Glenn Research Centre. NASA, 2022. https://www1.grc.nasa.gov/beginners-guide-to-aeronautics/four-forces-on-an-airplane/ . Copyright 2022 NASA. Figure 1b. An airfoil, with geometric properties suitable for generating lift. Note. From Four Forces of Flight by Let's Talk Science. Let's Talk Science, 2024. https://letstalkscience.ca/educational-resources/backgrounders/four-forces-flight . Copyright 2021 Let's Talk Science. Looking laterally, the thrust of a plane's engines counters the horizontal drag force, which airfoils minimise whilst maximising lift. Advancements in plane design over the mid-20th century focused on optimising this 'lift-to-drag ratio' for greater efficiency, a priority stemming from the austere, military landscape of World War II (National Air and Space Museum, 2022). Influenced by warplane manufacturing trends, the commercial sphere saw a transition from wooden to durable aluminium frames. In conjunction with this, double-wing biplanes were superseded by single-wing monoplanes ( figure 2a, b ), which had a safer configuration that reduced airflow interference whilst maximising speed and stability (Chatfield, 1928). Figure 2a. A biplane, the De Havilland DH-82A Tiger Moth. Note. From DH-82A Tiger Moth [photograph] by Temora Aviation Museum. Temora Aviation Museum, 2017. https://aviationmuseum.com.au/dh-82a-tiger-moth/ . Copyright 2024 Temora Aviation Museum. Figure 2b. A monoplane, an Airbus A310. Note.
From Airbus A310-221, Swissair AN0521293 [photograph] by Aragão, P, 1995. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Airbus_A310-221,_Swissair_AN0521293.jpg CC BY-SA 3.0. Taking a Breather Without really noticing it, you're somewhat upright again. Employing head shakes and gulps to make your own ears pop, you can also hear the babies bawling in discomfort a few aisles back. Blocked ears are our body's response to atmospheric pressure changes that occur faster than our ears can adjust to (Bhattacharya et al., 2019). Atmospheric pressure describes the weight of air in the atmosphere above a given region of the Earth's surface (NOAA, 2023), and it decreases with altitude. Our bodies are suited to pressure conditions at sea level, allowing sufficient intake of oxygen through saturated haemoglobin within the bloodstream. The average human body can maintain this intake up to about 10,000 ft (around 3,000 m), with altitudes exceeding this likely to result in hypoxia and impairment (Bagshaw & Illig, 2018). Such limits have had implications for commercial flying. Trips in the early era were capped at low altitudes and proved highly uncomfortable: passengers were exposed to chilly winds, roaring engines and thinner air, and pilots were forced to navigate around geographical obstacles like mountain ranges and low-lying weather irregularities. However, this changed in 1938 when Boeing unveiled the 307 Stratoliner, which featured pressurised cabins. Since then, air travel above breathing limits became possible, morphing into the high-altitude trips taken today (National Air and Space Museum, 2022). Via a process still used today, clean excess air drawn from the engines before combustion is diverted away, cooled, and pumped into the cabin (Filburn, 2019). During ascent and descent, the pressure controller makes incremental adjustments, regulating air inflow based on the cockpit's readings of cruising altitude.
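That 10,000 ft limit can be made concrete with the isothermal barometric formula, p = p₀·exp(−Mgh/RT). The Python sketch below is a simplified model (it assumes a constant atmospheric temperature, which the real atmosphere does not have), but it shows how quickly pressure, and with it available oxygen, falls away with altitude.

```python
import math

def pressure_at_altitude(h_m, p0=101_325.0, temp_k=288.15):
    """Approximate atmospheric pressure (Pa) at altitude h_m metres,
    using the isothermal barometric formula p = p0 * exp(-M*g*h / (R*T))."""
    M = 0.0289644  # molar mass of dry air, kg/mol
    g = 9.80665    # gravitational acceleration, m/s^2
    R = 8.31446    # universal gas constant, J/(mol K)
    return p0 * math.exp(-M * g * h_m / (R * temp_k))

for feet in (0, 10_000, 35_000):
    metres = feet * 0.3048
    p = pressure_at_altitude(metres)
    print(f"{feet:>6} ft: {p / 1000:5.1f} kPa ({100 * p / 101_325:.0f}% of sea level)")
```

At around 10,000 ft this simple model gives roughly 70% of sea-level pressure, consistent with the hypoxia threshold described above; at typical cruising altitudes it drops below a third, which is why the cabin must be pressurised at all.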
Mass computerisation in the late 20th century enabled precise real-time readings, allowing safety features like sensitive pressure release valves, sensor-triggered oxygen mask deployment, or manual depressurisation. However, the sky does indeed dictate the limits: cabin pressure is set to the equivalent of a slightly higher altitude than sea level to avoid straining the fuselage (Filburn, 2019). This minor pressure discrepancy plays a part in why we feel weary and tired whilst flying–our cells are working at an oxygen deficit for the duration of the flight. Your yawn just about now proves this point. Time for your first snooze of many… Food, Glorious Food A groggy couple of hours later and it's either lunch time or dinner; your head isn't too sure. You wait with bated breath, anticipating the arrival of the flight attendant wheeling the bulky cart through the narrow aisle... Only to be met with a chicken sausage that vaguely tastes like chicken, with vaguely-mashed potato and a vaguely-limp salad on the side. Oh, and don't forget the searing sweetness of the jelly cup! You're far from alone in your lukewarm reception of your lunch-dinner. Aeroplane food remains notorious amongst travellers for its supposedly flat taste. Whilst airlines like Thai Airways and Air France have employed Michelin-star chefs to translate an assortment of gourmet cultural dishes to tray table fare (De Syon, 2008; Thai Airways, 2018), the common culprit responsible for the less-than-appetising experience remains – being on a plane. As Spence (2017) details, multiple factors play into how you rate your inflight dinner, many relating to the effects of air travel on our bodies. The 'above sea level' air pressure within the plane alters taste thresholds at cabin altitudes of 5,000-10,000 ft (around 1,500-3,000 m), changing our sensitivity to the tart undertones of everyday foods.
Dry pressurised air that cycles through the cabin is about as humid as a desert, which hampers our smell perception and thus taste. Less intuitively, the loud ambient noise of the plane's engines also appears to hinder olfactory perception, though the reason why remains unclear. Nevertheless, alleviating the grumbling passenger and stomach is an area of interest with a few successful forays. One angle of approach involves food enhancement. Incorporating sensory and textural elements into meals, such as chillies and the occasional crunch or crackle, can compensate for impaired perception. Interestingly, umami has been observed as the taste sense least affected mid-air (Spence, 2017), inspiring British Airways' intense and aromatic umami-rich menus – though with the unintended drawback of threatening to stink up the plane on multiple occasions (Moskvitch, 2015). Meanwhile, Singapore Changi Airport houses a simulation chamber for food preparation in a low-pressure environment, taking it up a notch in both quality and cost (Moskvitch, 2015). Alternatively, passengers can be psychologically nudged into perceiving food as more appetising than it is in reality. Some examples include the use of noise-cancelling headphones, cabin lighting designed to enhance the appearance of food, or appealing language for describing meals. Both on the ground and in the air, humans have been found to respond more positively to dishes described in an appetising and detailed manner (Spence, 2017), rather than the vague choices of "sausage or pasta". Whilst these innovations have covered some ground, De Syon (2008) also notes that sociology can influence our perceptions of food on a plane. The enjoyment of meals is dependent upon core social rituals like dining communally or comforting meal-time habits–both of which are tricky to navigate and achieve on a packed plane with front-on seating. What Goes Up Must Come Down Not long now!
Accompanied by the movies you've played for the first time in your life and oodles of complimentary tea, there's about half an hour left until landing. Jolt! The seatbelt sign is bold and bright as you can feel the plane gradually descending–it's getting bumpy! As your plane rocks about and the airport comes into view as a speck in the distance, your descent is at the mercy of the crosswinds… and turbulence? Not only do these vortices of air cause havoc mid-flight near cloud bands and thunderstorms (National Weather Service, 2019), they also pose a challenge during landing in the form of local, 'clear-air' convection currents invisible on radar. These currents often occur in summer months and in the early afternoon, when incoming solar energy is at its highest. In particular, they emerge when the surface of the earth is unevenly heated across regions such as oceans, grassland or, in this case, the pavement near the airport. This creates pockets of warm and cool air that rapidly rise and fall, forming downdrafts that buffet planes caught within them ( figure 3 ). Luckily, pilots are specifically trained to recognise these surface winds, and can adjust their landing glidepath to suit local conditions forewarned in Terminal Aerodrome Forecasts for a steady, controlled descent (BOM, 2014). Figure 3. Varying glidepath due to local convection currents - note the different types of surfaces. Note. From Turbulence by National Weather Service. National Weather Service, 2019. https://www.weather.gov/source/zhu/ZHU_Training_Page/turbulence_stuff/turbulence/turbulence.htm . Copyright 2019 National Weather Service. Even with its bumpier experiences that draw endless complaints, it is undeniable that commercial aviation has grown tremendously over the century to deliver the safe, efficient and comfortable flights we are accustomed to today.
Building upon a history of ingenuity and scientific discovery, it's almost certain that the industry will soar to even greater heights in our increasingly globalised world. Enough talk–you're finally here! It's a relief when you clamber from your seat, giving those arms and legs a much needed stretch. Now, time to trot along on solid ground… …and onto the connecting flight. Cheap stopover tickets. Darn it. References Aragão, P. (1995). Airbus A310-221, Swissair AN0521293. Wikimedia Commons. https://upload.wikimedia.org/wikipedia/commons/9/9b/Airbus_A310-221%2C_Swissair_JP5963897.jpg Bagshaw, M., & Illig, P. (2019). The aircraft cabin environment. Travel Medicine, 429–436. https://doi.org/10.1016/b978-0-323-54696-6.00047-1 Bhattacharya, S., Singh, A., & Marzo, R. R. (2019). "Airplane ear"—A neglected yet preventable problem. AIMS Public Health, 6(3), 320–325. https://doi.org/10.3934/publichealth.2019.3.320 BOM. (2014). Hazardous Weather Phenomena - Turbulence. Bureau of Meteorology. http://www.bom.gov.au/aviation/data/education/turbulence.pdf Chatfield, C. H. (1928). Monoplane or Biplane. SAE Transactions, 23, 217–264. http://www.jstor.org/stable/44437123 De Syon, G. (2008). Is it really better to travel than to arrive? Airline food as a reflection of consumer anxiety. In Food for Thought: Essays on Eating and Culture (pp. 199–207). McFarland. Filburn, T. (2019). Cabin pressurization and air-conditioning. Commercial Aviation in the Jet Era and the Systems That Make It Possible, 45–57. https://doi.org/10.1007/978-3-030-20111-1_4 International Air Transport Association. (2024). Global Outlook for Air Transport. https://www.iata.org/en/iata-repository/publications/economic-reports/global-outlook-for-air-transport-june-2024-report/ Let's Talk Science. (2024). Four Forces of Flight. Let's Talk Science. https://letstalkscience.ca/educational-resources/backgrounders/four-forces-flight Moskvitch, K. (2015, January 12). Why does food taste different on planes?
British Broadcasting Corporation. https://www.bbc.com/future/article/20150112-why-in-flight-food-tastes-weird NASA. (2022). Four forces on an Airplane. Glenn Research Center | NASA. https://www1.grc.nasa.gov/beginners-guide-to-aeronautics/four-forces-on-an-airplane/ National Air and Space Museum. (2022). The Evolution of the Commercial Flying Experience. National Air and Space Museum; Smithsonian. https://airandspace.si.edu/explore/stories/evolution-commercial-flying-experience National Weather Service. (2019). Turbulence. National Weather Service. https://www.weather.gov/source/zhu/ZHU_Training_Page/turbulence_stuff/turbulence/turbulence.htm NOAA. (2023). Air pressure. National Oceanic and Atmospheric Administration. https://www.noaa.gov/jetstream/atmosphere/air-pressure Spence, C. (2017). Tasting in the air: A review. International Journal of Gastronomy and Food Science, 9, 10–15. https://doi.org/10.1016/j.ijgfs.2017.05.001 Temora Aviation Museum. (2017). DH-82A Tiger Moth. Temora Aviation Museum. https://aviationmuseum.com.au/dh-82a-tiger-moth/ Thai Airways. (2018). THAI launches Michelin Star street food prepared by Jay Fai for Royal Silk Class and Royal First Class passengers. Thai Airways. https://www.thaiairways.com/en_ID/news/news_announcement/news_detail/News33.page


< Back to Issue 4 Echidnas: Gentle Courters In The Competitive Animal Kingdom by Emily Siwing Xia 1 July 2023 Edited by Maddison Moore and Arwen Nguyen-Ngo Illustrated by Christy Yung When we think of animals or nature in competition, we picture aggression and savagery over resources such as food, territory and mates. Beyond aggression, however, the variety of animal behaviour associated with competition for resources is immense. A gentle form of competition is the bizarre mating ritual of our own unique Australian fauna: the echidna. Known as Tachyglossus aculeatus and as spiny anteaters, echidnas are quill-covered animals living in Australia and New Guinea. Since Australia is so isolated from other continents, our fauna has often been regarded by outsiders with an air of mystery and awe. To start with, echidnas belong to the same order as the famed platypus: the monotremes (egg-laying mammals). Surviving monotreme species can only be found in Australia and New Guinea. The four species of echidnas, along with their duck-billed cousin, are the very few surviving members of this classification. Despite the similarities in name and appearance, both being covered with hollow, spiny quills, these spiny anteaters are not actually closely related, genetically or evolutionarily, to the more well-known anteaters of the Americas. Echidnas feed on a diet of ants and termites, using their electroreceptive beaks to find burrowing prey and digging them out with their claws. These powerful claws are long and curved backwards, specially designed for digging. Funnily, when the British Museum received an echidna specimen, they switched the backward claws frontwards, thinking that it was a mistake. As mentioned before, mating rituals can be a violent (even bloody) ordeal in nature.
From barbed penises in cats and deadly fights for females in elephant seals, straight to sexual cannibalism in praying mantises, there seems to be an endless list of examples of brutality in the animal world. However, behind these brutal images is another side of nature that seems gentle and even humorous at times: for example, the ritual of our spiny suitors. Echidna mating rituals begin with the formation of a mating train. From June to September in Australia, male echidnas court by lining up — from their beak tips to their spiny bottoms — to follow behind one single female. These trains can have more than 10 males in line and last for days, even weeks, at a time. During the mating season, male echidnas may leave a train to join or form a different train behind another eligible female. Their mating efforts often lead males to travel long distances, even beyond their own home ranges. If the males get interrupted and lose track of the female, they reform their train by picking up her scent with their snouts in the air. They are such determined suitors that it is extremely difficult for a female echidna to evade them. Usually, there is one male that remains through the long-winded process, and he gets to mate with the female. The reason behind forming echidna trains is unknown, but scientists generally agree that it is correlated with some type of selection process. One theory is that it aids the female in weeding out the weaker males by tiring them out until the last one remains. Another is that the female is waiting for the right male, the one she is interested in, to get behind her. Either way, it is a process of determination and perseverance. On the exceedingly rare occasions where there are still multiple suitors left at the end, the males dig a trench surrounding the female and compete through head bumping.
Although there is still much not understood about head bumping due to its scarce occurrence, it is generally considered an echidna social behaviour that serves to maintain dominance. Head bumps are generally only given by dominant echidnas to subordinate echidnas who haven't recognised their dominance status and moved away. This rarely happens and is a relatively peaceful affair compared to conflicts in other animals. The winner of the head bumping ritual then digs until the previously mentioned trench is deep enough for him to be below the female so they can mate through their cloacas. About 23 days after copulation, the female lays a soft-shelled, leathery egg into a temporary pouch, where it incubates for 10 more days before a tiny puggle (a baby echidna or platypus) hatches. The puggle drinks milk from the female's special mammary hairs until it is capable of feeding itself and has a full covering of spines and fur. At last, the matured echidna leaves its mother's burrow to live independently. The mating rules and practices amongst echidnas are a demonstration of patience and courtesy. This contrasts with the common public misconception of nature as merciless, characterised by brutal competition for food, social status and mating opportunities. Although they are in the same competition for a mate, the lines of waddling echidnas are polite, organised and humorous. Behind the mask of brutality, nature continues to have its pleasant secrets. References Morrow G, Nicol SC. Cool Sex? Hibernation and Reproduction Overlap in the Echidna. PLoS One. 2009 Jun 29;4(6):e6070. Echidna [Internet]. AZ Animals. [cited 2023 Jun 22]. Available from: https://a-z-animals.com/animals/echidna/ Anne Marie Musser. Echidna | Britannica [Internet]. 2023 [cited 2023 Jun 22]. Available from: https://www.britannica.com/animal/echidna-monotreme Echidna trains: explained [Internet]. Australian Geographic. August 6, 2021 [cited 2023 Jun 22].
Available from: https://www.australiangeographic.com.au/topics/wildlife/2021/08/echidna-trains-explained/ Lindenfors P, Tullberg BS. Evolutionary aspects of aggression the importance of sexual selection. Adv Genet. 2011;75:7–22. Warm Your Heart With Videos of 'Echidna Love Trains' [Internet]. Atlas Obscura. September 1, 2017. [cited 2023 Jun 22]. Available from: http://www.atlasobscura.com/articles/echidna-love-trains


Doubting time is real? We spoke to first-year uni student Mahsa Nabizada about her upcoming article on this very topic, plus advice for starting university and why Thorium has a special place in her heart. Meet OmniSci writer Mahsa Nabizada Mahsa is a writer at OmniSci and a first-year university student planning to study mathematical physics. For Issue 4: Mirage, she is writing about the illusion of time. interviewed by Caitlin Kane What are you studying? I'm studying a Bachelor of Science, and I'm in my first year so I haven't majored yet, but what I'm looking to major in right now is mathematical physics. Do you have any advice for yourself at the beginning of semester, the start of your uni journey? First of all, take it easy. This is a new experience, not only moving out of home, but transitioning from high school to university. I think take your time adjusting to everything and be kind to yourself. Also, really be open to different opportunities, whether that's meeting new people or learning new topics and new areas. In high school, the fields you're exposed to are very limited but in university it's much broader. Just like the amount of clubs that are available or opportunities to meet people from different industries. What first got you interested in science? I have always found a natural inclination towards science subjects, and the amount of growth in the industry, whether advancements in technology or health… All of those things I can see the impact in society on the day to day and how it would impact the average person. There are new job descriptions being developed, areas that will be opened in five years. I guess the opportunities that are available, and the excitement and impact that STEM can make in society and to the average person.
Do you have a dream role as a scientist, like something that you've always imagined doing or that you're working towards? I don't have a role in mind, but I do have things I'd love to be involved in. One of those things is research… development in any area, especially STEM areas. I think I'd love to be involved in some sort of research in a future role, no matter what area. I would love to be involved personally or professionally in some kind of community service, like volunteering to work with kids or high school students who are interested in STEM. In high school, I had people who spoke to me about STEM and I found that really helpful. Things like that do make a big impact on students and what they choose or what they are encouraged in going forward. I would love to be working with a team of diverse professionals solving issues that affect people in society day-to-day. When diverse minds come together, there is opportunity for great things to come out of that. I think that is how I would like to make a positive impact. What is your role at OmniSci? I am a writer, and basically I'm given a platform to write an article on the theme, about something that I'm interested in. There's quite a lot of flexibility to that, and part of the great thing about this role is that I'm also supported by an editor to help me with my ideas. How did you get involved with OmniSci? What made you want to get involved? In O-Week, I met someone who mentioned the club. It stuck in my head. During week two or three, I was like, I really want to join some clubs, ones that I can contribute in and make some friends, ones that would have some like-minded students in them. Hence, I became a member and I heard about the role of writer in the email. Are there other roles or article ideas that you would be interested in trying in the future? I definitely would like to keep writing. There is just so much in the astrophysics area that I'm interested in, but also in the STEM area in general.
Moving forward I'd like to contribute as a writer interviewing really interesting people at our university, the University of Melbourne. I think we have some great researchers, amazing talented people, on different projects. As I've been supported by my editor and Editor-in-Chief, I would like in the future to support other writers, as an editor or as part of another role in the club, helping other members develop their ideas. Can you give us a sneak peek of what you're working on this issue? Examining the illusion of time is something that I've thought about before, how our perception of time on a day-to-day basis is subjective. Sometimes it flies by, sometimes it goes so slowly, and why we feel that. Because I come from a physics background, I wanted to bring physics into this and examine those experiences. Right now, I am at the writing stage on the experience of time, how it varies based on our surroundings, emotional state and physical state. It is possible that it's nothing more than an illusion created by the limitations of our perception and conditions of our observation. Moving forward I would like to explore this — it's a fascinating topic — and interview someone in the field of astrophysics more on the theory of relativity and how time moves relative to the observer, time's connection with gravity… that's where I'm at right now. What do you like doing in your spare time (when you're not contributing at OmniSci)? I enjoy reading about a variety of different topics, whether that's fiction, physics, different science areas, but also philosophy. I enjoy sometimes playing chess, hanging out with my friends, and I'm also into watching different plays. I watched Macbeth recently and I'm going to watch another play soon. Do you have any recommendations for any books, articles, plays, other kinds of things that you've been getting into? With plays I would say it can depend on what you like.
If you find that a play is hard to read, I would suggest not giving up, and going and seeing if you can watch it. Sometimes that can be more engaging. With philosophy I just like researching… there’s lots of different philosophical resources out there. I learn a lot when I’m talking to someone and they don’t agree with me and I go in with an open mind. By the end of the conversation my opinion might have changed, or I might have learnt a completely new philosophical idea that might have changed my view on a certain issue. Which chemical element would you name your firstborn child (or pet) after? I would say... Uranium or Thorium. In grade eleven or grade twelve, my physics assignment was on nuclear power so I spent a lot of time researching Uranium and Thorium, and nuclear fusion, nuclear fission and nuclear power in general. I spent a lot of time, not just on my assignment, but in my own time learning about nuclear power and its future. Either of those, just because I’ve spent a lot of time researching it. I don’t think a child, but potentially a pet if I run out of other ideas. Is there anything else that you wanted to share with the OmniSci community? I think the club in general is quite inspiring. The fact that most people are volunteers and students are taking initiative and time out of their schedule to be a part of this. Read Mahsa's articles Big Bang to Black Holes: Illusionary Nature of Time


< Back to Issue 5 Wicked Invaders of the Wild Serenie Tsai 24 October 2023 Edited by Krisha Darji Illustrated by Jennifer Nguyen Since the beginning of time, there has been a continuous flow of species in and out of regions, establishing a foundation for ecosystems. When species are introduced into new environments and spread to the point of interfering with native species, they become invasive: species that expand into new areas and pose a threat to those already there. Factors contributing to their menacing status include feeding heavily on native species, a lack of predators, and outcompeting native species for resources (Sakai et al., 2001). Invasive species shouldn't be confused with feral species, which are domestic animals that have reverted to their wild state, or pests, which are organisms harmful to human activity (Contreras-Abarca et al., 2022; Hill, 1987). Furthermore, not all introduced species are invasive; crops such as wheat, tomato and rice have been integrated into native agriculture successfully. Many species were introduced accidentally and turned invasive; however, some were intentionally introduced to manage other species, and a lack of foresight resulted in detrimental ecological impacts. Each year, invasive species cost the global economy over a trillion dollars in damages (Roth, 2019). Claimed ecological benefits of invasive species Contrary to the name, invasive species could potentially benefit the invaded ecosystem. Herbivores can reap the benefits of the introduced biodiversity, and native plants can increase their tolerance (Brändle et al., 2008; Mullerscharer, 2004). Deer and goats aid in suppressing introduced grasses and inhibit wildfires (Fornoni, 2010). Likewise, species such as foxes and cats have the capacity to regulate the number of rats and rabbits.
Furthermore, megafaunal extinction has opened opportunities to fill empty niches; for example, camels could fill the ecological niche of a now-extinct giant marsupial (Chew et al., 1965; Weber, 2017). Thus, studies indicate the possibility of species evolving to fill vacant niches (Meachen et al., 2014). Below, I'll explore the rise and downfall of invasive species in Australia. Cane toad Cane toads are notorious for their unforeseen invasion. Originally introduced as a biological control for cane beetles in 1935, their newcomer status was advantageous to their proliferation and dominance over native species (Freeland & Martin, 1985). Several native predators were decimated, as native fauna in Australia lacked resistance to the poison that cane toads use as a defence mechanism (Smith & Phillips, 2006). However, research suggests an evolutionary adaptation to this poison is underway (Phillips & Shine, 2006). There is no universal method to regulate cane toads, so efforts to completely eradicate them are considered futile. However, populations are kept low by continuously monitoring areas and targeting cane toad eggs and adults. Common Myna The origins of the Common Myna introduced into New South Wales and Victoria are uncertain; however, it was introduced into Northern Queensland as a mechanism to predate on grasshoppers and cane beetles (Neville & Lindsay, 2011), and into Mauritius to control locust plagues (Bauer, 2023). The Common Myna poses an alarming threat to ecosystems and mankind; its severity is elucidated by its position on the world's top 100 invasive species list (Lowe et al., 2000). It has spurred human health concerns, including the spread of mites and acting as a vector for diseases destructive to humans and farm stock (Tidemann, 1998).
The Myna also has a vicious habit of competing with cavity-nesting native birds, forcing them and their eggs from their nests; however, the extent of this is unclear, and the influence of habitat destruction also needs to be considered (Grarock et al., 2013). The bird's impact lacks firm empirical evidence, so appropriate management remains undecided (Grarock et al., 2012). Modifying habitats could nevertheless be advantageous, since Mynas affect urban areas most, whereas intervening in their food resources would be rendered useless by their highly variable diet (Brochier et al., 2010).

Zebra mussels

Zebra mussels are a textbook accidental invader, carried into new waters in the ballast water of cargo ships. From an ecological perspective, zebra mussels overgrow the shells of native molluscs and unbalance the ecosystem (Dzierżyńska-Białończyk et al., 2018). From a societal perspective, they colonise docks, ship hulls and water pipes and damage power plants (Lovell et al., 2006). Controlling their spread involves manual removal, chlorine, thermal treatment and more.

Control methods

It is crucial to deploy preventative measures to mitigate the spread of invasive species before it becomes irreversible. A few known control methods are employed for certain types of animals, but with no guarantee of success. Some jurisdictions place bounties on catching the animals; however, the results of this technique are mixed. In 1893, foxes were targeted with financial incentives, but the scheme was deemed ineffective (Saunders et al., 2010). By contrast, government bounties introduced for Tasmanian tigers in 1888 drove a drastic population decline and their eventual extinction (National Museum of Australia, 2019). Similarly, when the prevalence of cane toads became unbearable, armies were deployed and fences in rural communities were funded.
Moreover, in 2007, inspired by a local pub's scheme of handing out beer in exchange for cane toads, the government staged a "Toad Day Out" to establish a bounty for cane toads (Williams, 2011). Whether introduced intentionally or by accident, invasive species are detrimental to ecosystems, and their management is still a work in progress.

References

Bauer, I. L. (2023). The oral repellent – science fiction or common sense? Insects, vector-borne diseases, failing strategies, and a bold proposition. Tropical Diseases, Travel Medicine and Vaccines, 9(1), 7.
Brändle, M., Kühn, I., Klotz, S., Belle, C., & Brandl, R. (2008). Species richness of herbivores on exotic host plants increases with time since introduction of the host. Diversity and Distributions, 14(6), 905–912. https://doi.org/10.1111/j.1472-4642.2008.00511.x
Brochier, B., Vangeluwe, D., & Van den Berg, T. (2010). Alien invasive birds. Revue Scientifique et Technique, 29(2), 217.
Cayley, N. W., & Lindsey, T. (2011). What bird is that?: A completely revised and updated edition of the classic Australian ornithological work. Walsh Bay, N.S.W.: Australia's Heritage Publishing.
Chew, R. M., & Chew, A. E. (1965). The primary productivity of a desert-shrub (Larrea tridentata) community. Ecological Monographs, 35(4), 355–375. https://doi.org/10.2307/1942146
Contreras-Abarca, R., Crespin, S. J., Moreira-Arce, D., & Simonetti, J. A. (2022). Redefining feral dogs in biodiversity conservation. Biological Conservation, 265, 109434. https://doi.org/10.1016/j.biocon.2021.109434
Fornoni, J. (2010). Ecological and evolutionary implications of plant tolerance to herbivory. Functional Ecology, 25(2), 399–407. https://doi.org/10.1111/j.1365-2435.2010.01805.x
Freeland, W. J., & Martin, K. C. (1985). The rate of range expansion by Bufo marinus in Northern Australia, 1980–84. Wildlife Research, 12(3), 555–559.
Grarock, K., Lindenmayer, D. B., Wood, J. T., & Tidemann, C. R. (2013). Does human-induced habitat modification influence the impact of introduced species? A case study on cavity-nesting by the introduced common myna (Acridotheres tristis) and two Australian native parrots. Environmental Management, 52, 958–970.
Hill, D. S. (1987). Agricultural insect pests of temperate regions and their control. CUP Archive. https://books.google.com.au/books?hl=en&lr=&id=3-w8AAAAIAAJ&oi=fnd&pg=PA27&dq=pests+definition&ots=90_-WiF_MZ&sig=pKxuVjDJ_bZ3iNMb5TpfXA16ENI#v=onepage&q=pests%20definition&f=false
Lovell, S. J., Stone, S. F., & Fernandez, L. (2006). The economic impacts of aquatic invasive species: A review of the literature. Agricultural and Resource Economics Review, 35(1), 195–208. https://doi.org/10.1017/s1068280500010157
Lowe, S., Browne, M., Boudjelas, S., & De Poorter, M. (2000). 100 of the world's worst invasive alien species: A selection from the Global Invasive Species Database. The Invasive Species Specialist Group (ISSG).
Meachen, J. A., Janowicz, A. C., Avery, J. E., & Sadleir, R. W. (2014). Ecological changes in coyotes (Canis latrans) in response to the Ice Age megafaunal extinctions. PLoS ONE, 9(12), e116041. https://doi.org/10.1371/journal.pone.0116041
Müller-Schärer, H. (2004). Evolution in invasive plants: Implications for biological control. Trends in Ecology & Evolution, 19(8), 417–422. https://doi.org/10.1016/j.tree.2004.05.010
ANU. (n.d.). Myna problems. Fenner School of Environment and Society. http://fennerschool-associated.anu.edu.au//myna/problem.html
National Museum of Australia. (2019). Extinction of thylacine. https://www.nma.gov.au/defining-moments/resources/extinction-of-thylacine
Phillips, B. L., & Shine, R. (2006). An invasive species induces rapid adaptive change in a native predator: Cane toads and black snakes in Australia. Proceedings of the Royal Society B: Biological Sciences, 273(1593), 1545–1550. https://doi.org/10.1098/rspb.2006.3479
Roth, A. (2019, July 3). Why you should never release exotic pets into the wild. National Geographic. https://www.nationalgeographic.com/animals/article/exotic-pets-become-invasive-species
Sakai, A. K., Allendorf, F. W., Holt, J. S., Lodge, D. M., Molofsky, J., With, K. A., Baughman, S., Cabin, R. J., Cohen, J. E., Ellstrand, N. C., McCauley, D. E., O'Neil, P., Parker, I. M., Thompson, J. N., & Weller, S. G. (2001). The population biology of invasive species. Annual Review of Ecology and Systematics, 32(1), 305–332. https://doi.org/10.1146/annurev.ecolsys.32.081501.114037
Saunders, G. R., Gentle, M. N., & Dickman, C. R. (2010). The impacts and management of foxes (Vulpes vulpes) in Australia. Mammal Review, 40(3), 181–211.
Smith, J. G., & Phillips, B. L. (2006). Toxic tucker: The potential impact of cane toads on Australian reptiles. Pacific Conservation Biology, 12(1), 40. https://doi.org/10.1071/pc060040
Weber, L. (2013). Plants that miss the megafauna. Wildlife Australia, 50(3), 22–25. https://search.informit.org/doi/10.3316/ielapa.555395530308043
Williams, G. (2011). 100 alien invaders. Bradt Travel Guides. https://books.google.com.au/books?hl=en&lr=&id=qtS9TksHmOUC&oi=fnd&pg=PP1&dq=invasive+species+australia+bounty+

  • Can we build the Iron Man suit? | OmniSci Magazine

Cinema to Reality

Can We Build the Iron Man Suit?

By Manthila Ranatunga

We see cool and fancy gadgets in movies every now and then. How can we bring them to reality? For this issue, we take a look at the Iron Man suit.

Edited by Breana Galea, Ashleigh Hallinan & Tanya Kovacevic Issue 1: September 24, 2021 Illustration by Gemma Van der Hurk

Warning: Iron Man (2008) spoilers

When Marvel Studios released Iron Man in 2008, it was all the rage among comic book fans, film geeks and engineers alike. The Iron Man suit is one of the coolest and most iconic gadgets in film history. A generation of mechatronics engineers, myself included, were inspired after watching Tony Stark build the suit. Now we wonder: could we build it with today's technology? We are talking about the Mark III suit, the gold and hot-rod red one. Unfortunately, replicating the suit exactly is impossible; the laws of physics would not allow it. However, we can make some compromises and find workarounds to build the suit's most defining systems.

The Power Source

We can all agree that the most vital part of the suit is the power source; after all, it gave Mr Stark the idea for the suit in the first place. The suit is powered by an arc reactor, which is essentially a fusion reactor (1). Fusion reactors produce power using nuclear fusion, the same process that powers the sun and the stars. We are talking about reactions between atoms, the building blocks of everything. Each atom contains a cluster of even smaller particles at its centre; collectively, these form the nucleus, which is where the "nuclear" in nuclear fusion comes from. Now, where are we going with this?
Well, when nuclear fusion occurs, heat energy is produced (2). Nuclear fusion was chosen as the suit's power source because of the colossal amount of energy it releases: for a device small enough to sit in the palm of your hand, fusion offers one of the highest energy densities of any known power source. Sounds too good to be true, right? Correct. To replicate the required conditions, a reactor would need to be heated to 150 million degrees Celsius (3), about 10 times hotter than the sun's core! Imagine that on your chest! Unsettling, to say the least. Mr Stark's arc reactor is self-sustaining and can power the suit for hours, or even days. But with current technology, fusion reactors consume more energy than they produce (4). Consequently, recreating an arc reactor of the same size and energy output is currently impossible.

Nevertheless, there are workarounds to create a partially functioning arc reactor. The Massachusetts Institute of Technology (MIT) has been working on a fusion reactor called the Alcator C-Mod for the past 20 years (5), with the goal of shrinking the reactor while maintaining its power output. Typical fusion reactors range from three to nine metres in diameter, but MIT has managed to reduce theirs to about one metre. Assuming future fusion reactors become net-positive energy producers and can be well insulated against heat, we could build something like the Alcator C-Mod into our own arc reactor. (There are many more complications, which we will cheerfully ignore.) Instead of sitting on the chest, though, it would have to be a giant backpack!

The Flight System

Now, why do we need so much power? The flight system consumes the bulk of it, which leads to the next point. In the movie, Iron Man flies using the repulsors on his gloves and boots. They are not gas turbines like jet engines: the suit does not carry fuel; how could it? It has no storage compartments. The fuel must come from outside the suit. Here is a hint: it is everywhere, yet invisible at the same time... Air!
Helicopters fly by pushing air downwards with their rotors. This works according to Isaac Newton's third law, which states that every force produces an equal and opposite reaction force. By pushing air downwards, the helicopter is pushed upwards. Iron Man does not have a giant rotor, so how did he solve this? Get ready for another round of physics!

In the film's lore, the repulsors use muon beams to control flight as needed. Muons are particles smaller than atoms. They occur naturally in the Earth's upper atmosphere (6), but can also be created at large research facilities. For now, let us assume Mr Stark has a way to produce them on his own; remember, he is a billionaire! The muon beams are ignited using plasma made by heating air, so to produce them on demand the suit draws power from the arc reactor for heating and for the suction of air. The repulsor beams are then created, ready for flight!

Muons have a short lifespan, about two millionths of a second. In real life, storing muons is not a viable option; they must be generated on the spot. Muons are created in particle accelerators (7): long tubes that accelerate particles and smash them together at high speeds. You may have heard of the Large Hadron Collider in Switzerland, a particle accelerator 27 km in circumference. Through efforts to miniaturise accelerators, researchers at the SLAC National Accelerator Laboratory have designed one only 30 centimetres long (8). Ignoring some laws of physics, and with a few billion dollars, we could fabricate this into our own repulsors. Keep in mind that the suit's hands and feet are smaller than 30 centimetres, so our gloves and boots will be longer and bulkier.

The Future

So there we have it: a semi-reasonable arc reactor and a flight system. Fun to explore the possibilities of current technology, right? But we must also consider the ethics of building such a deadly weapon. Yes, the Iron Man suit is a weapon. In the wrong hands, this technology would not be so exciting.
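Before moving on, one quick detour back to the physics: the claim that muons cannot be stockpiled follows from a back-of-the-envelope estimate. The sketch below (in Python) uses the standard mean muon lifetime of about 2.2 microseconds; the 99%-of-light-speed figure is an illustrative assumption of mine, not a number from this article.

```python
# Back-of-the-envelope estimate: how far does an average muon travel
# before decaying? Assumptions: mean rest-frame lifetime ~2.2 us
# (standard value) and an illustrative speed of 0.99c.
import math

C = 299_792_458.0   # speed of light, m/s
TAU = 2.2e-6        # muon mean lifetime in its own rest frame, s

def muon_range(beta: float) -> float:
    """Average lab-frame distance (m) a muon covers before decaying,
    including relativistic time dilation (the Lorentz factor gamma)."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return beta * C * gamma * TAU

# Even at 99% of light speed, the average muon decays within a few
# kilometres and a few tens of microseconds -- far too fleeting to
# bottle up, which is why the repulsors would need an on-board source.
print(f"{muon_range(0.99) / 1000:.1f} km")
```

Time dilation is also why muons born in the upper atmosphere manage to reach the ground at all; without it, a 2.2-microsecond lifetime would confine them to well under a kilometre.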
Centuries or even decades from now, scientific breakthroughs may allow the suit to be replicated. When that happens, we will need to contemplate the moral consequences of such an advancement. Here we have examined only two of the suit's principal systems; the rest is up to you! Traverse your mind and create your own semi-realistic Iron Man suit. As we have seen here, the Iron Man suit is not so far off from our time. Who knows what the future holds?

References

1, 3, 4. Trevor English, "How Does Iron Man's Arc Reactor Work?" Interesting Engineering. Published June 26, 2020. https://interestingengineering.com/how-does-iron-mans-arc-reactor-work
2. Matthew Lanctot, "DOE Explains...Nuclear Fusion Reactions." U.S. Department of Energy. Accessed August 30, 2021. https://www.energy.gov/science/doe-explainsnuclear-fusion-reactions
5. Earl Marmar, "Alcator C-Mod tokamak." Plasma Science and Fusion Center, Massachusetts Institute of Technology. Accessed August 31, 2021. https://www.psfc.mit.edu/research/topics/alcator-c-mod-tokamak
6. Paul Kyberd, "How a 'muon accelerator' could unravel some of the universe's greatest mysteries." The Conversation. Published February 20, 2020. https://theconversation.com/how-a-muon-accelerator-could-unravel-some-of-the-universes-greatest-mysteries-131415
7. Seiichi Yamamoto, "First images of muon beams." EurekAlert! Published February 3, 2021. https://www.eurekalert.org/news-releases/836969
8. Tibi Puiu, "Particle accelerator only 30cm in size is hundred times faster than LHC." ZME Science. Published November 6, 2014. https://www.zmescience.com/science/physics/particle-accelerator-faster-lhc-5334/

OmniSci Magazine acknowledges the Traditional Owners and Custodians of the lands on which we live, work, and learn. We pay our respects to their Elders past and present.
