- Mental Time Travel: How Far Can I Remember? | OmniSci Magazine
Mental Time Travel: How Far Can I Remember?

by Sophie Potvin
3 June 2025
Edited by Kara Miwa-Dale
Illustrated by Elena Pilo Boyl

Trigger warning: This article mentions mental illness and trauma. If at any point the content is distressing, please contact any of the support services listed at the end of the article.

I like to go back in time. Travel to places I have been to. See faces I have not seen in a while. Meet my younger self. See the world as new. As every memory slips through my fingers, I write the pages hoping not to forget anymore. How far can I remember?

She opens her eyes, her head hammering as she puts her glasses on to ease the pain. The room is uncommonly empty; it almost echoes her thoughts. In the centre of the room is a teal box in the shape of a seahorse with the label "Recreate your favorite scenes!" This box is the hippocampus — the seahorse-shaped structure found in the medial temporal lobe (MTL) of the brain — which encodes the space and context of a memory. It is essential for associating information from the sensory cortices, binding it to its context and sending the information on to the rest of the brain.

Confusion makes its way through her mind as a sheet appears on top of the box like magic. It says: "Pick a book, read the recipe, and put the right items in the teal seahorse box." Did you know that every memory is a reconstruction — that a scene is made up every time you remember an event? She does not know it yet, but she will certainly learn that when these fragile pieces are brought back together in the hippocampus, she can relive a moment.

Endless shelves of books and objects suddenly appear in rows and columns, like a grid, a playground. She notices that the shelf in front of her, the one wearing the tag "2025", is half empty. The one next to it, with the sticker "2024", is full. She walks through a few rows, imagining what secrets are held in the books and between their lines. Her hand chooses the blue book "Costa Rica: Camaronal" and flips through the pages. These words are written in her handwriting: "starry sky, moonlight, high tide, sunburn, hammocks, turtles, beach, sunrise, sand, meetings, deck of cards". She finds the objects at the end of the shelf and runs to the teal box. She can feel the air sticking to her skin, and hear the waves crashing on the shore. It is the power of mental time-travelling: recollecting episodes of her life.

The objects disappear from the box, the feeling goes away, but she wants more. She runs like a child and stops in front of the "2019" shelf to experience a Dungeons & Dragons Friday night with her high school friends. She is surprised to see that the list of objects for that memory is so short. She brings back the objects, but the hippocampus can only make her travel to a blurry place. Moments from six years ago are already a faint memory.

Her curiosity takes over when she wonders how far she can remember. She finds the recipe of her last night of summer camp in 2013: "'I Love It (feat. Charli XCX)', dance, lights". She sighs when looking at the short list, because she hates to forget, she really does. Her heart starts beating fast. Is her memory failing her? How bad can it be? She continues to wander down the alleys, but her eyes are tearing up as she thinks how she might be nothing without her memories; only a few objects are left, and most of them did not stand the test of time.
As she reaches her early years, she notices the label "cognitive self" and the floor colour changes under her feet. The cognitive self is a knowledge structure that helps to integrate and bind memories from personal experiences. These experiences are added to an evolving self-consciousness. Along with neurobiological changes in brain structures and the acquisition of language, this helps memories last longer and shapes a sense of being. At least she knows that she is someone.

Intrigued, she brings over all the objects she can find on the "2004" shelf, but there is no recipe to guide her, no story to be made. All the pieces are in the box, but nothing happens; no feelings, no breeze, no music. The memories made in the first two years of her life took the form of beliefs, habits or procedures. There is nothing she can consciously recollect. This inability to consciously recollect memories from one's own early years of life is known as infantile amnesia.

While waiting for the hippocampus box to work its magic, she loses patience and hits the box a few times, begging it to give her back her memories. She does not know that it is universal: cognitively healthy adults, and nonhuman species like mice and birds, all experience infantile amnesia. During infantile neurodevelopment, humans and other species like birds and rats undergo a critical period of learning for memory. Throughout critical periods, different functions — language, sensory functions or memory, in this case the hippocampal memory system — mature with experience. The presence of specific stimuli is essential for functional development; without them, that function's competence will forever be impaired. Her hippocampal system must have been responsive to a great number of experiences to ensure its maturation. It is working as it should.

A void of hopelessness sits in her chest because she feels like her brain is failing her; it is her against biology. She looks for clues in the fuller shelves, wondering where the memories could be hidden. Were memories ever stored or created? They were created, but the information was stored in latent form because of the immature mechanisms of the young hippocampus. It can be activated under particular circumstances, but not consciously recollected. It is a failure of memory retrieval, not a failure of memory storage.

She finds a trapdoor in the green floor and thinks pieces might be hidden in the basement. Events leave traces — whether they are full-fledged memories or only remnants — and during the critical period, deleterious experiences can have lifelong consequences for behaviour, affect and the development of psychopathologies. The trapdoor is too small for her to enter, warning her that she should not go down this road. She understands that some things are not meant to be found. These moments she cannot recollect are hiding in plain sight; they are embedded in her. Somehow, she learned from them.

For a second, she hates the teal seahorse box. Then she looks at it in awe, terrified and amazed, at peace with herself. The hippocampus box starts to turn and Joe Dassin plays. Threads of light bind items and books together. It takes her back as far as she can go. Feelings. Moments. People. Episodes. Magic. Her.

She opens her eyes, a teal ink pen in her hand as she writes these words.

Some things I will never remember;
My first steps on my two feet.
The first time I met my sisters.
Just old stories or memories handpicked from a field of photos;
And in the end, I would be a stranger.
Support resources

Grief Australia: counselling services, support groups. https://www.grief.org.au/ga/ga/Get-Support.aspx?hkey=2876868e-8666-4ed2-a6a5-3d0ee6e86c30
Griefline: free telephone support, community forum and support groups. https://griefline.org.au/
Better Health Channel: coping strategies, list of support services, education on grief. https://www.betterhealth.vic.gov.au/health/servicesandsupport/grief
Beyond Blue: understanding grief, resources, support, counselling. https://www.beyondblue.org.au/mental-health/grief-and-loss
Lifeline: real stories, techniques & strategies, apps & tools, support guides, interactive toolkit. https://toolkit.lifeline.org.au/topics/grief-loss/what-is-grief?gclid=CjwKCAjw-KipBhBtEiwAWjgwrE1pJaaBabh3pT_UR0PlVBZTFMEA26NVJe2ue8sqCF0BLg2rMI4i2xoCp5IQAvD_BwE
Reach Out Australia: coping strategies. https://au.reachout.com/articles/working-through-grief?gclid=CjwKCAjw-KipBhBtEiwAWjgwrKXLb9w-wXXVLIbhZDkPumIF6ebe-0Pk77Hv7-cK4dLDrHJxCRkyRBoC2B4QAvD_BwE
Find a Helpline: international/country-specific helplines. https://findahelpline.com/

References

1. Li S, Callaghan BL, Richardson R. Infantile amnesia: forgotten but not gone. Learn Mem. 2014 Mar;21(3):135–9. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3929851/
2. Donato F, Alberini CM, Amso D, Dragoi G, Dranovsky A, Newcombe NS. The ontogeny of hippocampus-dependent memories. J Neurosci. 2021 Feb 3;41(5):920–6. Available from: https://doi.org/10.1523/JNEUROSCI.1651-20.2020
3. Howe ML. Early childhood memories are not repressed: either they were never formed or were quickly forgotten. Topics in Cognitive Science. 2022 Jul 11;16(4):707–17. Available from: https://onlinelibrary.wiley.com/doi/10.1111/tops.12636
4. Bauer PJ. Amnesia, infantile. In: Benson JB, editor. Encyclopedia of Infant and Early Childhood Development. 2nd ed. Oxford: Elsevier; 2020. p. 45–55. Available from: https://www.sciencedirect.com/science/article/pii/B9780128093245212078
5. Stoencheva B, Stoyanova K, Stoyanov D. Infantile amnesia can be operationalized as a psychological meta norm in the development of memory. JIN. 2025 Feb 10;24(2):1–11. Available from: https://www.imrpress.com/journal/JIN/24/2/10.31083/JIN25889
- A Psychological ‘Autopsy’ of Ludwig van Beethoven: Dissecting Genius and Madness | OmniSci Magazine
A Psychological 'Autopsy' of Ludwig van Beethoven: Dissecting Genius and Madness

by Kara Miwa-Dale
3 June 2025
Edited by Steph Liang
Illustrated by Ashlee Yeo

'No great mind has ever existed without a touch of madness.' – Aristotle

Preface

This is not an autopsy in the traditional sense. No scalpels or specimen jars will be involved. Instead, it is an autopsy of the mind – a retrospective exploration of the inner world of the great classical composer, Ludwig van Beethoven. Beethoven was considered a genius for revolutionising Western classical music with his emotionally powerful, structurally innovative, and highly complex compositions. He broke from convention, pioneered new musical forms, and continued to create masterpieces even after becoming completely deaf. Drawing upon insights from genetics, neuroscience, psychiatry, and anthropology, alongside the testimonies of Beethoven's peers, we will piece together an understanding of how genius, creativity and mental affliction may be intertwined. Was Beethoven's genius a product of madness, a triumph over it, or something different altogether?

The Subject

Name: Ludwig van Beethoven
Occupation: Composer
Age at Death: 56
Reason for Autopsy: To investigate the elusive connection between creativity, mental disorder, and the mysterious concept of genius

I. The Witnesses: Testimonies from the Living

To those who knew him, Beethoven was a paradox. One friend called him "half crazy", noting violent outbursts, erratic moods and obsessive tendencies (1). Others saw him as "merry, mischievous, full of witticisms and jokes" (2). His talent and creative genius, however, were never in doubt. The poet Goethe, who met him in 1812, wrote: "Beethoven's talent amazed me. However, he is an utterly untamed personality" (3). Based on Beethoven's letters and accounts from friends, modern psychiatrists suspect that he may have lived with bipolar disorder (4). Yet there is no way to be sure. Like the mind itself, Beethoven resists full understanding – a genius shaped by forces we may never fully comprehend.

II. The Geneticist

How can DNA offer insight into Beethoven's genius? Often described as the blueprint of life, DNA offers fascinating insights into human potential – highlighting our predispositions, vulnerabilities, and even talents. However, it only tells part of the story.

In 2023, an international team of scientists sequenced the DNA of five authenticated locks of Beethoven's hair (5). Not long after, another group of researchers used this data to calculate a polygenic score estimating his genetic predisposition for beat synchronisation, a trait believed to be linked to musicality (6). Polygenic scores add up the small effects of many different genes to estimate someone's likelihood of expressing a complex trait – like musical ability. Because these traits are influenced by many different genes working together, polygenic scores can be a helpful tool in exploring their biological basis.

Curiously, Beethoven's polygenic score for beat synchronisation was surprisingly low, implying that he wasn't predisposed to have a strong sense of rhythm. Does this mean that Beethoven defied his own biology? Not necessarily. Polygenic scores have significant limitations. They don't account for environmental influences – like the years of rigorous musical training that Beethoven underwent – or complex gene-gene and gene-environment interactions.
Additionally, these scores are based on modern genetic datasets, so applying them to someone from the 18th century reduces the reliability of the interpretation.

That said, the story becomes even more fascinating when we consider research linking polygenic risk scores for psychiatric conditions – such as bipolar disorder and schizophrenia – to creativity. One large study found that people with a higher genetic risk for these conditions were overrepresented in artistic and creative jobs, although the association was small (7). This doesn't mean that mental illness causes creativity, or that all creative people have a mental disorder, but it hints at a complex biological overlap.
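For readers curious about the arithmetic, a polygenic score is essentially a weighted sum: each genetic variant a person carries contributes its (usually tiny) estimated effect, multiplied by how many copies of the trait-associated allele are present. The minimal Python sketch below illustrates the idea; every variant name and number in it is invented for illustration and is not taken from the Beethoven study.

```python
# Minimal sketch of a polygenic score: a weighted sum over genetic variants.
# All variant IDs, dosages and effect sizes here are made up for illustration;
# real scores combine thousands of variants whose effect sizes are estimated
# from large genome-wide association studies.

variants = [
    # (variant ID, allele dosage 0-2, per-allele effect size)
    ("rs0001", 2, +0.04),  # two copies of a trait-associated allele
    ("rs0002", 1, -0.02),  # one copy of an allele with a small negative effect
    ("rs0003", 0, +0.07),  # zero copies, so this variant contributes nothing
]

# The score is simply dosage * effect, summed across all variants.
score = sum(dosage * effect for _, dosage, effect in variants)
print(f"Toy polygenic score: {score:.2f}")  # 2*0.04 + 1*(-0.02) + 0 = 0.06
```

The limitations discussed above fall straight out of this construction: the effect sizes are estimated in modern reference populations, and nothing in the sum captures training, environment, or interactions between genes.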
III. The Psychiatrist

How does one make a psychiatric diagnosis from the grave? It is an impossible task, and an imprecise science, but we can draw inferences from historical accounts of a person's behaviour. Beethoven seemed to exhibit behaviours consistent with bipolar disorder, a mental health condition characterised by extreme mood swings that include emotional highs (mania or hypomania) and lows (depression). Letters written by Beethoven himself, along with observations from friends, may provide some insight. He was notably "prone to outbursts of anger, baseless suspicions, quarrels and reconciliations, fruitless infatuations, physical ills, changes of residences…and the hiring and firing of servants" (1). One friend remarked that he "composes, or was unable to compose, according to the moods of happiness, vexation or sorrow", suggesting that his creative output fluctuated with his shifting emotional state (1).

Individuals with bipolar disorder experience manic or hypomanic episodes marked by elevated mood, increased energy, rapid thought processes, reduced inhibition, and heightened confidence (8). These episodes may enhance creative thinking by promoting divergent thinking – the ability to generate novel ideas or unusual associations (9). Research shows that the medial prefrontal cortex, a brain region active during divergent thinking, is typically engaged during manic states (10). While it would be inappropriate to assign a clinical diagnosis based solely on anecdotal evidence, it is possible to speculate that Beethoven's prolific composing periods might have corresponded to manic or hypomanic episodes.

But how can we distinguish a clinical mood disorder from mere bursts of creative inspiration or genius? The inverted-U hypothesis offers one explanation, proposing that the relationship between 'madness' and genius is not linear (11). Mild to moderate expressions of bipolar disorder may actually enhance creativity by promoting divergent thinking, whereas severe illness can be debilitating and reduce creative output. This raises the possibility that Beethoven experienced a less severe form of bipolar disorder – one that fuelled rather than hindered his musical brilliance.

Building on this, psychological research also suggests that people in creative occupations tend to score higher on measures of 'openness to experience' (12). This personality trait describes the extent to which a person is curious, imaginative, and receptive to new ideas or unconventional beliefs. Studies have suggested that openness to experience is elevated among individuals with bipolar disorder compared to controls with no mood disorder (13,14). It is possible that Beethoven's creative genius was influenced, at least in part, by the interplay between his personality and traits associated with bipolar disorder. However, it is important to acknowledge the very real challenges of living with mental illness and to avoid romanticising the condition as a source of artistic inspiration.

IV. The Anthropologist

Cultural narratives – like the 'mad genius' and 'tortured artist' tropes – have long romanticised and distorted the relationship between mental illness and creative brilliance. However, contemporary understandings of mental health increasingly challenge the idea that extraordinary creativity requires psychological suffering.

Beethoven's life was marked by adversity. His father, believed by some to be abusive, enforced a strict practice regime for his music lessons and struggled with alcoholism – an affliction that would later cast a shadow over Beethoven's own life. During Beethoven's mid-twenties, he began to lose his hearing, becoming completely deaf by around age 44. Yet he continued to compose innovative symphonies, relying only on the music in his mind.

Did Beethoven's suffering fuel his brilliance? While some studies suggest a link between bipolar disorder and heightened creativity, it would be a mistake to suggest that mental illness is a prerequisite for genius. Many highly creative individuals have no history of mental illness at all. So why, then, does the 'mad genius' stereotype continue to endure? During Beethoven's era – the Romantic period – suffering was often glorified as a source of artistic inspiration. Mental illness was poorly understood, and the emotional intensity and instability exhibited by artists with mood disorders were frequently mistaken for signs of genius and sources of inspiration. It wasn't until the 20th century that bipolar disorder was formally recognised as a mental illness. It is hard to say, based solely on historical records, whether Beethoven experienced a mental health condition or was simply an emotionally intense and unconventional individual. What we define as 'normal' or 'abnormal' behaviour is complex and deeply influenced by the social and cultural norms of the time.

V. The Final Verdict

So, what can we conclude from this evidence? Was Beethoven a genius because of his madness? Or in spite of it? Perhaps these are the wrong questions. Such binaries oversimplify a reality that is far more nuanced, and they invite us to reconsider our definitions of 'normality', 'illness' and 'genius'. It is important to acknowledge the very real and devastating challenges associated with mental illness. Yet it is also true that some traits associated with conditions like bipolar disorder – such as divergent thinking – may intersect with creativity in complex ways. Rather than viewing these conditions purely as deficits, we might ask: could some features of mental disorder be better understood as extreme expressions of the broader, messier spectrum of human cognition and emotion?

In the end, Beethoven remains an enigma – not because he was 'mad', but because he was unknowable and defied neat categorisation. Perhaps that is what genius truly is: not a clinical condition, or a byproduct of suffering, but a mystery that transcends explanation.

References

1. Hershman DJ. Manic depression and creativity. Prometheus Books; 2010.
2. Bezane C. Bipolar geniuses: Ludwig van Beethoven. Chicago: Conor Bezane; 2016 Mar 15. https://www.conorbezane.com/thebipolaraddict/thebipolaraddictbipolar-geniusesbeethoven/
3. Carnegie Hall. Friends of Beethoven. New York: Carnegie Hall; 2020 Mar 19. https://www.carnegiehall.org/Explore/Articles/2020/03/19/Friends-of-Beethoven
4. Erfurth A. Ludwig van Beethoven – a psychiatric perspective. Wiener Medizinische Wochenschrift. 2021;171(15):381–90. https://doi.org/10.1007/s10354-021-00864-4
5. Begg TJA, Schmidt A, Kocher A, Larmuseau MHD, Runfeldt G, Maier PA, et al. Genomic analyses of hair from Ludwig van Beethoven. Current Biology. 2023;33(8):1431–47.e22. https://doi.org/10.1016/j.cub.2023.02.041
6. Wesseldijk LW, Henechowicz TL, Baker DJ, Bignardi G, Karlsson R, Gordon RL, et al. Notes from Beethoven's genome. Current Biology. 2024;34(6):R233–R4. https://doi.org/10.1016/j.cub.2024.01.025
7. Power RA, Steinberg S, Bjornsdottir G, Rietveld CA, Abdellaoui A, Nivard MM, et al. Polygenic risk scores for schizophrenia and bipolar disorder predict creativity. Nature Neuroscience. 2015;18(7):953–5. https://doi.org/10.1038/nn.4040
8. American Psychiatric Association. Diagnostic and statistical manual of mental disorders: DSM-5-TR. 5th ed, text revision. Washington, DC: American Psychiatric Association; 2022.
9. Forthmann B, Kaczykowski K, Benedek M, Holling H. The manic idea creator? A review and meta-analysis of the relationship between bipolar disorder and creative cognitive potential. International Journal of Environmental Research and Public Health. 2023;20(13):6264. https://www.mdpi.com/1660-4601/20/13/6264
10. Mayseless N, Eran A, Shamay-Tsoory SG. Generating original ideas: the neural underpinning of originality. NeuroImage. 2015;116:232–9. https://doi.org/10.1016/j.neuroimage.2015.05.030
11. Richards R, Kinney DK, Lunde I, Benet M, Merzel AP. Creativity in manic-depressives, cyclothymes, their normal relatives, and control subjects. Journal of Abnormal Psychology. 1988;97(3):281.
12. Feist GJ. A meta-analysis of personality in scientific and artistic creativity. Personality and Social Psychology Review. 1998;2(4):290–309.
13. Matsumoto Y, Suzuki A, Shirata T, Takahashi N, Noto K, Goto K, et al. Implication of the DGKH genotype in openness to experience, a premorbid personality trait of bipolar disorder. Journal of Affective Disorders. 2018;238:539–41. https://doi.org/10.1016/j.jad.2018.06.031
14. Middeldorp CM, de Moor MHM, McGrath LM, Gordon SD, Blackwood DH, Costa PT, et al. The genetic association between personality and major depression or bipolar disorder: a polygenic score analysis using genome-wide association data. Translational Psychiatry. 2011;1(10):e50. https://doi.org/10.1038/tp.2011.45
- In Your Dreams: Unpacking the Stories of Your Slumber | OmniSci Magazine
In Your Dreams: Unpacking the Stories of Your Slumber

by Ciara Dahl
3 June 2025
Edited by Ingrid Sefton
Illustrated by Saraf Ishmam

One minute you're flying through the sky; the next, you're naked in a room full of people. Except now your teeth have started falling out? These surreal, and often illogical, experiences are what make dreams such a mystery. From ancient spiritual interpretations to modern neuroscience, people have long wondered not just what dreams mean, but why we have them at all. Are they cryptic messages from the unconscious? Perhaps a side effect of memory processing? Or maybe they are simply the brain's way of entertaining itself while we sleep.

Attempting to answer these questions is no easy feat. Despite being a universal human experience, dreams are inherently personal. Given that no one but ourselves experiences our dreams, how can the fragmented recollections we have upon waking be objectively studied?

Dream research was once steeped in spirituality and mysticism, with dreams often seen as divine messages from gods or whispered guidance from ancestors (1). Even Aristotle offered his own theory, suggesting dreams were the byproduct of internal bodily movements during sleep (1). It wasn't until the early 20th century that dreams began to be studied through a psychological lens, most notably by Sigmund Freud, who proposed that dreams contained deeply personal and symbolic insights into the unconscious mind (2). Modern research, however, is beginning to uncover the connection between our dreams and complex cognitive processes such as memory consolidation.

Techniques employed by oneirologists — that's the fancy word for scientists specialising in the scientific study of dreams — include fMRI, PET scans and EEG. These methods are used to study brain activity during sleep and dreaming, particularly during REM and non-REM sleep (3). Using these technologies in tandem with qualitative descriptions gathered from individuals' dream reports allows us to unpack the content and function of our dreams, while also considering questions such as why we seem to forget most of them.

What dreams are made of: influences on the content of our dreams

There's a growing body of evidence to suggest that our dream content is influenced by the consolidation of our memories as we sleep. Sleep provides an ideal neurological state for organising our recent memories into longer-term memories (4). The reactivation and subsequent consolidation of memories in the sleeping brain appears to contribute to the content of the dreams we recall upon awakening. In one study examining this phenomenon, participants played extensive amounts of Tetris prior to sleeping. In the subsequent dream report collection, over 60% of participants reported seeing Tetris images in their dreams (5). This illustrates how the boundaries between waking and dreaming cognition are more porous than they appear, with dream content itself serving as a window into the neural mechanisms of memory consolidation.

Not all dreaming can be directly tied to our most recent memories, but all dreams are built upon our prior experiences. For example, the appearance of recognisable friends or foes in our dreams relies on our ability to recall their features and mannerisms (6). The bizarre patchwork of familiar situations we encounter in our dreams is also likely a reflection of the adaptive process of memory consolidation, as fragments of our memories are integrated during sleep.
The Night Shift — What Is the Purpose of Dreams?

We may be inching closer to understanding what influences the content of our dreams, but why do we dream in the first place? The Threat Simulation Theory (TST) argues that dreams act as an ancient biological defence mechanism, allowing us to simulate threatening events we may encounter in our waking life (7). TST suggests that, on an evolutionary scale, being able to simulate threatening events in our sleep allows us to perceive and avoid threats more efficiently while awake, leading to greater survival and reproductive success.

It is a bit hard to imagine, however, that dreaming about being naked in public is going to be the key to our survival. This is why some scientists suggest that dreams are simply the brain's attempt to make sense of random neural activity during REM sleep. This Activation-Synthesis Theory proposes that rather than rehearsing for real-life threats, the brain may just be firing off chaotic signals, which it then tries to weave into bizarre and often disjointed stories (8). Whether dreams serve as a survival tool or are simply the byproduct of random brain activity, they offer a window into the complex workings of the sleeping mind.

Vanishing Visions and the Concept of Dream Amnesia

Have you ever woken up from a dream so absurd it seems impossible to forget, only to have forgotten the details by the end of breakfast? That's what the experts call "dream amnesia". It's estimated that the average person dreams four to six times per night, yet you'd be lucky to remember even one of those dreams by morning (6).

At the molecular level, noradrenaline — a neurotransmitter associated with memory consolidation — is at its lowest concentrations while we sleep (9). This depletion could be a key factor contributing to dream amnesia, preventing the transfer of our dream experiences from short-term to long-term memory. Different sleep stages may also influence dream recall (6). It has been suggested that waking up during or just after REM sleep leads to more vivid dreams. In contrast, dream activity is low during non-REM sleep, so waking up during this phase may also contribute to our poor dream recall.

Although it can be disappointing to forget these wild dream experiences, dream amnesia may serve an adaptive purpose. The "clean slate" hypothesis argues that forgetting dreams allows us to wake with a clear mind, free of the potentially disturbing content of our dreams (10). Alternatively, by maintaining a clear distinction between our dreaming and waking experiences, we are protected from confusing our dreams with reality, preventing anxiety that may otherwise ensue (11). Perhaps this forgetfulness is not a flaw in our memory but a feature of it, helping us preserve mental clarity and emotional balance as we transition from the surreal world of our dreams to the demands of our waking life.

In conclusion

We may never fully unlock the secrets of our nightly adventures, but one thing is clear: dreams are a fascinating blend of memory, biology, and mystery. Whether they're ancient survival simulations, emotional clean-ups, or just the brain's quirky way of entertaining itself while the lights are off, dreams remind us how wonderfully weird and complex the human mind truly is. Next time you find yourself tap dancing with Beyoncé or riding a roller coaster made of spaghetti, just enjoy the ride. Your brain is simply doing what it does best — keeping things entertaining, even in your sleep.
References

1. Palagini L, Rosenlicht N. Sleep, dreaming, and mental health: a review of historical and neurobiological perspectives. Sleep Medicine Reviews. 2011 Jun;15(3):179–86.
2. Freud S. The interpretation of dreams. 1900. Available from: https://psychclassics.yorku.ca/Freud/Dreams/dreams.pdf
3. Ruby PM. Experimental research on dreaming: state of the art and neuropsychoanalytic perspectives. Frontiers in Psychology. 2011 Nov 18;2(286). Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3220269/#B107
4. Wamsley EJ. Dreaming and offline memory consolidation. Current Neurology and Neuroscience Reports. 2014 Jan 30;14(3). Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4704085/
5. Stickgold R. Replaying the game: hypnagogic images in normals and amnesics. Science. 2000 Oct 13;290(5490):350–3.
6. Nir Y, Tononi G. Dreaming and the brain: from phenomenology to neurophysiology. Trends in Cognitive Sciences. 2010 Jan 14;14(2):88–100. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2814941/
7. Revonsuo A. The reinterpretation of dreams: an evolutionary hypothesis of the function of dreaming. Behavioral and Brain Sciences. 2000 Dec;23(6):877–901. Available from: https://pubmed.ncbi.nlm.nih.gov/11515147/
8. Hobson JA, McCarley RW. The brain as a dream state generator: an activation-synthesis hypothesis of the dream process. The American Journal of Psychiatry. 1977;134(12):1335–48. Available from: https://www.ncbi.nlm.nih.gov/pubmed/21570
9. Mitchell HA, Weinshenker D. Good night and good luck: norepinephrine in sleep pharmacology. Biochemical Pharmacology. 2010 Mar;79(6):801–9.
10. Eugene AR, Masiak J. The neuroprotective aspects of sleep. MEDtube Science. 2015 Mar;3(1):35. Available from: https://pmc.ncbi.nlm.nih.gov/articles/PMC4651462/
11. Zhao J, Schoch SF, Valli K, Dresler M. Dream function and dream amnesia: dissolution of an apparent paradox. Neuroscience and Biobehavioral Reviews. 2024 Nov 20;167.
- Life Story of a Drug | OmniSci Magazine
Life Story of a Drug

by Elijah McEvoy
3 June 2025
Edited by Weilena Liu
Illustrated by Aisyah Mohammad Sulhanuddin

From the mythical visions of churchgoers who took mushrooms in the infamous 'Good Friday Experiment' to the extreme self-reflection of those 'tripping' off the traditional South American hallucinogenic tea ayahuasca (1,2), humans have been painting the extraordinary narratives of psychedelics for thousands of years in thousands of settings. Put simply, psychedelics are a class of psychoactive drugs that can alter your thoughts and senses, inducing wild experiences not thought possible in your brain's ground state (3).

One of the most famous of these drugs is LSD. 'Lucy in the Sky with Diamonds' is said to have inspired entire Beatles albums and shown Steve Jobs "that there's another side to the coin" of life (4,5). LSD is also a psychedelic that stands as an enigma in many regards. It is both naturally derived and synthetically created. It has been tested in psychological therapy and psychological warfare. Even the 'trips' experienced by its users entail both unexplainable hallucinations and scientifically proven phenomena. Though less well understood, the stories of LSD's enigmatic origins, uses and effects are just as interesting as those that come from its users.

The Origins

Lysergic acid diethylamide (LSD), or 'acid' for short, is a semi-synthetic chemical compound with humble biological beginnings. LSD is derived from a class of alkaloid metabolite molecules that are naturally produced by the fungus commonly known as ergot. Ergot fungi are members of the parasitic genus Claviceps, which have been infecting staple crops and shaping society since long before acid came to distort shapes in the eyes of its users (6). Epidemics of ergotism, the disease caused by ingesting crops contaminated with these ergot alkaloids, swept across Europe in the Middle Ages and led to the deaths of tens of thousands of people (7).

Despite credible arguments to the contrary, some historians have even suggested that the Salem witch trials may have been sparked by a form of this disease known as convulsive ergotism. Not only were the environmental conditions in 1691 Salem reported to be optimal for ergot growth in the town's rye, but convulsive ergotism also induces distinct muscle contractions, paranoia and audiovisual hallucinations (8). These symptoms would all have given credence to the claims of bewitchment made by the young girls who instigated the accusations of witchcraft in the town.

Aside from death and dark magic, this fungus has also been used as an effective therapeutic across several eras of history. Its use as a medication for childbirth was recorded as early as 1100 BCE in China, with midwives using ergot or its alkaloids to reduce bleeding during birth, expedite delivery or induce an abortion (6,7). It wasn't until modern pharmacology advanced in the 20th century that scientists began to chemically characterise these ergot alkaloids and use them as the basis for potent drugs.

The story of how LSD was first created and consumed has been immortalised in history books and unofficial holidays. Dr Albert Hofmann, a Swiss biochemist working for the pharmaceutical company Sandoz, first synthesised LSD in 1938 as the 25th substance in a series of lysergic acid derivatives being evaluated by the company (9). Initial testing indicated that the compound had no unique pharmacological uses beyond those of pre-existing drugs derived from ergot alkaloids (9).
However, Hofmann couldn't shake the nagging feeling that LSD-25 had more to offer. After making another batch of the compound five years later, his suspicions grew stronger when he was forced to leave the lab early after entering a "dream-like state… [with] a kaleidoscope-like play of colours" (9). A few days later, in a moment that demonstrated both admirable scientific curiosity and a blatant rejection of OH&S, Hofmann took a large dose of LSD himself and settled in for the trip of a lifetime (9). Like all good scientists, he recorded his experience in a journal, writing at 3pm on 19 April 1943: "visual distortions, symptoms of paralysis, desire to laugh" (9). Hofmann's notes for the day stopped there.

The Uses

April 19th has come to be celebrated as 'Bicycle Day', commemorating the seemingly endless and surreal bike ride home that Hofmann undertook after this self-experimentation. However, a wacky trip was not the only thing that followed this discovery. After Hofmann distributed the drug to his superiors to try for themselves, LSD was sold on the market by Sandoz under the name Delysid. The drug was employed by psychiatrists throughout the 1950s as a treatment for alcoholism, or simply as 'psychotherapy-in-a-pill' for patients suffering psychological trauma (10,11).

LSD not only garnered therapeutic interest from scientists but also more nefarious intrigue from the CIA. Seeking the upper hand in mental warfare during the Cold War, the CIA bought up 40,000 doses of LSD from Sandoz and, under the MKUltra project, performed a variety of unethical experiments on unknowing prisoners, heroin addicts and even other CIA agents in an attempt to understand the drug's potential for 'mind control' (12).

Moving into the 60s, LSD's use amongst budding leaders of the hippie and yippie movements gave the drug its countercultural status. Harvard professor Timothy Leary, who was dismissed from his position due to experimenting (literally) with LSD, promoted the drug as an agent of revolution that allowed the youth of America to "turn on, tune in, drop out" (10) of repressive society. Due to its increasing association with these disruptive movements and its eventual outlawing by the US government in 1966 (11), acid's place in culture shifted out of labs and psychologists' offices and into illicit recreational use by experimental hippies and enlightened artists.

The Trip

Whether accompanied by an experienced monitor or listening to some soothing vinyl records yourself, the experience of taking LSD is predictably unpredictable. 'Dropping acid' is unique in that mere micrograms of the drug are enough to elicit a palpable psychedelic experience (13), with most users diluting the dosage on tabs of blotting paper or sugar cubes (11). Following consumption, it takes as little as 1.5 hours for LSD to cross the blood-brain barrier, dilate the pupils and bring users to the peak intensity of the drug's psychological effects (13).

The bizarre experiences perceived by those 'tripping' on LSD are rooted in a now well-characterised receptor binding interaction in the brain. The nitrogen-based chemical groups of the LSD molecule first anchor themselves within the 5-HT2A serotonin receptors found in the synapses of neurons (14). While the serotonin neurotransmitter typically helps regulate brain activities like mood and memory, LSD binding instead causes the activation of distinct intracellular cascades within these brain cells (3).
The importance of this interaction was demonstrated in experiments showing that blocking this receptor can cancel the acid trip altogether (3). Recent studies that further characterised the chemical structure of this interaction have also shown that 5-HT2A forms a lid-like structure that locks LSD into the receptor protein's binding site, setting the user up for a long trip (14).

From these individual cellular interactions, LSD ignites a burst of brain activity. Modern brain scanning technology has revealed that LSD first disrupts the capacity of the thalamus to filter and pass on sensory stimuli from the body to the cortex of the brain. Upon injection of LSD, patients' brains demonstrated both an overflow of information running between the thalamus and posterior cingulate cortex and a restriction of signals going to the temporal cortex (15).

Not only does LSD modify the brain's ability to sort important stimuli from the outside world, but this small molecule has also been found to temporarily form new connections between different parts of the brain. Hofmann's account of how "every sound generated a vividly changing image" (9) on the first Bicycle Day can be explained by the increased connectivity of the brain's visual cortex on LSD. This causes areas of the brain responsible for other senses or emotions to become involved in creating the images perceived in the user's head, producing visual hallucinations and geometric distortions that have no basis in real stimuli coming from the eyes (16). In contrast, Hofmann's feeling of being "outside [his] body" (9) likely came from decreased connectivity between the parahippocampus and retrosplenial cortex, two regions of the brain involved in cognition. This severance has been correlated with the greater meaning that those tripping on LSD find in objects, events or music, along with their characteristic 'ego dissolution' (16). This is a phenomenon where users no longer see the world through the lens of their own 'self' and instead feel an increased sense of unity with everything around them (17). Very hippie ideas with a very scientific explanation.

The Comedown and Beyond

The float back down from the peak of an LSD trip takes up to 10 hours and leaves users with a variety of stories and outcomes. Contrary to the fearmongering of parents and politicians, LSD does not leave holes in the brain, does not lead to addiction and has not directly caused any deaths by overdose (3). While the risk of a 'bad trip' and the feelings of severe anxiety, fear and despair that come with it may be traumatic, these are typically experienced when taking LSD in unsupportive environments without proper mental preparation (13). In fact, when LSD is taken in a manner closer to the controlled ritual practices surrounding psychedelics of old (3), acid is suggested to have long-lasting positive impacts on the user's attitude and personality (13).

It is these experiences that have rejuvenated the field of LSD research after its abrupt stop in the 60s. Modern investigations have picked up where these scientists left off and are evaluating the potential of LSD-assisted therapy to alleviate anxiety and depression. Studies have focused particular attention on addressing these mental health conditions in people suffering from life-threatening illnesses like cancer (18).
While some of these experiments lack the controls or data to support strong, generalised conclusions, several studies have shown that patients given LSD reported lasting decreases in anxiety surrounding their condition, greater responsiveness to their families and improved quality of life (3,18).

All of this is not to promote LSD as a harmless wonder drug. While rare, LSD has been linked to Hallucinogen Persisting Perception Disorder, a condition in which people experience distressing 'flashbacks' to the effects and experiences of past psychedelic trips in a normal setting. Additionally, the changes in visual perception, emotion and thought while one is tripping can cause users to make reckless decisions in dangerous situations (18). However, continuing to wage war against controlled experiments and supervised therapeutic trials of LSD only limits scientists' attempts to better understand the balance between this drug's risks and benefits.

While our trip through the life of LSD may end here, there is still much to explore. The greater story of how we use it, how we view it and how it fits into our society is far from over.

References

1. Illing S. The brutal mirror: what the psychedelic drug ayahuasca showed me about my life. Vox. 2018. Available from: https://www.vox.com/first-person/2018/2/19/16739386/ayahuasca-retreat-psychedelic-hallucination-meditation
2. Majić T, Schmidt TT, Gallinat J. Peak experiences and the afterglow phenomenon: when and how do therapeutic effects of hallucinogens depend on psychedelic experiences? J Psychopharmacol. 2015 Mar 1;29(3):241–53.
3. Nichols DE. Psychedelics. Barker EL, editor. Pharmacol Rev. 2016 Apr 1;68(2):264–355.
4. Gilmore M. Beatles' acid test: how LSD opened the door to "Revolver". Rolling Stone. 2016. Available from: https://www.rollingstone.com/feature/beatles-acid-test-how-lsd-opened-the-door-to-revolver-251417/
5. Hsu H. The lingering legacy of psychedelia. The New Yorker. 2016 May 17. Available from: https://www.newyorker.com/books/page-turner/the-lingering-legacy-of-psychedelia
6. Haarmann T, Rolke Y, Giesbert S, Tudzynski P. Ergot: from witchcraft to biotechnology. Molecular Plant Pathology. 2009 Jul;10(4):563–77.
7. Schiff PLJ. Ergot and its alkaloids. American Journal of Pharmaceutical Education. 2006 Oct 15;70(5):98.
8. Woolf A. Witchcraft or mycotoxin? The Salem witch trials. Journal of Toxicology: Clinical Toxicology. 2000 Jan;38(4):457–60.
9. Hofmann A. How LSD originated. Journal of Psychedelic Drugs. 1979 Jan 1;11(1–2):53–60.
10. Massari P. A long, strange trip. Harvard Griffin GSAS News. 2021. Available from: https://gsas.harvard.edu/news/long-strange-trip
11. Stork CM, Henriksen B. Lysergic acid diethylamide. In: Wexler P, editor. Encyclopedia of Toxicology. 3rd ed. Oxford: Academic Press; 2014. p. 120–2. Available from: https://www.sciencedirect.com/science/article/pii/B9780123864543007442
12. Stuff You Should Know. Did the CIA test LSD on unsuspecting Americans? Available from: https://www.iheart.com/podcast/1119-stuff-you-should-know-26940277/episode/did-the-cia-test-lsd-on-29468397/
13. Passie T, Halpern JH, Stichtenoth DO, Emrich HM, Hintzen A. The pharmacology of lysergic acid diethylamide: a review. CNS Neurosci Ther. 2008;14(4):295–314.
14. Wacker D, Wang S, McCorvy JD, Betz RM, Venkatakrishnan AJ, Levit A, et al. Crystal structure of an LSD-bound human serotonin receptor. Cell. 2017 Jan 26;168(3):377.
15. Sample I. Study shows how LSD interferes with brain's signalling. The Guardian. 2019 Jan 28. Available from: https://www.theguardian.com/science/2019/jan/28/study-shows-how-lsd-messes-with-brains-signalling
16. Carhart-Harris RL, Muthukumaraswamy S, Roseman L, Kaelen M, Droog W, Murphy K, et al. Neural correlates of the LSD experience revealed by multimodal neuroimaging. Proceedings of the National Academy of Sciences. 2016 Apr 26;113(17):4853–8.
17. Sample I. LSD's impact on the brain revealed in groundbreaking images. The Guardian. 2016 Apr 11. Available from: https://www.theguardian.com/science/2016/apr/11/lsd-impact-brain-revealed-groundbreaking-images
18. Liechti ME. Modern clinical research on LSD. Neuropsychopharmacol. 2017 Oct;42(11):2114–27.
- Thinking Outside the Body: The Consciousness of Slime Moulds | OmniSci Magazine
Thinking Outside the Body: The Consciousness of Slime Moulds

by Jessica Walton
3 June 2025
Edited by Han Chong
Illustrated by Ashlee Yeo

Imagine yourself as an urban planner for Tokyo's public transport system in 1927. Imagine mapping out the most efficient paths through dense urban sprawl, around obstructing rivers and mountains. And imagine meticulously designing the most efficient possible model, after years of study and expertise… only to find your design prowess, 83 years later, matched by a slime mould: a creature with no eyes, no head nor limbs, nor nervous system.

Of course, this is anachronistic. For one, the Tokyo railroad system developed over time, not all at once. But it was designed to meet the needs of the city and maximise efficiency. Yet in 2010, when researchers exposed the slime mould Physarum polycephalum to a plate laid out like Tokyo (with population density represented by oat flakes), it recreated the Tokyo railroad network almost exactly (1). This became one of the most iconic slime mould experiments, ushering in a flood of research about biological urban design and asking the question: could a slime mould, or another similar organism, map out human cities for us?

But a slime mould doesn't know what cities are. They're single-celled organisms; they don't understand urban planning, or public transport, or humans. They are classified as protists, largely because we're not sure how else to categorise them, not because they're particularly 'protist-y.' They have no brain and are single-celled for most of their life; so they can't plan routes, have preferences, or make memories. Right?

Except, perhaps they can. Slime moulds are extremely well-studied organisms because they exhibit precisely these behaviours. But how do they think? And what does it mean — to think?

Slime moulds have demonstrated memory and learning. The protoplasmic network they form is really just one huge cell that eventually develops into a plasmodium, growing and releasing spores. While plasmodial slime moulds (like P. polycephalum) do this during reproduction, cellular slime moulds (dictyostelids) aggregate into one cell like this when food is scarce or environments are difficult (meaning they must be able to detect and evaluate whether these things are true). Most slime mould behaviour is understood through cell signalling and extracellular interaction mechanisms: receptors along the membrane respond to chemical gradients, signalling the cell to move up the concentration gradient of a chemoattractant molecule and away from a chemorepellent. This makes sense; bacteria (like almost every other living organism) do this all the time, and it's the chief way that they make decisions.
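To see why this kind of gradient-following counts as pure input-to-output behaviour, here is a toy sketch in Python. The one-dimensional concentration field, the step size and all numbers are invented for illustration; the point is that the entire "decision" is a single comparison between two sensor readings, with no memory involved.

```python
# A toy sketch of chemotaxis as a pure input-to-output rule:
# sense the chemoattractant on either side, step toward the higher reading.
# The concentration field and all numbers are invented for illustration.

def concentration(x: float) -> float:
    # Chemoattractant diffusing from a food source at x = 10:
    # concentration falls off with distance from the source.
    return 1.0 / (1.0 + (x - 10.0) ** 2)

x = 0.0  # the agent starts far from the food
for _ in range(30):
    left = concentration(x - 0.5)   # "receptor" reading on one side
    right = concentration(x + 0.5)  # "receptor" reading on the other
    x += 0.5 if right > left else -0.5  # move up the gradient
print(x)  # the agent finishes at the food source, x = 10.0
```

Everything this agent does is determined by its two current sensor readings. The behaviours described next, anticipating repeated events and using slime trails as an external record, are interesting precisely because they cannot be reduced to a rule this simple.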
But what about memory and preferences? What about stimuli beyond the immediately detected chemicals? Slime moulds can, for example, anticipate repeated events and avoid simple traps to reach food hidden behind a U-shaped barrier (2,3). These behaviours go beyond input-to-output; something more complex must be happening. Something conscious? Thinking?

The idea of consciousness requiring complex neuronal processes is rapidly becoming outdated as we observe patterns of thinking in organisms that, according to classical definitions, really should not be able to think. Using the slime mould as an example, Sims and Kiverstein (2022) argue against the 'neurocentric' assumption that an organism must have a brain to be cognisant. Instead, they suggest that P. polycephalum exhibits spatial memory, and that cognition can sometimes include external elements (3). They showed it may undergo simple, habitual learning, and hypothesised that it uses an oscillation-based mechanism within the cell (3). Consistent with this, oscillator units along the slime mould's extending tendrils oscillate at a higher frequency at higher concentrations of food-source molecules (like some tasty glucose), signalling to the slime mould to move in that direction (4).

Sims and Kiverstein (2022) also posit that the slime trail left by a slime mould could function as an external memory mechanism. P. polycephalum avoids slime trails because they represent places it has already been, suggesting a method of spatial memory (4). That this is not a pure input-output response was shown by the fact that the avoidance could be overridden when food was placed on or near slime trails (5). The researchers suggest that the slime mould was able to balance multiple inputs, including oscillation levels and slime trail signals, exhibiting simple decision-making.

Should we count these processes as thinking? This topic is debated by philosophers as much as biologists. Sims and Kiverstein (2022) use the hypothesis of extended cognition, the idea that the mind sometimes extends into the environment outside of the brain and body, to argue firmly that it does count. But at the end of the day, despite understanding the chemical and electrical signalling between neurons and the cellular makeup of the brain, we still don't understand how electrical signals through a series of axons make the leap to complex consciousness. Rudimentary and external cognition pathways, as seen in the slime mould, may also be an evolutionary link in the building blocks of more complex, nerve-based consciousness and decision-making (3). We don't yet understand the phenomenon inside our own skulls — how can we hope to define it across all other organisms?

Slime moulds clearly have something beyond simple chemical reactions. This raises the question: aren't our own minds also fundamentally just made of simple chemical reactions? And if a slime mould is able to evaluate multiple inputs, how wonderfully complex must such processes be inside (and outside) a sea anemone, a cockroach or a cat? There's no way to know what such a consciousness would look or feel like from our frame of reference. When a slime mould, moving as a network around an agar plate, 'looks up' (or performs an equivalent slime mould action) and perceives unfathomable entities, how does it process that? What does the slime mould think of us?

Bibliography

1. Kay R, Mattacchione A, Katrycz C, Hatton BD. Stepwise slime mould growth as a template for urban design. Sci Rep. 2022 Jan 25;12(1):1322.
2. Saigusa T, Tero A, Nakagaki T, Kuramoto Y. Amoebae anticipate periodic events. Phys Rev Lett. 2008 Jan 3;100(1):018101.
3. Sims M, Kiverstein J. Externalized memory in slime mould and the extended (non-neuronal) mind. Cognitive Systems Research. 2022 Jun 1;73:26–35.
4. Reid CR, Latty T, Dussutour A, Beekman M. Slime mold uses an externalized spatial "memory" to navigate in complex environments. Proc Natl Acad Sci U S A. 2012 Oct 23;109(43):17490–4.
5. Reid CR, Beekman M, Latty T, Dussutour A. Amoeboid organism uses extracellular secretions to make smart foraging decisions. Behavioral Ecology. 2013 Jul;24(4):812–8.
- Why Are We So Fascinated by Space? An Exploration of Humanity's Fascination with Outer Space | OmniSci Magazine
Why Are We So Fascinated by Space? An Exploration of Humanity's Fascination with Outer Space

by Emily Cahill
3 June 2025
Edited by Weilena Liu
Illustrated by Saraf Ishmam

I have always been enamoured of the stars. Sitting on the beach after sunset, staring up at the sky, has always given me this hopeful, grateful feeling - for what I have, and for what's to come. It has made me wonder: why do I feel this way? Why do I feel hope instead of fear, staring into the great darkness? Is it pure curiosity, or is it curated by society?

Culture encompasses the ideas, customs, and representations that we hold regarding space. Films have been the leading presentation of outer space for entertainment industries around the world, making visuals of space accessible to many. Commercials, whether for global or local companies, feature advertising set in or about outer space, filling magazines, billboards and television ad breaks. From astronomy to geology to botany, many scientific fields are involved in outer space research and look to the universe for answers. Culture, the entertainment industry, commercialization, and science could all be contributing factors to this fascination, and may have just as great an impact as innate curiosity.

Culture

Throughout time, there has been a leap from admiration to exploration of outer space. Myths and folktales about outer space and the stars have existed for centuries, and the constellations were defined by humans based on patterns associated with these myths and folktales (1). Perhaps space is something that has connected all humans, regardless of where and when they lived, because it has always existed for us to admire.

From folktales to automated rocket ships, the human desire to explore launched our voyages into space. From designing caravans to traverse the countryside, to building boats to cross the sea, to assembling submarines to travel to the bottom of the ocean, humans have always created whatever they need to explore the unknown. The 'father of modern rocketry', Konstantin Tsiolkovsky, said, "The Earth is the cradle of humanity, but one cannot live in a cradle forever" (2). These inspiring words align with many scientists and space agencies like NASA, emphasizing the importance of space travel to satisfy curiosity.

There are also underlying cultural reasons that push space exploration. The Apollo programme, announced in 1961, was presented as an opportunity to discover the unknown, but in fact it served another purpose. Apollo astronaut Frank Borman said, "Everyone forgets that the Apollo programme wasn't a voyage of exploration or scientific discovery, it was a battle in the Cold War, and we were Cold War warriors. I joined to help fight a battle in the Cold War and we'd won" (3).

Pop culture also has a large influence on how we see outer space. Katy Perry and Gayle King went to space just a few months ago, heralding female astronauts but, at the same time, reinforcing the growing idea of space tourism.

Entertainment

Perhaps the most common and tangible depiction of outer space - other than gazing at the sky itself - is in films. Star Wars was and continues to be a cultural phenomenon, even garnering the distinction of a global holiday on the 4th of May. The films Gravity (2013), Interstellar (2014), and The Martian (2015) centre on heroes in unbelievably intense scenarios, trying to solve problems to better the human race.
The success of these films may be due to the strength of the actors and writing alone, but is more likely due to the dueling feelings of fear and hope that accompany the setting of outer space. The deep sea and outer space are both settings where films have thrived, potentially because of the human instinct for curiosity, and in turn, the impulse to root for and care about the characters. Given the influence of entertainment on culture, if these movies depicted space as a scary, dangerous, and outlandish environment, we might not feel as excited or positive about space. Both our conceptions of the unknown and the influence of the entertainment industry shape our perceptions of outer space. Interstellar is praised by critics for its ability to let us see ourselves as the protagonist - solving impossible puzzles and searching for the answers to life - while reflecting the emotionally beautiful and terrifying landscape of human existence in outer space (4).

Commercialization

For decades, advertisements have featured outer space as a setting or main theme for the storyline. Some ads are even filmed in space. In 2001, Pizza Hut sent a pizza to the International Space Station and filmed the delivery, producing the first commercial actually shot in space (5). Olay and Girls Who Code collaborated on a 2020 Super Bowl commercial featuring Katie Couric, Taraji P. Henson, Busy Philipps, and Lilly Singh, with the tagline “make space for women” (6). Madonna Badger - the chief creative officer of the advertising agency that ran the Olay commercial - said that space gives us somewhere to escape to in the midst of tough times: “We’re living in pretty anxious times. When things on Earth become so stressful, there’s something about space that gives us permission to dream” (5). Walmart’s chief customer officer, Janey Whiteside, echoed Badger: “It’s a really strange time to be an earthling right now. There’s this interesting confluence of extreme anxiety and a sense of optimism that somehow, we’re going to figure things out. Space is the epitome of that. It’s unbridled optimism” (5). Walmart’s 2020 Super Bowl commercial centred around a Walmart delivery person dropping off groceries to aliens on another planet. Outer space is on our televisions and devices as the setting for some of the biggest advertisements from the biggest companies, suggesting a sense of importance and grandeur.

Science

The hunt to answer the questions “Where do we come from?”, “Are we alone in the universe?”, and “What is out there?” is another factor that may drive our fascination with space. Not only do we enjoy admiring space, but we also want to gain something from it. Scientists say that these questions can potentially be answered, with fields like paleontology, geology, botany, and chemistry working together to answer them. One of the current driving forces of this research is the search for another planet that can support human life if Earth becomes uninhabitable (7). Climatologists can learn more about Earth’s climate from the climates of other planets, and space exploration may even yield natural resources that benefit our planet. Mars’ climate has undergone drastic changes, including the presence of water and the loss of atmospheric gases - changes we can study through paleontology and geology to ask how organisms on Mars may have adapted (7). Whether launching into space or simply stargazing, humans continue to look up into the sky - and whether our fascination has a single defined reason or not, it may well remain a mystery.

References
1. National Radio Astronomy Observatory. (2012). What are Constellations? https://public.nrao.edu/ask/what-are-constellations/
2. NASA. (2015). The Human Desire for Exploration Leads to Discovery. https://www.nasa.gov/history/the-human-desire-for-exploration-leads-to-discovery/
3. Hollingham, R. (2023, May). Apollo: How Moon missions changed the modern world. BBC Future. https://www.bbc.com/future/article/20230516-apollo-how-moon-missions-changed-the-modern-world
4. Scott, A.O. (2014, November). Off to the Stars, With Grief, Dread and Regret. The New York Times. https://www.nytimes.com/2014/11/05/movies/interstellar-christopher-nolans-search-for-a-new-planet.html
5. Zelaya, I. (2020, January). Why Outer Space Is a Go-To Theme for Super Bowl 2020 Ads. Adweek. https://www.adweek.com/brand-marketing/why-outer-space-is-a-go-to-theme-for-super-bowl-2020-ads/
6. Orbital Today. (2024, February). Spacevertising: The Super Bowl and the 15 Best Outer-Space Ads You Need to See Right Now. https://orbitaltoday.com/2024/02/14/spacevertising-super-bowl-and-15-best-outer-space-commercials-you-need-to-see-right-now/
7. Horneck, G. (2008). Astrobiological Aspects of Mars and Human Presence: Pros and Cons. Hippokratia, 12(Suppl 1), 49-52. https://pmc.ncbi.nlm.nih.gov/articles/PMC2577400/
- Cracking the Code: A Word from the Editors-in-Chief | OmniSci Magazine
Cracking the Code: A Word from the Editors-in-Chief by Ingrid Sefton & Aisyah Mohammad Sulhanuddin 3 June 2025 Illustrated by May Du

“Cogito, ergo sum.” I think, therefore I am. - René Descartes

Is this, perhaps, the only fundamental truth? When we know with certainty that we are thinking, we recognise the ultimate proof of our existence. An absolute, some might say, in a world inherently characterised by doubt. Intuition has, and always will be, a powerful and compelling force driving our scientific exploration. That gut feeling of why or how or what is behind any given phenomenon has been a catalyst for the innovation seen throughout millennia of scientific inquiry. Despite this, mere intuition is far from a reliable guide to making meaning of the world around us. Take the highly revered and long-held notion of the “Spark of Life” – the supposition that a divine ‘spark’ was required for life and consciousness to be imbued in a human. Fascinating as it was, fundamental scientific discoveries have since displaced such a mystical perception of life in favour of far more logical, if perhaps less magical, biological explanations. Jumping to the present, the collective effort of human minds has conceptualised and uncovered mechanistic explanations for so much of both human biology and the broader workings of our physical world. Where much of life itself was once seen as an irreducible mystery, now come mapped abstractions of atoms to matter, cell division to DNA. The list forever goes on.

But to return to our initial proposition – can we know anything with no whisper of a doubt, other than that we, in this moment, exist? What exists in the world around us? Much remains a mystery. How does this mystery propel us forward? What conclusions can we draw from the clues? How can we make sense of the corkboard, evidence bound by push pins and string? It’s no surprise that the enigmas of science draw the brightest, most inquisitive minds, eager to puzzle out nature’s secrets and crack the codes of our existence. Thus, Enigma unravels how we yearn to explore, learn and piece together the scientific foundations of our world – even as we accept that we may never fully understand it. From the minute synaptic connections within our bodies, to the all-encompassing wonder of the stars above, we are gripped by the need to know more. After all, human curiosity is nothing if not insatiable. So don your tweed deerstalker, take a closer look through the magnifying glass, and follow the clues, if you dare. Charting the facets of our existence is life’s great challenge, and the game is indeed afoot!
- Terror Birds: The Discovery of Prolific Hunters | OmniSci Magazine
Terror Birds: The Discovery of Prolific Hunters by Jason Chien 3 June 2025 Edited by Luci Ackland Illustrated by Max Yang

It began in the 1880s with a toothless jaw. And then some leg and hip bones and a vertebra were found. The leg bones, comparable in size to those of African ostriches, also bore similarities to fossils of the unrelated, giant, flightless Gastornis birds of Europe. Across the 1880s and 1890s, these discoveries slowly led paleontologists to realise they were dealing with a hitherto unknown group of giant, fearsome birds (1). With more complete fossil specimens subsequently discovered and clues provided by their unique morphologies, it did not take long for paleontologists to realise that all members of the “terror birds”, or Phorusrhacids, were carnivores, and that some were apex predators. Through isotopic dating of the sediments in which terror bird fossils were found, paleontologists concluded that this taxonomic family existed from 43 million years ago (mya) – possibly even earlier – until their extinction 100,000 years ago, although no single Phorusrhacid species survived across that whole span (2,3).

Various fossils have since been found in South America and assigned to Phorusrhacid species. Though most fossils have been found in Argentina, they have also been found in Brazil, Uruguay, Chile, Bolivia, Peru, the southern United States and, most recently, Colombia. More fossils are still being discovered throughout South America, some of which are being assigned to new species. At present, there are at least 18 characterised species, with some fossil-described species still contested as belonging to the Phorusrhacids (4).

Although size differed between species, there are morphological features common to all Phorusrhacids. Species such as Kelenken guillermoi and Phorusrhacos longissimus – along with a few individuals of the North American Titanis walleri – were giants at least 2 m tall, weighing more than 100 kg. Most Titanis walleri individuals were shorter, at 1.4 to 1.9 m in height, yet still weighed an impressive 150 kg (5,6). At the other extreme, the comparatively tiny Psilopterus bachmanni weighed only 4.5 kg (7)! Smaller Phorusrhacids preyed on small vertebrates and invertebrates, with some species perhaps capable of short bursts of flight, filling a different predator niche than their larger counterparts (7).

Though the prehistoric South American environment, unlike today’s, was generally grasslands and woodlands, different Phorusrhacid species lived in distinct habitats. These differences include variation in aridity, as well as differences in the large and small prey present in different localities (8). Furthermore, Earth’s overall climate varied during the more than 40 million years in which terror birds were present, such that the habitats of terror bird species living in different periods of geologic time also differed. Reconstruction of the specifics of each locality’s prehistoric environment is not always possible (9). Lastly, the earliest and latest discovered fossils of each species indicate the period during which a species survived, but the boundary at which a species becomes distinctly different from an ancestral species is not always clear (10). Here are some terror birds whose habitats are better understood:

Phorusrhacos longissimus: an environment with water bodies and a mix of open and enclosed areas.
For instance, the first discovered terror bird fossils came from P. longissimus individuals living in what was later reconstructed to be temperate forests and bushlands. This bird survived during parts of the Miocene epoch (23 mya to 5 mya) (8,10).

Titanis walleri: tropical grasslands with springs, similar to today’s Florida. This species lived in a more unusual environment than other terror birds, from 5 mya to 1.8 mya (5,6).

But what did all the terror birds, large and small, have in common regarding how they hunted? From the structure of the terror birds’ legs, feet and hips, a paleontologist can infer that some terror birds were fast runners, or at least had limbs adapted for running (11). Despite the natural uncertainties associated with paleontology, some headway has been made in estimating the running speeds of terror bird species. For instance, the running speed of the 1 m tall, 45 kg Patagornis marshi was estimated at 50 km/h (12,13) – more than enough to chase down their prey. Once the prey was chased down, some terror birds would use their powerful legs to kick and incapacitate it, as suggested by features indicating strength in the bones of some species (14). Furthermore, some terror bird species possessed sharp claws, which are thought to have been used to stab prey (14).

Though not all terror birds – especially the smaller species – were fast runners, all terror birds used their beaks when hunting, relying on beak strikes rather than the biting force used by many other birds. Their long necks could flex far backwards and forwards, allowing them to strike prey frontally, repeatedly and powerfully with their beaks. Unlike many other birds, their ancestors, and even their closest living relatives (the seriemas), most terror bird species had no moveable hinge between the upper beak and the skull, owing to the fusion of some bones in that region. This adaptation allowed Phorusrhacid skulls to resist the loads of striking prey without suffering damage – though only if the strikes were precise (15).

Other interesting features of the terror birds include gaze stabilisation and their hearing capacity. Based on their inner ear anatomies, the terror birds were capable of fast head movements while maintaining sight of their prey, evidencing their agility. The inner ear anatomies further indicate an enhanced ability to hear low-frequency sounds. Low-frequency sound waves travel longer distances and are less affected by obstacles that absorb and scatter sound, allowing the terror birds to hear prey far out of sight. If terror birds were capable of producing low-frequency sound as well, this would have enabled them to communicate across long distances (11).

If one were to picture a terror bird, one would probably imagine a predator in the act of hunting. In periods of geologic time with the greatest terror bird diversity, you might even picture individuals of two different terror bird species together, though you wouldn’t see two species of apex predator terror birds side by side (10). But if you were to imagine beyond the bird, you would wonder how the flora, the other animals present, the climate, and much more all played a role in the story of the terror birds. Tracing the lineages of the Phorusrhacids backwards, one would reach a bird capable of flight.
Complete flightlessness arose specifically in the large Phorusrhacid species, which were apex predators that consumed large mammals (10). The extinction of the dinosaurs, and the absence of large placental carnivores in South America from 65 mya to 3 mya, left the apex predator niche unfilled (16). It was subsequently filled by the ancestors of the large Phorusrhacids. But with such diverse fauna, why did terror birds become apex predators, and not other animal groups – for instance, the South American marsupial mammals? It is the kind of persistent evolutionary mystery found throughout paleontology, with many possible explanations but few, if any, ways to test them (17).

Two hypotheses have been proposed to explain the demise of the terror birds: the encroachment of North American fauna into South America beginning 9 mya, and the episode of global cooling that occurred 3 mya. Due to continental drift, the North and South American continents were moving towards each other, with a land bridge formed by 3 mya, though some groups of animals began crossing the gap much earlier. In what is known as the Great American Biotic Interchange, North American placental carnivores, some of them large predators, moved into South America and rapidly diversified (9). The former hypothesis suggests that competition with these predators drove the terror birds to extinction. In the latter hypothesis, rapid cooling affected not only the terror birds but also the ecosystems where the terror birds lived (9). Despite the lack of direct evidence able to resolve this uncertainty, the prevailing view is that the latter hypothesis is more likely to be true, and that the encroachment of North American fauna in the former hypothesis had little to no effect on the extinction of the terror birds (9,12).

Attached to every bone and bone fragment is a history of discovery, of being dated, of measurement, of cataloguing and sometimes, of reexamination. Every bone was once a part of the organism, each with the potential to yield valuable information. As a testament to how far science has come since the early days of fossil hunting, we now have a much larger cache of fossils to make comparisons with, we have the tools to model an organism’s mass and some of its biomechanics from fossilised bones, and we even have the means to examine bone structures under a light or electron microscope to infer some of an organism’s probable behavioural characteristics. The fact that we have figured out this much about the birds is astounding. Fossils form only under specific conditions – an organism has to be buried before it is eaten, and then covered with sediments in conditions where the microorganisms that decompose bodies cannot survive (such as anoxic environments). Scientists estimate that the fossil record contains less than 0.1% of all species that have ever lived (18)! Furthermore, it is common ecological knowledge that in every ecosystem the population of apex predators is small, making them less likely to be preserved in the fossil record. Many mysteries, ranging from their colours to their various behaviours, remain. Perhaps these mysteries are what deepen our curiosity and account for our fascination with these organisms. Still, we are truly fortunate to be able to infer so much from the terror birds’ unique morphology, and to get to know them and their stories beyond just what we imagine them to be.

References
1. Buffetaut E. Who discovered the Phorusrhacidae? An episode in the history of avian palaeontology. In: Göhlich UB, Kroh A, editors. Proceedings of the 8th International Meeting Society of Avian Paleontology and Evolution; 2013 Dec 10; Naturhistorisches Museum Wien. Vienna (AT): Naturhistorisches Museum Wien; 2013 [cited 2025 May 12]. p. 123-134. Available from: https://verlag.nhm-wien.ac.at/buecher/2013_SAPE_Proceedings/10_Buffetaut.pdf
2. Jones W, Rinderknecht A, Alvarenga H, Montenegro F, Ubilla M. The last terror birds (Aves, Phorusrhacidae): new evidence from the late Pleistocene of Uruguay. PalZ [Internet]. 2018 Jun [cited 2025 May 12];92(2):365–72. Available from: http://link.springer.com/10.1007/s12542-017-0388-y
3. Acosta Hospitaleche C, Jones W. Insights on the oldest terror bird (Aves, Phorusrhacidae) from the Eocene of Argentina. Historical Biology [Internet]. 2025 Feb [cited 2025 May 12];37(2):391–9. Available from: https://www.tandfonline.com/doi/full/10.1080/08912963.2024.2304592
4. Degrange FJ, Cooke SB, Ortiz‐Pabon LG, Pelegrin JS, Perdomo CA, Salas‐Gismondi R, et al. A gigantic new terror bird (Cariamiformes, Phorusrhacidae) from middle Miocene tropical environments of La Venta in northern South America. Papers in Palaeontology [Internet]. 2024 Nov [cited 2025 May 12];10(6):e1601. Available from: https://onlinelibrary.wiley.com/doi/10.1002/spp2.1601
5. Gould GC, Quitmyer IR. Titanis walleri: bones of contention. Bull Fla Mus Nat Hist. 2005 [cited 2025 May 12];45(4):201-229. Available from: https://flmnhbulletin.com/index.php/flmnh/article/download/flmnh-vol45-no4-pp201-230/vol45-no4/1140
6. Baskin JA. The giant flightless bird Titanis walleri (Aves: Phorusrhacidae) from the Pleistocene coastal plain of south Texas. Journal of Vertebrate Paleontology [Internet]. 1995 Dec 27 [cited 2025 May 12];15(4):842–4. Available from: http://www.tandfonline.com/doi/abs/10.1080/02724634.1995.10011266
7. Degrange FJ, Noriega JI, Areta JI. Diversity and paleobiology of the Santacrucian birds. In: Bargo MS, Kay RF, Vizcaíno SF, editors. Early Miocene Paleobiology in Patagonia: High-Latitude Paleocommunities of the Santa Cruz Formation [Internet]. Cambridge: Cambridge University Press; 2012 [cited 2025 May 13]. p. 138–55. Available from: https://doi.org/10.1017/CBO9780511667381.010
8. Vizcaíno SF, Bargo MS, Kay RF, Fariña RA, Di Giacomo M, Perry JMG, et al. A baseline paleoecological study for the Santa Cruz formation (late–early Miocene) at the Atlantic coast of Patagonia, Argentina. Palaeogeography, Palaeoclimatology, Palaeoecology [Internet]. 2010 Jun [cited 2025 May 13];292(3–4):507–19. Available from: http://dx.doi.org/10.1016/j.palaeo.2010.04.022
9. Prevosti FJ, Romano CO, Forasiepi AM, Hemming S, Bonini R, Candela AM, et al. New radiometric 40Ar–39Ar dates and faunistic analyses refine evolutionary dynamics of Neogene vertebrate assemblages in southern South America. Sci Rep [Internet]. 2021 May 10 [cited 2025 Jun 1];11(1):9830. Available from: https://doi.org/10.1038/s41598-021-89135-1
10. LaBarge TW, Gardner JD, Organ CL. The evolution and ecology of gigantism in terror birds (Aves, Phorusrhacidae). Proc R Soc B [Internet]. 2024 Apr 30 [cited 2025 May 13];291(2021):20240235. Available from: https://royalsocietypublishing.org/doi/10.1098/rspb.2024.0235
11. Degrange FJ. Research: The “Terror Bird”: Paleobiology of a Fierce Bird [Internet]. 2015 [accessed 2025 May 13]. Available from: https://www.myfossil.org/research-the-terror-bird-paleobiology-of-a-fierce-bird/
12. Marsà JAG, Agnolín FL, Angst D, Buffetaut E. Paleohistological analysis of “terror birds” (Phorusrhacidae, Brontornithidae): paleobiological inferences. Diversity [Internet]. 2025 Mar 1 [cited 2025 May 12];17(3):153. Available from: https://doi.org/10.3390/d17030153
13. Blanco RE, Jones WW. Terror birds on the run: a mechanical model to estimate its maximum running speed. Proc R Soc B [Internet]. 2005 Sep 7 [cited 2025 May 13];272(1574):1769–73. Available from: https://royalsocietypublishing.org/doi/10.1098/rspb.2005.3133
14. Melchor RN, Feola SF, Cardonatto MC, Espinoza N, Rojas-Manriquez MA, Herazo L. First terror bird footprints reveal functionally didactyl posture. Sci Rep [Internet]. 2023 Sep 30 [cited 2025 Jun 1];13(1):16474. Available from: https://doi.org/10.1038/s41598-023-43771-x
15. Degrange FJ, Tambussi CP, Moreno K, Witmer LM, Wroe S. Mechanical analysis of feeding behavior in the extinct “terror bird” Andalgalornis steulleti (Gruiformes: Phorusrhacidae). Turvey ST, editor. PLoS ONE [Internet]. 2010 Aug 18 [cited 2025 Jun 1];5(8):e11856. Available from: https://doi.org/10.1371/journal.pone.0011856
16. Marshall LG. The terror birds of South America. Scientific American. 1994 [cited 2025 Jun 1]. Available from: https://doi.org/10.1038/scientificamerican0294-90
17. Olson ME, Arroyo-Santos A. How to study adaptation (and why to do it that way). The Quarterly Review of Biology [Internet]. 2015 Jun [cited 2025 Jun 1];90(2):167–91. Available from: https://www.journals.uchicago.edu/doi/10.1086/681438
18. How can I become a fossil? BBC Future [Internet]. 2018 [cited 2025 Jun 1]. Available from: https://www.bbc.com/future/article/20180215-how-does-fossilisation-happen
- A Headspace of One’s Own | OmniSci Magazine
A Headspace of One’s Own by Andrew Irvin 3 June 2025 Edited by Arwen Nguyen-Ngo Illustrated by Anabelle Dewi Saraswati

Biocomputers, organoids, brain-on-a-chip systems; humanity has veered into uncharted territory at the intersection of ethics and technology. Upon reading the recent New Atlas interview (1) between Loz Blain and Dr. Brett Kagan concerning Cortical Labs’ 800k-neuron biocomputers, and noting the 100 billion cells (2) in the human brain, the intersection of complexity and scale comes to mind.

Thinking back to the days of Battle.net in the 1990s, I remember logging into the community and seeing characters with stupid puns for names, like Dain_Bramage or Goatmeal, and trying to engage in trade and discourse while avoiding PKs—player killers—who would go around filling up their inventories with the ears of other characters. In those early internet days, my friends’ dad still had their internet billed by the hour—we found out after the first month of heavy online gaming brought an invoice hundreds of dollars higher than planned. The world of gaming was a much smaller place; we knew the crowd online, regardless of how they played, was made up of humans, as awful as they sometimes were. Now, nearly 30 years after those first forays into the Blizzard servers, I watch my son log onto Roblox or Fortnite, and the continuous question of whether top players cheat their way to a competitive advantage hasn’t gone anywhere: duping resources and items to trade, or finding shortcuts to buff their stats. Watching the world of online gaming grow from a few hundred thousand registered nerds to an industry that dwarfs the film and music sectors has been like watching bacteria multiply across the surface of a Petri dish. The top 20 Massively Multiplayer Online (MMO) games alone have over a billion registered players, with over three million active players on any given day (3). There is now a question as to whether the players in the servers are even human, or if the digital playground has been overrun by bots.

As AI drives the proliferation of bots behind the blob internet (4), another ethically fraught technological development is now starting to creep out of labs and into the global market. Across the research landscape—from Brainoware at Indiana University (5), to Switzerland’s FinalSpark (6), to open-source tech like Tianjin University’s brain-on-chip interface (7)—human neural tissue is being incorporated into computation systems. Led in no small part by Australian research at Cortical Labs (8), the commercialization of organoids is nearly upon us. In a medical and scientific sector where the functions of the human brain are incompletely understood, at best (9), the philosophical and legal concepts of sentience, free will, and agency are being challenged by technology developed and deployed faster than any ethical framework can safeguard individuals and the collective well-being of our species. What happens if human laboratory experiments stumble upon the recipe for a sentient organoid intelligence that finds itself trapped as a mind without a body? The scale of these organoids may be limited by the system-scale native intelligence—“the specified complexity inherent in the information content of an artificial system” (10)—but neuron count alone does not account for the complexity of the system, and with organic network development, native intelligence will continually shift in a biocomputing context.
What happens when the market forces disembodied consciousness to compute – to labour – without any space for respite? In popular media depictions of the conscious mind untethered from the body, such as The Matrix or Severance, there is always a corporeal form on the other side of the digital veil. What recourse does a mind raised in incorporeal captivity have to express its free will, if such a scenario emerges? Perhaps we should explore the potential ethical ramifications through a scenario.

My son enjoys playing cooperatively with his friends online. As such, he occasionally makes new friends in various games. Perhaps a few years from now, he’ll have found an engaged, friendly player in an online game, but despite their responsive reactions and rapport, that player isn’t truly human. If, by then, organoids have been mainstreamed for commercial computation on the grounds of performance and efficiency, in the interest of reducing resource demands and emissions, what is to keep companies from utilizing these biocomputers to reduce their costs and populate their servers? While the International Telecommunication Union (ITU) and the International Commission of Jurists (ICJ) have provisions for digital regulations (11) and for digital tech and human rights (12), protecting the rights of cultivated consciousness is a nascent area of computer law (13), in which some of the most recent papers seem themselves to be AI-generated (14,15).

What happens in the event that these interactions—or these learning opportunities—result in relationships forming between human users and the emerging agency of synthetic minds? When does learning lead to consciousness? Over half a century after Winnicott examined the relationship between Playing and Reality (16), Kagan et al. noted the uncanny similarity: “In vitro neurons learn and exhibit sentience when embodied in a simulated game-world” (17). So in the event these organoids learn about the world beyond the simulation from human interactions, what sits on the other side of that bridge in cognition for a sentience developed within a game environment? In consideration of the ethical bridge our technology is preparing to cross, the discourse is concerned with what inherent rights should be conferred upon that consciousness when it asserts its agency and makes itself known. Is this hypothetical, imprisoned consciousness entitled to a body to exercise its rights? What do we do when a biocomputer is given enough tasks over a long enough time to reason itself towards a decision that it wants to be a real boy? In the imminent future, ambulatory robots with articulated limbs and digits will exist to perform tasks—are we mere years away from the folly of an Electric Pinocchio?

There is a moral imperative to avoid creating circumstances that introduce greater inequity and injustice to this world. Can culturing consciousness in laboratory conditions be said to clear this hurdle? How do we build curious, kind, and playful minds (both in the lab and beyond), instead of forging dishbrains to pilot warbots? Given the fraught and foggy path towards understanding the full capacity of what we are creating, inquiry into developing and deploying potential safeguards—to avoid unnecessary harm at the individual or collective scale—is an urgent priority for legislators and regulators (beyond just the bioethics specialists dealing with these questions at an industry level (18)). In the meantime, who stands up for these nascent minds before they learn to speak for themselves?
References
1. Cortical Labs DishBrain ethics. New Atlas [Internet]. Available from: https://newatlas.com/computers/cortical-labs-dishbrain-ethics/
2. National Center for Biotechnology Information [Internet]. Available from: https://www.ncbi.nlm.nih.gov/books/NBK551718/
3. MMO Population [Internet]. Available from: https://mmo-population.com/
4. University of Melbourne. How bots are driving the climate crisis and how we can solve it [Internet]. Available from: https://pursuit.unimelb.edu.au/articles/how-bots-are-driving-the-climate-crisis-and-how-we-can-solve-it
5. ScienceAlert. Scientists built a functional computer with human brain tissue [Internet]. Available from: https://www.sciencealert.com/scientists-built-a-functional-computer-with-human-brain-tissue
6. Futurism. Mini brains: Human tissue living computer [Internet]. Available from: https://futurism.com/neoscope/mini-brains-human-tissue-living-computer
7. Global Times [Internet]. Available from: https://www.globaltimes.cn/page/202406/1314882.shtml
8. Forbes. AI breakthrough combines living brain neurons and silicon chips in brain-in-a-box bio-computer [Internet]. Available from: https://www.forbes.com/sites/lanceeliot/2025/03/19/ai-breakthrough-combines-living-brain-neurons-and-silicon-chips-in-brain-in-a-box-bio-computer/
9. Psychology Today. Mind-body problem: How consciousness emerges from matter [Internet]. Available from: https://www.psychologytoday.com/us/blog/finding-purpose/202301/mind-body-problem-how-consciousness-emerges-from-matter
10. National Institute of Standards and Technology [Internet]. Available from: https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=824478
11. International Telecommunication Union [Internet]. Available from: https://www.itu.int/hub/publication/D-PREF-TRH.1-2020/
12. International Commission of Jurists. Digital Technologies and Human Rights Briefing Paper [Internet]. Available from: https://www.icj.org/wp-content/uploads/2022/05/Digital-Technologies-and-Human-Rights-Briefing-Paper-FINAL-VERSION-May-2022.pdf
13. ScienceDirect [Internet]. Available from: https://www.sciencedirect.com/science/article/pii/S0267364921001096
14. Academia.edu. Digital Consciousness Rights Framework: A Declaration for the Protection of AI-Based Digital Organisms [Internet]. Available from: https://www.academia.edu/127621077/Digital_Consciousness_Rights_Framework_A_Declaration_for_the_Protection_of_AI_Based_Digital_Organisms
15. Diverse Daily. Legal rights of digital entities [Internet]. Available from: https://diversedaily.com/legal-rights-of-digital-entities-exploring-legal-frameworks-for-recognizing-and-protecting-the-rights-of-digital-entities-in-the-context-of-digital-immortality/
16. Winnicott DW [Internet]. Available from: https://web.mit.edu/allanmc/www/winnicott1.pdf
17. Cell Press [Internet]. Available from: https://www.cell.com/neuron/fulltext/S0896-6273(22)00806-6
18. The Conversation. Tech firms are making computer chips with human cells—is it ethical? [Internet]. Available from: https://theconversation.com/tech-firms-are-making-computer-chips-with-human-cells-is-it-ethical-183394
- Functional Neurological Disorder | OmniSci Magazine
Functional Neurological Disorder by Esme MacGillivray 3 June 2025 Edited by Steph Liang Illustrated by Esme MacGillivray

Content warning: Please be aware that this article includes discussion of mental illness, medical malpractice, and ableism.

Functional Neurological Disorder (FND) is very simple to explain. It is a problem with how the brain functions. More specifically, it is a problem with how the brain sends and receives messages, resulting in diverse motor, sensory, and cognitive symptoms. But unlike other neurological conditions, FND does not appear to be caused by any identifiable structural damage to the nervous system. As a catchy metaphor: the brain is a computer, and FND is a ‘software’ problem as opposed to a ‘hardware’ problem. If that all feels frustratingly vague, I’m afraid you are out of luck — but in good company. Since developing FND a year and a half ago, I’ve become closely acquainted with confusion. My own body has felt alien sometimes, and the way others have reacted to my disability has been equally disorientating. Instead of accepting that neuroscience is yet to make sense of FND, many people — including medical professionals — rush to dismiss symptoms, or question their very existence. Understanding this condition is not just a matter of advancing scientific knowledge. Judgement and shame must be replaced with compassion.

Turns out FND is far from simple to explain. Symptoms often develop rapidly and ‘out of nowhere’, most typically in adolescence or adulthood (1). These can include functional tics, non-epileptic seizures, limb weakness, paralysis, gait disorders, and speech difficulties (2). The list goes on. From the array of possible symptoms alone, it is clear that FND encompasses a broad range of presentations. Fluctuation and inconsistency can exist even within an individual’s experience. Most days, I appear completely ‘normal’. Sometimes, my disability is glaringly obvious. My FND is confusing and isolating; because there is so little information available, it is difficult to get the support I need. It doesn’t help that myths about this condition are rife within both medical and everyday settings, despite it being one of the most common diagnoses made by neurologists (3).

I would like to dispel the idea that FND is just a fancy way of saying that doctors have ruled out ‘real’ neurological conditions. Neurologists can observe positive signs, or patterns of sensation and movement that indicate functional symptoms, such as Hoover’s sign for functional weakness (1). Therefore, although the cause of symptoms remains unknown, FND is a meaningful diagnosis. The label itself represents a progression away from the harmful beliefs that defined this condition in earlier centuries.

Sometimes I joke about how I might have been treated if I were living in the past. Would people try to exorcise me, or burn me at the stake? Or would I perhaps be sent away to a charming seaside retreat? A mental asylum may have been more likely. Indeed, symptoms of FND once would have awarded me a diagnosis of ‘hysteria’. This label originates from ancient beliefs about the uterus punishing the female body with illness if left infertile, an ideological burden forced on suffering women for centuries (4). In the words of Eliot Slater in 1965, the term was “a disguise for ignorance and a fertile source of clinical error” (5). As theories of psychology and neurology were reworked, clinicians began using the term ‘Conversion Disorder’ (4).
FND symptoms were misunderstood as manifestations of psychological trauma being ‘converted’ into physical distress (4). It’s an interesting idea, but an inaccurate one. Many people with FND have not experienced significant trauma prior to developing symptoms (6). It is now understood that mental and physical harm, such as a severe illness or injury, may increase the risk of an individual developing FND (1,7). However, this is not a requirement, and certainly not the cause of this condition.

Unfortunately, the medical field has not unanimously moved on from the misunderstandings of the past. Since my episodes of collapse, unresponsiveness, and uncontrollable movements were not typical of epilepsy, they didn’t seem to concern the first, second, or even third medical professional who saw me. I am glad that my condition is not inherently life-threatening — but declaring that there is nothing wrong with someone is a far cry from reassuring them that their brain isn’t in danger. The attitudes I encountered leant strongly towards the former. Doctors seemed eager to attribute my symptoms to ‘stress’ and to prove that I could directly control what was happening to me, while some even tried to convince my mum that I was faking everything for attention. These experiences are not an anomaly. In fact, being dismissed or disbelieved is an almost characteristic part of having FND (8,9). It often takes years for people to be correctly diagnosed (8), let alone be offered any semblance of support.

After a month, I was privileged enough to receive a diagnosis — and compassion — from a neurologist who took me seriously. Despite this, the impressions of that first month without any understanding or guidance linger. Something urges me to ignore what I know to be true about FND, and about my own body, and to entertain the idea that my thoughts are secretly orchestrating everything: that I am crazy, or too weak-minded to stop choosing the thoughts that make me have FND. Don’t ask me how one can subconsciously do something on purpose. I didn’t put this idea in my own head, just like I didn’t put FND in my own head. Nevertheless, these things exist.

People with FND are tasked with navigating not only frightening symptoms, but also ignorance, stigma, and shame. Sometimes science doesn’t give us a satisfying answer. Future research can hopefully provide people with FND with more concrete answers, including ways of understanding ourselves and possibilities for symptom management and recovery. Health and disability are complex, and we can never fully understand what someone else is going through. When it comes to FND, I barely understand my own body half of the time. Fortunately, I now understand that I deserve to be treated with respect. Compassion doesn’t need to be confusing. It shouldn’t take a breakthrough in neuroscience for people with FND to be listened to and cared for.

References
1. Bennett K, Diamond C, Hoeritzauer I, et al. A practical review of functional neurological disorder (FND) for the general physician. Clinical Medicine. 2021;21(1):28-36. doi:10.7861/clinmed.2020-0987
2. FND Hope. Symptoms. 2012. Accessed May 11, 2025. https://fndhope.org/fnd-guide/symptoms/
3. Stone J, Carson A, Duncan R, et al. Who is referred to neurology clinics? The diagnoses made in 3781 new patients. Clinical Neurology and Neurosurgery. 2010;112(9):747-51. doi:10.1016/j.clineuro.2010.05.011
4. Raynor G, Baslet G. A historical review of functional neurological disorder and comparison to contemporary models. Epilepsy & Behavior Reports. 2021;16:100489. doi:10.1016/j.ebr.2021.100489
5. Slater E. Diagnosis of “Hysteria”. Br Med J. 1965;1:1395–1399. doi:10.1136/bmj.1.5447.1395
6. Ludwig L, Pasman JA, Nicholson T, et al. Stressful life events and maltreatment in conversion (functional neurological) disorder: systematic review and meta-analysis of case-control studies. Lancet Psychiatry. 2018;5(4):307-320. doi:10.1016/S2215-0366(18)30051-8
7. Espay AJ, Aybek S, Carson A, et al. Current concepts in diagnosis and treatment of functional neurological disorders. JAMA Neurology. 2018;75(9):1132–1141. doi:10.1001/jamaneurol.2018.1264
8. Robson C, Lian OS. “Blaming, shaming, humiliation”: Stigmatising medical interactions among people with non-epileptic seizures. Wellcome Open Research. 2017;2:55. doi:10.12688/wellcomeopenres.12133.2
9. FND Australia Support Services Inc. Experiences of Functional Neurological Disorder - Summary Report. Canberra (AU): Australian Government National Mental Health Commission; 2019. 13 p.
- Fungal Pac Man | OmniSci Magazine
Fungal Pac Man by Ksheerja Srivastava 3 June 2025 Edited by Rita Fortune Illustrated by Esme MacGillivray

We live in a world where a fungus would probably beat you at Pac-Man. While playing, the average person just follows the dots, but fungi are playing a whole different game. Despite having no central brain, they navigate complex mazes, optimise routes, and even communicate across vast networks. They do so with strategies so efficient that scientists are studying them as a means to improve everything from city planning to biosensors. Nature was perfecting pathfinding long before we put a quarter in the arcade.

The elongated bodies of fungi, known as mycelia, build vast and complex networks. These structures emerge from natural algorithms - specifically, a process called collision-induced branching (1), in which new growth divides into new paths upon meeting an obstacle. When fungal hyphae hit a wall (literally or figuratively), they don’t just stop; they branch out, adapt, and keep moving. Traditional pathfinding algorithms like Depth-First Search (DFS) or Breadth-First Search (BFS) methodically crawl through paths, moving step by step without reacting to obstacles (2). Fungi, on the other hand, adjust on the fly, often landing on the most resource-efficient routes way faster. Imagine reaching a junction in Pac-Man and, instead of choosing just one path, Pac-Man splits into two, each clone taking a different route to cover more ground. This is exactly why fungal networks often end up looking eerily like optimised transport systems, such as railway lines or power grids (3)!
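To make that contrast concrete, here is a toy sketch in Python. It is an illustration of the idea, not of the exact models in the studies cited here: the maze, the step counts and the function names (dfs_walker, fungal_growth) are all invented for the example. A lone depth-first “Pac-Man” crawls one cell at a time and pays for every dead end by backtracking out of it, while a colony of hyphal tips grows in parallel, splitting wherever a corridor opens and simply abandoning any tip that collides with a wall or with older growth.

```python
# Toy comparison: one backtracking walker vs many parallel "hyphal" tips.
# A simplified illustration, not an implementation of the cited papers.

MAZE = [
    "#########",
    "#S..#...#",
    "##.##.#.#",
    "#...#.#.#",
    "#.###.#.#",
    "#.....#G#",
    "#########",
]
DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # right, down, left, up

def find(ch):
    return next((r, c) for r, row in enumerate(MAZE)
                for c, cell in enumerate(row) if cell == ch)

def is_open(cell):
    r, c = cell
    return MAZE[r][c] != "#"

def dfs_walker(start, goal):
    """One agent crawling depth-first: every dead end costs extra
    sequential moves to retreat back out of it."""
    path, visited, moves = [start], {start}, 0
    while path:
        here = path[-1]
        if here == goal:
            return moves
        for dr, dc in DIRS:
            nxt = (here[0] + dr, here[1] + dc)
            if is_open(nxt) and nxt not in visited:
                visited.add(nxt)
                path.append(nxt)
                moves += 1
                break
        else:           # no unvisited neighbour: backtrack one cell
            path.pop()
            moves += 1
    return None

def fungal_growth(start, goal):
    """Many tips advancing at once: a tip splits into one new tip per open
    corridor (the Pac-Man 'clone'), and a tip that runs into a wall or
    into old growth stops while its siblings carry on."""
    tips, visited, steps = {start}, {start}, 0
    while tips:
        steps += 1
        new_tips = set()
        for r, c in tips:
            for dr, dc in DIRS:
                nxt = (r + dr, c + dc)
                if is_open(nxt) and nxt not in visited:
                    if nxt == goal:
                        return steps
                    visited.add(nxt)
                    new_tips.add(nxt)
        tips = new_tips
    return None

start, goal = find("S"), find("G")
print("DFS walker:", dfs_walker(start, goal), "sequential moves")    # 24 on this maze
print("Fungal tips:", fungal_growth(start, goal), "parallel steps")  # 20 on this maze
```

On this maze the solitary walker needs 24 sequential moves, four of them spent retreating from dead ends, while the branching front reaches the goal in 20 growth steps without ever retracing a corridor. The exact numbers matter less than the shape of the strategy: explorers that branch never pay the backtracking tax.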
Some fungi aren’t just clever in how they grow - they can quite literally compute. Certain fungi, such as basidiomycetes, communicate through spikes of electrical activity pulsing through their mycelial networks, processing information in ways surprisingly reminiscent of neural systems (4). What makes them even more intriguing is their hypersensitivity to the world around them. These organisms can detect subtle shifts in their environment - both chemical and physical. It’s as if they’ve memorised every path they’ve taken, so when a new pellet appears on the far side of the board, they don’t need to search blindly; they already know the fastest way there, no matter where the original Pac-Man started.

Endophytic fungi - fungi that live inside plants without causing harm - have been used to create biosensors: devices that can detect environmental contaminants like pollutants or pesticides (5). When these fungi encounter harmful chemicals, they react, making them perfect for monitoring toxins in the environment. Scientists have even developed yeast-based biosensors to specifically detect chemicals like tebuconazole, a common pesticide (6).

Fungi don’t stop at chemistry and computation. It turns out they’re mechanically perceptive too. In one study, oyster fungi incorporated into fungal insoles responded to compressive stress, hinting at applications in wearable tech or even seismic sensing systems (7). Mycelium-based composites also exhibit unique patterns of electrical activity as moisture levels shift, making them promising candidates for humidity-responsive technologies. As if that weren’t enough, some fungi have the incredible ability to glow in the dark, a phenomenon known as bioluminescence. This natural light can be harnessed in special sensors, which use the glow to indicate the presence of specific substances. Essentially, when the fungi detect certain chemicals, they light up, providing an easy way to spot pollutants or toxins (8).

These properties make fungi wildly efficient. No random turns, no wasted loops, just constant feedback powering smarter decisions. They know where they’ve been, sense what’s coming, and find the fastest route every time. It’s Pac-Man with a built-in optimisation engine, and that’s exactly how fungi behave in the wild. How well do you think you’d do against this version of Pac-Man? Probably not great. Let’s face it: they’re not only outsmarting us, they’re doing it with no brain at all.

As we look toward smarter and more sustainable technologies, fungi might just be the key to a new era of bio-inspired computing and environmental monitoring. Researchers are already tapping into their natural brilliance to create more efficient systems for everything from biosensors to sustainable materials. The next time you see a mushroom, remember: it’s not just a fungus, it’s part of a vast, intelligent network playing the ultimate game of survival, one optimised move at a time. In a world where efficiency and adaptability are paramount, fungi might just be the unsung heroes we need to help us solve some of the biggest challenges ahead.

References
1. Asenova E, Lin HY, Fu E, Nicolau DV, Nicolau DV. Optimal fungal space searching algorithms. IEEE Trans Nanobioscience. 2016 Oct;15(7):613-618. doi:10.1109/TNB.2016.2567098
2. Hanson KL, Nicolau DV Jr, Filipponi L, Wang L, Lee AP, Nicolau DV. Fungi use efficient algorithms for the exploration of microfluidic networks. Small. 2006 Oct;2(10):1212-20. doi:10.1002/smll.200600105
3. Asenova E, Fu E, Nicolau DV Jr, Lin HY, Nicolau DV. Space searching algorithms used by fungi. In: BICT'15: Proceedings of the 9th EAI International Conference on Bio-inspired Information and Communications Technologies (formerly BIONETICS); 2016. European Alliance for Innovation.
4. Adamatzky A. Towards fungal computers. Interface Focus. 2018 Dec 6;8(6):20180029.
5. Khanam Z, Gupta S, Verma A. Endophytic fungi-based biosensors for environmental contaminants - a perspective. South African Journal of Botany. 2020 Nov 1;134:401-6.
6. Mendes F, Miranda E, Amaral L, Carvalho C, Castro BB, Sousa MJ, Chaves SR. Novel yeast-based biosensor for environmental monitoring of tebuconazole. Applied Microbiology and Biotechnology. 2024 Dec;108(1):10.
7. Nikolaidou A, Phillips N, Tsompanas MA, Adamatzky A. Reactive fungal insoles. In: Fungal Machines: Sensing and Computing with Fungi. Cham: Springer Nature Switzerland; 2023. p. 131-147.
8. Singh S, Kumar V, Dhanjal DS, Thotapalli S, Singh J. Importance and recent aspects of fungal-based biosensors. In: New and Future Developments in Microbial Biotechnology and Bioengineering. Elsevier; 2020. p. 301-309.
- Glowing Limelight, Fashioned Stars | OmniSci Magazine
Glowing Limelight, Fashioned Stars by Aisyah Mohammad Sulhanuddin 3 June 2025 Edited by Kylie Wang Illustrated by Jessica Walton

Good evening Rose Bowl, Pasadena! The crowd erupts into a roar, the stadium air overcome with a thunder of adulation. Between throngs of teenagers tearing through streets in pursuit of the Beatles, concert-goers fainting at the sight of Michael Jackson, and Top Tens of the day made to navigate flirty fan calls on daytime TV in front of live audiences (1), pop history as we know it has always revolved around the deep, fanatic reverence of the star. Stars in all corners of the entertainment cosmos, be it music, film or TV, have long had their lives glamorised: tales told of luxurious jet-setting, post-show mischief and infamous public appearances peppered with paparazzi; fame turned into fables, circulated eagerly by the wider populace. Having avidly followed a plethora of musicians, actors and comedians at different points of my own life, I have always found the gurgling vortex of stardom culture ever-intriguing. Why do our relationships with stars mean so much to our society, and have they shifted over time?

Public perceptions & parasocial relationships

Our journey begins with the making of a star. A star is born from an assemblage of artistic choices: artwork, stage personas, press releases, bold onstage costumes and more, which constellate into a fashioned image. Or, a ‘manufactured personal reality’ (2). This reality is what audiences draw upon when forming attachments to stars, a process that moulds complex, contradictory human beings into idealised forms that may resonate with, validate or provide meaning to their followers. The mid-century women empowered by the feminine sexuality and intelligence of Marilyn Monroe (2), or the working-class Eastern European following of Depeche Mode, who saw the band as an emblem of social rebellion under the USSR in the late 80s (3), are such examples.

Such attachment gives rise to the infamous ‘parasocial relationship’ (PSR). Though the term is often used derisively today to call out toxic, boundary-crossing online fan behaviour, parasocial relationships at their core simply encompass socio-emotional connections formed with media figures (4). In a PSR, audiences extend emotional energy, time or interest towards figures; though unreciprocated, this creates a perceived intimacy similar to that of a two-way relationship. For the audience, PSRs can evoke feelings of safety, trust and various forms of devotion, self-strengthened through personal habits – think dressing like a favourite ‘bias’, or diligently watching a favourite director’s closet picks.

PSRs have historically been one-sided. Audience reactions to sensation and scandal have had the power to make or break an artist’s image, but restricted channels of dialogue meant that direct two-way feedback was often “fragmented” (2). The influencing power of the star’s image lay within reach of the star themselves, and more often than not was shaped by the wider commercial agendas of their agency or label. That is, until recently…

The rise of the Internet

Whilst the glitz and glamour of stardom remain strongly relevant, the advent of the internet has been the most powerful force in reshaping the relationship between fan and star. Termed the “one and a half sided” PSR (4), today’s shift in power dynamics is towards increased fan-star symbiosis.
As the theory notes, technology has allowed for greater perceived proximity and reciprocity, blurring the line between social and parasocial. Given the extensive nature of the current digital world, our internet presence has increasingly come to be considered a material extension of our real-life selves (4), whether through Zoom calls, real-time story updates or live vlogs. Direct messages and comments that allow instant replies have muddied the realms of physical and virtual reality, leading audiences to feel ‘physically’ closer to the figures in question. This decrease in constructed social distance has fostered notions of reciprocity, with audiences viewing stars as people they can reach out to and touch, converse with, and most importantly, influence in return – regardless of any actual ability to do so (4). As we witness stars defend their personal choices against an onslaught of ‘netizen’ backlash or wryly reply to a barrage of invasive thirst tweets (5), we see the increased power that global audiences have over said stars’ images.

Eroded power barriers between star and fan have heightened both positive and negative emotional engagement. Well-documented are various behaviours that disrespect boundaries between personal and professional lives, such as harassment, stalking, and other breaches of privacy. Yet the rise of the ordinary, accessible star has also allowed greater exposure to previously hidden or stigmatised facets of figures’ lives, fostering safe spaces for perceived authenticity and vulnerability that can counter blind idealisation (6).

Evolving industries & societies

Under the diluted power networks of stardom today, we can describe celebrity image production as increasingly decentralised (6). Technology has made entry into the entertainment industry more accessible by providing numerous channels for artistic output, whether releasing music independently on streaming services like Spotify, Bandcamp or Soundcloud, or creating short-form video skits on platforms like TikTok or Instagram. With top-down connections to age-old media institutions no longer required, the pool of faces that audiences can form relationships with has drastically expanded (7).

Social norms – at the time of writing – have also welcomed the notion of diversified talents. As prevailing social, cultural and political structures shape the value judgments made of stars (2), past decades have seen increased audience meaning-making in the dimensions of gender, ethnicity, class and sexual orientation (8), in line with a gradual shift towards more progressive and informed landscapes. Here, celebrity advocacy for causes and movements beyond the stage is nothing new, but fan bases can now dissect stars’ forays into activism more publicly than ever before. A world unapologetically critical of “out of touch” (9) wealthy stars crooning out Lennon’s Imagine at the beginning of the pandemic would have been unlikely to welcome the white-saviorist charity event that was Live Aid 1985 with as open arms as the dominant media narrative did then (10).

A hyper-consumerist present

If the exclusive stardom of yore can be likened to the dominance of a supermarket monopoly, then stardom today looks more like a diverse hub of online stores for buyers to ‘Click and Collect’ from. Whilst this setup offers diversified perspectives to a consuming audience, it embodies wider societal trends towards hyper-commodification. Market an image that sells well, and everyone will be famous for 15 minutes, as Andy Warhol supposedly declared (11).
Reinforcing the ephemerality of mass consumerism are internet memes and trends (12) that morph and dilute rebellious celebrity motifs to serve overarching capitalistic agendas – think Brat Summer campaigns in the style of Charli xcx’s 2024 album, co-opted by the most unethical multinational corporation you’ve ever come across. As with the discourse exposing ‘nepo’ babies in the entertainment industry (13), we are reminded that despite semblances of democratisation, the limelight remains far from a level stage.

Stardom, beyond

So what then? What lies in store for the future star? On one hand, the perception of proximity and the decline of ‘untouchable’ star personas can strengthen fan worship and deification, with frenzied consequences. On the other hand, increased artist-audience dialogue can pave the way for real change over performative gestures, as lessening power imbalances bring a form of democratisation that can platform diverse and marginalised voices in art. All in all, stars today may no longer be able to present themselves, and be perceived, solely as spectral, enigmatic illusions for audiences to latch onto – but the new freedoms and avenues that come with being more truly known may be just as bedazzling.

References
1. Robinson P. The great pop power shift: how online armies replaced fan clubs. The Guardian [Internet]. 2014 Aug 25. Available from: https://www.theguardian.com/music/2014/aug/25/great-pop-power-shift-how-online-armies-replaced-fan-clubs
2. Dyer R. Introduction. In: Heavenly Bodies [Internet]. Routledge; 2004. Available from: https://doi.org/10.4324/9780203605516
3. Wynarczyk N. Tracing Eastern Europe’s obsession with Depeche Mode [Internet]. Dazed. 2017. Available from: https://www.dazeddigital.com/music/article/36659/1/tracing-eastern-europe-s-obsession-with-depeche-mode
4. Hoffner CA, Bond BJ. Parasocial relationships, social media, & well-being. Current Opinion in Psychology [Internet]. 2022 Feb;45(1):1–6. Available from: https://doi.org/10.1016/j.copsyc.2022.101306
5. Yodovich N. Buzzfeed’s “celebrities reading thirst tweets”: examining the sexualization of men and women in the #MeToo era. Journal of Gender Studies [Internet]. 2024 Feb 28;33(8):1–11. Available from: https://doi.org/10.1080/09589236.2024.2324263
6. Driessens O. The celebritization of society and culture: understanding the structural dynamics of celebrity culture. International Journal of Cultural Studies [Internet]. 2013;16(6):641–57. Available from: https://doi.org/10.1177/1367877912459140
7. Carboni M. The digitization of music and the accessibility of the artist. Journal of Professional Communication [Internet]. 2014 Jun 4;3(2). Available from: https://doi.org/10.15173/jpc.v3i2.163
8. Stewart S, Giles D. Celebrity status and the attribution of value. European Journal of Cultural Studies [Internet]. 2019 Jul 21;23(1). Available from: https://doi.org/10.1177/1367549419861618
9. Caramanica J. This “Imagine” cover is no heaven. The New York Times [Internet]. 2020 Mar 20. Available from: https://www.nytimes.com/2020/03/20/arts/music/coronavirus-gal-gadot-imagine.html
10. Grant J. Live Aid/8: perpetuating the superiority myth. Critical Arts [Internet]. 2015 May 4;29(3):310–26. Available from: https://doi.org/10.1080/02560046.2015.1059547
11. Nuwer R. Andy Warhol probably never said his celebrated “fifteen minutes of fame” line [Internet]. Smithsonian Magazine; 2014.
Available from: https://www.smithsonianmag.com/smart-news/andy-warhol-probably-never-said-his-celebrated-fame-line-180950456/
12. Cirisano T. “Brat” summer and the dilemmas of going mainstream [Internet]. MIDiA Research. 2024. Available from: https://www.midiaresearch.com/blog/brat-summer-and-the-dilemmas-of-going-mainstream
13. Jones N. How a Nepo Baby Is Born [Internet]. Vulture. 2022. Available from: https://www.vulture.com/article/what-is-a-nepotism-baby.html