- The Rise of The Planet of AI | OmniSci Magazine
The Rise of The Planet of AI By Ashley Mamuko When discussing AI, our minds instinctively turn to fears of sentience and robotic uprising. But is our focus misplaced on an “inevitable” humanoid future when AI has already become ubiquitous and undetectable in our lives? Edited by Hamish Payne & Katherine Tweedie Issue 1: September 24, 2021 Illustration by Aisyah Mohammad Sulhanuddin On August 19th 2021, Tesla announced a bold project at its AI Day: the company plans to introduce humanoid robots for consumer use. These machines are expected to perform basic, mundane household tasks and slot easily into our everyday lives. With this announcement, the future of AI seems to be closing in. No longer do we stand idle awaiting an inevitable humanoid-shaped future; the prototypes are expected to launch by 2022. It seems inevitable that our future will include AI. We have already familiarised ourselves with this emerging technology through the media we enjoy. WALL-E, Blade Runner, The Terminator and Ex Machina are only a few entries on an endless list of AI-related movies spanning decades, charting both our apprehension and our acceptance. Most of these movies portray machines as sentient yet intrinsically evil, bent on human destruction. But to understand the growing field of AI, it is worth briefly introducing its history before turning to the concerns played up in Hollywood blockbusters. The earliest interpretations of artificial intelligence stretch back centuries. Its first acknowledgement may be attributed to the Catalan poet and theologian Ramon Llull, whose 1308 work Ars generalis ultima (The Ultimate General Art) advanced a paper-based mechanical process for creating new knowledge from combinations of concepts. Llull aimed to create a method of deducing logical religious and philosophical truths numerically.
In 1642, French mathematician Blaise Pascal invented the first mechanical calculating machine, the first iteration of the modern calculator (1). The Pascaline, as it is now known, could only add or subtract values, using a dial-and-spoke system (2). Though these two early ideas do not match our modern perception of AI, they laid the foundation for pushing logical processes beyond mere mechanics: both instances foreshadow the use of machines to perform human cognitive functions. Not until the 1940s and early 1950s did we finally obtain the means to build more complex data-processing systems. With the introduction of computers, algorithms offered a more streamlined way of storing, computing, and producing information. In 1943, Warren McCulloch and Walter Pitts founded the idea of artificial neural networks in their paper “A Logical Calculus of the Ideas Immanent in Nervous Activity” (3). This presented the notion of computers behaving similarly to a human mind and seeded what would become the subfield of “deep learning”. In 1950, Alan Turing proposed a test of whether a human can differentiate human behaviour from machine behaviour: the Imitation Game, now better known as the Turing Test, asked participants to identify whether the dialogue they were engaging in was with another person or a machine (4). Despite these breakthroughs, the term “artificial intelligence” wasn’t coined until 1955, by John McCarthy. McCarthy, along with many other budding experts, would go on to hold the famous 1956 Dartmouth College workshop (5). This meeting of a handful of scientists would later be pinpointed as the birth of the AI field. As the field continued to grow, public concern rose alongside a boom in science-fiction literature and film.
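The threshold unit McCulloch and Pitts described can be sketched in a few lines of Python (an illustrative reconstruction, not code or notation from the 1943 paper): a neuron outputs 1 when the weighted sum of its binary inputs reaches a threshold, which is already enough to realise elementary logic.

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fires (returns 1) when the weighted sum
    of binary inputs meets the threshold, otherwise returns 0."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights and thresholds, one unit realises a logic gate:
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)
NOT = lambda a: mp_neuron([a], [-1], 0)
```

Chaining such units yields arbitrary Boolean circuits, which is the sense in which the paper linked nervous activity to a logical calculus.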
The notorious 1968 movie 2001: A Space Odyssey so shaped public perception of the field that, through the 1970s, an “AI winter” set in: very little notable progress was made, as fear dried up funding (6). After some time had passed and algorithms advanced further, came the notable Deep Blue match against Garry Kasparov. In May 1997 the Deep Blue machine beat the world chess champion, an event read by some as a silent ushering-in of a “decline in human society” at the hands of the machine. Fast forward to now, and AI has advanced in leaps and bounds to far more sophisticated algorithms and machine-learning techniques. To further understand the uses of AI, I interviewed Dr Liz Sonenberg, a professor in the School of Computing and Information Systems at The University of Melbourne and Pro Vice-Chancellor (Research Infrastructure and Systems) in Chancellery Research and Enterprise. She is an expert in the field with a wealth of research behind her. "Machine learning is simply a sophisticated algorithm to detect patterns in data sets that has a basis in statistics." This kind of algorithm now sits behind a variety of our daily tech encounters: it drives Google Maps navigation as well as voice control, and can easily be found everywhere. “Just because these examples do not exhibit super intelligence, does not mean they are not useful,” Dr Sonenberg explains. The real problem with AI, she suggests, lies in its fairness. These “pattern generating algorithms” at times “learn from training sets not representative of the whole population, which can end up with biased answers.” With a flawed training set, a flawed system is put in place, one that can harm certain demographics and sway consumer habits.
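Dr Sonenberg’s point about unrepresentative training sets can be made concrete with a toy sketch (the groups, scores, and “learning” rule below are entirely invented for illustration): a pass threshold fitted to a sample dominated by one group serves that group well and fails the other.

```python
# Toy illustration of a biased training set (all values invented).
# A pass/fail threshold is "learned" as the mean score of the training
# sample. Because the sample over-represents group A, the threshold
# suits group A but misclassifies all of group B.
group_a = [70, 75, 80, 85, 90]   # well represented in the training data
group_b = [50, 55, 60, 65, 70]   # barely represented

training_sample = group_a + group_b[:1]          # skewed sample
threshold = sum(training_sample) / len(training_sample)

passes = lambda scores: sum(s >= threshold for s in scores) / len(scores)
print(f"threshold={threshold:.1f}, "
      f"group A pass rate={passes(group_a):.0%}, "
      f"group B pass rate={passes(group_b):.0%}")
```

Here group B contributes almost nothing to the training sample, so the learned threshold reflects group A alone; the same mechanism, at scale, is how flawed training data produces biased answers.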
With AI-aided advice, the explanations behind outcomes and decisions are not supported either: algorithms can mechanically produce an output, but not explain it. As more high-stakes decisions are entrusted to the reliability of AI, the issue of flawed algorithms becomes more pronounced. Throughout my interview with Dr Sonenberg, not once were fears of super-intelligence, robot uprisings and the like brought up... Armed with this picture of AI’s current concerns, I conducted another interview, with Dr Tim Miller, a Professor of Computer Science in the School of Computing and Information Systems at The University of Melbourne, and Dr Jeannie Paterson, a Professor teaching subjects in law and emerging technologies in the School of Law at The University of Melbourne. Both are also Co-Directors of the Centre for Artificial Intelligence and Digital Ethics (CAIDE). As we began the interview, Dr Miller again explained that AI “is not magic” but “math and statistics”. Dr Paterson noted that anti-discrimination laws are already in place, but as technology evolves and embeds itself further into the public domain, it must be scrutinised. The deployment of AI can easily harm people because the systems are not public, making sources of harm difficult to identify and causally attribute. With the prospect of biased algorithms, a genuine tension emerges. Dr Miller elaborated on the use of AI in medical imaging in private hospitals: as private hospitals tend to attract a certain echelon of society, the training set is not representative of the greater population. “A dilemma occurs with racist algorithms… if it is not used [outcomes] could be worse.” When the idea of a potential super-intelligent robot emerging in the future was brought into the conversation, the two didn’t seem very impressed. “Don’t attribute superhuman qualities [to it],” says Dr Paterson.
Dr Miller states that the trajectory of AI’s future is difficult to map. Past predictions of AI’s progress have come true, but often much later than expected… easily decades later. The idea of super-intelligence also raises the question of how to define intelligence. “Intelligence is multidimensional, it has its limits,” says Dr Miller. In this mystical future world of AI, the distinction lies not just in “what will machines be able to do but what we will not have them do,” states Dr Miller. “This regards anything that requires social interaction, creativity and leadership”; the future, in other words, is aided by AI, not dictated by it. In the nearer future, however, some very real concerns arise: job security, influence on consumer habits, transparency, legal responses, and accountability, to name a few. With more and more jobs being replaced by machines, every industry is at stake. “Anything repetitive can be automated,” says Dr Miller. But this is not inherently negative, as more jobs will be created to support the use of AI, and not all functions of a job can be replaced by it. Dr Paterson explains with the example of radiology: AI can diagnose and interpret scans, but a radiologist does more than diagnose and interpret on a daily basis. “The AI is used to aid in the already existing profession, not simply overtake it.” Greater transparency is also needed in how AI uses our data. “It shouldn’t be used to collect data unlimitedly,” says Dr Paterson; “is it doing what’s being promised, is it discriminating against people, is it embedding inequality?” With this in mind, Dr Paterson suggests that more legal authorities should be educated on how to approach AI. “There needs [to be] better explanation… [We] need to educate judges and lawyers.” With the notorious Facebook-Cambridge Analytica scandal of 2018, the big question of accountability was raised.
The scandal involved the unauthorised use of data from 87 million Facebook users by Cambridge Analytica, which served to support the Trump campaign. It brought to light how our data can be exploited without consent and used to influence our behaviour, in this case apparently swaying an American presidential election. Simply put, our information can easily be harvested, handed to data analytics firms, and turned back on our choices. This invites the defence that apps “merely provide a [service], but people use [these services] in that way,” as Dr Miller puts it; the blame is falsely shifted onto users for the spread of misinformation. The onus, however, should lie with social networking sites to be more transparent with their users about data usage and history, and to provide adequate protection of that data. To be frank, a future of robotic humanoid AI integrating seamlessly into human life will not arrive within our lifetimes, or potentially even our grandchildren’s. The forecast is at best unpredictable and at worst unattainable, given the complexity of what constitutes full “sentience”. But this does not mean AI lies dormant in our lives. Its fundamental technology, grounded in computing, statistics, and information systems, underpins most of the transactions we conduct online, whether monetary, social or otherwise. AI and its promise should not be shunted aside because of misleading media around its popularised definition and “robot uprisings”, but taught more broadly to all audiences. So perhaps Elon Musk’s fantastical ideas of robotic integration will not arrive by 2022, but the presence of AI in modern technologies should not go unnoticed. References: 1. "A Very Short History of Artificial Intelligence (AI)." 2016. Forbes. https://www.forbes.com/sites/gilpress/2016/12/30/a-very-short-history-of-artificial-intelligence-ai/?sh=38106456fba2.
2. “Blaise Pascal Invents a Calculator: The Pascaline.” n.d. Jeremy Norman's Historyofinformation.com. https://www.historyofinformation.com/detail.php?id=382. 3, 4, 6. “History of Artificial Intelligence.” n.d. Council of Europe. https://www.coe.int/en/web/artificial-intelligence/history-of-ai. 5. Smith, Chris, Brian McGuire, Ting Huang, and Gary Yang. 2006. “The History of Artificial Intelligence,” a file for a class called History of Computing offered at the University of Washington. https://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf.
- Hiccups | OmniSci Magazine
Hiccups Evolution might be a theory, but if it’s evidence you’re after, there’s no need to look further than your own body. The human form is full of fascinating parts and functions that hold hidden histories - from the column that brought you a deep-dive into ear wiggling in Issue 1, here’s an exploration of why we hiccup! by Rachel Ko Issue 2: 10 December 2021 Edited by Katherine Tweedie and Ashleigh Hallinan Illustrated by Gemma Van der Hurk Hiccups bring a special brand of chaos to a day. It’s one that lingers, rendering us helpless and in suspense; a subtle, internal chaos of quiet frustration that forces us to drop what we’re doing to monitor each breath – in and out, in and out – until the moment they abruptly decide to stop. It’s an experience we’ve all had – one that can hit anyone at any time – and for most of us, hiccups are a concentrated episode of inconvenience; best ignored, and overcome. Yet, despite our haste to get rid of them when they interrupt our day, hiccups seem to have mystified humans for generations. Historically, the phenomenon has been the source of many superstitions, both good and bad. A range of cultures associate them with the concept of remembrance: in Russia, hiccups mean someone is missing you (1), while an Indian myth suggests that someone is remembering you negatively for the evils you have committed (2). Likewise, in Ancient Greece, hiccups were a sign that you were being complained about (3), while in Hungary, they mean you are currently the subject of gossip. On a darker note, a Japanese superstition prophesises death to one who hiccups 100 times. (4) Clearly, the need to justify everything, even things as trivial as hiccups, has always been an inherent human characteristic, transcending culture and time. As such, science has more recently made its attempt at objectively identifying a reason behind the strange phenomenon of hiccups.
After all, if you take a step back and think about it, hiccups are indeed quite strange. Anatomically, hiccups (known scientifically as singultus) are involuntary spasms of the diaphragm (5): the dome-like sheet of muscle separating the chest and abdominal cavities. (6) The inspiratory muscles, including the intercostal and neck muscles, also spasm, while the expiratory muscles are inhibited. (7) These sudden contractions cause a rapid intake of air (“hic”), followed by the immediate closure of the glottis or vocal cords (“up”). (8) As many of us have probably experienced, a range of stimuli can cause these involuntary contractions. The physical stimuli include anything that stretches and bloats the stomach, (9) such as overeating, rapid food consumption and gulping, especially of carbonated drinks. (10) Emotionally, intense feelings and our responses to them, such as laughing, sobbing, anxiety and excitement, can also be triggers. (11) This list is not at all exhaustive; in fact, the range of stimuli is so large that hiccups might be considered the common thread between a drunk man, a Parkinson’s disease patient and anyone who watches The Notebook. The one thing that alcohol, (12) some neurological drugs (13) and intense sobbing (14) do have in common is that they exogenously stimulate the hiccup reflex arc. (15) This arc involves the vagal and phrenic nerves that stretch from the brainstem to the abdomen which cause the diaphragm to contract involuntarily. (16) According to Professor Georg Petroianu from the Herbert Wertheim College of Medicine, (17) many familiar home remedies for hiccupping – being scared, swallowing ice, drinking water upside down – interrupt this reflex arc, actually giving these solutions a somewhat scientific rationale. While modern research has successfully mapped out the process of hiccups, their purpose is still unclear. As of now, the hiccup reflex arc and the resulting diaphragmatic spasms seem to be effectively useless. 
Of the existing theories for the function of hiccups, the most prominent seems to be that they are a remnant of our evolutionary development, (18) essentially ‘vestigial’; in this case, a feature that once served our amphibian ancestors millions of years ago, but now retains little of its original function. (19) In particular, hiccups are believed to be a relic of the ancient transition of organisms from water to land. (20) When early fish lived in stagnant waters with little oxygen, they developed lungs to take advantage of the air overhead, in addition to using gills while underwater. (21) In this system, inhalation would allow water to move over the gills, during which a rapid closure of the glottis – which we see now in hiccupping – would prevent water from entering the lungs. It is theorised that when descendants of these fish moved onto land, gills were lost, but the neural circuit for this glottis-closing mechanism was retained. (22) This neural circuit is indeed observable in human beings today, in the form of the hiccup central pattern generator (CPG). (23) CPGs exist for other oscillating actions like breathing and walking, (24) but a particular cross-species CPG stands out as a link to human hiccupping: the neural CPG that is also used by tadpoles for gill ventilation. Tadpoles “breathe” in a recurring, rhythmic pattern that shares a fundamental characteristic with hiccups: both involve inspiration with closing of the glottis. (25) This phenomenon strengthens the idea that the hiccup CPG may be left over from a previous stage in evolution and has been retained in both humans and frogs. However, the CPG in frogs is still used for ventilation, while in humans, the evolution of lungs to replace gills has rendered it useless. (26) Based on this information, it seems hiccupping lost its function with time and the development of the human lungs, remaining as nothing more than an evolutionary remnant.
However, we cannot discredit hiccupping as having become entirely useless as soon as gills were lost. Interestingly, hiccupping has only been observed in mammals – not in birds, lizards or other air-breathing animals. (27) This suggests that there must have been some evolutionary advantage to hiccupping at some point, at least in mammals. A popular theory for this function stems from the uniquely mammalian trait of nursing. (28) Considering the fact that human babies hiccup in the womb even before birth, this theory considers hiccupping to be almost a glorified burp, intended to remove air from the stomach. This becomes particularly advantageous when closing the glottis prevents milk from entering the lungs, aiding the act of nursing. (29) Today, we reduce hiccups to the disorder and disarray they bring to our day. But, next time you are hit with a bout of hiccups, take a second to find some calm amidst the chaos and appreciate yet another fascinating evolutionary fossil, before you hurry to dismiss them. After that, feel free to eat those lemons or gargle that salty water to your diaphragm’s content. References Sonya Vatomsky, "7 Cures For Hiccups From World Folklore," Mentalfloss.Com, 2017, https://www.mentalfloss.com/article/500937/7-cures-hiccups-world-folklore. Derek Lue, "Indian Superstition: Hiccups | Dartmouth Folklore Archive," Journeys.Dartmouth.Edu, 2018, https://journeys.dartmouth.edu/folklorearchive/2018/11/14/indian-superstition-hiccups/. Vatomsky, "7 Cures For Hiccups From World Folklore". James Mundy, "10 Most Interesting Superstitions In Japanese Culture | Insidejapan Tours," Insidejapan Blog, 2013, https://www.insidejapantours.com/blog/2013/07/08/10-most-interesting-superstitions-in-japanese-culture/. Paul Rousseau, "Hiccups," Southern Medical Journal, no. 88, 2 (1995): 175-181, doi:10.1097/00007611-199502000-00002. 
Bruno Bordoni and Emiliano Zanier, "Anatomic Connections Of The Diaphragm Influence Of Respiration On The Body System," Journal Of Multidisciplinary Healthcare, no. 6 (2013): 281, doi:10.2147/jmdh.s45443. Christian Straus et al., "A Phylogenetic Hypothesis For The Origin Of Hiccough," Bioessays no. 25, 2 (2003): 182-188, doi:10.1002/bies.10224. Straus et al., "A Phylogenetic Hypothesis For The Origin Of Hiccough," 182-188. John Cameron, “Why Do We Hiccup?,” filmed for TedEd, 2016, TED Video, https://ed.ted.com/lessons/why-do-we-hiccup-john-cameron#watch. Monika Steger, Markus Schneemann, and Mark Fox, "Systemic Review: The Pathogenesis And Pharmacological Treatment Of Hiccups," Alimentary Pharmacology & Therapeutics 42, no. 9 (2015): 1037-1050, doi:10.1111/apt.13374. Lien-Fu Lin and Pi-Teh Huang, "An Uncommon Cause Of Hiccups: Sarcoidosis Presenting Solely As Hiccups," Journal Of The Chinese Medical Association 73, no. 12 (2010): 647-650, doi:10.1016/s1726-4901(10)70141-6. Steger, Schneemann and Fox, "Systemic Review: The Pathogenesis And Pharmacological Treatment Of Hiccups," 1037-1050. Unax Lertxundi et al., "Hiccups In Parkinson’s Disease: An Analysis Of Cases Reported In The European Pharmacovigilance Database And A Review Of The Literature," European Journal Of Clinical Pharmacology 73, no. 9 (2017): 1159-1164, doi:10.1007/s00228-017-2275-6. Lin and Huang, "An Uncommon Cause Of Hiccups: Sarcoidosis Presenting Solely As Hiccups," 647-650. Peter J. Kahrilas and Guoxiang Shi, "Why Do We Hiccup?" Gut 41, no. 5 (1997): 712-713, doi:10.1136/gut.41.5.712. Steger, Schneemann and Fox, "Systemic Review: The Pathogenesis And Pharmacological Treatment Of Hiccups," 1037-1050. Georg A. Petroianu, "Treatment Of Hiccup By Vagal Maneuvers," Journal Of The History Of The Neurosciences 24, no. 2 (2014): 123-136, doi:10.1080/0964704x.2014.897133. Straus et al., "A Phylogenetic Hypothesis For The Origin Of Hiccough," 182-188.
Cameron, “Why Do We Hiccup?” Michael Mosley, "Anatomical Clues To Human Evolution From Fish," BBC News, published 2011, https://www.bbc.com/news/health-13278255. Michael Hedrick and Stephen Katz, "Control Of Breathing In Primitive Fishes," Phylogeny, Anatomy And Physiology Of Ancient Fishes (2015): 179-200, doi:10.1201/b18798-9. Straus et al., "A Phylogenetic Hypothesis For The Origin Of Hiccough," 182-188. Straus et al., "A Phylogenetic Hypothesis For The Origin Of Hiccough," 182-188. Pierre A. Guertin, "Central Pattern Generator For Locomotion: Anatomical, Physiological, And Pathophysiological Considerations," Frontiers In Neurology 3 (2013), doi:10.3389/fneur.2012.00183. Hedrick and Katz, "Control Of Breathing In Primitive Fishes," 179-200. Straus et al., "A Phylogenetic Hypothesis For The Origin Of Hiccough," 182-188. Daniel Howes, "Hiccups: A New Explanation For The Mysterious Reflex," Bioessays 34, no. 6 (2012): 451-453, doi:10.1002/bies.201100194. Howes, "Hiccups: A New Explanation For The Mysterious Reflex," 451-453. Howes, "Hiccups: A New Explanation For The Mysterious Reflex," 451-453.
- Wicked Invaders of the Wild | OmniSci Magazine
Wicked Invaders of the Wild Serenie Tsai Issue 5: 24 October 2023 Edited by Krisha Darji Illustrated by Jennifer Nguyen There has long been a continuous flow of species in and out of regions, establishing the foundations of ecosystems. When species are introduced into new environments, spread, and reproduce excessively enough to interfere with and threaten native species, they become invasive. Factors contributing to their menacing status include feeding heavily on native species, a lack of predators, and outcompeting native species (Sakai et al., 2001). Invasive species shouldn’t be confused with feral species, which are domestic animals that have reverted to a wild state, or with pests, which are organisms harmful to human activity (Contreras-Abarca et al., 2022; Hill, 1987). Furthermore, not all introduced species are invasive; crops such as wheat, tomato and rice have been successfully integrated into agriculture. Many species were introduced accidentally and turned invasive; others were intentionally introduced to manage other species, and a lack of foresight resulted in detrimental ecological impacts. Each year, invasive species cost the global economy over a trillion dollars in damages (Roth, 2019). Claimed ecological benefits of invasive species Contrary to the name, invasive species can potentially benefit the invaded ecosystem. Herbivores can reap the benefits of the introduced biodiversity, and native plants can increase their tolerance to herbivory (Brändle et al., 2008; Mullerscharer, 2004). Deer and goats aid in suppressing introduced grasses and inhibiting wildfires (Fornoni, 2010). Likewise, species such as foxes and cats have the capacity to regulate the numbers of rats and rabbits.
Furthermore, megafaunal extinction has opened opportunities to fill empty niches; for example, camels could fill the ecological niche of a now-extinct giant marsupial (Chew & Chew, 1965; Weber, 2013). Studies likewise indicate the possibility of species evolving to fill vacant niches (Meachen et al., 2014). Below, I’ll explore the rise and downfall of invasive species in Australia. Cane toad Cane toads are notorious for their unforeseen invasion. Originally introduced as a biological control for cane beetles in 1935, their status as newcomers was advantageous to their proliferation and dominance over native species (Freeland & Martin, 1985). Several native predators were decimated, as native fauna in Australia lacked resistance to the poison the cane toad uses as a defence mechanism (Smith & Phillips, 2006). However, research suggests native predators are evolving adaptations to this poison (Phillips & Shine, 2006). There isn’t a universal method of regulating cane toads, so efforts to eradicate them completely are futile; however, populations are kept low by continuously monitoring areas and targeting cane toad eggs and adults. Common Myna The origins of the Common Myna introduced into New South Wales and Victoria are uncertain; however, it was introduced into Northern Queensland to prey on grasshoppers and cane beetles (Cayley & Lindsey, 2011), and into Mauritius to control locust plagues (Bauer, 2023). The Common Myna poses an alarming threat to ecosystems and humans alike; its severity is underscored by its place on a list of the world’s 100 worst invasive species (Lowe et al., 2000). It has spurred human health concerns, including the spread of mites, and acts as a vector for diseases harmful to humans and farm stock (Tidemann, 1998).
The Myna also has a vicious habit of competing with cavity-nesting native birds, forcing them and their eggs from their nests; however, the extent of this is unclear, and the influence of habitat destruction needs to be considered (Grarock et al., 2013). The bird’s impact lacks empirical evidence, so appropriate management is undecided (Grarock et al., 2012). However, modifying habitats could be advantageous, as Mynas impact urban areas more, whereas intervening in their food resources would be useless given their highly variable diet (Brochier et al., 2010). Zebra mussels Zebra mussels accidentally invaded Australian waters when introduced via the ballast water of cargo ships. From an ecological perspective, Zebra Mussels overgrow the shells of native molluscs and create an imbalance within the ecosystem (Dzierżyńska-Białończyk et al., 2018). From a societal perspective, they colonise docks, ship hulls and water pipes, and damage power plants (Lovell et al., 2006). Methods of controlling the spread of Zebra Mussels include manual removal, chlorine, thermal treatment and more. Control methods It is crucial to deploy preventative methods to mitigate the spread of invasive species before it becomes irreversible. A few known control methods are employed for certain types of animals, but with no guarantee of success. Some places put bounties on catching the animals; however, the results of this technique are mixed. In 1893, foxes were the target of financial incentives, but the scheme was deemed ineffective (Saunders et al., 2010). By contrast, government bounties introduced for Tasmanian tigers in 1888 caused a drastic population decline and their eventual extinction (National Museum of Australia, 2019). Similarly, as the prevalence of Cane Toads became unbearable, armies were deployed in response and fences in rural communities were funded.
Moreover, in 2007, inspired by a local pub’s scheme to hand out beers in exchange for cane toads, the government staged a “Toad Day Out” to establish a bounty for cane toads (Williams, 2011). Invasive species are detrimental to ecosystems; whether species are introduced intentionally or by accident, their management is still a work in progress. References Lowe, S., Browne, M., Boudjelas, S., & De Poorter, M. (2000). 100 of the World’s Worst Invasive Alien Species: A selection from the Global Invasive Species Database. The Invasive Species Specialist Group (ISSG). Bauer, I. L. (2023). The oral repellent–science fiction or common sense? Insects, vector-borne diseases, failing strategies, and a bold proposition. Tropical Diseases, Travel Medicine and Vaccines, 9(1), 7. Brändle, M., Kühn, I., Klotz, S., Belle, C., & Brandl, R. (2008). Species richness of herbivores on exotic host plants increases with time since introduction of the host. Diversity and Distributions, 14(6), 905–912. https://doi.org/10.1111/j.1472-4642.2008.00511.x Brochier, B., Vangeluwe, D., & Van den Berg, T. (2010). Alien invasive birds. Revue scientifique et technique, 29(2), 217. Chew, R. M., & Chew, A. E. (1965). The Primary Productivity of a Desert-Shrub (Larrea tridentata) Community. Ecological Monographs, 35(4), 355–375. https://doi.org/10.2307/1942146 Contreras-Abarca, R., Crespin, S. J., Moreira-Arce, D., & Simonetti, J. A. (2022). Redefining feral dogs in biodiversity conservation. Biological Conservation, 265, 109434. https://doi.org/10.1016/j.biocon.2021.109434 Fornoni, J. (2010). Ecological and evolutionary implications of plant tolerance to herbivory. Functional Ecology, 25(2), 399–407. https://doi.org/10.1111/j.1365-2435.2010.01805.x Freeland, W. J., & Martin, K. C. (1985).
The rate of range expansion by Bufo marinus in Northern Australia, 1980–84. Wildlife Research, 12(3), 555–559. Grarock, K., Lindenmayer, D. B., Wood, J. T., & Tidemann, C. R. (2013). Does human-induced habitat modification influence the impact of introduced species? A case study on cavity-nesting by the introduced common myna (Acridotheres tristis) and two Australian native parrots. Environmental Management, 52, 958–970. Smith, J. G., & Phillips, B. L. (2006). Toxic tucker: the potential impact of Cane Toads on Australian reptiles. Pacific Conservation Biology, 12(1), 40. https://doi.org/10.1071/pc060040 Hill, D. S. (1987). Agricultural Insect Pests of Temperate Regions and Their Control. In Google Books. CUP Archive. https://books.google.com.au/books?hl=en&lr=&id=3-w8AAAAIAAJ&oi=fnd&pg=PA27&dq=pests+definition&ots=90_-WiF_MZ&sig=pKxuVjDJ_bZ3iNMb5TpfXA16ENI#v=onepage&q=pests%20definition&f=false Lovell, S. J., Stone, S. F., & Fernandez, L. (2006). The Economic Impacts of Aquatic Invasive Species: A Review of the Literature. Agricultural and Resource Economics Review, 35(1), 195–208. https://doi.org/10.1017/s1068280500010157 Meachen, J. A., Janowicz, A. C., Avery, J. E., & Sadleir, R. W. (2014). Ecological Changes in Coyotes (Canis latrans) in Response to the Ice Age Megafaunal Extinctions. PLoS ONE, 9(12), e116041. https://doi.org/10.1371/journal.pone.0116041 Mullerscharer, H. (2004). Evolution in invasive plants: implications for biological control. Trends in Ecology & Evolution, 19(8), 417–422. https://doi.org/10.1016/j.tree.2004.05.010 ANU. (n.d.). Myna problems. Fennerschool-associated.anu.edu.au. http://fennerschool-associated.anu.edu.au//myna/problem.html National Museum of Australia. (2019).
Extinction of thylacine | National Museum of Australia. Nma.gov.au. https://www.nma.gov.au/defining-moments/resources/extinction-of-thylacine Cayley, N. W., & Lindsey, T. (2011). What bird is that?: a completely revised and updated edition of the classic Australian ornithological work. Walsh Bay, N.S.W.: Australia’s Heritage Publishing. Phillips, B. L., & Shine, R. (2006). An invasive species induces rapid adaptive change in a native predator: cane toads and black snakes in Australia. Proceedings of the Royal Society B: Biological Sciences, 273(1593), 1545–1550. https://doi.org/10.1098/rspb.2006.3479 Roth, A. (2019, July 3). Why you should never release exotic pets into the wild. Animals. https://www.nationalgeographic.com/animals/article/exotic-pets-become-invasive-species Sakai, A. K., Allendorf, F. W., Holt, J. S., Lodge, D. M., Molofsky, J., With, K. A., Baughman, S., Cabin, R. J., Cohen, J. E., Ellstrand, N. C., McCauley, D. E., O’Neil, P., Parker, I. M., Thompson, J. N., & Weller, S. G. (2001). The Population Biology of Invasive Species. Annual Review of Ecology and Systematics, 32(1), 305–332. https://doi.org/10.1146/annurev.ecolsys.32.081501.114037 Saunders, G. R., Gentle, M. N., & Dickman, C. R. (2010). The impacts and management of foxes (Vulpes vulpes) in Australia. Mammal Review, 40(3), 181–211. Weber, L. (2013). Plants that miss the megafauna. Wildlife Australia, 50(3), 22–25. https://search.informit.org/doi/10.3316/ielapa.555395530308043 Williams, G. (2011). 100 Alien Invaders. Bradt Travel Guides. https://books.google.com.au/books?hl=en&lr=&id=qtS9TksHmOUC&oi=fnd&pg=PP1&dq=invasive+species+australia+bounty+
- Knot Theory and Its Applications. Why Knot? | OmniSci Magazine
< Back to Issue 9 Knot Theory and Its Applications. Why Knot? by Ryan Rud 28 October 2025 Illustrated by Saraf Ishmam Edited by Elijah McEvoy Knot theory is a theoretical study in mathematics in which you picture an imaginary knot and manipulate it to your heart’s desire. Yes, the kind of knot you are probably thinking of now: a shoelace, a knot in a piece of string, some utility knot. Good job, but it’s missing one detail: the knot needs to be tied at its ends. Think of a string with both ends joined together so that it can’t come undone when you play with it. Now you can pull at and twist this knot as much as you like, as long as you don’t break it. Congratulations, you now understand the basics of knot theory. (1) So why should we care about a niche field of maths that you will probably never use in your everyday life? Well, the first answer is simply ‘for the love of the game’. For some people, problem-solving is an endless endeavour that satisfies an urge to understand and be intellectually stimulated. But that’s not for everyone. So then we remember all the times when elements of pure mathematics became essential once applied to seemingly unrelated topics: how number theory came to underpin information transmission, cryptography and computing (2); how quaternions made for more efficient representations of rotation in computer graphics (3); or how graph theory was used to support the conjecture that any two people are connected by six degrees of separation (4). Although we may not routinely ponder these discoveries, it is because of the work of pure mathematicians that we can prove facts we otherwise could not, and appreciate how mathematics silently helped build the digital devices in our homes. But before we get into the applications, it is good to be familiar with some general terminology. That knot which you pictured earlier, with its ends tied, is called a standard knot.
In 1867, Lord Kelvin proposed the revolutionary idea that the chemical elements, the things we now know to be made of protons and neutrons, were actually types of standard knots. (5) He wasn’t right, but he inspired his assistant Peter Guthrie Tait to begin the rigorous study of knots, and we have been trying to find applications ever since. Here are the first knots in the greater sequence of the periodic table of knots (see cover image for more!): Figure 1. An ordered table of the first 15 prime knots. (6) There are knots that cannot be broken down into simpler knots (prime knots) and knots made by joining prime knots end-to-end (composite knots) (Fig. 2b). There are also links, where two or more closed knots are intertwined without gluing the strings together (Fig. 2a). Understanding any further implications of this terminology is not necessary here, but it may help to have a visual understanding of it for the next part. Figure 2. a) Types of mathematical links: the unlink on the left, the Hopf link in the centre and the Whitehead link on the right. b) Demonstrating how two prime knots are combined into a composite knot. c) Demonstrating chirality in trefoil knots; notice the overlapping pattern. Lastly, like many things in mathematics, we need a way to systematically and efficiently describe how we manipulate knots. Luckily, Kurt Reidemeister provided us with a knot-manipulating moveset in the 1930s through rigorous proofs. These are the legal moves that can be performed on a knot without changing its underlying structure. If we instead cut the knot, twist or untwist the string and then reattach the ends, this is called a crossing switch, and it can change the knot. Again, this is not an extensive course, but it helps to know the terminology and visualise it. Feel free to do more research into the details of these topics using the references below! Figure 3. A depiction of the Reidemeister moves.
DNA and knot theory Deoxyribonucleic acid (DNA) is the most important and relevant knotting molecule. Each cell nucleus, on the order of millionths of a metre across, contains DNA that is regularly knotting, coiling and compressing to fit into this tight space. However, the best application of knot theory is to the closed, circular DNA of bacteria. During DNA replication, the unwinding of DNA at one end creates immense torsional strain on the other side of the loop, producing enough supercoiling to stall replication and lead to cell death. To counter this, bacteria use an enzyme known as type II topoisomerase, which makes double-stranded cuts in the DNA, rearranges the tangle and reconnects the strands: a crossing switch! Without this adaptation, all cellular life would have evolved differently. If you gave this DNA to a mathematician and asked which position in the DNA would be best for the enzyme to cut in order to untangle it, they could spend a lifetime performing Reidemeister moves and contemplating, never knowing where or how many cuts to make. In contrast to the world’s best mathematicians, topoisomerase is incredibly efficient in where it cuts. We have yet to understand what mechanism allows for such accurate cuts, but practical research into topoisomerase could help knot theorists tackle the immensely difficult question of the minimum number of crossing switches needed to simplify any knot. Furthermore, if the mechanisms of topoisomerases in bacteria and humans can be understood, humanity could gain a new form of control over DNA. It has been speculated that topoisomerases could be targeted to inhibit cancer growth, or exploited as a revolutionary way to treat bacterial disease. While we do not have this knowledge right now, this is one of the ways knot theory could prove integral to the applied sciences, given time and research funding.
(7-8) Knots in chemistry So what other molecules can form knots? Chemists have been creating molecules involving the basic knots and links since the 1960s (see Fig. 4), when topological isomerism was discovered and characterised. Topological isomers are chemicals that share many properties but differ in spatial arrangement; we can think of this like chirality for knots (see Fig. 2c). Chirality is the property of an object not being identical to its mirror image, like a right and left hand. These molecules were made through a technique called ‘templating’, in which a metal ion or some other template structure directs the reactants towards a desired product, based on how the template interacts with them. There is also another category of knot called a ravel (Fig. 4h), in which multiple strings are connected at vertices. Altogether, the study of topological isomerism and templating techniques has been advanced by the experimental desire to produce these beautiful molecules, which in turn contributes to the production of new molecules and drugs with real-world impacts. (9) Figure 4. a) The first molecular trefoil knot, produced in 1989. c) The first molecular pentafoil knot, produced in 2011. d) The first molecular Borromean rings, a type of link, produced in 2004. f) The first molecular Solomon link, produced in 2013. h) The first molecular ravel, produced in 2011. (9) The recent breakthrough in knot theory I admit, progress in knot theory is slow, and perhaps you have not found the scientific revelation you were hoping for here. But that does not mean that current research is ineffective. As recently as June of this year, there was a groundbreaking proof. Think back to the prime and composite knots (scroll up if you have to). Every knot has an unknotting number: the minimum number of crossing switches needed to simplify it to the unknot, similar to what the topoisomerase does.
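The idea of the unknotting number can be written compactly. As a sketch, using standard knot-theory notation that this article does not itself introduce: write u(K) for the unknotting number of a knot K, and K1 # K2 for the composite knot formed by joining K1 and K2 end-to-end. Then:

```latex
% The unknot needs no crossing switches; the trefoil needs exactly one.
u(\mathrm{unknot}) = 0, \qquad u(\mathrm{trefoil}) = 1.
% Untangling each prime factor separately always suffices, giving the bound
u(K_1 \# K_2) \;\le\; u(K_1) + u(K_2).
```

Whether this inequality can ever be strict, that is, whether a composite knot can be untangled more cheaply than its factors, is precisely the question behind the recent breakthrough.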
If we merge two prime knots into a composite knot, it seems intuitive that untangling the composite should take as many crossing switches as untangling each of its prime knots separately, added together. In other words, to untangle a composite knot, you cut and reglue it as many times as you would for the prime knots that make it up. The breakthrough was a proof that some composite knots can be untangled with fewer crossing switches than the sum for their prime knots. This may seem like a small step, but it disproves a widely believed conjecture, and theorists are now one step closer to answering the question of the minimum number of crossing switches needed to simplify any knot. (10) Conclusion I will end with a quote from Dr Arunima Ray, a mathematician who specialises in knot theory and low-dimensional topology at the University of Melbourne, and a dear professor of mine. Hopefully this is just more proof (pun intended) that the work we mathematicians do is tangible: “I had never imagined that mathematics could be used to describe something so abstract as knot theory, but to me the appeal was its tangibility. No matter who you are, there really is something in mathematics for you.” References Pencovitch M. What’s knot to love? [Internet]. Mathematics Today. 2021. Available from: https://ima.org.uk/17434/whats-knot-to-love/ Koblitz N. A course in number theory and cryptography. 2nd ed. Springer Science & Business Media; 1994. Jeremiah. Understanding quaternions. 3D Game Engine Programming [Internet]. June 25, 2012. Available from: https://www.3dgep.com/understanding-quaternions/ Zhang L, Tu W. Six degrees of separation in online society [Internet]. ResearchGate. 2009. Available from: https://www.researchgate.net/publication/255614427_Six_Degrees_of_Separation_in_Online_Society Wilson RM. Holograms tie optical vortices in knots. Physics Today. 2010. https://doi.org/10.1063/1.3366639 Li M, Wang T, Kau A, George W, Petrenko A. Knots. Brilliant. 2025 [Internet].
Available from: https://brilliant.org/wiki/knots/ Catherine. All tangled up: an introduction to knot theory [Internet]. Gleammath. April 28, 2021. Available from: https://www.gleammath.com/post/all-tangled-up-an-introduction-to-knot-theory Skjeltorp AT, Clausen S, Helgesen G, Pieranski P. Knots and applications to biology, chemistry and physics. In: Riste T, Sherrington D, editors. Physics of Biomaterials: Fluctuations, Selfassembly and Evolution. Dordrecht: Springer Netherlands; 1996. p. 187–217. https://doi.org/10.1007/978-94-009-1722-4_8 Horner KE, Miller MA, Steed JW, Sutcliffe PM. Knot theory in modern chemistry [Internet]. Chemical Society Reviews. 2016;45(23). Available from: https://durham-repository.worktribe.com/output/1394834 Brittenham M, Hermiller S. Unknotting number is not additive under connected sum [Internet]. arXiv. 2025. Available from: https://arxiv.org/html/2506.24088v1
- Why Do We Gossip? | OmniSci Magazine
< Back to Issue 5 Why Do We Gossip? Lily McCann 24 October 2023 Edited by Celina Kumala Illustrated by Rachel Ko Have you ever heard of the ‘scold’s bridle’? A metal restraint, fitted with a gag, strapped about the face as a medieval punishment for excessive chatter; gossip, it seems, was not received too fondly in the Middle Ages. While the bridle may have gone out of fashion long ago, the word gossip still carries negative connotations today. The Oxford Dictionary, for instance, defines gossip as “informal talk or stories about other people’s private lives, that may be unkind or not true” (Oxford Learner’s Dictionaries, 2023). Entries in the Urban Dictionary use stronger terms still, going so far as to describe gossip as the “garbage of stupid silly ignorant people” (Lorenzo, 2006). Is this too harsh? Cruz et al. (2021) propose a much more neutral definition in their analysis of frameworks for studying gossip, concluding that gossip is “a sender communicating to a receiver about a target who is absent or unaware of the content”. Whether the gossip conveys positive or negative content – otherwise known as its valence – is not part of the definition itself. Gossip, then, is not always “unkind” (Oxford Learner’s Dictionaries, 2023) or “garbage” (Lorenzo, 2006). In fact, with a bit of further reading, we can see that this “informal talk” has played an important part in our evolution and even serves positive purposes in society. First, gossip is an important facilitator of safety. It allows dangerous situations to be identified: spreading the knowledge that a certain individual is prone to violence, for instance, ensures the rest of a community takes care of their own safety with regard to that individual. On a different note, passing around the fact that another individual is skilled in certain aspects of resource procurement allows wider access to these resources.
It is easy to see in these examples how gossip could give a selective advantage in the survival of societies. But the influence of gossip goes further than this. It has been shown that gossip in fact encourages cooperation and generosity (Wu et al., 2015). How? The crucial mediator is reputation (Nowak, 2006). Reputation is incredibly important - see Taylor Swift’s 2017 album for more. A poor reputation leads to ostracisation, and for an individual in prehistoric societies, this could be fatal. Cultivating a good reputation among peers thousands of years ago, as today, improves the chances of success in life by increasing access to resources and the willingness of others to help you. Positive gossip can facilitate all this. So, how do we foster positive gossip? What will encourage someone to put in a good word for us? The most effective approach is to act in a way that benefits that individual. It predisposes them to spread the word of our generosity, helping to build a reputation for goodness that will in turn have positive outcomes for ourselves. Thus, it’s easy to see how behaviours that foster good gossip are incentivised in our everyday lives. This propensity to spread the knowledge of how certain individuals interact with others has been incredibly impactful in the development of human societies. The fact that our species can flourish and sustain itself in such immense populations requires a high level of cooperation - which enables us to share resources and productivity - even with people we do not know. Otherwise known as indirect reciprocity, this ability to work with strangers is enabled by reputation (Nowak, 2006). How else do we know that it is safe to interact with a stranger, other than through the means of gossip, which informs us of their reliability and trustworthiness? But what about when gossip is incorrect? The Oxford definition hints at the possibility that information spread through gossip “may be…not true”. 
Can untrue gossip hinder our progress, by limiting interactions with individuals who may have the potential to help us, or promoting interactions that would better have been avoided? And if gossip can be incorrect, does that not render reputation meaningless? What is the incentive to be good, if gossip could label you as a bad egg regardless (Nieper et al., 2022)? Unjustly negative gossip can be deeply damaging for its subject. Studies have shown that it decreases productivity and prosocial behaviour – not to mention burdening victims with the psychological effects of ostracisation, injustice and loneliness (Kong, 2018; Martinescu et al., 2021). Through gossip, we can exert immense power over other beings. It is understandable, then, that we fear gossip and try to discount it by painting it as “garbage” (Lorenzo, 2006), “unkind” or “not true” (Oxford Learner’s Dictionaries, 2023). And yet, whilst negative gossip can be a detriment, positive gossip can yield great benefits, reinforcing prosocial behaviour, fostering cooperation and promoting generosity. So, rather than fearing gossip, perhaps we ought to acknowledge its benefits and harness it for good. Perhaps it's worth considering how we can each use gossip to exert a bit of good upon our world. References Dores Cruz, T. D., Nieper, A. S., Testori, M., Martinescu, E., & Beersma, B. (2021). An Integrative Definition and Framework to Study Gossip. Group & Organization Management, 46(2), 252–285. https://doi.org/10.1177/1059601121992887 Kong, M. (2018). Effect of Perceived Negative Workplace Gossip on Employees’ Behaviors. Frontiers in Psychology, 9(2728). https://doi.org/10.3389/fpsyg.2018.01112 Lorenzo, A. (2006). Gossip. Urban Dictionary. Accessed October 10, 2023. https://www.urbandictionary.com/define.php?term=gossip Martinescu, E., Jansen, W., & Beersma, B. (2021).
Negative Gossip Decreases Targets’ Organizational Citizenship Behavior by Decreasing Social Inclusion: A Multi-Method Approach. Group and Organization Management, 46(3), 463–497. https://doi.org/10.1177/1059601120986876 Oxford Learner’s Dictionaries. (2023). Gossip - definition. Accessed October 10, 2023. https://www.oxfordlearnersdictionaries.com/definition/american_english/gossip_1#:~:text=gossip-,noun,all%20the%20gossip%20you%20hear. Nieper, A. S., Beersma, B., Dijkstra, M. T. M., & van Kleef, G. A. (2022). When and why does gossip increase prosocial behavior? Current Opinion in Psychology, 44, 315–320. https://doi.org/10.1016/j.copsyc.2021.10.009 Nowak, M. A. (2006). Five Rules for the Evolution of Cooperation. Science, 314(5805), 1560–1563. https://doi.org/10.1126/science.1133755 Wu, J., Balliet, D., & Van Lange, P. A. M. (2015). When does gossip promote generosity? Indirect reciprocity under the shadow of the future. Social Psychological and Personality Science, 6(8), 923–930. https://doi.org/10.1177/1948550615595272
- A Coral’s Story: From thriving reef to desolation | OmniSci Magazine
< Back to Issue 7 A Coral’s Story: From thriving reef to desolation by Nicola Zuzek-Mayer 22 October 2024 edited by Arwen Nguyen-Ngo illustrated by Amanda Agustinus The sun is shining. Shoals of fish are zooming past me, leaving their nests where I let them stay for protection from bigger fish. I look to my right and the usual fish have come to dine from me, filling their bellies with vital nutrients. I feel proud of our coexistence: I feed the big fish and provide shelter to small fish, whilst they clean algae off of me. I am the foundation of the reef. I am the architect of the reef. Without me, there would be nothing. I can’t help but think that the reef is looking vibrant today. A wide variety of differently coloured corals surround me, with some of my closest friends a stone’s throw away. We’ve all known each other for our entire lives, and it’s such a close-knit community of diverse corals. Life is sprawling in this underwater metropolis, and it reminds me of how much I love my home. But recently, I’ve heard some gossip amongst the city’s inhabitants that this paradise may change soon – and for the worse. Something about the land giants destroying our home. I refuse to believe such rumours – why would they want to destroy us? Our home is so beautiful, and we have done nothing to hurt them. Our beauty attracts many of them to come visit us, and most never hurt us. But sometimes I feel pain when they visit on a particularly sunny day, when I see white particles drop down to the reef and pierce my branches, polluting the city. My friends have told me that these giants wear something called ‘sunscreen’ to protect themselves from the sun, but their ‘protection’ is actually poisoning us. I hope that they realise that soon. Another thing I’ve noticed recently is that the ocean is feeling slightly warmer than before, and my growth is slowing. Yes, I’m concerned, but I don’t think that the issue will get worse.
30 years later… The sun is blisteringly hot. I feel sick, and the water around me is scorching. The vibrant colours of the reef are disappearing, and there are fewer organisms around. We used to be so diverse, but so many species of fish have died out. It’s eerie to see the area so desolate. My body is deteriorating and I feel so much more fragile than before. I feel tired all the time, after using so much energy to repair my body in the acidic water. I sense myself becoming paler, losing all colour in my body. I struggle to breathe. My coral friends and family are long gone, perished from the acidity of the ocean. I am the last one remaining. In my last moments, I can only wish to relive the past. I wish that the land giants had done more to help not only my city, but other reef cities around the world. All the other cities are empty now, and their ecosystems are long gone. If only someone had helped our dying world.
- Enter . . . the Anthropocene? | OmniSci Magazine
< Back to Issue 9 Enter . . . the Anthropocene? by Rita Fortune 28 October 2025 Illustrated by Zara Burk Edited by Kylie Wang We live in a time where humanity’s impact on the world around us is clearly visible. From the never-ending barrage of information about climate change, to extinction and habitat loss, the consequences of our actions are impossible to avoid. There’s no denying that the world around us is changing, but what if there are deeper implications? What if our impact on the planet will be apparent thousands, even millions of years into the future? Have we changed our planet’s systems to such an extent that the rise of our species defines a new geological epoch? The geological timescale is how we understand the relative timing of past events. From the advent of life to mass extinctions, all of it is documented in the rock record. Our geological past is divided into formalised time periods: eons, eras, periods, epochs and ages. These are generally divided by major changes visible in the rock record, such as mass extinctions, major climate shifts, or changes in magnetic polarity, with absolute ages determined by radioactive dating (1). Currently, we sit formally in the Holocene Epoch, which began around 11.7 thousand years ago with the end of the last glacial maximum and the beginning of the subsequent warmer interglacial phase (2). However, due to the enormity of humanity’s impact on Earth systems, especially since the dawn of the industrial revolution, some scientists are pushing for the formalisation of a new epoch: the Anthropocene. The concept of the Anthropocene was coined by Paul Crutzen and Eugene Stoermer in 2000 (3). Initially, it was used to recognise the exploitation of earth’s resources by humankind, including the emission of greenhouse gases, urbanisation of land, and increase in species extinction rates.
Crutzen and Stoermer suggested the beginning of the Anthropocene to be in the late 18th century, as, in the last 200 years, the “global effects of human activities have become clearly noticeable” (3). The concept has, at its core, remained the same since then, but there has been some debate around formal definitions and informal uses of the term. The Anthropocene has been adopted in popular culture, with its broad use encompassing humanity’s interactions with the earth, but there is ongoing debate about its formal use. Furthermore, although the theory traces its origins to earth system science, efforts to formalise the Anthropocene have been multidisciplinary, involving not only stratigraphers and palaeontologists, but also experts from various other scientific backgrounds (4). Formalising the Anthropocene as an epoch distinct from the Holocene relies on finding stratal evidence in the rock record for where this transition took place (4). There are countless pieces of evidence for our impact on Earth’s systems. Yet there is still debate around which ones can be used to define the Anthropocene. The Anthropocene Working Group identified as potential evidence for the beginning of the Anthropocene: the increase in sedimentation and erosion rates; changes to the carbon, nitrogen and phosphorus cycles; climate change and the increase in sea level; and biotic changes such as the unprecedented spread of species across Earth (4). Many of these impacts will leave permanent evidence in the geological record, indicating our existence long after our civilisations have crumbled. There are many potential ways to define the beginning of the Anthropocene. Crutzen suggested this crucial moment to be the invention of the steam engine, which led to the industrial revolution, a period often used as a baseline against which to compare our current climate (3).
However, evidence of industrialisation from this time is really only visible in Europe, with sediments from the Southern Hemisphere showing no change (5). More recently, it has been posited that the detonation of the first atomic bomb in 1945 should be the official marker of the Anthropocene, as it deposited a thin stratal layer of radionuclides, which do not occur naturally in the environment (6). While it’s clear that humans are a major source of change on Earth, some say that this does not necessarily mean we’ve entered a new epoch. Although geological time periods are often delineated based on environmental change, not every environmental change necessitates the creation of a new epoch. There have been past periods of (relatively) rapid climate change that are not associated with new time periods. An example is the Palaeocene–Eocene Thermal Maximum (PETM). During this time, there was significant global warming, change in habitats, and migration of species. This warm period lasted for approximately 100,000 years, but there were no mass extinctions. Once temperatures returned to normal, ecosystems essentially returned to how they were before the event (7). Geologically speaking, the proposed Anthropocene is a minuscule amount of time. Although the effects are extreme, if we stopped all emissions right now, it is possible that within 5000 years the climate could return to pre-industrial levels (8). Another argument presented by some authors is that the stratigraphic basis for the Anthropocene doesn’t exist yet, and is merely expected to exist in the future. Many structures of anthropogenic origin, such as excavations, boreholes and mine dumps, are not yet geological strata. Additionally, in strata that have recorded anthropogenic change, such as speleothems, marshes, and lake and ocean floor sediments, the layers representing the Anthropocene would be so thin as to be difficult to distinguish from the underlying Holocene sediments (6).
Without the gift of hindsight that has allowed scientists to examine previous epochs, it is difficult to say whether the change we currently see will be significant enough on a geological scale to officially move us into a new epoch. It has been suggested that, instead of a new epoch, the Anthropocene could be a Sub-Age, or an Age, within the Holocene Epoch (4), acknowledging our profound impact on the earth while expecting that the earth’s system will eventually return to pre-industrial conditions. Further complicating the matter, there are suggestions that humans have been altering the earth’s climate since long before the industrial revolution. Evidence shows that a rise in CO2 occurred with the advent of farming by early humans, 7000 years ago. Around the same time, there was also a rise in atmospheric methane, which has been attributed to rice paddies and livestock (9). With the increase in human population happening at this time, there was likewise an increase in land clearance, both to accommodate dwellings and farming. Even though these emissions and this land clearing are tiny by today’s standards, they may have been enough to stop our climate from heading into its next glacial period, priming the warmer conditions we experience today. Some have even argued that irreversible human impact stretches back further still, to the Pleistocene extinctions of megafauna across multiple continents (10). There is no doubt that humans have had, and are having, a massive impact on the environment. The atmosphere and oceans will take thousands of years to recover from their current level of warming. However, these massive changes do not necessarily mean that we have entered a new epoch. Although it appears there will be ample stratigraphic records of our impacts on this planet, without hindsight, it is difficult to see just how much change we have created. In the context of geological time, humans have been around for a minutely short period.
Although what’s happening today might seem dramatic to us, it is possible that millions of years in the future all we will have left behind is a few centimetres of ocean floor sediment. Either way, the Anthropocene as an informal term for our current time period is valuable for acknowledging the consequences of our actions, and a reminder of the permanence of our record. References 1. University of Calgary. Geologic time scale. Energy Education. 2024. Accessed October 21, 2025. https://energyeducation.ca/encyclopedia/Geologic_time_scale#cite_note-GTS-3 2. Walker M, Johnsen S, Rasmussen SO, Popp T, Steffensen JP, Gibbard P, et al. Formal definition and dating of the GSSP (Global Stratotype Section and Point) for the base of the Holocene using the Greenland NGRIP ice core, and selected auxiliary records. J. Quaternary Sci. 2009;24(1):3–17. doi: 10.1002/jqs.1227 3. Crutzen PJ, Stoermer EF. The ‘Anthropocene’ (2000) [Internet]. Benner S, Lax G, Crutzen PJ, Pöschl U, Lelieveld J, Brauch HG, editors. Cham: Springer International Publishing; 2021. 3 p. (Paul J. Crutzen and the Anthropocene: A New Epoch in Earth’s History). Available from: https://doi.org/10.1007/978-3-030-82202-6_2 4. Zalasiewicz J, Waters CN, Summerhayes CP, Wolfe AP, Barnosky AD, Cearreta A, et al. The Working Group on the Anthropocene: Summary of evidence and interim recommendations. Anthropocene. 2017;19:55–60. doi: 10.1016/j.ancene.2017.09.001 5. Pare S. Nuclear bombs set off new geological epoch in the 1950s, scientists say. Live Science. 2023. Accessed October 21, 2025. https://www.livescience.com/planet-earth/nuclear-bombs-set-off-new-geological-epoch-in-the-1950s-scientists-say 6. Finney S, Edwards L. The “Anthropocene” epoch: Scientific decision or political statement? GSA Today. 2016;26:4–10. doi: 10.1130/GSATG270A.1 7. The Editors of Encyclopaedia Britannica. Paleocene-Eocene Thermal Maximum (PETM). Britannica. 2023. Accessed October 21, 2025.
https://www.britannica.com/science/Paleocene-Eocene-Thermal-Maximum 8. The Royal Society. If emissions of greenhouse gases were stopped, would the climate return to the conditions of 200 years ago? The Royal Society. 2020. Accessed October 21, 2025. https://royalsociety.org/news-resources/projects/climate-change-evidence-causes/question-20/ 9. Ruddiman WF, He F, Vavrus SJ, Kutzbach JE. The early anthropogenic hypothesis: A review. Quaternary Science Reviews. 2020;240:106386. doi: 10.1016/j.quascirev.2020.106386 10. Doughty CE, Wolf A, Field CB. Biophysical feedbacks between the Pleistocene megafauna extinction and climate: The first human-induced global warming? Geophys. Res. Lett. 2010;37(15). doi: 10.1029/2010GL043985
- Unravelling the Threads: From the Editors-in-Chief & Cover Illustrator | OmniSci Magazine
< Back to Issue 9 Unravelling the Threads: From the Editors-in-Chief & Cover Illustrator by Ingrid Sefton, Aisyah Mohammad Sulhanuddin & Anabelle Dewi Saraswati 28 October 2025 Illustrated by Anabelle Dewi Saraswati Edited by the Editors-in-Chief Innovation evolves, and perhaps what once made headlines becomes embodied in ourselves and in our universe. The science that we once saw is no longer visible, yet no less integral in the ways in which it governs our world. Like the strings of a puppet, scientific principles guide us and coordinate the patterns and movements which shape our daily lives. Yet equally, science encourages us to look behind the curtain in order to unravel the forces which pull on the strings of our universe. Following these rich threads of knowledge, so often taken for granted, this issue brings to the fore and celebrates the science that keeps our world running. An introspective chat with the brain, a journey along the production line that creates our much-loved daily cup of matcha, fundamental questions about how we seek and create knowledge: Entwined seeks to make explanations explicit and start conversations about the scientific mechanisms embedded in our lives. When we take the time to focus our gaze, encourage awe at the everyday and seek reflection over reaction – that’s when we start to disentangle the science that binds us; that which keeps us Entwined. Begin your immersion in the world of Entwined with Issue 9’s Cover Illustrator, Anabelle Dewi Saraswati, as she explains the vision and rationale behind her work. “I found myself drawn to the world of Art Nouveau for these cover illustrations, captivated by the way forms seem to grow into each other, sharing meaning and life, much like the theme of ‘Entwined’ itself. 
There is something magical about that moment in history, where art, architecture, and science all seemed to bleed into one another, each discipline borrowing and lending, rooted in the emphasis on the beauty of nature after the coldness created by the Industrial Revolution. That sense of crossover felt like the perfect encapsulation for this issue, derived from pictorial history. The way feminine figures and flowing hair seem to melt into vines and leaves, everything tangled together in a quiet conversation. The motion and sense of growth, but also the hidden mathematical precision required to produce such beautiful curving forms. Art Nouveau captured how the artificial and natural worlds are always weaving into each other, inseparable. I wanted to draw from that imagery in a way that acknowledges its history. I return to my architectural roots in structure, composition and line with my approach in building these pieces. The signage piece is fully hand-drawn and deliberate – reflecting the craft and typographic precision of the era. The collage is a layering of textures and fragments, letting ideas overlap and bleed into each other, much like memories and histories do. A way to begin the issue visually to trace the growth of worlds as they intertwine. Paying homage to the harmony between the natural and the human-made, to reflect on how we are shaped by the places we inhabit, the histories we inherit, and the stories we choose to keep alive.”
- Conferring with Consciousness | OmniSci Magazine
< Back to Issue 9 Conferring with Consciousness by Ingrid Sefton 28 October 2025 Illustrated by Heather Sutherland Edited by Steph Liang Down the rabbit hole Indulge me for a moment, will you? I value your opinion. Your opinion, as in, one which has arisen from your mind. I would assume. It would seem unusual to consider that, perhaps, your thoughts are not your own. Stranger still to ponder the possibility that they did not arise from your mind. I digress – or maybe not. For it is this dilemma which I wish to pick your brain on. The mind. The brain. You. Are they one and the same; entwined? What do you think? Again, assuming it is you thinking. Assuming you feel certain enough to agree with this. Really, with what certainty can we say anything? You may be wondering who “I” am. I am but you, of course! I kid, but not entirely. Think of me as the brain; your brain if you wish. An excellent name I gave myself, if you ask me. Before we spiral any deeper into this chasm that is consciousness – because that is what this is about, is that not what this, life, is all about? – I must disclose a few things. One, I do not expect you to have answers to these questions I pose. Because two. We do not have answers. I apologise that I have not come bearing the answers to our existence, that I have not yet unpicked these questions of “who?”, “how?”, “why?”. I come offering an alternative. I wish to present to you these entangled threads of consciousness: of what we currently know, of what we hope to know and of where we can proceed from here. Then it’s back to you. You get to decide what you think (again, with the thinking). Maybe, for you and the workings of your inner mind, consciousness and all it entails will be revealed in full clarity. Maybe not. You certainly won’t know unless you try. 
A brief neural memoir Many a Nobel prize has been awarded for discoveries relating to the nervous system: from the morphology of neurons (Golgi and Cajal 1906) and their electrical signalling properties (Eccles, Hodgkin and Huxley 1963), to the nature of information processing in the visual system (Hubel and Wiesel 1981) (1). Despite some obvious gaps remaining in what is known about the brain (ahem, that slight issue of consciousness), the field of neuroscience has rapidly progressed over the last century. Gone are the days of thinking I was nothing more than a cooling mechanism for the blood, as Greek philosopher Aristotle once believed (2). How dismissive of my intellect! I assure you, I have far more important things to be doing. Generating the experience of “you”, as one small matter. The techniques developed to study the brain have also rapidly advanced. It was not until the invention of microscopes in the 19th century that the neuron doctrine even came about. Pioneered by Santiago Ramón y Cajal, this is the (now) well-accepted concept that the nervous system is made up of discrete cells known as neurons, challenging older theories which proposed a continuous neural network (3). Today, neuroscientists have the ability to appreciate my anatomical and functional complexity at a huge range of temporal and spatial resolutions. Whole-brain connectivity can be studied using functional magnetic resonance imaging (fMRI), while the electrical activity of single neurons can be recorded using patch-clamp electrode technology. Not to mention optogenetics, chemogenetics, viral transduction: while the available experimental techniques are still unable to address all our brainy questions, the field of neuroscience has never been in a better position to get closer to answers. The potential of neurons Neurons: those special, excitable cells that make up the squishy entity I seem to be. 
The mechanisms of how neurons detect, generate and transmit signals have been described in utmost precision. When I talk of excitable cells, I am not referring to a bunch of cheerful, eager neurons. Excitability, in this context, refers to the fact that neurons can respond to a sensory stimulus by generating and propagating electrical signals, known as action potentials. Clearly, I am made up of slightly more than two neurons cheerfully signalling to each other back and forth. Try 86 billion, between the cortex and cerebellum combined (4). Yet, despite our deep understanding of neural signalling mechanisms, this has yet to reveal an explanation for consciousness. Individual neurons in isolation, it would appear, don’t hold the answers we want. In turn, a focus of neuroscience research has been on the wider “neuronal correlates of consciousness”, the minimal neuronal mechanisms that are sufficient to generate a conscious experience (5). This relates broadly to the generation of consciousness itself, but also to studying the neural underpinnings of specific conscious experiences: for example, which collective neural substrates support the process of visual object recognition. This is often a focus of fMRI studies, which examine brain activity in an attempt to pinpoint where in the brain a particular cognitive function may be performed. Fancy techniques aside, some of the most fundamental insights into my regional specialisations have arisen from careful observation following selective lesions or damage to the brain. The critical yet specific role of Broca’s area in speech production was discovered in 1861 by surgeon Paul Broca’s observations of his patient “Tan”. Tan had lost his ability to produce meaningful speech, yet was still able to comprehend speech; Broca identified a lesion in Tan’s left frontal lobe post-mortem, drawing the conclusion that this region is selectively involved in speech production (6). But what does all of this show us? 
Perhaps the only thing that neuroscientists can agree on, is that conscious experience is fundamentally, in some way, somehow, related to my activity: the brain. In turn, the activity of the brain is related to the activity of neurons; firing and signalling and transforming information. A lot is known about neurons. Less can be said about specific cognitive functions, yet we can see correlations between the regional brain activity and particular conscious experiences. Here lies my problem. The elephant in the room. How do we get from individual neurons to conscious experience? A map with no destination Enter “The Connectome” and the Human Connectome Project: a collective attempt to map the neuronal connections of the human brain, in an effort to connect structure to function (7). And in turn, for our purposes, to ideally connect this to consciousness. The rationale is that by modelling and trying to “build” a brain using a bottom-up approach, we may therefore understand the mechanisms of how cognitive functions arise. I’m sure it will come as no surprise that this isn’t the simplest of tasks. To measure, record and model billions of neurons and synapses requires techniques, time, and resources that are incredibly hard to come by in sufficient quantities. Excitingly, scientists have recently managed to successfully map a whole brain. That is, of a larval fly (8). With 3,016 neurons and 548,000 synapses, this was no simple feat. In case you had forgotten my own complexity, however, let me remind you of my 86 billion neurons, and estimated 1.5 × 10^14 total synapses in the cortex alone (4). Progress has also been made on the human front, nonetheless. It was recently announced that a cubic millimetre of human temporal cortex has been completely reconstructed using electron microscopy, involving 1.4 petabytes of electron microscopy data (a petabyte being 1,000 terabytes, or one quadrillion bytes) (9). One cubic millimetre down, approximately a million to go. 
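To get a feel for just how far "a million to go" is, here is a back-of-envelope sketch scaling the figures above: 1.4 petabytes of imaging data per cubic millimetre, and an assumed round whole-brain volume of one million cubic millimetres (a hypothetical figure chosen to match the "approximately a million to go" estimate, not a measured value).

```python
# Back-of-envelope scale-up using only the figures quoted above.
PETABYTE = 10**15  # bytes in a petabyte

data_per_mm3 = 1.4 * PETABYTE  # imaging data per cubic millimetre of cortex
brain_volume_mm3 = 1_000_000   # assumed whole-brain volume, matching "a million to go"

total_bytes = data_per_mm3 * brain_volume_mm3
total_zettabytes = total_bytes / 10**21  # a zettabyte is a billion terabytes

print(f"{total_zettabytes:.1f} zettabytes")  # ~1.4 zettabytes
```

At the same imaging resolution, a whole human brain would take on the order of 1.4 zettabytes, roughly a million times the data of the single reconstructed cubic millimetre.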
Putting practicalities aside, let us suppose we do, one day, manage to map and model an entire human brain, in all its intricacies. What now? What does one actually do with this data, and how would this allow us to better understand how consciousness arises? Up until now, we have been following the train of thought that consciousness, somehow, results from the activity of neurons, yet does not arise from the activity of individual neurons. This leads us to the notion that perhaps consciousness is due to the collective, computational activity of neurons working together – that with enough complexity, and enough information processing, together this will lead to the first-person experience of being “you”. Does this actually make sense? You tell me. Wishful thinking and conscious rocks The notion that, at a certain level of complex neuronal signal processing, a first-person perspective of “being you” (i.e. consciousness) arises is often termed “strong emergence” or “magical emergence” (10). With what we currently know about the properties of neurons, there is fundamentally no reason why this should happen. The “property” of consciousness, which cannot be predicted from the principles of how individual neurons function, seemingly just emerges. Consciousness, therefore, must somehow be greater than the sum of its parts, only emerging when neurons interact as a wider network. Maybe, the answer to this is merely that we don’t understand the mechanisms of neurons as well as we think we do. It could be that we have missed a fundamental property of how neurons operate and upon discovery of this, it would suddenly be completely explicable how consciousness arises. Or maybe, computation and neural signalling is not all there is to it. An alternative line of thinking is that rather than consciousness being a property that “arises”, it is a basic constituent of the universe that is missing from our current model of standard physics (11). 
That is, consciousness has been present all along and exists in everything. The philosophical view of ‘panpsychism’ embraces this idea to the extreme, proposing that everything within the universe is, to some degree, conscious (12). As in yes, that rock over there might just be conscious. Other theories suggest that consciousness only emerges in a recognisable form in certain conditions or at some critical threshold; myself and all my neurons apparently being one such example of the “right” conditions. Theories of consciousness don’t just stop at computation and fundamental properties of the universe. Quantum physics, microtubule computations, electromagnetic fields; all have been proposed as part of this web of “why” (13). While some theories arguably veer more towards pseudoscience than well-founded scholarship, they all make one thing clear. At this stage, just about every idea remains fair game in the quest for answers. Pondering hard, or hardly pondering? The question of consciousness is far from limited to the field of neuroscience. Philosophers too have long wracked their brains in an attempt to rationalise and unpick this problem. What unites the work of neuroscientists and philosophers alike, along with the many theories of consciousness, is that nothing provides a satisfactory explanation for why consciousness should emerge from the activity of neurons. Philosopher David Chalmers has termed this the “hard problem”. “Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does” (14). If consciousness is simply the result of high-level processing and the computational activity of neurons, why would we even need to be conscious? If all the brain is doing is computation, and thus everything can be done via computation, there would appear to be no purpose in having a subjective experience of being “you”. 
Whichever side of consciousness we may be inclined to take, computational, fundamental, or otherwise, the fact remains. We cannot seem to move beyond mere description, to explanation. We have not solved the “hard problem”. A final conundrum, and a sole certainty Physicist Emerson M. Pugh once made the somewhat sceptical remark that “if the human brain were so simple that we could understand it, we would be so simple that we couldn't.” (15) Is the reason that we have yet to understand consciousness simply, frustratingly, that we are not meant to? Logical conundrums aside, I rest my case. I hope I have given you some food for thought, or at the very least, not set off too dramatic an existential crisis. Somewhere between the neural wirings of the brain and the experience of consciousness lies an answer, regardless of whether we are destined to find it out. Make of this what you will. And if nothing else, let me try reassuring you once again with the wisdom of René Descartes. “Cogito, ergo sum” “I think, therefore I am” (16). If you are here, and you are thinking, you are conscious. You, my friend, are you. References Nobel Prizes in nerve signaling. Nobel Prize Outreach. September 16, 2009. Accessed October 18, 2025. https://www.nobelprize.org/prizes/themes/nobel-prizes-in-nerve-signaling-1906-2000/. Rábano A. Aristotle’s “mistake”: the structure and function of the brain in the treatises on biology. Neurosciences and History. 2018;6(4):138-43. Golgi C. The neuron doctrine - theory and facts. 1906. p. 190–217. https://www.nobelprize.org/uploads/2018/06/golgi-lecture.pdf Herculano-Houzel S. The human brain in numbers: a linearly scaled-up primate brain. Front Hum Neurosci. 2009;3:31. doi: 10.3389/neuro.09.031.2009 Koch C, Massimini M, Boly M, Tononi G. Neural correlates of consciousness: progress and problems. Nature Reviews Neuroscience. 2016;17(5):307-21. Broca area. Encyclopedia Britannica; 2025. Accessed October 18, 2025. 
https://www.britannica.com/science/Broca-area Elam JS, Glasser MF, Harms MP, Sotiropoulos SN, Andersson JLR, Burgess GC, et al. The Human Connectome Project: A retrospective. NeuroImage. 2021;244. doi: 10.1016/j.neuroimage.2021.118543 Winding M, Pedigo BD, Barnes CL, Patsolic HG, Park Y, Kazimiers T, et al. The connectome of an insect brain. Science. 2023;379(6636). doi: 10.1126/science.add9330 Shapson-Coe A, Januszewski M, Berger DR, Pope A, Wu Y, Blakely T, et al. A petavoxel fragment of human cerebral cortex reconstructed at nanoscale resolution. Science. 2024;384(6696). doi: 10.1126/science.adk4858 Chalmers D. Strong and Weak Emergence. In: Clayton P, Davies P. The Re-Emergence of Emergence: The Emergentist Hypothesis from Science to Religion. Oxford University Press; 2008. Kitchener PD, Hales CG. What Neuroscientists Think, and Don’t Think, About Consciousness. Frontiers in Human Neuroscience. 2022;16. doi: 10.3389/fnhum.2022.767612 Goff P, Seager W, Allen-Hermanson S. Panpsychism. The Stanford Encyclopedia of Philosophy. Summer 2022. Seth AK, Bayne T. Theories of consciousness. Nature Reviews Neuroscience. 2022;23(7):439-52. doi: 10.1038/s41583-022-00587-4 Chalmers D. Facing up to the hard problem of consciousness. In: Shear J. Explaining Consciousness: The Hard Problem. MIT Press; 1997. Pugh GE. The Biological Origin of Human Values. Routledge & Kegan Paul; 1978. Descartes R. Principles of Philosophy. 1644.
- Law and Order: Medically Supervised Injecting Centres | OmniSci Magazine
< Back to Issue 2 Law and Order: Medically Supervised Injecting Centres Keeping people safe from the harms of drug use is an important public health goal, but some question the value of medically supervised injecting centres in improving health and community outcomes. by Caitlin Kane 10 December 2021 Edited by Tanya Kovacevic & Natalie Cierpisz Illustrated by Rachel Ko Medically supervised injecting centres (MSICs) are an exemption from the standard practices of law and order: instead of policing drug users, these facilities allow people to bring illegal drugs to dedicated, clean settings where they can legally inject themselves and receive medical care if required. Essentially, drugs like heroin and ice can be used in a safer environment often integrated with other health and welfare services. These centres aim to improve public health and amenity outcomes, but are criticised for facilitating drug use. Australia’s MSICs have been controversial since their inception. The first local MSIC opened in Kings Cross, Sydney in 2001, following a Vatican intervention to withdraw nuns and the arrest of a Reverend for opening a short-lived unsanctioned injecting facility (1,2). Local businesses and residents feared a nearby “safe haven for drug users” would accelerate rampant and disruptive public drug use and threatened last-minute legal action (3). The centre is still in operation and has now supervised more than one million injections without a single overdose fatality (1,4). Medical director Dr Marianne Jauncey explained how the Kings Cross centre saves lives in a discussion with the ABC this year (5). Yet before Australia’s second MSIC opened in Richmond, Melbourne in 2018, commentators continued to decry the proposition as accepting and passively encouraging drug use. Nationals MP Emma Kealy announced, "It sends the wrong message to our kids and effectively says we've given up on preventing drug use” (6). 
With consultation ongoing to establish a third Australian MSIC in the Melbourne city centre, it’s valuable to disentangle the misconceptions around the effects of MSICs on communities and their value as public health tools. Much controversy around Australia’s MSICs centres on three concerns: the number of overdoses occurring on premises, the attraction of drug addicts to the areas, and the drain on public health resources. Examining the data collected by public health scientists demonstrates that these concerns are unfounded and supports the continued consideration of MSICs as effective public health interventions. WHAT EFFECT DO MSICS HAVE ON OVERDOSES? It’s critical to understand that MSICs are proposed for areas with heavy drug use, particularly public use that causes medical emergencies like overdoses. At the turn of the millennium, the streets of Kings Cross were a major site of public drug use, overdoses, and ambulance callouts (7). In 2000, one spate of thirty-five Sydney overdoses, four fatal, occurred in a single twenty-four hour period (3). At the time, 10% of all drug overdoses in Australia occurred in Kings Cross (3). In response, the Kings Cross MSIC opened in 2001 following decades of mounting evidence in Europe. European drug injection centres had been operating since the 1970s, with growing official support through the 1990s in countries like the Netherlands, Switzerland, and Germany (2). Evaluations reported successful reductions in public nuisance, improved service access, and declining overdose deaths (2). In Switzerland, annual overdose deaths halved within four years, and overdoses at MSICs were ten times less likely to result in hospital admission than overdoses on the street (2,3). Similarly, the Richmond MSIC opened in 2018 as a response to the highest heroin death toll in sixteen years and record ice deaths in 2016, with the major drug market in Richmond considered the “epicentre of Melbourne’s heroin crisis” (8). 
It could be easy to criticise the overdoses occurring on the MSIC premises, but these overdoses predated the MSICs and prompted their opening after other strategies failed to address the crisis. As public health interventions, MSICs are most effective in areas with high densities of public drug use, like Kings Cross and Richmond, which is why these sites were chosen to house MSICs (7). A systematic review of studies covering a range of MSIC facilities, including Kings Cross, concluded that all facilities had a significant reduction in overdose deaths in their local area (9). Ambulance callouts for overdoses near Kings Cross decreased by 68% within six years of opening (9). In Richmond, emergency medical attendances to drug overdoses near the MSIC have decreased significantly. Only 30 of the 2,657 overdoses treated at the MSIC in its first eighteen months led to ambulance attendance, and there has been a 25% decrease in naloxone administration, a treatment for opioid overdose, by ambulances within a one-kilometre radius of the MSIC (10). The impact of drug overdoses in these areas has been greatly mitigated by the presence of the MSICs. In 2017, the Kings Cross MSIC celebrated one million injections with zero fatal overdoses (1). The lack of a single overdose death at these facilities despite the number of overdoses should be considered a mark of commendation (1,5,10,11). DO MSICS ATTRACT DRUG USERS TO THE AREA? A second concern is that MSICs attract drug addicts to the area in which they are situated. However, this misattribution of causality arises because MSICs are purposefully located in areas with pre-existing drug markets. Major drug markets create local hotspots of public injection as many drug users inject immediately to reduce withdrawal and avoid police attention (7). These areas of high public drug use became candidates for the establishment of MSICs because drug users already frequented the area. 
Before the MSIC opened, over 90% of ambulance attendances for overdoses in Kings Cross were within a 300 metre radius of the proposed MSIC location. The area was chosen for an MSIC because of the existing disruption caused by public drug use and overdose. Improving public amenity, such as decreasing encounters with discarded needles, drug injection and overdose, is one of the most important goals of MSICs (2,11). Despite initial outrage in Kings Cross, support for the centre among local businesses increased to 70% in 2005, and local perceptions were positive (11,12). Monitoring of the area found no increase in drug-related crime, dealing or loitering after the Kings Cross MSIC opened (11). This is also supported by more recent findings in 2017, that alongside improving local amenity and reducing ambulance callouts, the Kings Cross MSIC did not draw dealers and addicts to the area in a ‘honey pot’ effect (6). This was corroborated by a systematic analysis which found no increase in drug-related violence or crime associated with MSICs in Sydney and Vancouver across four studies (9). The same review concluded that MSICs do not promote drug use, crime, drug trafficking, or increase new drug users (9). Likewise, demand for the Richmond MSIC was created by the existing Richmond drug market and disruption to the community, with 46 of 49 local stakeholders found to support a proposed MSIC in a 2017 consultation (11). Alongside harm minimisation, one submission highlighted the “significant toll on health workers and members of the local community who have to deal with the aftermath of overdoses and for children to see people in public in such a terrible state” as motivating their support for establishing a Richmond MSIC (11). Since opening, concern that additional people would travel to use the centre was allayed by findings that travel distance was a major reason for not attending the MSIC, and by residential information collected from Richmond MSIC users (10). 
Regarding public amenity, an evaluation found mixed results in its eighteen months of operation, with reduced sightings of public injections and incidents at the neighbouring school, but decreased perception of safety and community support for the MSIC (10). It remains to be seen how this trend develops with continued operation of the centre. DO MSICS DRAIN PUBLIC HEALTH RESOURCES? While the primary goal of MSICs is to reduce the harms associated with overdose and public drug injection, MSICs have broader public impact through integration with complementary social and medical services. People who inject drugs are subject to associated harms, ranging from increased risks of blood-borne diseases (HIV, HBV, HCV) and psychiatric disorders to homelessness, crime, and prostitution (2,10). This socially marginalised group often lacks adequate access to healthcare, despite the significantly increased risks of harm and death (9). Analysis of the Vancouver MSIC found the streamlined and preventative healthcare provided to drug users was quantifiably more effective and saved both millions of dollars and 920 years of life over 10 years (9). In 2008, an economic review of the Kings Cross MSIC determined that averted health costs alone made significant savings for the government, and the value of prevented deaths would pay for operating costs more than 30 times (13). Furthermore, unprecedented access to drug users can facilitate important research to investigate and validate public health issues and strategies. For example, a 2017 paper analysed the rates and severity of overdoses for illicit and prescription opioids with data from the Sydney MSIC, producing clinically salient research enabled by access to marginalised and vulnerable populations (14). 
Alongside reductions in ambulance callouts and overdose complications which are instead managed at the centre, MSICs can improve the reach and delivery of health and social services for drug users, including blood-borne disease screening, drug treatment and rehabilitation, and mental health counselling (9,10). Engagement with MSICs and integrated services promoted safer injecting practices, health and social service use, and entry to treatment programs. The overall proportion of MSIC-attending drug users in treatment programs was 93%, compared to 61% of first-time attendees at the facility, demonstrating the improved effectiveness of reaching drug users with healthcare programs (15). Across seven studies on drug user uptake of MSICs, 75% of drug users reported improvements in their behaviours regarding public amenity and safe injection (9). This effect was particularly strong for marginalised and at-risk attendees, like those who were homeless, Indigenous, had previously overdosed, and others with self-identified need (15). MSICs contribute massively to overall public health strategy, through both direct harm reduction and efficiently increasing access to existing services. BEYOND MEDICALLY SUPERVISED INJECTING CENTRES MSICs in Australia and across the world have been successful in achieving their objectives: reducing drug-associated harms and community exposure to public injection and overdose (9,12). The continued controversy around MSICs despite their established and validated success betrays widespread misunderstanding of the nature of addiction and of effective treatment and harm reduction for drug abuse. In 2017, despite the support of three coronial recommendations and the Australian Medical Association for a Richmond MSIC, MP Tim Smith asked, “Since when did we start rewarding people who break the law, since when did drug users become victims, we need to enforce the law" (6,8). 
Political discourse that distorts the goals of MSICs and distracts from their established efficacy only serves to stagnate evidence-based action and weaken Australia’s response to damaging drug use. While MSICs attract disproportionate attention and controversy, public health issues around drug addiction and opioid dependency remain unaddressed (16). In Australia, prescription drug abuse causes ten times more overdose deaths than illicit drug abuse, and prescription opioids provide a pathway to the use of illegal opioids, like heroin and fentanyl (14,16). As seen in the 2017 investigation into the prevalence and consequences of opioid overdoses in the Kings Cross MSIC, prescription opioid injection is a significant form of harmful drug use (14). MSICs are a useful and effective tool to combat drug abuse, but are not intended to solve all drug-pertinent problems; they must be incorporated into broader public health and crime strategies (9). Drug abuse is a seriously complicated problem, so it is unsurprising that misconceptions persist around the impacts of MSICs. Effective drug policy needs to consider MSICs as a component of a broader public health strategy and educate the public about responses to drug abuse. It’s critical for communities and decision-makers to stay informed and choose evidence-based strategies to address the public health and amenity goals around drug use. References: Alcohol and Drug Foundation. ‘Medically Supervised Injecting Centres - Alcohol and Drug Foundation’. Accessed 1 December 2021. https://adf.org.au/insights/medically-supervised-injecting-centres/. Dolan, Kate, Jo Kimber, Craig Fry, John Fitzgerald, David McDonald, and Franz Trautmann. ‘Drug Consumption Facilities in Europe and the Establishment of Supervised Injecting Centres in Australia’. Drug and Alcohol Review 19, no. 3 (2000): 337–46. https://doi.org/10.1080/713659379. Barkham, Patrick. ‘Sydney Gets Safe Haven for Drug Users’. The Guardian, 4 September 2000, sec. World news. 
https://www.theguardian.com/world/2000/sep/04/patrickbarkham. ‘20th Anniversary of Sydney’s Medically Supervised Injecting Centre’. Accessed 9 December 2021. https://www.uniting.org/blog-newsroom/newsroom/news-releases/20th-anniversary-of-sydney-s-medically-supervised-injecting-cent. The Kings Cross Supervised Injecting Facility Marks Its 20th Anniversary. ABC News, 2021. https://www.abc.net.au/news/2021-05-06/united-medically-supervised-injecting-centre-20th-anniversary/13332878. Carey, Adam. ‘“People Are Dying”: Trial of Safe Injecting Room Blocked by Andrews Government’. The Age, 7 September 2017. https://www.theage.com.au/national/victoria/people-are-dying-trial-of-safe-injecting-room-blocked-by-andrews-government-20170907-gycmiu.html. Uniting. ‘History of the Uniting Medically Supervised Injecting Centre’. Accessed 9 December 2021. https://www.uniting.org/community-impact/uniting-medically-supervised-injecting-centre--msic/history-of-uniting-msic. Willingham, Richard. ‘Renewed Calls for Safe Injecting Room as Victoria’s Heroin Death Toll Reaches 16-Year High.’ ABC News, 27 October 2017. https://www.abc.net.au/news/2017-10-27/spike-in-heroin-deaths-in-victoria-safe-injecting-rooms/9092660. Potier, Chloé, Vincent Laprévote, Françoise Dubois-Arber, Olivier Cottencin, and Benjamin Rolland. ‘Supervised Injection Services: What Has Been Demonstrated? A Systematic Literature Review’. Drug and Alcohol Dependence 145 (1 December 2014): 48–68. https://doi.org/10.1016/j.drugalcdep.2014.10.012. Department of Health. Victoria, Australia. ‘Medically Supervised Injecting Room Trial - Review Panel Full Report’. State Government of Victoria, Australia, 25 June 2020. http://www.health.vic.gov.au/publications/medically-supervised-injecting-room-trial-review-panel-full-report. Victoria, Parliament, Legislative Council, and Legal and Social Issues Committee. Inquiry into the Drugs, Poisons and Controlled Substances Amendment (Pilot Medically Supervised Injecting Centre) Bill 2017. 
East Melbourne, Vic: Victorian Government Printer, 2017. Salmon, Allison M., Hla-Hla Thein, Jo Kimber, John M. Kaldor, and Lisa Maher. ‘Five Years on: What Are the Community Perceptions of Drug-Related Public Amenity Following the Establishment of the Sydney Medically Supervised Injecting Centre?’ International Journal of Drug Policy 18, no. 1 (1 January 2007): 46–53. https://doi.org/10.1016/j.drugpo.2006.11.010. SAHA. ‘NSW Health Economic Evaluation of the Medically Supervised Injection Centre at Kings Cross (MSIC)’, August 2008. https://www.uniting.org/content/dam/uniting/documents/community-impact/uniting-msic/MSIC-Final-Report-26-9-08-Saha.pdf. Roxburgh, Amanda, Shane Darke, Allison M. Salmon, Timothy Dobbins, and Marianne Jauncey. ‘Frequency and Severity of Non-Fatal Opioid Overdoses among Clients Attending the Sydney Medically Supervised Injecting Centre’. Drug and Alcohol Dependence 176 (1 July 2017): 126–32. https://doi.org/10.1016/j.drugalcdep.2017.02.027. Belackova, Vendula, Edmund Silins, Allison M. Salmon, Marianne Jauncey, and Carolyn A. Day. ‘“Beyond Safer Injecting”—Health and Social Needs and Acceptance of Support among Clients of a Supervised Injecting Facility’. International Journal of Environmental Research and Public Health 16, no. 11 (January 2019): 2032. https://doi.org/10.3390/ijerph16112032. Fitzgerald, Bridget. ‘Drug Overdoses Killed More than 2,000 Australians for the Fifth Consecutive Year, Report Finds’. ABC News, 31 August 2020. https://www.abc.net.au/news/2020-08-31/more-than-2000-australians-lost-their-lives-due-to-overdose-2018/12612058.
- A Brief History of the Elements: Finding a Seat at the Periodic Table | OmniSci Magazine
A Brief History of the Elements: Finding a Seat at the Periodic Table by Xenophon Papas 28 May 2024 Edited by Arwen Nguyen-Ngo Illustrated by Rachel Ko What are we made of and where did it all come from? Such questions have pervaded the minds of scientific thinkers since ancient times and have entered all fields of enquiry, from the physical to the philosophical. Our best scientific theory today asserts that we’re made of atoms, and these atoms come in different shapes and sizes. Fundamentally, they can be described by the number of subatomic particles (protons, neutrons, and electrons) they contain (Jefferson Lab, 2012). Neatly arranged in a grid, these different elements form the periodic table we know and love today; but it was not always this way. The story of how the periodic table of elements came to be harks back to Ancient Greece and winds its way through the Enlightenment into the 20th century. It is an unfinished story whose frontier we stand at today: the search for dark matter and the ultimate answer to what the universe is made of. We may never know for sure exactly what everything in existence consists of, but it’s a pursuit our earliest ancestors would be proud to see us follow. Thales was the first in the ancient Greek-speaking world to speculate about the origins of all material things. He theorised that all matter in the universe was made up of just one type of substance – water – and all other solids, liquids and gases were merely derivatives thereof. This idea was not initially opposed, given that Thales was one of the earliest of the Ancient Greeks to pursue such questions of a scientific nature. After all, he’s remembered today as the “Father of Science” in the Western world. As Thales was from Miletus, a city on the Ionian coast of modern-day Türkiye and part of the Greek region of Ionia in the 6th century BC, it is not hard to imagine that water was a crucial aspect of trade, agriculture, and daily life at the time.
However, this seemed to oversimplify matters for some of his contemporaries. Empedocles, considered more a magician than a philosopher, revised this mono-elemental theory in the 5th century BC. He proposed four basic substances from which all others were made (Mee, 2020): the four classical elements we know famously today as Earth, Air, Water and Fire. This asserted a fundamental principle of “fourness”, echoing the four cardinal directions recognised in the Western world at the time. Interestingly, concurrent traditions elsewhere, such as those in China, acknowledged five elements and five compass points instead. A generation after Empedocles, Plato embraced this four-element formulation. Heavily influenced by mathematics as the medium through which we reason about the natural world, Plato related each of these elements to a mathematical object: a convex, regular polyhedron in three-dimensional Euclidean space, otherwise known as a Platonic solid. Earth was associated with the cube, air with the octahedron, water with the icosahedron, and fire with the tetrahedron. Lastly, the most complicated solid, the dodecahedron – its faces regular pentagons – was associated with the makeup of the constellations and the Heavens themselves, their workings said to be unfathomable by human minds (Ball, 2004). His student, Aristotle, ran with this idea and devised a clever way to classify the elements based on their “qualities”, akin to a first periodic table. These binary qualities were hot and cold, wet and dry, with each element possessing exactly two. According to Aristotle, each element could be converted into another by inverting one of its qualities, seemingly prefiguring an early form of alchemy. To these four elements he also appended a fifth – aether, or “pure air” – to fill the expanses of the heavens, which became associated with the fifth Platonic solid.
In the Western world, Aristotle’s word was taken as doctrine for a very long time, owing greatly to the fall of Rome and the cultural instability that followed. While Europe plummeted into the Dark Ages with a reverence for the scholars of antiquity, scientific and literary endeavour flourished in the Middle East – the word alchemy itself has Arabic etymological roots. It was not until the 17th century that the likes of Galileo, Newton, and Descartes revived Western scientific pursuit and sought to understand how the natural world arranged itself. In the 18th century, new discoveries were being made on the frontiers of science in major cities throughout Europe. In 1772, in Paris, Antoine Lavoisier began work on the combustion of materials like phosphorus and sulphur. Lavoisier concluded that if a substance decomposes into simpler substances, it is not an element. For example, water can be decomposed – by passing steam over hot iron, for instance – and is therefore not an element, whereas oxygen and hydrogen are indeed elemental. English chemist John Dalton built on Lavoisier’s work and in 1808 began to arrange the elements into a chart according to their various properties. Around 1829, German chemist Johann Wolfgang Döbereiner recognised that the list of elements contained groups of three that behaved similarly, known as “Döbereiner's triads" (Free Animated Education, 2023). In 1866, John Newlands put forward the “Law of Octaves”: when the elements were ordered, those with similar properties recurred at regular intervals, dividing them into seven groups of eight – hence octaves. However, this method of dividing up the elements broke down in some cases. Now we turn to St. Petersburg, Russia, in February of 1869. Dmitri Mendeleev sits at his desk, a mess of cards covering his working space. The professor of chemistry shuffles these elemental cards like a jigsaw puzzle, arranging and rearranging them to align with their properties.
Supposedly after coming to him in a dream, a pattern emerged: Mendeleev saw that the elements could be simply tabulated by their atomic weight and hence by their common properties. This newfound tool, building on Lavoisier’s work a century prior, allowed the properties of elements that had not yet been discovered to be predicted – elements Mendeleev believed to exist, even though they appeared as empty gaps in the grid structure of the periodic table. Within just twenty years, Mendeleev’s predictions of elements such as gallium, scandium, and germanium had been confirmed experimentally. All of this was predicted without knowledge of the true reason for the similarities in elemental properties – the arrangement of electrons in shells at the subatomic level. Mendeleev had totally changed the way chemists viewed their discipline and has been immortalised for perhaps the greatest breakthrough in the history of chemistry (Rouvray, 2019). Today we recognise that nearly all elements heavier than hydrogen and helium have their origins in the high-pressure hearts of stars. Like hot furnaces, stars churn out heavier and heavier elements under immense internal pressure. When a massive star’s life cycle comes to an end, it erupts into a fiery supernova, releasing even more of the heavier elements we see further down the periodic table. In the last 75 years, scientists have added 24 elements to the periodic table, some of which are so difficult to produce that their half-lives are mere fractions of a millisecond before they decay away to nothing (Charley, 2012). This raises the question: how do we find new elements? Elements can be created via either fission, splitting apart a heavier atom, or fusion, binding two lighter nuclei together. The heavier an element – that is, the more protons and neutrons in its nucleus – the more unstable it tends to be.
Hence it is with great difficulty that scientists attempt to forge new elements in large particle accelerators, colliding and combining existing elements into new ones (Chheda, 2023). The story of physical matter is just one aspect of the search for what “everything” is made of. Dark matter and dark energy – so named because they do not interact with light – are thought respectively to explain the anomalous rotation speeds of galaxies and to drive the accelerating expansion of the universe. We know remarkably little about them, given that together they make up around 95% of the total mass-energy content of the universe! Without a doubt, we have only just begun the journey to find out what makes up the universe around us. References Chheda, R. (2023, March 31). Can we add new elements to the periodic table? Science ABC. https://www.scienceabc.com/pure-sciences/can-we-add-new-elements-to-the-periodic-table.html Charley, S. (2012). How to make an element. PBS. https://www.pbs.org/wgbh/nova/insidenova/2012/01/how-to-make-an-element.html Free Animated Education. (2023, February 10). Perfecting the periodic table [Video]. YouTube. https://www.youtube.com/watch?v=7tbMGKGgCRA&ab_channel=FreeAnimatedEducation Jefferson Lab. (2012, November 20). The origin of the elements [Video]. YouTube. Ball, P. (2004). The elements: A very short introduction. Oxford University Press. Mee, N. (2020). Earth, air, fire, and water. In Oxford University Press eBooks (pp. 16–23). https://doi.org/10.1093/oso/9780198851950.003.0003 Rouvray, D. (2019). Dmitri Mendeleev. New Scientist. https://www.newscientist.com/people/dmitri-mendeleev
- Big Bang To Black Holes: Probing the Illusionary Nature of Time | OmniSci Magazine
Big Bang To Black Holes: Probing the Illusionary Nature of Time by Mahsa Nabizada 1 July 2023 Edited by Elijah McEvoy and Caitlin Kane Illustrated by Aisyah Mohammad Sulhanuddin Time is ubiquitous: it governs our daily lives, marking our existence from birth to death. We measure time in seconds, minutes, hours, days or years, using man-made tools like clocks and calendars, which reinforce the perception that it is tangible and objective. In fact, “time” is the most used noun in English (1). However, once we delve into the realms of science and philosophy, the true nature of time becomes far more elusive. Our personal perception of time is inherently subjective: our experience of it varies with our surroundings and our emotional and physical states. For example, while time may seem to drag on when we're bored or anxious, it can pass quickly when we're having a good time. Although we imagine time to be objective, it could be merely an illusion resulting from the limitations of our perception and the conditions of our observation. Exploring these questions requires scientific perspectives, so let's delve into the enigmatic physics of time. In three-dimensional space, locations are fixed, meaning that we can revisit the same place repeatedly; we may visit our favourite restaurant as many times as we wish. This is not the case with time. Time only moves forward, and we cannot go back to a previous moment; it belongs to the past and cannot be retrieved (2). This unidirectional nature of time is referred to as the arrow of time. Time is believed to originate from the Big Bang, the event that marked the beginning of the universe (3). From that point, time has progressed towards the present, where you are currently reading this article, and it continues to move into the future. The second law of thermodynamics, which concerns entropy, plays a crucial role in explaining the forward direction of this arrow of time (4).
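For readers who like formulas, the second law can be stated compactly. The following is a minimal sketch in standard physics notation (the symbols are textbook conventions, not taken from this article):

```latex
% Boltzmann's statistical definition of entropy:
% k_B is Boltzmann's constant, \Omega the number of microstates
% (microscopic arrangements) consistent with the macroscopic state.
S = k_B \ln \Omega

% The second law: the entropy of an isolated system never decreases.
\Delta S_{\text{isolated}} \geq 0
```

The arrow of time falls out of the first equation: high-entropy states correspond to vastly more microstates, so systems overwhelmingly evolve from ordered (low \(\Omega\)) towards disordered (high \(\Omega\)) configurations, never the reverse.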
Entropy is a measure of the disorder, uncertainty, or randomness in a system – here, a measure of the disorder present in the universe. At the moment of the Big Bang, the universe had low entropy, with matter and energy concentrated and organised. Since that initial state, however, the universe has been expanding, with matter spreading apart, leading to an increase in entropy and transforming the universe into a high-entropy system. The concepts of the arrow of time and entropy, guided by the second law of thermodynamics, allow for a distinction between the past and the future and play a pivotal role in the existence of life. Without entropy and the change it brings, there would be no discernible difference between events that occurred 1000 years ago and events happening in the present. Furthermore, the progression of life from birth to death can be understood through the growth of entropy, as governed by the second law. On the quantum level, however, the behaviour of particles becomes more complex. At the scale of individual particles, the concept of entropy is not as apparent, and time loses its obvious direction. While time appears to have a clear direction on the macroscopic level, the laws of physics that govern these particles do not distinguish between the past and the future: they describe the behaviour of physical systems without differentiating between temporal directions. The theory of general relativity, proposed by Albert Einstein, provides a fundamental framework for understanding the workings of spacetime (5). According to general relativity, the presence of mass or energy causes a distortion in the fabric of spacetime, which in turn affects the motion of other objects; gravity itself is described as the curvature of spacetime caused by mass and energy.
Essentially, spacetime can be thought of as a flexible fabric, warped by mass and energy and affected by motion. This theory has illuminated not just the behaviour of celestial bodies and the vast structure of the universe, but also the intricate interplay between space, time, and matter. Within Einstein’s theories, time dilation is a phenomenon that can be explored through a thought experiment known as the twin paradox (6). It demonstrates how the passage of time can differ between two individuals who experience different motion or gravitational forces. Time dilation is not limited to the twin paradox or space travel; it is fundamental to the relationship between time, motion, and gravity, has been experimentally confirmed, and plays a significant role in our understanding of the universe. Imagine you, Twin A, are stationary on Earth while your sister, Twin B, travels in a rocket at constant speed. Because of the rocket’s motion, Twin B’s clock will appear slower to Twin A: special relativity dictates that a moving clock ticks more slowly as seen from a stationary frame. Therefore, from Twin A’s perspective on Earth, time seems to pass more slowly on the moving rocket. However, from Twin B’s perspective, Twin A is the one in motion, and therefore Twin A’s clock appears slower to her. Each frame of reference indicates that the other's clock is slower, which seems contradictory. In fact, both observations are correct, because the laws of physics are the same in both inertial frames. Now, the question arises: who is actually younger? According to each twin's viewpoint, the other twin is younger; yet in reality only one twin can have aged less. Fortunately, there is a resolution to this paradox. When Twin B turns around to return to Earth, she undergoes acceleration, which breaks the symmetry between the twins: the simple reciprocal picture of special relativity applies only to non-accelerating frames.
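The amount of time dilation can be made quantitative. As a worked sketch using the standard textbook formulas (these equations and the numerical example are illustrative, not drawn from the article itself):

```latex
% Lorentz factor for a clock moving at speed v (c = speed of light):
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}

% Proper time \Delta\tau elapsed on the moving clock, versus
% coordinate time \Delta t measured in the stationary frame:
\Delta t = \gamma \, \Delta\tau

% Example: at v = 0.8c, \gamma = 1/\sqrt{1 - 0.64} = 5/3,
% so while 5 years pass for Twin A on Earth,
% only 3 years pass for Twin B aboard the rocket.

% Gravitational time dilation outside a mass, from general relativity
% (r_s is the Schwarzschild radius, r the distance from the centre):
\Delta\tau = \Delta t \, \sqrt{1 - r_s/r}
```

The last formula captures why clocks deeper in a gravitational well run slower, and why \(\Delta\tau \to 0\) as \(r \to r_s\): at the event horizon of a black hole, a distant observer sees time appear to stop.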
As a result, Twin B will be younger than her Earth-bound sister, Twin A, upon returning to Earth, due to the effects of her acceleration. To explain what happens during the period of acceleration, we turn to general relativity: by the equivalence principle, acceleration is locally indistinguishable from gravity, and clocks run slower in stronger gravitational fields than in weaker ones. During the acceleration phase, as Twin B’s rocket turns back towards Earth, her time genuinely runs slower, while the clock on Earth appears to her to run faster. This phenomenon is similar to the extreme time dilation experienced near the boundary of a black hole, known as the event horizon (7). From the frame of reference of an observer outside the black hole, time slows for an object approaching the event horizon, until it appears that time has stopped; an object falling into the black hole would appear frozen in place. Even though time governs our daily lives, and despite our ability to measure it with great accuracy, there is no definitive answer to what it truly is. From the subjective experiences of our daily lives to the enigmatic physics of the Big Bang and black holes, the illusory nature of time unveils an array of complexities, reminding us that this fundamental concept remains one of the most captivating mysteries of our existence. As Einstein famously wrote: "For us believing physicists, the distinction between past, present, and future is only a stubbornly persistent illusion” (8). References Study: “Time” Is Most Often Used Noun [Internet]. www.cbsnews.com. 2006. Available from: https://www.cbsnews.com/news/study-time-is-most-often-used-noun/ Davies P. The arrow of time. Astronomy & Geophysics [Internet]. 2005 Feb 1 [cited 2023 Jun 4];46(1):1.26–9. Available from: https://academic.oup.com/astrogeo/article/46/1/1.26/253257 University of Western Australia. Evidence for the Big Bang [Internet].
Evidence for the Big Bang. 2014 p. 1–4. Available from: https://www.uwa.edu.au/study/-/media/Faculties/Science/Docs/Evidence-for-the-Big-Bang.pdf Hall N. Second Law - Entropy [Internet]. Glenn Research Center | NASA. 2023. Available from: https://www1.grc.nasa.gov/beginners-guide-to-aeronautics/second-law-entropy/ Norton JD. General Relativity [Internet]. sites.pitt.edu. 2001 [cited 2022 Feb]. Available from: https://sites.pitt.edu/~jdnorton/teaching/HPS_0410/chapters/general_relativity/ Perkowitz S. Twin paradox | physics | Britannica. In: Encyclopædia Britannica [Internet]. 2020 [cited 2023 Jun 14]. Available from: https://www.britannica.com/science/twin-paradox Hadi H, Atazadeh K, Darabi F. Quantum time dilation in the near-horizon region of a black hole. Physics Letters B [Internet]. 2022 Nov 10 [cited 2023 Jun 11];834:137471. Available from: https://www.sciencedirect.com/science/article/pii/S0370269322006050 A Debate Over the Physics of Time | Quanta Magazine [Internet]. Quanta Magazine. 2016. Available from: https://www.quantamagazine.org/a-debate-over-the-physics-of-time-20160719/










