
The Rise of The Planet of AI

By Ashley Mamuko

When discussing AI, our minds instinctively jump to fears of sentience and robotic uprising. But is our focus misplaced on an “inevitable” humanoid future when AI has already become ubiquitous and undetectable in our lives?

Edited by Hamish Payne & Katherine Tweedie

Issue 1: September 24, 2021


Illustration by Aisyah Mohammad Sulhanuddin

On 19 August 2021, Tesla announced a bold project at its AI Day: the company plans to introduce humanoid robots for consumer use. These machines are expected to perform basic, mundane household tasks and slot easily into our everyday lives, with prototypes expected to launch by 2022. With this announcement, the AI future seems to be closing in; no longer do we stand idle, waiting for an inevitable humanoid-filled future.

It seems inevitable that our future will include AI. We have already familiarised ourselves with this emerging technology through the media we enjoy: WALL-E, Blade Runner, The Terminator, and Ex Machina are only a few entries in an endless list of AI-related films spanning decades, detailing both our apprehension and our acceptance. Most of these movies portray machines as sentient yet intrinsically evil, bent on human destruction. But to better understand the growing field of AI, it is worth briefly introducing its history and development before turning to the fears played up in Hollywood blockbusters.

The first fundamental interpretations of artificial intelligence span a vast period of time. Perhaps the earliest may be attributed to the Catalan poet and theologian Ramon Llull, whose 1308 work Ars generalis ultima (The Ultimate General Art) advanced a paper-based mechanical process for creating new knowledge from combinations of concepts; Llull aimed to deduce logical religious and philosophical truths by systematic, almost numerical means. In 1642, the French mathematician Blaise Pascal invented the first mechanical calculating machine, the first iteration of the modern calculator (1). The Pascaline, as it is now known, could only add and subtract values using a dial-and-spoke system (2). Though neither idea matches our modern perception of AI, both pushed mechanical processes beyond simple calculation, foreshadowing the use of machines to perform human cognitive functions.
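To get a flavour of Llull's mechanism, consider a toy rendition in Python (my own illustration, not a reconstruction of the Ars generalis ultima; the concept list is merely indicative):

```python
# A toy rendition of Llull's idea: mechanically combining a fixed set
# of basic concepts to enumerate candidate "truths" for contemplation.
from itertools import combinations

concepts = ["goodness", "greatness", "eternity", "power"]  # illustrative

for a, b in combinations(concepts, 2):
    print(f"{a} + {b}")  # each pairing prompts a derived proposition
```

The point is not the output but the method: the combinations are generated mechanically, with no human judgement required until the results are interpreted.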

Not until the 1940s and early 1950s did we finally obtain the means for more complex data-processing systems. With the introduction of computers, algorithms offered a more streamlined way of storing, computing, and producing information. In 1943, Warren McCulloch and Walter Pitts introduced the idea of artificial neural networks in their paper “A Logical Calculus of the Ideas Immanent in Nervous Activity” (3). This presented the notion of computers behaving similarly to the human mind, and laid the groundwork for the subfield now known as “deep learning”.
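The unit McCulloch and Pitts described can be sketched in a few lines of modern Python (a minimal illustration; the particular weights and thresholds are mine, not from the 1943 paper):

```python
# A McCulloch-Pitts-style neuron: it "fires" (outputs 1) only when the
# weighted sum of its binary inputs reaches a set threshold.
def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With unit weights, a threshold of 2 makes the neuron compute logical
# AND; lowering the threshold to 1 turns the same unit into logical OR.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", neuron((a, b), (1, 1), 2),
                    "OR:", neuron((a, b), (1, 1), 1))
```

The insight was that networks of such simple threshold units could, in principle, compute any logical function, hinting at a mechanical basis for thought.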

In 1950, Alan Turing proposed a test of whether humans could differentiate between human and machine behaviour. The Turing Test (which Turing himself called the Imitation Game) asked participants to judge whether the dialogue they were engaging in was with another person or with a machine (4).

Despite these breakthroughs, the term “artificial intelligence” was not coined until 1955, by John McCarthy. McCarthy, along with many other budding experts, would go on to hold the famous 1956 Dartmouth College workshop (5). This meeting of a few scientists would later be pinpointed in history as the birth of the AI field.

As the field continued to grow, public concerns were raised alongside a boom in science-fiction literature and film. The notorious 1968 movie 2001: A Space Odyssey so shaped public perception of the field that an “AI winter” set in through the 1970s: very little notable progress was made, as fear dried up funding (6). Only after time had passed, and further advances had been made in algorithmic technology, did the field regain attention with the notable Deep Blue chess match. In May 1997, the Deep Blue machine beat world chess champion Garry Kasparov, an event that quietly ushered in talk of a “decline in human society” at the hands of the machine.

Fast forward to today: AI has advanced by leaps and bounds, achieving far more sophisticated algorithms and machine-learning techniques.

To further understand the uses of AI, I interviewed Dr Liz Sonenberg, a professor in the School of Computing and Information Systems at The University of Melbourne and Pro Vice-Chancellor (Research Infrastructure and Systems) in Chancellery Research and Enterprise. She is an expert in the field with a long record of research.

"Machine learning is simply a sophisticated algorithm to detect patterns in data sets that has a basis in statistics."

Algorithms of this kind have been implemented across our daily tech encounters: AI is the driving force behind Google Maps navigation as well as voice control, and it can easily be found almost anywhere. “Just because these examples do not exhibit super intelligence, does not mean they are not useful,” Dr Sonenberg explains.

Dr Sonenberg suggests that the real problem with AI lies in its fairness. These “pattern generating algorithms” at times “learn from training sets not representative of the whole population, which can end up with biased answers.” A flawed training set puts a flawed system in place, one that can harm certain demographics and sway consumer habits.
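To see how an unrepresentative training set produces biased answers, here is a minimal, self-contained sketch (my own illustration, not drawn from Dr Sonenberg's work) using synthetic data and scikit-learn:

```python
# Train a simple classifier on data dominated by one synthetic group,
# then measure its accuracy on each group separately.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic records: the label depends on a feature whose baseline
    ('shift') differs between demographic groups."""
    x = rng.normal(loc=shift, scale=1.0, size=(n, 1))
    y = (x[:, 0] > shift).astype(int)  # the true rule is group-relative
    return x, y

# Training set: 950 records from group A, only 50 from group B
xa, ya = make_group(950, shift=0.0)
xb, yb = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([xa, xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on fresh, balanced samples from each group
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    x_test, y_test = make_group(1000, shift)
    print(name, "accuracy:", round(model.score(x_test, y_test), 2))
```

Run as written, this typically reports high accuracy for the well-represented group A and markedly lower accuracy for group B: the model has learned group A's baseline and applies it to everyone.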

With AI-aided advice, the explanations behind outcomes and decisions are not supplied either: algorithms can mechanically produce an output, but cannot explain it. As more high-stakes decisions are entrusted to the reliability of AI, the issue of flawed algorithms becomes more pronounced.

In my interview with Dr Sonenberg, not once were fears of super-intelligence, robot uprisings, and the like brought up...

Armed with this new-found knowledge of AI's current concerns, I conducted another interview, with Dr Tim Miller, a Professor of Computer Science in the School of Computing and Information Systems at The University of Melbourne, and Dr Jeannie Paterson, a Professor teaching subjects in law and emerging technologies in the School of Law at The University of Melbourne. Both are also Co-Directors of the Centre for Artificial Intelligence and Digital Ethics (CAIDE).

As we began the interview, Dr Miller echoed that AI “is not magic” and simply applies “math and statistics”.

Dr Paterson was quick to point out that anti-discrimination laws have long been in place, but as technology evolves and embeds itself further into the public domain, it must be scrutinised. Deployed AI can easily cause harm precisely because the systems are not public, making sources of harm difficult to identify and causally attribute.

The prospect of biased algorithms creates a genuine dilemma. Dr Miller elaborated on the use of AI in medical imaging at private hospitals: as private hospitals tend to attract a certain echelon of society, the training set is not wholly representative of the greater population. “A dilemma occurs with racist algorithms… if it is not used [outcomes] could be worse.”

When the idea of a super-intelligent robot emerging in the future was brought into the conversation, the two didn't seem very impressed.

“Don’t attribute superhuman qualities [to it],” says Dr Paterson.

Dr Miller states that the trajectory of AI's future is difficult to map. Predictions about AI's progress have been made before, but the predicted abilities tend to arrive much later than expected… easily decades later. The idea of super-intelligence also poses the question of how to define intelligence. “Intelligence is multidimensional, it has its limits,” says Dr Miller.

In this mystical future world of AI, a distinction is placed not just on “what will machines be able to do but what will not have them do,” states Dr Miller. “This regards anything that requires social interaction, creativity and leadership”; so the future is aided by AI, not dictated by it.

In the nearer future, however, some very real concerns are posed: job security, influence on consumer habits, transparency, legal approaches, and accountability, to name only a few.

With more and more jobs being replaced by machines, every industry is at stake. “Anything repetitive can be automated,” says Dr Miller. But this is not inherently negative, as more jobs will be created to support the use of AI, and not every function of a job can be replaced. Dr Paterson explains with the example of radiology: AI is able to diagnose and interpret scans, but a radiologist does more than just diagnose and interpret on a daily basis. “The AI is used to aid in the already existing profession, not simply overtake it.”

Greater transparency is also needed in how AI uses our data. “It shouldn't be used to collect data unlimitedly,” says Dr Paterson. “Is it doing what's being promised, is it discriminating [against] people, is it embedding inequality?” With this in mind, Dr Paterson suggests that legal authorities need to be educated on how to approach topics regarding AI. “There needs [to be] better explanation… [We] need to educate judges and lawyers.”

The notorious Facebook-Cambridge Analytica scandal of 2018 raised the big question of accountability. The scandal involved Cambridge Analytica's unauthorised use of data from 87 million Facebook users in support of the Trump campaign. It brought to light how our data can be exploited without consent and used to influence our behaviour; this particular example appeared to sway an American presidential election.

Simply put, our information can easily be exploited and passed to data-analytics firms to further influence our choices. This invites the defence that apps “merely provide a [service], but people use [these services] in that way,” as Dr Miller puts it; the blame is thereby falsely shifted onto users for the spread of misinformation. The onus, however, should lie with social networking sites to give users greater transparency over how their data is used, along with adequate protection of that data.

To be frank, a future of robotic humanoid AI integrating seamlessly into human livelihoods will not occur within our lifetimes, or potentially even our grandchildren's. The forecast seems at best unpredictable and at worst unattainable, given the complexity of what constitutes full “sentience”. But this does not mean AI lies dormant in our lives. The underlying technology, grounded in computing, statistics, and information systems, lays the groundwork for most of the transactions we conduct online, whether monetary, social, or otherwise. AI and its promise should not be shunted aside because of the misleading media surrounding its popularised definition and “robot uprisings”, but rather taught more broadly to all audiences.

So perhaps Elon Musk's fantastical ideas of robotic integration will not come true by 2022, but the presence of AI in modern technologies should not go unnoticed.

References:
1. "A Very Short History of Artificial Intelligence (AI)." 2016. Forbes. https://www.forbes.com/sites/gilpress/2016/12/30/a-very-short-history-of-artificial-intelligence-ai/?sh=38106456fba2.

2. “Blaise Pascal Invents a Calculator: The Pascaline.” n.d. Jeremy Norman's HistoryofInformation.com. https://www.historyofinformation.com/detail.php?id=382.

3, 4, 6. “History of Artificial Intelligence.” n.d. Council of Europe. https://www.coe.int/en/web/artificial-intelligence/history-of-ai.

5. Smith, Chris, Brian McGuire, Ting Huang, and Gary Yang. 2006. “The History of Artificial Intelligence.” Course paper for History of Computing, University of Washington. https://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf.
