Anticipation by Neil Taylor
AI is certainly no longer hype but something that both intrigues and concerns the world. Author Neil Taylor offers some insight into AI as it features in his novel, Anticipation.
Myth Busting AI
My name is Neil Taylor, and I recently wrote a novel called Anticipation in which Artificial Intelligence (AI) plays a key role. This doesn’t make me an expert, but it has been interesting to observe the differences between conversations in the AI community and the way AI is portrayed in the media. So, when I was asked to write a piece on popular AI misconceptions, a few things sprang to mind.
At its core, AI has the ability to ‘learn’ from experience. Traditional computing applications perform set routines: you push a button and, for a given set of conditions, they will churn out the same result; there is no ability to ‘learn’. With AI, however, you give the application a ‘goal’, and rather than just repeating a set of steps predefined by a human, it ‘learns’ how to achieve that goal. The misconceptions arise when ‘humanisation’ starts to creep in. For instance…
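For readers who like to see the distinction concretely, the contrast between a fixed routine and goal-directed ‘learning’ can be sketched in a few lines of Python. This is a toy illustration invented for this article, not anything from the novel: the fixed routine applies a rule a human wrote, while the ‘learner’ is only given example inputs and outputs and adjusts its own numbers to match them.

```python
# Traditional program: a human wrote the rule, and the same input
# always produces the same result. Nothing is ever 'learned'.
def fixed_routine(x):
    return 2 * x + 1

# Toy 'learner': it is given a goal (reproduce some example outputs)
# and repeatedly nudges its own parameters to get closer to that goal.
# This is a minimal sketch of gradient descent, the idea behind much
# of modern machine learning.
def learn(examples, steps=2000, lr=0.01):
    w, b = 0.0, 0.0  # the learner starts knowing nothing
    for _ in range(steps):
        for x, target in examples:
            error = (w * x + b) - target
            w -= lr * error * x  # nudge parameters to shrink the error
            b -= lr * error
    return w, b

# The learner is never told the rule 'y = 2x + 1'; it works it
# out from examples alone.
examples = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = learn(examples)
print(round(w, 2), round(b, 2))  # ends up very close to 2.0 and 1.0
```

The point of the sketch is the division of labour: in the first function the intelligence is all the programmer’s; in the second, the programmer supplies only the goal and the feedback loop.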
Confusing robots with AI: News articles about AI often feature a human-looking robot performing a task humans would normally do. Hollywood’s interpretation often ends up with humanoid ‘Terminator’ style robots leaving a wake of death and destruction in their path (after all, where would the story be in a robot that quietly makes you a cup of tea?). However, most AIs today are disembodied applications that find patterns in large data sets. These are the applications that decide whether you will get a car loan or a mortgage, or what you see on social media. Robotics and AI are complementary but separate fields. Simplistically, robotics is about simulating human movement, while AI simulates human thinking. Yes, you can apply AI to robots, but only as much as you can apply AI to anything from playing chess to writing an email. Most robots follow pre-programmed routines to perform repetitive tasks. When you see assembly lines of robots putting cars together, their movements look eerily human, but they are following pre-programmed routines; there is no real intelligence: no decision-making, no problem-solving.
Assigning human characteristics to AI (anthropomorphising): One of the biggest distractions in AI discussions is when people start assigning human qualities to AI, such as feelings, emotions, and consciousness. We tend to look at everything through the lens of being human and attach human traits to non-human entities. We used to think bad weather was the gods expressing their ‘anger’. We stub our toe and think the bed is a vindictive, malicious force out to get us. So, when scientists tell us AI can ‘mimic human intelligence’, we think of our own intelligence, which comes with all the baggage of human feelings and emotions. However, there is no reason to think AI will spontaneously develop the same feelings, emotions, and motivations that humans developed to survive on the plains of Africa 300,000 years ago. Consider, for instance, our deeply ingrained survival instinct: AI has not been developed in an environment where only organisms with a strong survival instinct get to pass on their genes to the next generation. Why should an AI developed to answer questions about the weather and remind you to pick up the dry cleaning develop a survival instinct? Or why should it develop the feelings and emotions that humans developed to live in large social groups?
There are many valid concerns about AI and its direction. However, we need to be careful that we are worrying about the ‘right’ problems, and ultimately we probably need to be more worried about the people and companies wielding AI than about AI itself.
Views expressed do not necessarily reflect those of the Federation.