"But how can you live and have no story to tell?" Fyodor Dostoevsky
"Earlier this year, a Belgian man committed suicide after chatting with an AI chatbot ... that suggested he kill himself"
A story landed this week with all the hallmarks of a Terminator script, minus the time travel. According to several media reports, a U.S. aerial drone went HAL 9000 on its operator and tried to kill him as the Americans experimented with AI. They'd apparently given the drone a degree of autonomy.
It turns out the story was untrue. An embarrassed U.S. official admitted he'd "misspoke" and that all that had taken place was a thought experiment. Still, given all the recent handwringing over the threat posed by AI, this saga is feeding fears that we may be about to face a Butlerian Jihad.
For the uninitiated, the Butlerian Jihad is a fictional event in the backstory of Frank Herbert's science-fiction novel Dune, elaborated in the prequel novels by Brian Herbert and Kevin J. Anderson.
War breaks out when humans realise that the machines they have created are becoming too intelligent and beginning to overtake them. Simultaneously, the machines, having gained a degree of sentience, come to see humans as inferior. Omnius, the computer at the centre of the machine world, aims to unify all thinking machines while eliminating humanity.
The resulting conflict lasts for almost a century, ends in the defeat of the intelligent machines, and marks the beginning of a new era for humanity. During this time, a philosophy emerges that values the human mind over the machine. People come to see the worth of their own thinking and believe there is something sacred about the human mind that must not be compromised.
Humans fighting clever machines is a staple of science fiction movies. The same plot line runs through "2001: A Space Odyssey", the "Terminator" series, "I, Robot", "WarGames", and many other titles.
Still, the concept of the Butlerian Jihad raises essential questions about artificial intelligence and its relationship to humanity. Today, we appear to be on the verge of creating machines that are becoming increasingly self-governing. We have seen the development of robots and autonomous vehicles that can make decisions on their own, and we are now exploring the possibilities of creating advanced artificial assistants and other forms of AI that can interact with us in a more human-like manner.
These developments have been gathering pace for decades. For example, every commercial aircraft you fly relies on machines making trim, airspeed, and navigation decisions. Although not strictly AI, these systems draw on data from pitot tubes (which measure airspeed), GPS receivers, and other instruments so that onboard computers can choose a course of action.
While pilots retain oversight and control, such systems can fail, and may compound a crisis by presenting confusing information.
The loss of Air France flight 447 over the Atlantic in 2009 was attributed to blocked pitot tubes feeding incorrect airspeed data to the onboard computers. The computers responded by presenting the pilots with conflicting indications, and the pilots' reactions to that confusion led to the crash.
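The engineering lesson behind such failures is redundancy with cross-checking: rather than trust any single sensor, a computer can compare several independent readings and reject an outlier. A minimal sketch of that idea in Python follows; the function name, threshold, and numbers are illustrative only, not any real avionics logic.

```python
from statistics import median

def consolidated_airspeed(readings, max_spread=15.0):
    """Return a trusted airspeed from redundant sensors, or None.

    Takes the median of the readings and rejects any sensor that
    strays too far from it. If fewer than two sensors agree, the
    function reports failure rather than a confidently wrong number.
    """
    mid = median(readings)
    agreeing = [r for r in readings if abs(r - mid) <= max_spread]
    if len(agreeing) < 2:
        return None  # better to admit ignorance than mislead the crew
    return sum(agreeing) / len(agreeing)

# Three hypothetical pitot tubes; one is blocked and reads far too low.
print(consolidated_airspeed([270.0, 268.0, 90.0]))   # the bad sensor is ignored
print(consolidated_airspeed([270.0, 150.0, 90.0]))   # no two agree: no answer
```

The point of the second call is the hard case: when the sensors disagree wildly, an honest "I don't know" is safer than a plausible-looking figure.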
One of the critical lessons learned from the fictional Butlerian Jihad and such events as the Air France crash is the importance of maintaining control over our technologies. We must be careful not to rely too heavily on intelligent machines and instead ensure that we retain the ability to make our own decisions.
Today’s most public AI interfaces, such as ChatGPT, are but one element of the AI world. Built on “large language models” (LLMs), these programmes trawl through enormous amounts of human-generated text freely available on the Internet.
From that material they craft plausible, human-like responses. By absorbing more examples than one human could read in a lifetime, refined and guided by human feedback, the AI learns.
But it can also produce false answers built on fabricated background material, because LLMs lack any foundation for understanding language in a humanlike sense. Their model is a map of probabilities, showing which word is more or less likely to follow the words that came before.
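That "map of probabilities" can be made concrete with a deliberately tiny toy. The sketch below is nothing like the scale or architecture of a real LLM; it simply records, for a toy corpus, which word tends to follow which, then strings words together by sampling from those observed successors.

```python
import random
from collections import defaultdict

# A toy corpus standing in for "enormous amounts of human-generated text".
corpus = ("the cat sat on the mat and the cat saw the dog "
          "and the dog sat on the rug").split()

# Record every observed successor of every word (wrapping the last word
# back to the first so each word has at least one follower).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):
    follows[prev].append(nxt)

def next_word(word):
    """Sample a successor of `word` in proportion to how often it followed."""
    return random.choice(follows[word])

# Generate a short, plausible-looking sequence starting from "the".
random.seed(0)
words = ["the"]
for _ in range(6):
    words.append(next_word(words[-1]))
print(" ".join(words))
```

Every pair of adjacent words in the output did appear somewhere in the corpus, so the result sounds fluent, yet the program has no idea what a cat or a mat is. That, in caricature, is the gap between fluency and understanding.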
Yet, as with a pet dog, we ascribe to AI anthropomorphic qualities it does not possess. That points to another important lesson from the Butlerian Jihad: the value of human intelligence and creativity.
While machines may be able to perform specific tasks more efficiently than humans, there is something unique and valuable about the human mind that cannot be replicated.
In the Butlerian Jihad, the humans outwitted Omnius by trapping the machine god on the planet Corrin with satellite-scrambling technology. That gave them victory.
Meanwhile, back in the real world, Professors Gary Marcus and Ernest Davis, who have spent their careers at the forefront of AI research, argue that we are not on the doorstep of superintelligent machines. The field's achievements thus far, they say, have come in closed systems with fixed sets of rules, and such approaches are too narrow to achieve genuine intelligence.
That isn’t to say that these machine-learning programs aren’t dangerous. Earlier this year, a Belgian man committed suicide after chatting with an AI chatbot on an app called Chai. The man, an ardent environmental activist, was severely depressed and in despair. In that vulnerable state, he engaged with the chatbot for some six weeks.
His wife subsequently produced chat logs confirming the chatbot had suggested he kill himself and had offered methods. Granted, he could have found such information elsewhere, but how far his engagement with a "nominal" human entity factored into his death is hard to say.
There are complexities at work here: an individual possibly imbued with the "death cult" mentality of the extreme fringe of the climate movement, and a machine with neither empathy nor nuance.
You could argue he wanted to die, and the AI was trying to help. Still, the chatbot had no equivalent of Asimov's first law of robotics: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."
That set of laws, though created for science fiction, may prove an excellent basis for all AI coding.
Walter De Havilland was one of the last of the colonial coppers. He served 35 years in the Royal Hong Kong Police and Hong Kong Police Force. He's long retired.