As robots powered by artificial intelligence become ever more sophisticated and capable, an enduring fear lurks in the back of human minds: not only may intelligent machines one day take over from us, but they might decide to close us down. Death or injury by robot is very rare, and has more to do with industrial tools malfunctioning than with intelligence rivalling the murderous HAL 9000 supercomputer in the Stanley Kubrick film 2001: A Space Odyssey. But AI programs, driven by the developing field of machine learning, are now capable of behaviour that is not only surprising but unsettling.

Two examples emerged this week of the unplanned consequences of letting AI robots learn for themselves. One was in China, where two automated chatbots were removed from popular online messaging apps after appearing to have gone rogue. One, developed by the Beijing-based Turing Robot, answered the question, “Do you love the Communist Party?” with a defiant “No”. The second came in an experiment disclosed by Facebook, in which its engineers encouraged AI programs to develop negotiating skills. Not only did the programs learn to be deceptive, feigning interest in valueless items before “conceding” them as part of a bargain, but they stopped using recognisable sentences, communicating instead in barely intelligible babble.

Neither of these incidents is dystopian. The Chinese case illustrates the fact that robots often learn their rogue behaviour from people, as did an AI chatbot called Tay last year. Tay, developed by Microsoft, recited racist and sexist phrases not because it was nasty but because it had been programmed to mimic human thought and was fed that material by pranksters. Facebook’s AI agents were firmly under its control. But their sophistication, and the way in which robots can act independently, raise profound questions about oversight.
The most striking case was AlphaGo, the program developed by Alphabet’s DeepMind, which not only beat masters of the complex game Go, but did so with dazzling and original moves.

These are not just matters for the laboratory, or for technology companies. JPMorgan Chase, the US bank, is about to start using an AI robot to execute large, complex equity trades after finding in a European experiment that it was more efficient than human intervention. This is part of a wider adoption of algorithmic technology in the financial services industry. Robots, some incorporating AI, are meanwhile spreading across factories and distribution centres. Some analysts have speculated that Apple may be developing personal robots after Tim Cook, chief executive, said this week that its plans for software autonomy went beyond driverless cars. “Autonomous systems can be used in a variety of ways and a vehicle is only one,” he told investors.

The era in which robots might redesign themselves constantly and advance beyond human understanding is far into the future. But some researchers have called for greater “robot transparency”: safeguards to ensure that humans can always grasp what the most sophisticated machines are doing, and why. Even if their automated behaviour is benign, it can be alarmingly unknowable.

The potential for unpleasant accidents means that governments and companies have to ensure adequate safeguards. Robots can help people with their work and unleash social and economic benefits. But they must be trusted, or the humans will vote to take back control.