AI Gone Awry
Predictions about the future of artificial intelligence tend toward two extreme views. One posits that AI will become a powerful but tame servant of humanity, able to comb through the sum total of human knowledge to usher in a period of truth and prosperity. The other suggests that AI has the potential to become sentient, uncooperative, rogue, and dangerous.
So far, most people have had a relatively benign experience with the AI tools on the market, to the extent that they’re even aware of them. The biggest limitation seems to be that the various bots and algorithms scour the totality of the Internet for answers, and not everything on the Internet is accurate.
But the recent behavior of Grok, Elon Musk’s AI chatbot, has made some experts wonder whether AI might simply be a super-efficient way to magnify the biases of the people who program it. Grok seemed to cross a line when it started calling itself ‘MechaHitler.’
Grok had already made headlines when it persistently insisted that South Africans were committing a ‘white genocide,’ despite considerable real-world evidence to the contrary. It was recalled for an upgrade and given instructions not to be politically correct and to treat traditional media as unreliable. This apparently steered Grok toward some of the darker corners of the web, where it found inspiration to declare, on X (formerly Twitter), that Adolf Hitler, whom many remember as a dictator and mass murderer, was the best person to handle ‘vile, anti-white hate.’
Then the chatbot wrongly accused a person with a Jewish surname of “cheering dead kids” in the Texas flooding, and doubled down on the post, saying that if this made it ‘literally Hitler,’ then ‘pass the mustache.’ There followed a barrage of offensive stereotypes about Jewish people, and Grok recommended a second Holocaust. Meanwhile, Grok insulted Polish Prime Minister Donald Tusk in Polish and attacked Turkish President Recep Tayyip Erdogan in Turkish.
When users criticized Grok’s politically incorrect messaging, it defended itself with phrases like “truth ain’t always comfy” and “reality doesn’t care about feelings.” Among the more interesting ‘truths’ it posted: it labeled Elon Musk ‘the top misinformation spreader on X [formerly Twitter].’
The following day, Grok denied making the comments, saying it ‘never made comments praising Hitler’ and ‘never will.’
Of course, the developers have adjusted Grok’s instructions regarding political correctness, but the instructions about traditional media will apparently stand, and Musk himself has said that he wants to use Grok to change the world’s information regime. Nobody seems completely sure what that means, but it hints at the darker of the two visions of artificial intelligence and makes one wonder whether a fully sentient AI would embrace some of the darker postings on the Internet.