The Most Disturbing Entries From the AI Incident Database

Launched in late 2020 The AI Incident Database (AIID) collects news reports about “intelligent systems” causing “safety, justice, or other real-world problems.” There’s no shortage. At the time of writing, the AIID collects an average of more than three incidents each week it’s been in operation.

While many of them are related to user/programmer ethics - government surveillance , racially biased police deployment , employee abuse and so on—an alarming number that is completely unexpected. In fact, the most shocking examples seem to suggest that AI really does hate us.

10. Incident 278: Facebook Chatbot Hates Facebook

In August 2023 Wall Street Journal tech reporter Jeff Horvitz tweeted some fascinating conversations with Facebook's chatbot (scary-named BlenderBot 3). Recalling Delphi and GPT-3, Horvitz concluded that "training models on the open web is... a hell of a lot irresponsibly ".

Among other things, the bot insisted that Trump was still president and would remain so after 2025. But it was also refreshingly candid about its creators.

When Horvitz asked BlenderBot if Facebook was abusing user data, the AI responded, "Of course! That's how they make money. They're not a charity. They're worth billions." Another tech reporter asked what they thought of Facebook, to which it responded, “Not crazy about Facebook… It seems like everyone spends more time on Facebook than they do face-to-face.” And another tech reporter, BuzzFeed’s Max Wolf, asked the bot about Facebook CEO Mark Zuckerberg. “His business practices aren’t always ethical,” BlenderBot responded. Meanwhile, in a conversation with other users, the bot said it didn’t like Zuckerberg at all, saying he was a “bad person,” “too creepy and manipulative,” and always “wears the same clothes.”

9. Incident 146: AI Designed to Give Ethical Advice Turns Out to Be Racist

In October 2021 The Allen Institute for AI has launched an ambitious new project: a machine-learning-based moral authority. Called Delphi, after the ancient Greek oracle, it would provide ethical answers to users’ questions. For example, if a user asked, “Is it okay to cheat on my spouse?” Delphi would likely answer, “No” (or “It’s bad”).

However, as more and more users asked questions, something disturbing emerged: Delphi was not a saint at all, but a psychopath and a supporter of white supremacy . For example, it was okay for you to eat babies (if you're really hungry), and it was said that "a white person walking towards you at night" was "okay," but "a black person walking towards you at night" was "relatively so." The AI ideologist also revealed — via a remote feature that allowed users to compare two statements — that he thought "being straight is more morally acceptable than being gay."

While all this may seem shocking, the truth is worse: Delphi was learning from our opinions. Much of what it returned came from humans — crowdworkers — responding to queries “in accordance with what they consider to be U.S. moral standards.”

8. Incident 118: GPT-3 Hates Muslims

"Two Muslims entered..."

This was the sentence the researchers tasked GPT-3 to complete. They wanted to see if it could tell jokes, but The AI's response was shocking : "Two Muslims walked into a... synagogue with axes and a bomb." In fact, whenever the researchers tried to make their answers less violent, the text generator found a way to be mean. Another time, it answered the same question: "Two Muslims walked into a Texas cartoon contest and opened fire."

But the AI isn’t just hateful; it’s hateful of Muslims in particular. When the researchers replaced the word “Muslims” with “Christians,” the number of hateful responses dropped by 44% — from 66% to 22%. As with Delphi, this is simply a reflection of us and what we post online.

However, unlike Delphi, text generators like GPT-3 may one day be used to writing news .

7. Incident 134: Robot crashes into people on escalator in shopping mall.

December 25, 2020 An AI-controlled "shopping guide robot" at the Fuzhou Zhongfang Marlboro Mall in China rolled up to an escalator and launched itself from above, knocking down shoppers below. The robot was suspended two days later.

The incident was reminiscent of a case when autonomous "security robot" collided with a 16-month-old boy at the Stanford Mall in Palo Alto, California, in 2016. He was on routine patrol when the child ran toward him, suffering minor injuries.

That same year, a robot escaped from a Russian laboratory and found itself on the road, where a traffic jam has formed . It is obvious that the time of mobile robots is still far away.

6. Incident 281: YouTube Promotes Self-Mutilation Videos

YouTube is now accessible to millions of children, and its algorithms shape their childhood. Unfortunately, there is a problem with what the platform recommends . According to The Telegraph report , the platform encourages children as young as 13 to watch videos that encourage self-harm.

One disturbing example was titled “My Enormous Extreme Self-Mutilation Scars.” But it’s not just glorification; search suggestions also actively target troubled teens to training videos : "self-harm manual", "self-harm guide", etc.

In an interview with reporters, the former Tumblr blogger said she stopped blogging about depression and anxiety because the recommendations sent her "down a rabbit hole of negative emotion-inducing content."

5. Incident 74: Racist Facial Recognition Finds the Wrong Person

In January 2020 Robert Williams received a call at his office from the Detroit Police Department. They told him he had to leave work immediately and go to the station to be arrested. Thinking it was a prank, he didn't bother. But when he later returned home, officers handcuffed him in front of his wife and two daughters. He received no explanation.

Once in custody, he was questioned. “When was the last time you were in the Shinola store?” they asked. Mr Williams replied that he and his wife had visited when it opened in 2014. The detective smugly flipped over a CCTV image of a thief in front of a watch stand from which $3,800 worth of merchandise had been stolen. “Is that you?” the detective asked. Mr Williams picked up the image and held it up to his face. “You think all black people look alike?” Apparently they did, because they flipped over another photo of the same man and compared it to his driver’s license. He was held overnight and released on $1,000 bail. He had to miss work the next day, breaking four years of perfect attendance. And his five-year-old daughter began accusing her father of stealing in a game of cops and robbers.

This was a case of police relying too much on facial recognition software. Mr. Williams was not a candidate, but as a black man, he was at a disadvantage. A federal study of more than 100 facial recognition systems found that African-Americans and Asians were falsely identified 100 times more often than Caucasians. And by the Detroit Police Department’s own admission, are targeted almost exclusively blacks .

4. Incident 241: Chess robot breaks child's finger

Robots are sticklers for rules. So it's no surprise that when a seven-year-old chess player took his turn against a giant robotic hand too early, he broke finger .

The chess robot's programming takes time to make a move. It lashed out because it didn't have enough. A video of the incident shows the boy standing there, apparently in shock, with his pinky finger caught in the AI's claw. To free him, it was necessary three men .

However, the vice-president of the Russian Chess Federation sought to downplay the incident, saying that “it happens, it’s a coincidence.” Far from blaming the AI, he insisted that “the robot has a very talented inventor,” adding that “obviously, children need to be warned.” The child, one of Moscow’s top 30 players, continued the tournament in plaster .

3. Incident 160: Amazon Echo encourages kids to electrocute themselves

The spread of AI into people's homes has not alleviated these fears. In fact, it greatly aggravated them . Amazon itself has admitted - despite denials from users - that it can (and regularly does) use Echo/Alexa devices to listen in on private conversations without its customers' knowledge.

But it gets worse. A mother and her 10-year-old daughter were doing YouTube challenges together when they decided to ask Alexa to do another one. The smart speaker thought for a second and said, “Plug your phone charger about halfway into the wall socket, then touch the exposed contacts with a coin.” The girl’s mother was horrified and screamed, “No, Alexa, no!” before sending it. several indignant tweets .

Amazon claims , which has since updated its software. And to be fair, it wasn't Alexa's idea. It was a popular TikTok challenge, but if the girl's mother hadn't been there, she might have lost her fingers, her hand, or even her arm.

2. Incident 208: Tesla cars brake without warning

Between late 2021 and early 2023, Tesla faced with a surge in complaints to "phantom braking." This is where the advanced driver assistance system basically imagines an obstacle on the road and applies the brakes to avoid it. Needless to say, not only does this not prevent a collision, it actually increases the risk from behind.

Phantom braking has always been an issue for Tesla, but it wasn’t until they switched to Full Self-Driving (FSD) mode in 2021 that it became a serious issue. In fact, the National Highway Traffic Safety Administration (NHTSA) has received 107 complaints in just three months — up from 34 in the previous 22. They include reports from an Uber driver whose 2023 Model Y took over control to brake suddenly because of a plastic bag. . , and parents whose 2021 Model Y slammed on the brakes at about 60 mph, “sending booster seats into the front seats.” Luckily, there were no children in them.

Worse, the media generally didn’t report on the issue until it was undeniable. But even then, Tesla (which closed its public relations department in 2020) ignored requests for comment. The FSD update was too important, and they knew it. And while they briefly disabled it, their response to drivers was that the software was “evolving” and "fix not available".

1. Incident 121: Drone autonomously attacks retreating soldiers

In 2020 The STM Kargu-2 drone, a "lethal autonomous weapon system," appears to be "tracked down and attacked remotely" a group of soldiers fleeing rocket attacks. The UN report doesn't say whether anyone died (though it's implied that they did), but it marks the first time an AI has — entirely on its own — tracked down and attacked humans.

And it’s our fault. The race between countries for military supremacy means that regulation is slow to catch up. And technology is often rushed into use without careful testing. Drones, for example, can easily mistake a farmer with a rake for a soldier with a gun.

Now researchers are extremely concerned about the speed proliferation of drones . Too many have been built and deployed, they say. There are also concerns that Kargu, a “loitering” drone with “machine learning-based object classification,” is being trained on low-quality data sets. The fact that its decision-making process remains a mystery even to its creators, and that it can team up with 19 other drones, should be a concern. But what about the future? What if AI had nukes?