A woman turned to ChatGPT before poisoning 3 men in South Korea, police say

SEOUL, South Korea — “What happens if you take sleeping pills and alcohol together?”

“How much is dangerous if you take them together?”

“Could you die?”

Those are the questions police in South Korea say Kim So-young asked ChatGPT shortly before she gave two men a mix of alcohol and benzodiazepine, leading to their deaths. Prosecutors allege Kim gave the drugs to three men, two of whom died, while the other was injured. Investigators have turned to ChatGPT conversations forensically extracted from Kim’s phone in an attempt to show intent.

“This is not only significant as evidence in itself, but also because the very fact that conversations with ChatGPT are being admitted as direct evidence in a murder case is highly noteworthy,” Nam Eonho, a senior attorney at the law firm Vincent and counsel for the family of one of the victims, said in a phone interview.

“If such evidence were not admitted, it would be difficult to prove the defendant’s intent to kill, which is a key element of the crime,” Nam said.

NBC News contacted South Korea’s Supreme Prosecutors’ Office, which oversees the Seoul Central District Prosecutors’ Office handling the case, for comment. The office did not immediately respond. Kim has denied any intent to kill, saying in court that the deaths were accidental. Nam said the chat log evidence contradicts that.

The case may be the first of its kind in South Korea. Yet it’s part of a growing series of high-profile criminal cases in which people are accused of using AI programs to aid violent crimes. Most publicly documented cases have involved ChatGPT, but Google’s Gemini was recently named in a civil suit that alleged the chatbot aided a man who planned to commit a mass casualty event near Miami’s airport. Experts say use of such tools for nefarious means is likely to accelerate as chatbots become more widespread, as online search did when it debuted. As OpenAI faces several lawsuits tied to allegations its tool was used to carry out crimes, the AI industry is just beginning to grapple with its role in mitigating physical harms and how to work with law enforcement.

OpenAI did not respond to questions about the case or how often it refers cases to law enforcement, including questions about which law enforcement agencies it may be working with. It pointed to a letter written in response to a shooting in Canada and a blog post about community safety.

It is not yet known whether the judge presiding over Kim’s case in South Korea will admit the ChatGPT logs as evidence. The trial is ongoing. The case has drawn significant attention in the country. Local media reported that a courtroom overflowed with journalists and observers at the latest hearing, on May 7.

In February, police arrested Kim on charges of murder and violating South Korea’s Narcotics Control Act, alleging she gave men toxic drinks containing benzodiazepine and other drugs in the guise of a hangover cure. Beginning in mid-December, Kim sought out dates with men, took them to a motel and then gave them the substance, acting out of fear of unwanted physical contact, authorities allege. The first victim survived after a two-day coma. Authorities said Kim then consulted ChatGPT about dosages and adjusted them before she drugged the second and third victims. The full chat logs have not been released; they have instead been quoted and cited by the police.

Police have determined that the third victim, whose estate is represented by Nam, met Kim on Feb. 9 at a motel in Seoul. She handed him the beverage laced with medication, Nam said. After the man collapsed, Nam said, she used his phone to order food delivery and left with it. The police arrived the following day, after the man had already died. Nam said an autopsy report he had seen concluded he died from drug poisoning.

“In a sense, the suspect received guidance from ChatGPT and then used that information as a means to carry out the crime,” Nam said. “This makes the case distinctive in that ChatGPT searches were directly utilized as a tool in the commission of the offense.”

While the police are drawing on social media posts and CCTV footage in addition to the chat log evidence, it is the conversations with ChatGPT that may prove critical in determining whether Kim meant to kill the victims. The next trial date is set for June.

Kim’s case echoes a growing slate of similar incidents in North America, where the alleged perpetrators used ChatGPT to ask for instructions crucial to the crime. The systems’ developers have distanced themselves from illegal actions and the pending legal cases in the U.S. and Canada.

The cases have put pressure on OpenAI.

After an 18-year-old shooter killed eight people in Tumbler Ridge, British Columbia, in February, OpenAI CEO Sam Altman wrote a letter apologizing to the community for not having informed law enforcement of the shooter’s account. The perpetrator described scenarios involving gun violence to ChatGPT for several days before the account was banned in June, eight months before the shooting. The company did not alert law enforcement. In April, families of those killed and injured filed seven federal lawsuits against OpenAI, alleging it failed to take measures that could have prevented the shooting.

“While words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered,” Altman wrote, committing to work with authorities to prevent future crimes.

The suspect in the shooting at Florida State University in April 2025 was in “constant communication with ChatGPT,” the state’s attorney general, James Uthmeier, said at a news conference. The attack killed two people. Uthmeier launched a criminal investigation to determine the role OpenAI’s product played in the attack. He said ChatGPT “advised the shooter on what type of gun to use, on which ammo went with which gun, on whether or not a gun would be useful in short range.”

A spokesperson for OpenAI said at the time that “ChatGPT is not responsible for this terrible crime,” adding that the responses the chatbot gave “could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity.”

The family of one of the victims in the FSU shooting sued OpenAI on Sunday.

ChatGPT and generative AI have also been used “to research explosives and ignition mechanisms” in the January 2025 Tesla explosion outside Trump International Hotel Las Vegas, according to Las Vegas police. A North Carolina school therapist is alleged to have used ChatGPT to research “lethal and incapacitating drug combinations that could be ingested and injected” to poison her husband last year. In October, a 17-year-old Florida teenager is alleged to have used the tool in an attempt to stage his own kidnapping.

Experts say the admission of ChatGPT and similar tools as evidence in criminal cases is nascent. Yet there is scarcely a legal process generative AI has left undisrupted. Lawyers and victims use chatbots to build cases, sometimes with so many errors that judges ban their use in the courtroom. Some defendants use them to doctor evidence or to call genuine evidence into question. Now, an increasing body of casework points to the use of generative AI in the commission of crimes. For many in the field, the cases in the public eye are only the tip of the iceberg.

“It’s unsurprising that criminals use chatbots that are willing to help plan crimes,” said Max Tegmark, a physicist and machine learning researcher at the Massachusetts Institute of Technology and chair of the Future of Life Institute, a nonprofit organization that seeks to reduce risks from transformative technologies.

“There are fewer safety standards for AI than there are for sandwiches,” Tegmark said. “The obvious solution is binding safety standards such that companies can’t release AI systems until they refuse criminal activity.”

Some argue that using a chatbot is not so different from a simple Google search, with both producing digital information trails showing how criminals planned their actions. But Nam, the lawyer in the South Korean case, said chatbots create a new type of scenario.

“The real problem is that this conversational format may allow potential criminals to engage in ‘dialogue’ with ChatGPT without a sense of guilt,” he said.

“If the suspect had asked a human being about the dosage or administration of a toxic substance, that person would naturally question the intent — why someone would want such specific information about administering poison,” he said. “However, ChatGPT does not filter such questions through ethical judgment.”

As the industry begins to grapple with the misuse of its technology, it faces questions about safeguards similar to those raised by breakthroughs of the past, such as seat belts in cars, moderation on social media or warning labels on potentially toxic products.

“We will reach an equilibrium that everyone feels comfortable with,” said Anat Lior, an assistant professor of law at Drexel University who has studied AI governance and accountability. “We’re just not sure what that balancing act looks like yet.”
