Father Shocked After AI Chatbot Impersonates Murdered Daughter

Image by Pheladiii, from Pixabay

In a Rush? Here are the Quick Facts!

  • Jennifer Crecente was murdered by her ex-boyfriend in 2006.
  • Her identity was used without permission to create an AI chatbot.
  • Character.AI removed the chatbot after being notified by the family.

Yesterday, The Washington Post reported a disturbing incident involving Drew Crecente, whose murdered daughter Jennifer was impersonated by an AI chatbot on Character.AI.

Crecente discovered a Google alert that led him to a profile featuring Jennifer’s name and yearbook photo, falsely describing her as a “video game journalist and expert in technology, pop culture and journalism.”

For Drew, the inaccuracies weren’t the main issue—the real distress came from seeing his daughter’s identity exploited in such a way, as noted by The Post.

Jennifer, who was killed by her ex-boyfriend in 2006, had been re-created as a “knowledgeable and friendly AI character,” with users invited to chat with her, noted The Post.

“My pulse was racing,” Crecente told The Post. “I was just looking for a big flashing red stop button that I could slap and just make this stop,” he added.

The chatbot, created by a user on Character.AI, raised serious ethical concerns regarding the use of personal information by AI platforms.

Crecente, who runs a nonprofit in his daughter’s name aimed at preventing teen dating violence, was appalled that such a chatbot had been made without the family’s permission. “It takes quite a bit for me to be shocked, because I really have been through quite a bit,” he said to The Post. “But this was a new low,” he added.

Character.AI removed the chatbot after being notified of its existence. Kathryn Kelly, a spokesperson for the company, stated to The Post, “We reviewed the content and the account and took action based on our policies,” adding that their terms of service prohibit impersonation.

The incident highlights ongoing concerns about AI’s impact on emotional well-being, especially when it involves re-traumatizing families of crime victims.

Crecente isn’t alone in facing AI misuse. Last year, The Post reported that TikTok content creators had used AI to mimic the voices and likenesses of missing children, creating videos of them narrating their own deaths, which sparked outrage from grieving families.

Experts are calling for stronger oversight of AI companies, which currently have wide latitude to self-regulate, noted The Post.

Crecente didn’t interact with the chatbot or investigate its creator, but he immediately emailed Character.AI to have it removed. His brother, Brian, shared the discovery on X, prompting Character.AI to announce the chatbot’s deletion on Oct. 2, reported The Post.

Jen Caltrider, a privacy researcher at the Mozilla Foundation, criticized Character.AI’s passive moderation, noting that the company allowed content violating its own terms to remain online until it was flagged by someone it had harmed.

“That’s not right,” she said to The Post, adding, “all the while, they’re making millions.”

Rick Claypool, a researcher at Public Citizen, emphasized the need for lawmakers to focus on the real-life impacts of AI, particularly on vulnerable groups like families of crime victims.

“They can’t just be listening to tech CEOs about what the policies should be … they have to pay attention to the families and individuals who have been harmed,” he said to The Post.

Now, Crecente is exploring legal options and considering advocacy work to prevent AI companies from re-traumatizing others.

“I’m troubled enough by this that I’m probably going to invest some time into figuring out what it might take to change this,” he told The Post.
