Sunday, December 1, 2024

AI Voice Cloning Phone Scams Fuel Rise in Fake Kidnapping Cases Across the US

The rapid advancement of artificial intelligence (AI) has given rise to a disturbing new form of phone scam—AI voice cloning scams—where fraudsters replicate the voices of victims’ family members to stage fake kidnapping scenarios. These AI-driven schemes have proven especially effective, preying on the emotional vulnerability of individuals, who are often coerced into sending ransom payments for the supposed safe return of their loved ones.

In a recent incident, Highline Public Schools in Burien, Washington, issued a community alert on 25 September 2024, warning residents about AI voice cloning scams. Two individuals had been targeted by scammers who falsely claimed they had kidnapped a family member.

The criminals used AI-generated audio to mimic the voice of the family member and then demanded a ransom. The FBI has noted a nationwide surge in these scams, especially among non-English-speaking families, as these communities are particularly vulnerable to such advanced tactics.

One Arizona mother, Jennifer DeStefano, shared her traumatic experience during a congressional hearing. She recounted receiving a call from an unknown number, where she heard what she believed to be her daughter Briana sobbing in distress.

Though she was initially dismissive, the situation escalated when a man’s threatening voice took over the call, demanding a $1 million ransom for her daughter’s safe return. Meanwhile, DeStefano’s husband confirmed their daughter was safe at home. The chilling realization that scammers had used AI to clone her daughter’s voice left DeStefano in disbelief.

Experts are sounding the alarm over the ease with which scammers can execute AI voice cloning scams. Fraudsters use two main methods to capture voice data: extracting audio from social media videos or collecting voice samples during unsolicited phone calls.

Beenu Arora, CEO of cybersecurity firm Cyble, emphasized the growing sophistication of these techniques, warning, “The intent is to gather the right data through your voice… and this is becoming increasingly common.”

As AI technology continues to evolve, the threat of AI voice cloning scams looms larger. The National Institutes of Health (NIH) has issued guidelines to help potential victims protect themselves, including verifying any claims of kidnapping by directly contacting the supposed victim and being wary of unfamiliar calls demanding ransom. Arora’s advice to the public is simple but crucial: “When faced with alarming messages, stop and think before acting.”

Authorities urge victims of these scams to report incidents to their local police, as the relentless development of AI continues to challenge law enforcement’s ability to prevent this form of exploitation.

The US Federal Trade Commission (FTC) has expressed serious concerns about AI data collection by major social media platforms, criticizing their lack of transparency. The FTC’s report highlights that companies such as Meta, TikTok, and Twitch collect and process vast amounts of user data with minimal oversight. These platforms, along with others like YouTube, Snap, and X (formerly Twitter), have data management policies the FTC described as “woefully insufficient.” The FTC also noted that these companies gather data not only from their users but from non-users as well, often through tracking technologies and third-party brokers, raising significant privacy concerns.
