DIGITAL GOLD FOR PREDATORS: Deepfakes are making dating scams harder to spot

By Staff Writer

Romance scams are entering a more dangerous phase as Valentine’s Day approaches, with criminals using artificial intelligence to scale deception and push victims toward fraudulent investments, cybersecurity company Tenable said.
Tenable’s Satnam Narang, a senior staff research engineer, warned that what were once scattered, individual schemes have evolved into “industrialised” operations that can run at massive volume, including inside specialized forced-labor compounds where people are coerced into scamming targets.
“2026 marks our entry into a dark age of romance scams,” Narang said. “The availability of powerful frontier AI models has provided digital gold for scammers. For the price of a cup of coffee, predators can now leverage these tools to generate linguistically perfect, emotionally resonant messages designed to ensnare victims across the globe.”
The financial payoff often comes when online relationships pivot into investment fraud—an “endgame” that regulators say is driving staggering losses.
According to the Federal Trade Commission, consumers reported losing USD 5.7 billion to investment scams in 2024, and experts believe the figure is conservative because many victims do not report losses due to the stigma tied to romance-based deception.
Narang said the scams now rely on a set of repeatable tactics that use AI to remove classic warning signs such as broken grammar, inconsistent stories, or awkward messaging that once made fraud easier to spot.
One trend Narang described as the “AI ‘Frontier’” uses large language models to automate the “grooming” phase, allowing scammers to sustain dozens of persona-driven conversations at the same time while sounding fluent and emotionally attuned.
A second trend he labeled the “AI Room” focuses on defeating video verification through deepfake-enabled calls, undermining the long-standing advice to “just hop on a video call” to confirm a match’s identity.
In Tenable’s description, these setups use “virtual camera” software to intercept video feeds on apps such as WhatsApp or FaceTime, then apply real-time face-swapping tools—such as DeepFaceLive—to map a scammer’s facial movements onto a high-resolution “target” persona, with glitches masked by excuses like poor internet or low light.
A third trend is what Narang calls the “investment pivot,” arguing that “romance” is increasingly just the hook for “pig butchering” schemes in which victims are “fattened” with trust and staged success on fraudulent platforms before being “slaughtered” for their life savings.
“These scams are the engine of a multi-billion-dollar industry often built on the backs of trafficked individuals,” Narang added. “Inside these compounds, victims are forced to work ‘sales floors’ governed by strict quotas. They even ring bells and gongs to celebrate when a victim’s life savings are stolen. While the technology is new, the psychological manipulation is as old as time; it just happens at a scale we’ve never seen before.”
A fourth trend highlighted by Tenable is the rapid spread of open-source AI models that can be run privately, reducing the effectiveness of safety filters used by major commercial systems.
While “Frontier” models—including Google’s Gemini, OpenAI’s ChatGPT and Anthropic’s Claude—have safety controls designed to refuse requests tied to wrongdoing, Narang said scammers can shift to free, open-source alternatives like DeepSeek and Qwen, which he said now operate at near-parity with paid tools.
“We naturally focus on romance scams around Valentine’s Day, but the reality is far grimmer. This is a 365-day-a-year industry,” Narang said.
“In 2024, the FBI and FTC reported that investment scams accounted for USD 5.7 billion in losses, the highest of any category,” he said. “Yet, even this figure is likely a conservative estimate, as the stigma of being conned keeps many victims in the shadows.”
Narang pointed to a recent firsthand account published by Wired that, he said, underscores how these scam compounds have evolved, including the use of a dedicated “AI Room” to support face-swapped video calls meant to “prove” a scammer’s identity.
“As the LLMs continue to improve their audio, video and image generation, these deceptions are going to become nearly indistinguishable from reality,” Narang said.
Tenable urged consumers not to be persuaded by screenshots of earnings or claims of insider expertise presented during online romances.
“While the technology is shiny and new, this is a scenario where the old and clichéd advice remains one of the key ways to thwart these types of attacks,” Narang said. “If it sounds too good to be true, such as investment opportunities that can lead to earning thousands to hundreds of thousands of dollars, it’s probably a scam.”
“Don’t be swayed by screenshots of earnings or claims of insider expertise,” he said. “If a match brings up investments, whether aggressively or ‘coyly’, it is a scam. Cut contact, unmatch, and report.”
Narang has more than a decade of experience tracking and exposing social engineering trends and focuses on the intersection of AI and modern cybercrime, Tenable said.