Protecting Abuse Survivors from the Intimate Adversary in the Age of AI

Published April 29, 2026


As AI becomes deeply integrated into our daily lives—offering everything from productivity hacks to emotional support—a critical challenge has emerged: ensuring these tools do not become weapons for abuse.

At the University of Maryland, Julio Poveda, a fifth-year computer science doctoral student, is closing the gap between traditional cybersecurity and the lived experiences of survivors of intimate partner violence (IPV)—abuse that occurs within a close relationship, such as between current or former spouses and partners. 

His research addresses a threat Silicon Valley often overlooks: the “intimate adversary.” Unlike a distant, faceless hacker, an intimate adversary is an abuser who exploits their personal relationship and shared digital access to surveil or harm their victim.

Poveda argues that while technology-facilitated abuse may seem like an outlier to developers focused on global security, it represents a privacy crisis that traditional frameworks are failing to address. He distinguishes between “classical” and “unconventional” threat models; while most systems are built to stop remote intruders, an intimate adversary often has physical access to a survivor’s device, knows their routines, and shares cloud accounts. This proximity allows for the installation of spyware and the coerced monitoring of digital life, turning the safety of the home into a complex security battlefield.

Poveda suggests that the best way for the industry to understand this danger is to look at how businesses protect themselves from their own employees.

“The IPV threat model resembles what organizations face when dealing with insider threats,” Poveda explains.

The urgency of this research was on full display at the RSAC 2026 Conference, held March 23–26 in San Francisco. While the conference takes its name from RSA Data Security, Inc., the company founded in 1982 to commercialize the foundational RSA public-key encryption algorithm, Poveda used the platform to argue that traditional encryption is only half the battle.

He presented a project developed in collaboration with Michelle Mazurek, an associate professor of computer science with an appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS) and director of the Maryland Cybersecurity Center (MC2), and Diana Freed, an assistant professor of computer and data science at Brown University. Their research provided a critical analysis of how current AI safety measures fail to protect those in abusive domestic situations, where the threat is not a lack of encryption, but a breach of trust.

During the presentation, Poveda warned that the growing trend of using AI chatbots as interventions for survivors carries hidden risks. While these tools are designed to be accessible and empathetic, the “social bond” they foster can create a dangerous illusion of safety. If a survivor, trusting that bond, confides sensitive details to a tool that is not actually private, a leak of that information could trigger a violent escalation of abuse.

Mazurek says catching these flaws early is a matter of public safety.

“Julio's work examining these chatbots is so important because it’s critical to recognize safety concerns with these tools before they are in widespread use,” she says.

One of Poveda’s core arguments is that developers must move beyond security theater—cosmetic features that offer a false sense of safety—and toward a mindset of rigorous abusability testing. This involves stress-testing how a platform could be intentionally weaponized by an abuser before the technology is ever released to the public.

“More than a technical change, I suggest a mindset shift,” he says. “Conduct abusability testing so you and your team can further reflect on how your platform or tool could affect others, and identify ways to mitigate unintended uses of your technology. Also, work closely with survivors and advocates who can provide valuable insights.”

Poveda’s research sits at the demanding intersection of computer science, law, and social safety. Interacting with at-risk populations brings the weight of vicarious trauma, a challenge he says he manages through interdisciplinary collaboration and structured debriefing within his research team.

Poveda is advised by Dave Levin, an associate professor of computer science with an appointment in UMIACS and a core member of MC2.

“Technology-facilitated abuse is an area I am still interested in further studying because the digital landscape is shifting under our feet every day,” Poveda says. “As AI becomes more sophisticated, so do the methods used by abusers. We have to stay ahead of that curve; our goal isn’t just to understand the tech, but to ensure it can never be used as a tool for coercion or fear.”

He adds that his long-term goal is to ensure digital interventions fully understand the complexity of a survivor’s situation in the way a trained human advocate does. 

By centering the most vulnerable users in the design process, Poveda’s research is moving the field toward a future where smart tools are not just functional, but fundamentally safe for everyone.

—Story by Melissa Brachfeld, UMIACS Communications Group