AI security’s biggest threat isn’t code, it’s the growing failure to communicate across disciplines

Island News Desk
September 18, 2025
Enterprise Security

Communication barriers among stakeholders pose a greater risk to AI security than technical flaws, according to Security Policy Researcher and Advisor Tiffany Saade.

Credit: Outlever.com

The most dangerous vulnerability in AI isn't in the code, it's in the conversation. Policymakers, lawyers, and engineers agree on the threat yet remain divided by jargon and discipline. The problem isn't talent, it's translation.

Tiffany Saade is an AI Security Policy Researcher at Stanford University and Consultant on AI Governance and Cybersecurity to the Ministry of Information Technology and AI of Lebanon. She sees a critical failure unfolding, not in technology itself, but in how we talk about it.

  • Lost in translation: "The biggest problem is that we aren't speaking the same language," Saade says. "You have diverse stakeholders who all agree there's a threat, but they define key terms like ‘safety’ and ‘interpretability’ in completely different ways because they're looking at them from different lenses. We end up with policy that looks amazing on paper but is either too difficult or unclear to apply on a technical level."

  • The double bind: The communication gap is widening just as AI agents emerge as powerful, dual-use tools. On one side, they serve as revolutionary co-pilots for security teams. On the other, Saade warns, they introduce a twin threat: agents becoming honeypots for bad actors, and agents weaponized to scale multi-stage cyber operations.

Compounding this is the 'pace problem,' an unavoidable reality that demands a period of hyper-vigilance. "You have innovation that is moving so fast," she explains. "The question is, how do we create mitigation strategies that remain relevant?"

  • The weakest link: AI innovation's breakneck, uneven progress is creating what Saade calls 'asymmetric access' to security, a vulnerability she’s witnessed firsthand. “Not everyone is prepared to counter threats from AI agents. Entire countries and institutions lack the maturity to do so, and they become weak links in the global AI innovation flow,” she says. “I come from Lebanon, a country where we barely have access and are just starting to build capacity. When you come from a place like this, you truly see the importance of leveling the playing field.”

  • Playing offense: Rather than trying to slow innovation, Saade advocates building adaptable security through a proactive, aggressive posture. That means organizations must constantly red team their own systems to uncover vulnerabilities before adversaries do. "We have to be our own most sophisticated adversary," says Saade. "The goal is to find and fix vulnerabilities on our own terms, not on an attacker's." She argues that 'secure by design' is the only approach that builds resilience against a torrent of ever-changing threats.

Saade warns of a "complacency trap," a subtle human risk that emerges as AI agents become more capable. The danger, she explains, is that organizations will over-rely on these agents, "giving our cognition and our security vigilance away to tools that are themselves vulnerable" and thereby creating a dangerous blind spot.

"No matter how secure your agents are, no matter if you have the perfect cybersecurity controls, you will get attacked," she states. "So the real question is, how hard are you going to get hit?" In the age of intelligent systems, breaches are par for the course; what matters is how well an organization absorbs, adapts, and responds.

Related content

In Local Government, Cybersecurity Success Comes From Doing More With Less

Shane McDaniel, CIO for the City of Seguin, shows how municipal cybersecurity moves forward through resourcefulness, trust, and community when budgets and priorities collide.

New Oversight Frameworks Address Internal Fraud as Power Concentrates in Leadership

Srilakshmi Tariniganti, Technology Risk Manager at Sutherland, reframes AI risk around people, outlining oversight models that curb internal fraud by checking concentrated executive power.

How a Forensic Mindset Strengthens Cyber Incident Response and Prevents Repeat Failures

Vincent Romney, Deputy CISO at Nuskin & Pharmanex, outlines why lasting security comes from forensic reasoning that traces incidents back to culture, decisions, and leadership.
