
AI security’s biggest threat isn’t code, it’s the growing failure to communicate across disciplines

Island News Desk
September 18, 2025
Enterprise Security

Communication barriers among stakeholders pose a greater risk to AI security than technical flaws, according to Security Policy Researcher and Advisor Tiffany Saade.


The most dangerous vulnerability in AI isn't in the code, it's in the conversation. Policymakers, lawyers, and engineers agree on the threat yet remain divided by jargon and discipline. The problem isn't talent, it's translation.

Tiffany Saade is an AI Security Policy Researcher at Stanford University and Consultant on AI Governance and Cybersecurity to the Ministry of Information Technology and AI of Lebanon. She sees a critical failure unfolding, not in technology itself, but in how we talk about it.

  • Lost in translation: "The biggest problem is that we aren't speaking the same language," Saade says. "You have diverse stakeholders who all agree there's a threat, but they define key terms like ‘safety’ and ‘interpretability’ in completely different ways because they're looking at them from different lenses. We end up with policy that looks amazing on paper but is either too difficult or unclear to apply on a technical level."

  • The double bind: The communication gap is widening just as AI agents emerge as powerful, dual-use tools. On one side, they serve as revolutionary co-pilots for security teams. On the other, Saade warns, they introduce a twin threat: agents becoming honeypots for bad actors, and agents weaponized to scale multi-stage cyber operations.

Compounding the gap is the 'pace problem,' an unavoidable reality that demands constant vigilance. "You have innovation that is moving so fast," she explains. "The question is, how do we create mitigation strategies that remain relevant?"

  • The weakest link: AI innovation's breakneck, uneven progress is creating what Saade calls 'asymmetric access' to security, a vulnerability she’s witnessed firsthand. “Not everyone is prepared to counter threats from AI agents. Entire countries and institutions lack the maturity to do so, and they become weak links in the global AI innovation flow,” she says. “I come from Lebanon, a country where we barely have access and are just starting to build capacity. When you come from a place like this, you truly see the importance of leveling the playing field.”

  • Playing offense: Rather than trying to slow innovation, Saade advocates for building adaptable security through a proactive, aggressive posture. That means organizations must constantly red team their own systems to uncover vulnerabilities before adversaries do. "We have to be our own most sophisticated adversary," says Saade. "The goal is to find and fix vulnerabilities on our own terms, not on an attacker's." She argues that 'secure by design' is the only approach that builds resilience against a torrent of ever-changing threats.

Saade warns of a "complacency trap," a subtle human risk that emerges as AI agents become more capable. The danger, she explains, is that organizations will over-rely on these agents, "giving our cognition and our security vigilance away to tools that are themselves vulnerable" and thereby creating a dangerous blind spot.

"No matter how secure your agents are, no matter if you have the perfect cybersecurity controls, you will get attacked," she states. "So the real question is, how hard are you going to get hit?" In the age of intelligent systems, breaches are par for the course; what matters is how well an organization absorbs, adapts, and responds.

Related content

Arizona State University CISO Makes Security a Business Function to Speed Research Safely

Lester Godsey, Chief Information Security Officer for Arizona State University, explains why the CISO role is evolving from a defensive gatekeeper to a strategic business enabler, and how modern security leaders can adapt for success.

Enterprise AI Becomes Critical Infrastructure as Gap Between Security and Governance Grows

Aaron Mathews, Global Head of Cybersecurity at Orion Innovation, explains why AI is becoming essential to business operations even though security and governance frameworks haven't kept pace.

Hindsight Comes at High Cost for Security Leaders as 'Bolt-On' Security Breaks Budgets in OT

Gernette Wright, IT Security Officer, Americas at Schneider Electric, on threats to legacy OT systems and failed human patches.

© ISLAND, 2025. All rights reserved