Zero trust in the AI era: Protecting enterprises and consumers alike from sophisticated scams
CXO Spotlight
Michael Leland, Island Field CTO, joined News 3 Las Vegas to discuss lessons on AI-enabled threats from the Black Hat USA conference.

AI-powered scams dominate consumer headlines, but the same tactics are already threatening enterprises with far higher stakes. Polished phishing emails that trick individuals into clicking a link become business email compromises that expose entire organizations. Deepfake voices used in ransom calls to families are being turned against finance teams and executives to authorize multimillion-dollar transfers. Consumer scams aren’t a sideshow; they’re the live-fire drills for the breaches already tearing through corporate networks.
Michael Leland, Field Chief Technology Officer at Island, sat down with News 3 Las Vegas to discuss the consumer side of AI-driven security risks as part of Island’s presence at the Black Hat USA conference. The annual event sees cybersecurity leaders spotlighting emerging threats, and unsurprisingly, AI dominated this year's agenda. For enterprises, that conversation translates into a growing concern over identity and access: as enterprises deploy agentic and semi-autonomous systems, each new capability expands the attack surface.
Zero trust: The same vigilance Leland advises for consumers—"If you’re ever asked for money or credentials or personal information, verify by calling that person back directly"—maps directly onto the enterprise zero trust framework. Zero trust assumes no request is inherently trustworthy: every user, device, and system must be authenticated and authorized continuously, not just at login. Access follows least privilege, and elevated permissions are tightly controlled and monitored.
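In code terms, the principle above can be reduced to a deny-by-default check that re-evaluates every request rather than trusting a session established at login. This is a minimal illustrative sketch, not any vendor's implementation; the role names, signals, and scopes are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str            # hypothetical identity label
    device_trusted: bool  # device posture signal, re-checked per request
    mfa_verified: bool    # identity proof for this request, not cached
    action: str          # the operation being attempted

# Least-privilege grants: each identity gets only the scopes it needs.
GRANTS = {
    "analyst": {"read_reports"},
    "finance_lead": {"read_reports", "approve_transfer"},
}

def authorize(req: Request) -> bool:
    """Deny by default; allow only when every signal checks out right now."""
    if not (req.device_trusted and req.mfa_verified):
        return False  # identity/device not proven for this request
    return req.action in GRANTS.get(req.user, set())

# A stolen password alone is not enough: without MFA, the transfer is denied.
print(authorize(Request("finance_lead", True, False, "approve_transfer")))  # False
print(authorize(Request("finance_lead", True, True, "approve_transfer")))   # True
# Least privilege: an analyst can never approve transfers, even fully verified.
print(authorize(Request("analyst", True, True, "approve_transfer")))        # False
```

The key design choice is that nothing is inherited from a prior login: every call to `authorize` re-examines identity, device, and scope, so a captured session or stolen credential fails the moment any one signal is missing.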
The model is increasingly urgent as AI enables phishing, deepfakes, and social engineering to mimic trusted colleagues or executives with uncanny accuracy. With familiarity no longer a signal of safety, zero trust strips away that assumption and replaces it with real-time validation at every layer of the enterprise. At the center of that validation is identity, as credential theft and misuse remain one of the most reliable entry points for attackers. Once an identity is controlled, every additional permission or system dependency becomes a lever, turning a single weak point into the foundation for a full-scale breach. The ability to neutralize compromised credentials before they can be weaponized is therefore becoming just as important as controlling how permissions are granted and expanded.
Gone phishing: Phishing has always been a low-effort attack, but AI has made it far more convincing. "The fact is that phishing attempts by adversaries are becoming that much more effective," Leland said. "The emails are now much more legible. They sound like us. They’re written in a way that makes us more amenable to clicking that link."
What once looked like sloppy spam now passes as legitimate business communication. The same polished lures that trick consumers are already flooding enterprise inboxes, where a single click can expose sensitive systems. The risk is real: Island recently helped The Bank of Marion stop a sophisticated phishing attempt against its employees, proof of how quickly a convincing message can escalate into a threat against customers and operations.
Deepfakes in the boardroom: "AI can impersonate your voice, your likeness, your physical imagery so well that it would fool even your parents," Leland warned. It can also fool your colleagues. The threat has moved into enterprise environments, where deepfake technology is being used to impersonate executives and trick finance teams into authorizing fraudulent transfers.
Recent cases show just how quickly this has escalated. In 2024, attackers used AI-generated CEO voices and likenesses to target global firms, including Ferrari and WPP, underscoring that deepfakes are no longer fringe experiments but active tools in high-value fraud.
Social engineering: "Adversaries can use tools to mine personal information," Leland said. "They build a personal image of who you are, how and when you communicate, and with whom. They can then tailor an attack directly to you." For businesses, this means attackers are mapping not just individuals but entire organizational hierarchies, with social engineering reported as the cause of 17% of enterprise data breaches. LinkedIn data, conference speaker bios, and even Slack screenshots can become raw material for hyper-personalized spear-phishing.
The fraud wave is hitting on all fronts. In 2024, U.S. households reported more than $12.5 billion in scam losses—a 25% jump in a single year—and the very same techniques are surfacing in enterprise breaches. Criminals don’t draw a line between personal and professional targets; what works against families is quickly repurposed against finance teams, suppliers, and executives. For security leaders, the lesson is clear: consumer scams and enterprise intrusions are two sides of the same attack chain.