Entrepreneur News Network

AI Is Redefining Cybersecurity: Why Data Integrity, Not Perimeters, Is the Future of Digital Defense

Exclusive Interview: Sujatha S Iyer, Head of AI Security — ManageEngine, Zoho Corp | Entrepreneur News Network
April 13, 2026  |  Exclusive Interview Edition

"Ethics in Autonomous Defense Still Rests With Humans — Not the AI"

Sujatha S Iyer, Head of AI Security at ManageEngine, Zoho Corp, speaks exclusively to Ankitt Y, Editor, ENN, about AI-native architectures, Shadow AI, adversarial empathy, and why the security perimeter as we know it is evolving rather than disappearing.

In a world where AI is both the greatest security tool and the most sophisticated new attack surface, who truly owns the ethics of a machine making split-second defense decisions? We put that question — and six others — to Sujatha S Iyer, Head of AI Security at ManageEngine, Zoho Corp, in an exclusive email interview with ENN. Her answers challenge some of cybersecurity's deepest assumptions — from the primacy of the perimeter to the belief that threats mostly come from outside.
Key insights from this interview
Security perimeter is evolving, not disappearing — shifting from network-boundary defense to identity, data integrity, and continuous monitoring.
Adversarial empathy has always been essential — what's new is that attackers now target AI models themselves via prompt injection and jailbreaks.
Ethics of autonomous AI defense is a human responsibility — accountability must be explicit, traceable, and enforceable, regardless of how fast AI acts.
Leadership KPIs should expand from "preventing attacks" to measuring recovery time, blast radius, and operational continuity.
Shadow AI is best managed through enablement and contextual embedding — not restriction — combined with continuous employee education.
Biggest myth holding leaders back: assuming threats mostly come from outside. Insiders, misconfiguration, and compromised identities are just as dangerous.
1. As organizations move toward AI-native structures, is the traditional 'security perimeter' dissolving in favor of a 'data integrity' model?
Sujatha

As organizations move toward AI-native architectures, the traditional security perimeter, which assumed clear network boundaries and known users and devices, is no longer enough as a primary line of defense. Data, users, and AI systems now operate across multiple environments simultaneously. Security is instead becoming focused on data integrity: making sure the data used by AI systems is accurate, trusted, and not tampered with. Identity has also evolved and now plays a critical role, not just for humans but also for services and AI agents, with strict access controls in place. This shift also requires continuous monitoring rather than one-time checks, so organizations can detect and respond to risks in real time. At ManageEngine, we see this as an evolution rather than a replacement: security is built on a combination of identity, data integrity, and continuous visibility across the entire ecosystem.
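Editor's sketch: one common building block of the data-integrity approach Iyer describes is recording a cryptographic digest when data is ingested and re-verifying it before an AI pipeline consumes it. The example below is a minimal, hypothetical illustration using Python's standard library, not a ManageEngine implementation.

```python
import hashlib


def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of a payload."""
    return hashlib.sha256(data).hexdigest()


def verify(data: bytes, expected: str) -> bool:
    """Check that a payload still matches its trusted digest."""
    return sha256_digest(data) == expected


# Record a trusted digest at ingestion time.
trusted = sha256_digest(b"training-batch-001")

# Later, before the data feeds a model, confirm it is untampered.
print(verify(b"training-batch-001", trusted))           # True: unchanged
print(verify(b"training-batch-001-altered", trusted))   # False: tampered
```

In practice this digest check is only one layer; production systems combine it with signed provenance records, access controls, and continuous monitoring.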

2. As AI automates technical execution, is the real talent gap actually a lack of 'adversarial empathy': the ability to psychologically outthink a bad actor?
Sujatha

Adversarial empathy was always a much-needed skill in security, not something new that emerged after AI automation. Security teams have always depended on the ability to think like an attacker and anticipate intent, which is why organizations built red teams and blue teams to simulate and defend against real-world threats. What is changing with AI is the nature of those attacks. Instead of only exploiting systems, attackers are now trying to influence models through techniques like prompt injection and jailbreaks. This means teams need to evolve their thinking, not just their tools. Security talent will increasingly need to combine strong technical skills with this attacker mindset, supported by continuous monitoring and robust identity and data controls to stay ahead of increasingly subtle threats.

"Attackers are no longer only exploiting systems — they are now trying to influence AI models through prompt injection and jailbreaks. Teams need to evolve their thinking, not just their tools." — Sujatha S Iyer, Head of AI Security, ManageEngine, Zoho Corp
3. Who ultimately owns the ethics of an algorithmic defense when AI begins making autonomous, split-second security decisions?
Sujatha

Ultimately, the ownership of ethics in algorithmic defense still rests with the organization, not the AI. Even when systems make autonomous, split-second decisions, those decisions are shaped by how the models are designed, trained, and governed. This makes accountability a human responsibility — spanning security leaders, developers, and leadership teams who define policies, risk thresholds, and acceptable outcomes. In practice, this means building clear guardrails, ensuring transparency in how decisions are made, and maintaining strong oversight through continuous monitoring and auditability. Autonomous security must always operate within well-defined ethical and operational boundaries set by humans, with accountability that is explicit, traceable, and enforceable.

4. Resilience vs. Prevention: Should a modern leader's KPIs shift from 'preventing attacks' to 'orchestrating institutional resilience'?
Sujatha

Yes — but it is more of an expansion than a replacement. Preventing attacks is still important, but it is no longer sufficient as a standalone KPI in an environment where breaches are often a matter of when, not if. It is equally important to ensure the organization can detect, respond, and recover quickly with minimal impact. This includes strong identity controls, continuous monitoring, incident response readiness, and the ability to adapt under pressure. The shift is toward measuring outcomes like recovery time, blast radius, and operational continuity, rather than just the absence of incidents.

"Breaches are often a matter of when, not if. The shift is toward measuring recovery time, blast radius, and operational continuity — not just the absence of incidents." — Sujatha S Iyer, Head of AI Security, ManageEngine, Zoho Corp
5. How can security leaders manage 'Shadow AI' without stifling the speed of innovation?
Sujatha

One effective approach is to ensure AI is contextually embedded for easy use within everyday workflows and business applications. When AI is built into the tools employees already use, they are less likely to turn to unapproved third-party apps. At the same time, continuous education is key. Employees need to understand the risks of Shadow AI — such as data leaks and compliance issues. The focus should be on enablement, not restriction, so teams can move fast while staying secure.

6. The Future of Trust: Ten years from now, will AI be remembered as the tool that solved the security crisis, or the one that made traditional internet architecture obsolete?
Sujatha

AI will be remembered as the tool that helped solve the security crisis faster — not as something that made the internet obsolete. It will accelerate detection, response, and recovery at a scale and speed that human-only teams cannot match, helping security teams stay ahead of threats and reduce response times significantly. The architecture of the internet will evolve, but AI will be part of that evolution rather than the cause of its obsolescence.

"AI will accelerate detection, response, and recovery at a scale and speed that human-only teams cannot match. It will be remembered as the tool that helped solve the security crisis, not the one that made the internet obsolete." — Sujatha S Iyer, Head of AI Security, ManageEngine, Zoho Corp
7. Leadership Philosophy: What is one piece of 'conventional wisdom' in cybersecurity that you believe is currently holding leaders back?
Sujatha

One piece of conventional wisdom that is holding leaders back is the assumption that threats mostly come from outside the organization. Today, many of the real risks come from within: insiders, misconfiguration, or compromised identities. This assumption can lead to dangerous gaps in visibility and internal controls. Leaders need to adopt a more balanced approach, with a strong focus on identity security, configuration hygiene, and continuous monitoring, so risks are addressed regardless of where they originate.

About the spokesperson

Sujatha S Iyer

Head of AI Security, ManageEngine, Zoho Corp

Sujatha S Iyer leads AI Security at ManageEngine, a division of Zoho Corp, one of the world's largest privately held software companies. With deep expertise spanning cybersecurity architecture, identity management, data integrity, and AI-driven threat defense, she is a leading voice on the intersection of artificial intelligence and enterprise security. Her work focuses on security frameworks built for the AI-native era, where perimeters give way to continuous visibility and prevention is complemented by resilience.

Publication note: This interview was conducted exclusively by the Entrepreneur News Network editorial team. The responses represent Sujatha S Iyer's personal professional views and do not constitute official statements by ManageEngine or Zoho Corp.
