"Ethics in Autonomous Defense Still Rests With Humans — Not the AI"
Sujatha S Iyer, Head of AI Security at ManageEngine, Zoho Corp, speaks exclusively to Ankitt Y, Editor, ENN, about AI-native architectures, Shadow AI, adversarial empathy, and why the security perimeter as we know it is evolving, not disappearing.
As organizations move toward AI-native architectures, the traditional security perimeter, which assumed clear network boundaries and known users and devices, is no longer sufficient as a primary line of defence. Data, users, and AI systems now operate across multiple environments simultaneously. Security is instead becoming more focused on data integrity: making sure the data used by AI systems is accurate, trusted, and not tampered with. Identity has also evolved and now plays a critical role, not just for humans but also for services and AI agents, with strict access controls in place. This shift also requires continuous monitoring rather than one-time checks, so organizations can detect and respond to risks in real time. At ManageEngine, we see this as an evolution rather than a replacement: security is built on a combination of identity, data integrity, and continuous visibility across the entire ecosystem.
Adversarial empathy has always been a much-needed skill in security, not something new that emerged after AI automation. Security teams have always depended on the ability to think like an attacker and anticipate intent, which is why organizations built red teams and blue teams to simulate and defend against real-world threats. What is changing with AI is the nature of those attacks. Instead of only exploiting systems, attackers are now trying to influence models through techniques like prompt injection and jailbreaks. This means teams need to evolve their thinking, not just their tools. Security teams will increasingly need to combine strong technical skills with this attacker mindset, supported by continuous monitoring and robust identity and data controls, to stay ahead of increasingly subtle threats.
Ultimately, the ownership of ethics in algorithmic defense still rests with the organization, not the AI. Even when systems make autonomous, split-second decisions, those decisions are shaped by how the models are designed, trained, and governed. This makes accountability a human responsibility — spanning security leaders, developers, and leadership teams who define policies, risk thresholds, and acceptable outcomes. In practice, this means building clear guardrails, ensuring transparency in how decisions are made, and maintaining strong oversight through continuous monitoring and auditability. Autonomous security must always operate within well-defined ethical and operational boundaries set by humans, with accountability that is explicit, traceable, and enforceable.
Yes — but it is more of an expansion than a replacement. Preventing attacks is still important, but it is no longer sufficient as a standalone KPI in an environment where breaches are often a matter of when, not if. It is equally important to ensure the organization can detect, respond, and recover quickly with minimal impact. This includes strong identity controls, continuous monitoring, incident response readiness, and the ability to adapt under pressure. The shift is toward measuring outcomes like recovery time, blast radius, and operational continuity, rather than just the absence of incidents.
One effective approach is to ensure AI is contextually embedded for easy use within everyday workflows and business applications. When AI is built into the tools employees already use, they are less likely to turn to unapproved third-party apps. At the same time, continuous education is key. Employees need to understand the risks of Shadow AI — such as data leaks and compliance issues. The focus should be on enablement, not restriction, so teams can move fast while staying secure.
AI will be remembered as the tool that helped solve the security crisis faster — not as something that made the internet obsolete. It will accelerate detection, response, and recovery at a scale and speed that human-only teams cannot match, helping security teams stay ahead of threats and reduce response times significantly. The architecture of the internet will evolve, but AI will be part of that evolution rather than the cause of its obsolescence.
One piece of conventional wisdom that is holding leaders back is the assumption that threats mostly come from outside the organization. Today, many of the real risks come from within — through insiders, misconfiguration, or compromised identities. This assumption can lead to dangerous gaps in visibility and internal controls. Leaders need to adopt a more balanced approach, with strong focus on identity security, configuration hygiene, and continuous monitoring, so risks are addressed regardless of where they originate.
About the spokesperson
Sujatha S Iyer
Sujatha S Iyer leads AI Security at ManageEngine, a division of Zoho Corp, one of the world's largest privately held software companies. With deep expertise spanning cybersecurity architecture, identity management, data integrity, and AI-driven threat defense, she is a leading voice on the intersection of artificial intelligence and enterprise security. Her work focuses on building security frameworks for the AI-native era, where perimeters are replaced by continuous visibility and prevention is complemented by resilience.
Ruchi Kumar is the associate editor at Entrepreneur News Network and TVW News India, where she leads editorial strategy, brand storytelling, and startup ecosystem coverage. With a strong focus on innovation, business, and marketing insights, she curates impactful narratives that spotlight India's evolving entrepreneurial landscape. She has written extensively on fintech, AI, and emerging startups.