The 2024 Security Priorities study shows that the roles of 72% of IT and security decision-makers have expanded to accommodate new challenges, with risk management and securing AI-enabled and emerging technologies added to their plate. Artificial Intelligence has sharpened both edges of the sword: organizations are better equipped to defend themselves, but attacks have also grown more sophisticated, wide-ranging, and damaging to operations and market reputation.
CISOs discuss how the parameters of cybersecurity are shifting with the introduction of AI, and how the confluence of leadership, sophisticated products, and regulation protects the security interests of organizations and their people.
High defenses, low incidence
The rise of AI-powered cyber threats demands that security providers redesign their solutions. Rohit Singh, Associate Director – Cyber Security & Information Systems of People Interactive (Shaadi.com), says, “Security solutions should move beyond static rule-based systems, leveraging AI to understand attack intent and delivering tailor-made, high-confidence threat responses.” Self-healing security frameworks can prove beneficial here: cybersecurity tools armed with self-repairing mechanisms can autonomously recover from AI-driven attacks without manual intervention.
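As a rough illustration of what moving beyond static rules can look like, the Python sketch below pairs an anomaly detector with a self-healing hook that rolls back to a known-good state. The features, thresholds, and restore_from_snapshot() stub are hypothetical assumptions for illustration, not a description of any quoted organization's actual stack.

```python
# Minimal sketch: model-driven threat scoring with a self-healing hook.
# Features, thresholds, and the remediation stub are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Train on a baseline of "normal" request telemetry
# (columns: requests/min, failed logins, payload size in KB).
baseline = np.array([
    [60, 0, 4], [55, 1, 5], [70, 0, 3], [65, 2, 6], [58, 1, 4],
])
detector = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

def restore_from_snapshot() -> None:
    """Hypothetical self-healing step: roll back to a known-good state."""
    print("rolling back to last clean snapshot...")

def handle(event: np.ndarray) -> None:
    score = detector.decision_function(event.reshape(1, -1))[0]
    if score < -0.1:          # strongly anomalous: act autonomously
        restore_from_snapshot()
    elif score < 0:           # borderline: alert a human, not a blanket block
        print(f"suspicious event, score={score:.3f}, queued for review")
    # otherwise: allow, unlike a static rule that fires on any single metric

handle(np.array([900, 40, 512]))   # burst of failed logins, huge payload
```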
Nantha Ram, Head of Cybersecurity Engineering and Automation of a Global Technology & Engineering Company, says, “With AI-powered threats becoming more sophisticated, adaptive AI models should be leveraged to detect deviations and deepfake-based attacks in real-time. Explainable AI (XAI) will rise to ensure AI-driven security mechanisms do not become black-box solutions.”
PM Ramdas, CTO & Head – Cyber Security, Reliance Group adds, “Organizations need complete visibility into security tool decisions that protect enterprise infrastructure. Providers must offer comprehensive audit trails and explainable AI features that help maintain regulatory compliance and stakeholder trust.”
Ramdas points to another element cybersecurity tools should possess: integrated human oversight. “Solutions must include intuitive interfaces and controls that enable security teams to validate and override AI decisions when needed, especially during critical incidents.”
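A minimal sketch of what such human oversight might look like in practice: high-confidence, low-severity actions are applied automatically, while anything critical is routed to an analyst queue where it can be validated or overridden. The thresholds and queue interface below are illustrative assumptions, not any vendor's API.

```python
# Minimal sketch of a human-in-the-loop gate for AI-driven security actions.
from dataclasses import dataclass, field

@dataclass
class Decision:
    action: str          # e.g. "isolate_host"
    confidence: float    # model confidence, 0..1
    severity: str        # "low" | "high" | "critical"

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, decision: Decision) -> None:
        self.pending.append(decision)
        print(f"queued for analyst validation: {decision.action}")

    def override(self, decision: Decision, approve: bool) -> None:
        # The analyst, not the model, has the final word.
        print(f"{decision.action}: {'approved' if approve else 'overridden'} by analyst")

def dispatch(decision: Decision, queue: ReviewQueue) -> None:
    # Auto-apply only when the model is confident AND the blast radius is
    # small; critical incidents always get a human in the loop.
    if decision.confidence >= 0.95 and decision.severity != "critical":
        print(f"auto-applied: {decision.action}")
    else:
        queue.submit(decision)

queue = ReviewQueue()
dispatch(Decision("block_ip", 0.98, "low"), queue)
dispatch(Decision("isolate_host", 0.97, "critical"), queue)
queue.override(queue.pending[0], approve=False)
```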
Balancing cybersecurity and AI
Effective AI implementation requires strategic collaboration between security leaders and the C-suite to balance innovation with risk management.
Nantha Ram notes that since the advent of AI, the C-suite has been motivated to incorporate AI-driven innovation without hampering cybersecurity resilience efforts. “Establishing protocols such as Zero Trust for AI workflows ensures risk assessment is conducted before deploying AI tools in critical business processes. Regular engagement with the board and business leaders ensures risk visibility.” Harvinder Singh Banga, CIO, CJ Darcl Logistics, elaborates that while AI is a multifaceted technology, aiding everything from fleet management to demand forecasting, cybersecurity takes precedence.
“We work closely with business leaders, IT, and risk teams to balance innovation with security. AI-driven threat intelligence, real-time monitoring, and quarterly security audits help mitigate risks, while regular C-Suite briefings ensure informed decisions, preventing security trade-offs as we leverage AI for operational excellence.”
PM Ramdas explains that when executives understand the security implications of AI initiatives, they become strong advocates for balanced, secure implementation strategies. This proactive approach helps build a corporate culture where cybersecurity is viewed as an enabler of AI innovation rather than a hindrance.
Trust in the age of Deepfake AI
‘Seeing is believing’ was once an adage to live by, but in an era of sophisticated, hard-to-detect deepfake fraud courtesy of AI, it no longer quite rings true. Beyond public misinformation, organizations suffer major damage to business, reputation, and trust if the malicious side of AI runs unchecked.
Rohit Singh speaks of their ‘AI vs AI’ mechanisms to stay ahead of scammers. “We counter AI-driven fraud with AI-powered detection tools that analyse micro-expressions, vocal tonality, and inconsistencies in digital communications to identify deepfake attempts in real time. We also employ adaptive authentication, such as liveness detection, contextual MFA, and real-time identity challenges, to thwart impersonation attempts.”
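A toy version of the adaptive-authentication idea: contextual signals feed a risk score, and only the riskiest logins are stepped up to a real-time liveness challenge. The signals and weights below are assumptions for illustration, not Shaadi.com's actual logic.

```python
# Minimal sketch of contextual (risk-based) MFA step-up.
# Signal names and weights are illustrative assumptions.
def risk_score(ctx: dict) -> float:
    score = 0.0
    if ctx.get("new_device"):        score += 0.4
    if ctx.get("impossible_travel"): score += 0.5   # login geo jump too fast
    if ctx.get("off_hours"):         score += 0.2
    if ctx.get("tor_exit_node"):     score += 0.6
    return min(score, 1.0)

def authenticate(ctx: dict) -> str:
    score = risk_score(ctx)
    if score < 0.3:
        return "password only"
    if score < 0.7:
        return "password + OTP"
    # Highest-risk logins get a real-time identity challenge,
    # e.g. a camera liveness check that deepfake replays fail.
    return "password + OTP + liveness challenge"

print(authenticate({"new_device": True}))                             # OTP
print(authenticate({"new_device": True, "impossible_travel": True}))  # liveness
```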
Harvinder elaborates that AI-driven fraud can wreak havoc, as logistics relies on real-time coordination between fleet managers, vendors, and drivers. He says, “To mitigate risks, we employ a multi-layered security approach, including AI-driven anomaly detection to flag unusual bidding patterns and geo-fencing alerts to prevent unauthorized transport diversions. MFA and biometric verification enhance access security, reinforced by security awareness training. Additionally, AI-powered voice & video authentication and adaptive phishing detection models are planned for future implementation.”
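To make the geo-fencing idea concrete, here is a minimal sketch that flags a vehicle straying beyond an allowed corridor around its planned route. The waypoints, corridor width, and nearest-waypoint simplification are illustrative assumptions, not CJ Darcl's implementation.

```python
# Minimal sketch of a geo-fencing check on fleet telemetry.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

ROUTE = [(28.61, 77.21), (27.18, 78.01)]   # planned waypoints (Delhi -> Agra)
CORRIDOR_KM = 25                           # allowed deviation from the route

def check_position(lat: float, lon: float) -> None:
    # Simplification: distance to the nearest waypoint, not the route segment.
    nearest = min(haversine_km(lat, lon, wlat, wlon) for wlat, wlon in ROUTE)
    if nearest > CORRIDOR_KM:
        print(f"ALERT: vehicle {nearest:.0f} km off route, possible diversion")
    else:
        print(f"on route ({nearest:.0f} km from nearest waypoint)")

check_position(28.5, 77.25)   # near Delhi: fine
check_position(26.9, 75.8)    # Jaipur: well outside the corridor
```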
Good AI, Bad AI
“The ban on generative AI tools in government devices highlights the critical need for robust organizational policies on AI usage,” says PM Ramdas. “The key focus must be on balancing innovation with data protection through a structured approach to AI tool management.”
While AI is practically a fixture in organizations, bad actors are leveraging its precision, accuracy, and reach to infiltrate organizations and weaken defenses. The resulting loss of revenue, erosion of consumer trust, and reputational damage can be immense.
To mitigate this, Nantha Ram says, “Our AI models undergo rigorous data audits to ensure data diversity and eliminate biases stemming from imbalanced datasets. Ensuring diversity in data sources helps models make impartial decisions. Additionally, fairness metrics are implemented to prevent models from prioritizing or neglecting specific attack vectors.”
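A simple sketch of the kind of audit described: checking whether any attack vector is under-represented in training data, then comparing per-class recall so no vector is silently neglected. The class names, tolerance, and toy data are assumptions for illustration.

```python
# Minimal sketch of a data-balance audit plus a per-class fairness metric.
from collections import Counter

def audit_balance(labels: list[str], tolerance: float = 0.5) -> None:
    counts = Counter(labels)
    expected = len(labels) / len(counts)
    for vector, n in counts.items():
        if n < expected * tolerance:
            print(f"under-represented class: {vector} ({n} samples)")

def recall_by_class(y_true: list[str], y_pred: list[str]) -> dict[str, float]:
    hits, totals = Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        hits[t] += (t == p)
    return {c: hits[c] / totals[c] for c in totals}

labels = ["phishing"] * 900 + ["deepfake"] * 40 + ["malware"] * 860
audit_balance(labels)   # flags "deepfake" as under-represented

y_true = ["phishing", "deepfake", "malware", "deepfake"]
y_pred = ["phishing", "phishing", "malware", "deepfake"]
for vector, r in recall_by_class(y_true, y_pred).items():
    print(f"recall[{vector}] = {r:.2f}")   # low class recall is a fairness red flag
```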
PM Ramdas also emphasizes the importance of an AI ethics committee with diverse stakeholders. “The integration of human expertise in AI development and deployment is critical. Security teams should regularly review AI-flagged threats to validate detection patterns and establish fairness metrics that help identify potential discrimination early.”
DeepSeek-ing trouble
Freely available AI tools such as ChatGPT and DeepSeek have been major disruptors, and their ease and promptness, while tempting for employees, pose significant risks for organizations. The potential for glaring security breaches is serious enough that the Indian Ministry of Finance recently banned such tools on official devices, citing data security risks.
CIOs are unanimous on this issue: there is no room for unauthorized AI tools in an organization’s ecosystem.
To Harvinder Banga, AI security is paramount given CJ Darcl’s large logistics network. “Any AI tool usage requires exceptional approval from the IT Management and Governance Team. A secure AI sandbox environment allows controlled AI testing without enterprise risk. To reduce reliance on third-party AI, we are developing custom AI-powered internal chatbots.”
PM Ramdas says, “Organizations should maintain an approved list of AI tools that meet security requirements, while clearly defining restricted use cases.”
He and Rohit Singh also highlight the importance of tailored, consistent employee training on AI-related risks, including data leakage and prompt injection attacks, to create a security-conscious culture, with Rohit adding, “Robust DLP (Data Loss Prevention) measures track and prevent sensitive data from being shared with external AI systems. Regular audits ensure compliance with evolving regulations while promoting AI literacy among employees.”
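As a rough sketch of the DLP idea Rohit describes, the snippet below scans an outbound prompt for sensitive patterns and redacts them before the text could reach an external AI tool. The regex patterns are simplified assumptions; production DLP systems use far richer detectors.

```python
# Minimal sketch of a DLP check on outbound prompts to external AI tools.
import re

PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key":     re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def scan_prompt(prompt: str) -> str:
    """Redact sensitive matches before the prompt leaves the organization."""
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    if findings:
        print(f"DLP: redacted {', '.join(findings)}; event logged for audit")
    return prompt

print(scan_prompt("Summarize: card 4111 1111 1111 1111, contact ops@example.com"))
```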