Highlights
AI Integration and Cybersecurity Risks for Businesses
In recent months, enterprises have rapidly adopted AI to enhance workforce capabilities. As adoption accelerates, however, the associated cybersecurity risks are escalating. Although businesses recognise these threats, there is a notable gap between the policies they have on paper and real-world execution.
According to the Sprinto CISO Pulse Check 2026 report, nearly one in three organisations experienced a significant AI-related security incident in the past year, exposing a crucial “execution gap” in how this advanced technology is adopted and secured.
AI Governance Lags, Despite Awareness
Among organisations that suffered breaches, the threats identified include shadow AI usage, model inversion (reconstructing sensitive training data from a model’s outputs), and data poisoning (corrupting training data to manipulate a model’s behaviour). At the same time, the study indicated that over 70% of organisations are closely monitoring AI regulations and preparing to meet compliance standards.
Companies regard AI-related threats as distinct, requiring dedicated policies, monitoring, and controls. Yet many still do not manage these threats as part of their overall security strategy, confirming that implementation remains a significant concern.
Current Statistics on AI Policy Enforcement
- Just 21% of organisations have effective controls to prevent employees from sharing personal or sensitive data on public AI platforms.
- Close to 39% acknowledge that while AI usage policies are in place, these are not consistently enforced throughout the workforce.
- Only 22% of organisations have predominantly automated AI risk monitoring, with most relying on manual or semi-automated methods.
- Two-thirds of organisations require weeks or even months to enact necessary policies, which is considered dangerously slow in an AI-driven landscape.
Investment Rises as Governance Evolves
Despite the considerable gap in execution, organisations are beginning to make the necessary changes. The study revealed that approximately 69% of organisations have allocated budgets for AI risk management in 2026, with an additional 17% intending to follow suit.
Going forward, organisations should focus on implementing rigorous technical controls, conducting thorough AI risk assessments, and training employees on secure AI practices.
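As an illustration of the kind of technical control the report finds lacking, the sketch below shows a minimal pre-submission filter that redacts sensitive substrings before a prompt is sent to a public AI platform, and reports which rules fired so violations can feed into AI risk monitoring. The pattern set and function name are hypothetical examples, not part of the report; a production deployment would rely on a dedicated data-loss-prevention tool rather than ad-hoc regexes.

```python
import re

# Illustrative patterns only; real controls would use a proper DLP engine
# with far more robust detection than these simple regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact sensitive substrings before a prompt leaves the network.

    Returns the sanitised prompt and the names of the patterns that
    matched, so each violation can be logged for risk monitoring.
    """
    hits = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt, hits

clean, violations = redact_prompt(
    "Summarise this: contact jane@example.com, key sk-abcdef1234567890XYZ"
)
```

A filter like this would typically sit in an outbound proxy or browser extension, blocking or rewriting requests to known AI endpoints, which is one way to move an AI usage policy from paper into enforcement.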