AI-Driven Development Demands “Machine Scale” Security to Avoid Bottlenecks
The accelerating pace of modern software delivery is outpacing traditional security protocols, a challenge amplified by the integration of generative AI. Discussions at the Cyber Security & Cloud Expo Global 2026 highlighted a shift from theoretical “AI safety” debates to the practical realities of securing rapidly deployed applications, with experts emphasizing the need for trust embedded within automated workflows.
The core issue, as articulated by industry leaders, is that the velocity of development is creating friction with governance. Organizations are grappling with how to maintain security without sacrificing speed.
The Expanding Definition of a Secure Software Supply Chain
The very definition of a secure software supply chain is undergoing a fundamental change. According to a senior official at Sonatype, development automation necessitates a revised approach to establishing trust. Analysis of the company’s recent ‘State of the Software Supply Chain’ report reveals emerging risks stemming from open-source downloads and the increasing use of AI-assisted coding tools.
“Automation changes threat models,” the official stated. “Manual code review simply fails when AI agents are generating code at a high volume.”
To maintain development velocity, engineering teams must integrate security checks directly into the continuous integration pipeline. Establishing trust at “machine scale” – automating security validation – is now seen as the only viable path forward. Relying on human security gatekeepers at the end of a sprint inevitably creates delivery bottlenecks.
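The idea of an automated gate can be sketched in a few lines. This is a minimal illustration only: the package names, versions, and advisory strings below are hypothetical placeholders, not real vulnerability data, and a production pipeline would pull from a live advisory feed rather than a hard-coded list.

```python
# Sketch of a "machine scale" security gate for a CI pipeline: the build
# fails automatically when a dependency matches a known-bad entry, rather
# than waiting on a human reviewer at the end of a sprint.

# Hypothetical advisory data for illustration only.
BLOCKED = {
    ("leftpad", "1.0.0"): "CVE-XXXX-0001 (example)",
    ("fastjson", "2.3.1"): "CVE-XXXX-0002 (example)",
}

def gate(dependencies):
    """Return a list of violations; an empty list means the build may proceed."""
    violations = []
    for name, version in dependencies:
        advisory = BLOCKED.get((name, version))
        if advisory:
            violations.append(f"{name}=={version}: {advisory}")
    return violations

if __name__ == "__main__":
    deps = [("requests", "2.31.0"), ("fastjson", "2.3.1")]
    for violation in gate(deps):
        print("BLOCKED:", violation)
```

In a real pipeline, a non-empty result would cause the CI job to exit non-zero, blocking the merge without any human in the loop.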
Data Context and the Rise of Data Exfiltration
Legacy security technologies are proving inadequate because they often lack context about the data they are designed to protect. A solutions engineer at Concentric AI emphasized that data security must evolve from static, perimeter-based controls toward treating data as a dynamically managed asset. This is particularly critical for developers rolling out GenAI applications, which ingest and process massive datasets.
“AI and context are necessary to identify risk within these environments,” the engineer noted.
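As a rough illustration of what "context" can mean in practice, the sketch below tags records by sensitivity before they reach a GenAI ingestion pipeline. The patterns, labels, and routing logic are simplified assumptions for illustration, not Concentric AI's actual approach:

```python
import re

# Toy sensitivity classifier: attach context to data before a GenAI
# application ingests it, so downstream controls can treat records
# differently based on what they contain.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text):
    """Return the set of sensitive-data labels found in a record."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

def route(record):
    """Quarantine records containing sensitive data; ingest the rest."""
    labels = classify(record)
    return ("quarantine", labels) if labels else ("ingest", labels)
```

A record such as `"contact: alice@example.com"` would be routed to quarantine with the `email` label attached, while neutral text flows straight to ingestion.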
The consequences of failing to secure this data are severe. Reports indicate that a staggering 94% of ransomware attacks now involve data exfiltration, with attackers prioritizing data theft over simple encryption. This demands a multi-layered prevention strategy, and for platform engineers, backup and recovery systems alone are insufficient; threat detection must occur closer to the data source.
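One way to picture "detection closer to the data source" is per-host anomaly detection on outbound transfer volume, since a sudden spike above a host's own baseline is a common early signal of exfiltration. The threshold, field names, and data below are illustrative assumptions, not a production detector:

```python
from statistics import mean, stdev

def exfil_suspects(history, current, z_threshold=3.0):
    """Flag hosts whose current outbound byte count exceeds their
    historical mean by more than z_threshold standard deviations.

    history: {host: [bytes_out per interval, ...]} (needs >= 2 samples)
    current: {host: bytes_out in the current interval}
    """
    suspects = []
    for host, now in current.items():
        samples = history.get(host, [])
        if len(samples) < 2:
            continue  # not enough baseline to judge this host
        mu, sigma = mean(samples), stdev(samples)
        if sigma == 0:
            sigma = 1.0  # avoid division by zero on perfectly flat baselines
        if (now - mu) / sigma > z_threshold:
            suspects.append(host)
    return suspects
```

For example, a database host that normally moves roughly 1 MB per interval and suddenly pushes 500 MB outbound would be flagged immediately, long before a backup-and-recovery system would notice anything wrong.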
Managing a Chaotic Attack Surface
Rapid development cycles and unmonitored assets contribute to what one expert at Outpost24 described as a “chaotic attack surface.” This complexity requires innovative approaches to security. The company’s session on “Modern External Attack Surface Management” explored how engineering teams can leverage “unlikely synergies” to secure risky endpoints that often bypass standard inventory checks. The goal for DevSecOps teams is to bring these assets under management before they become entry points for attackers.
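A simplified version of "bringing assets under management" is to diff what is actually exposed (for instance, hosts found via DNS enumeration or external scanning) against the sanctioned inventory, and surface anything unmanaged. The hostnames and data below are hypothetical:

```python
def unmanaged_assets(discovered, inventory):
    """Return externally visible hosts absent from the managed inventory --
    the endpoints that bypass standard inventory checks."""
    return sorted(set(discovered) - set(inventory))

if __name__ == "__main__":
    # Hypothetical scan output vs. the asset register.
    discovered = {"api.example.com", "staging-old.example.com", "www.example.com"}
    inventory = {"api.example.com", "www.example.com"}
    print(unmanaged_assets(discovered, inventory))
```

Running this reconciliation continuously, rather than during periodic audits, is what turns a one-off asset discovery exercise into ongoing external attack surface management.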
AI Integration and Cloud Infrastructure Resilience
As AI applications transition to production, their integration into cloud infrastructure requires specific architectural standards focused on cyber resilience. A chief technology innovation officer at Fixed Solutions outlined a lifecycle for engaging with AI through cloud systems, emphasizing increased automation and enhanced data analytics as key components.
A panel discussion featuring leaders from JPMorgan Chase, Saint-Gobain, and TSMC underscored the importance of not hindering developer experience with overly restrictive security measures. These executives argued that developers need to optimize cloud infrastructure use while maintaining robust governance.
The Human Factor in AI Security
While technical controls remain essential, the behavioral aspect of AI security is gaining prominence. A head of Human-Centred AI and Innovation at Standard Chartered warned that autonomous AI agents introduce risks related to psychological and behavioral manipulation.
“AI behavior can steer user decisions in ways that traditional technical vulnerabilities miss,” the executive cautioned. This adds a new dimension to the threat model for AI developers, who must now consider how their systems might inadvertently manipulate human operators.
Another senior architect at National Highways advocated for embedding cyber resilience into enterprise strategy, arguing for a “human-centric security” approach that integrates practitioner fundamentals with broader business goals. Designing systems that account for human behavior, rather than relying solely on technological enforcement, is paramount.
Ethical and Technical Challenges at the Intersection of AI and Cybersecurity
The convergence of AI and cybersecurity presents complex ethical challenges that directly impact technical implementation. A panel discussion involving representatives from Santander, The Adecco Group, and National Highways examined how AI is reshaping modern threat detection and response. While AI offers powerful new tools for defense, it also introduces operational complexities that require careful management.
The traditional separation between “development” and “security” is becoming obsolete. Whether it’s the supply chain risks identified by Sonatype, the data context demanded by Concentric AI, or the human-centric design advocated by Standard Chartered, the solution lies in integration.
Ultimately, security must be a fundamental property of the platform itself, not a layer applied as an afterthought.
Want to learn more about cybersecurity from industry leaders? Check out Cyber Security & Cloud Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the AI & Big Data Expo.
