At Google Cloud Next ’26 in Las Vegas, the company unveiled what it calls the Gemini Enterprise Agent Platform, a rebranded and expanded version of Vertex AI designed to let businesses build, test, and deploy AI agents that can autonomously execute workflows across Google Cloud and Workspace using natural language.
The platform, announced by Google Cloud CEO Thomas Kurian ahead of the keynote, integrates Gemini 3.1 Pro, Gemini 3.1 Flash Image, Lyria 3, and Anthropic’s Claude Opus 4.7, all accessible through a low-code interface called Agent Studio. It includes a simulation environment for stress-testing agents, a registry for governance, a marketplace for third-party agents, and a semantic layer enabling cross-data reasoning.
Central to the architecture is Anthropic’s Model Context Protocol (MCP), which allows agents to interact with every Google Cloud and Workspace service as if they were native tools. Kurian described the shift as a response to evolving user demand: “Now we’re seeing, as the models evolve, people wanting to delegate tasks and sequence of tasks or workflows to agents, and these agents then being able to turn around and use a computer, use all of GCP and Workspace as a tool.”
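MCP itself is an open, JSON-RPC 2.0-based protocol: a client asks a server what tools it exposes (`tools/list`) and then invokes one by name (`tools/call`). The sketch below illustrates the message shape only; the tool name `calendar.create_event` and its arguments are hypothetical examples, not an actual Workspace MCP surface.

```python
import json

def mcp_request(req_id, method, params):
    """Build a JSON-RPC 2.0 message, the envelope used by the Model Context Protocol."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# Step 1: discover the tools a server exposes.
list_tools = mcp_request(1, "tools/list", {})

# Step 2: invoke one tool by name. The tool and its arguments here are
# hypothetical, for illustration only.
call_tool = mcp_request(2, "tools/call", {
    "name": "calendar.create_event",
    "arguments": {"title": "Quarterly review", "start": "2026-04-14T10:00:00Z"},
})

print(json.dumps(call_tool, indent=2))
```

Because every service is wrapped in the same two-step discover-and-call pattern, an agent needs no service-specific client code, which is what lets it treat "all of GCP and Workspace as a tool."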
This marks the third rebranding of Google’s enterprise AI offering in under two years — from Agentspace (launched December 2024) to Gemini Enterprise, and now to the Gemini Enterprise Agent Platform — reflecting an ongoing effort to consolidate fragmented features into a cohesive system. Google insists this evolution is not merely cosmetic; all Vertex AI services will now be delivered exclusively through the Agent Platform.
Kurian, who joined Google in 2018 after a 22-year tenure at Oracle as President of Product Development, emphasized that the company’s internal use of its own cloud infrastructure — “eating its own dogfood,” as he put it in prior interviews — validates the platform’s readiness for external customers. He noted that Google itself runs on the same stack as Google Cloud, a point echoed by Google CEO Sundar Pichai, who revealed that half of Google’s capital expenditure is now directed toward cloud infrastructure.
The move comes as Google seeks to close the gap with Amazon Web Services and Microsoft Azure in the enterprise cloud market, where it has long trailed despite strengths in AI research. By bundling its leading models with open access to Anthropic’s Claude and positioning agents as active partners rather than passive tools, Google is betting that usability and integration will outweigh pure scale.
Security, a recurring theme in Kurian’s discussions, remains a foundational pillar. He reiterated that enterprise adoption hinges on trust, particularly as agents gain autonomy to act on sensitive data across systems. The platform’s governance tools — including access controls and audit trails — are positioned as enablers of safe deployment.
For non-developers, the Gemini Enterprise app — a web-based interface — provides a centralized space to discover, create, share, and run agents without requiring machine learning expertise. This democratization effort aims to extend agent development beyond data science teams to business analysts and operations staff.
Whether this platform can finally translate Google’s AI advantages into cloud market share remains uncertain. Competitors are also advancing their agent capabilities, and enterprises may hesitate to lock into a single vendor’s ecosystem, even one as integrated as Google’s. But for now, the company has moved beyond pilots and promises: agents are no longer theoretical. They are running at scale — for Google, and now, it hopes, for its customers.
What exactly is the Gemini Enterprise Agent Platform, and how does it differ from Vertex AI?
The Gemini Enterprise Agent Platform is the rebranded and expanded evolution of Vertex AI, combining Google’s leading AI models, low-code agent building via Agent Studio, governance tools, a third-party marketplace, and a semantic layer for cross-data reasoning. Unlike Vertex AI, which offered discrete AI services, the Agent Platform delivers all capabilities exclusively through this unified system, with Anthropic’s Model Context Protocol enabling agents to work across every Google Cloud and Workspace service.
Why is Google emphasizing that it uses its own cloud infrastructure for internal operations?
Google highlights that it runs on the same infrastructure as Google Cloud to demonstrate the platform’s reliability and readiness for enterprise use. This “eat your own dogfood” approach, stressed by Google Cloud CEO Thomas Kurian, serves as proof that the system can handle real-world, scaled workloads — not just theoretical or pilot projects — thereby building trust with external customers concerned about performance and security.

