Google has quietly entered the race for high-fidelity, AI-powered speech-to-text with the release of a new offline-capable dictation app titled “Google AI Edge Eloquent.” Currently available on iOS, the app aims to move beyond simple transcription by transforming the messy reality of human speech into polished, professional prose without requiring a constant internet connection.
Unlike traditional dictation tools that capture every “um,” “ah,” and mid-sentence correction, Eloquent is designed to capture intended meaning rather than a verbatim record. By leveraging on-device models, the app filters out filler words and self-corrections in real time, effectively acting as an editor that cleans up speech as it is spoken.
Coming from a software engineering background, I find the “Edge” designation in the app’s name particularly telling. In the industry, edge computing refers to processing data locally on the device rather than sending it to a centralized cloud server. For a dictation app, this means lower latency and a significant boost to user privacy, since the audio doesn’t have to leave the phone to be understood.
On-device intelligence and the Gemma framework
The core of the app’s functionality relies on Gemma-based automatic speech recognition (ASR) models. Once these models are downloaded to the device, the app can perform transcription entirely offline. This local-first approach allows users to dictate in areas with poor connectivity while maintaining a high level of accuracy.

While the app prioritizes local processing, it does offer a “cloud mode.” When enabled, the app utilizes cloud-based Gemini models to handle more complex text cleanup and refinement. Users have the autonomy to toggle this off if they prefer local-only processing for sensitive conversations or data security.
Beyond transcription: Text transformation
The app does more than simply convert audio to text; it allows users to reshape that text based on the intended output. After a user pauses their dictation, Eloquent provides several transformation options to change the tone and length of the prose:
- Formal: Adjusts the language for professional emails or documents.
- Key points: Distills a long spoken thought into a concise bulleted list.
- Short: Trims the fat for quick messages.
- Long: Expands on the dictation for more detailed drafting.
To improve accuracy with specific terminology, the app can import names, keywords, and industry jargon directly from a user’s Gmail account. This reduces the common frustration of AI mishearing proper nouns or technical terms. Users can manually add custom words to a personal dictionary to further refine the ASR performance.
For those tracking their productivity, the app includes a history of transcription sessions. This includes searchable archives and metrics such as total word count and words-per-minute (WPM) speed.

A shifting strategy for Android and iOS
The rollout of Google AI Edge Eloquent has been somewhat opaque. While the app is currently available on iOS, early App Store listings contained references to a version for Android. These descriptions initially promised “seamless Android integration,” including the ability to set Eloquent as the default system keyboard and the use of a floating button for quick access to transcription across any app.
However, Google has since updated the App Store listing to remove those Android references. In their place, the company has added a note indicating that an iOS keyboard is coming soon, suggesting a pivot toward refining the iPhone experience before expanding further.
This move places Google in direct competition with a growing crop of AI-first dictation startups, including Wispr Flow, SuperWhisper, and Willow. These apps have carved out a niche by focusing on the “intent” of the speaker rather than the literal transcription, a trend Google is now embracing with an experimental, offline-capable tool.
Whether this remains a standalone experimental app or eventually integrates into the broader Gboard or Android system remains to be seen. However, the successful testing of these Gemma-based models on iOS could pave the way for a more robust, on-device transcription experience for Android users in the future.
The next confirmed checkpoint for the app is the upcoming release of the iOS keyboard, which will allow users to dictate directly into other applications without leaving the Eloquent interface.
Do you prefer on-device AI for privacy, or are you comfortable with cloud processing for better accuracy? Let us know in the comments.
