Android 16 QPR3: Pixel 10 Screen Automation for PC Use

by Priyanka Patel

Gemini Set to Gain “Screen Automation” Capabilities on Android, Signaling Major AI Expansion

Gemini, Google’s advanced artificial intelligence model, is poised to significantly expand its capabilities on Android devices, as evidenced by a newly discovered “Screen automation” permission within the latest Android 16 QPR3 Beta 2. This development suggests Google is actively preparing to allow Gemini to interact with and control other apps on users’ smartphones, mirroring functionality currently available to AI Ultra subscribers on the desktop web.

The discovery, made through a process called “APK Insight” – decompiling the latest Google app uploaded to the Play Store – reveals a potential shift in how users interact with their devices. Decompiling application files allows researchers to uncover hints of future features. While Google may not ultimately ship every feature discovered this way, the presence of this permission strongly indicates the company’s intentions.

Did you know? – APK Insight involves examining an app’s code without installing it. This allows researchers to identify unreleased features and changes Google is developing, offering a glimpse into future updates.
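To illustrate the technique, here is a minimal shell sketch of an APK Insight-style search. The directory layout and string content below are simulated stand-ins (a real analysis would first decode an actual APK with a tool such as apktool); only the “computer_control” identifier comes from the report.

```shell
# Simulated decoded-APK layout; a real workflow would produce this with
# something like: apktool d google-app.apk -o decoded/
mkdir -p decoded/res/values
cat > decoded/res/values/strings.xml <<'EOF'
<resources>
    <!-- Hypothetical stand-in for an unreleased feature string -->
    <string name="computer_control_stop_task">Stop task &amp; open app</string>
</resources>
EOF

# Search the decoded resources for internal feature identifiers
grep -rn "computer_control" decoded/res/
```

Decompiled string resources like these are how unreleased UI text – such as the “Stop task & open app” button – can surface before a feature ships.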

New “Screen Automation” Permission Unveiled

Within the settings of Android 16 QPR3 Beta 2, a new “Special app access” item has appeared: “Screen automation.” The description clarifies that this permission will “Allow apps to help you complete tasks by interacting with other apps’ screen content.” Currently, this permission is only visible on Pixel 10 devices, with no appearance on the Pixel 9 series, raising questions about a potential phased rollout.

The Google app is currently the only application listed as supporting this permission, offering users three options: “Always allow,” “Ask every time” (the default), and “Don’t allow.” The full permission description explains that granting it will enable the app to “see and interact with other apps’ screen content to help you complete tasks, even when the apps are in the background.”

Pro tip – Be cautious when granting “Screen automation” permissions. Allowing an app to control others raises privacy and security concerns; review app permissions carefully before enabling.

“Computer Control” – The Underlying Code

The internal code strings associated with this functionality are currently referred to as “computer_control,” hinting at the broader ambition of enabling Gemini to function as a comprehensive digital assistant capable of navigating and manipulating the user interface.

Users will be presented with options to manage task progress, with prompts like: “To view task progress, open the [app name] app. If you stop the task now, you may lose your progress. To restart, open the [app name] app.” A “Stop task & open app” button will likely be included for user control.

Building on Project Astra

This Android integration builds upon demonstrations from earlier in the year. In May, Google showcased Project Astra, the technology underpinning future Gemini Live capabilities, successfully scrolling through Chrome for Android and clicking within the YouTube app. The demo illustrated Google’s vision for AI agents navigating digital environments “just as humans do: by clicking, typing and scrolling.”


Key Questions Answered

* Why: Gemini is gaining screen automation capabilities to expand its functionality on Android, allowing it to interact with and control other apps. This mirrors existing desktop functionality and builds on Project Astra.
* Who: Google is developing the feature for Gemini; it currently appears in the Android 16 QPR3 Beta 2, and only on Pixel 10 devices.
* What: The new “Screen automation” permission allows apps (currently only the Google app) to interact with other apps’ screen content to complete tasks. The underlying code refers to the feature as “computer_control.”
* What’s next: The feature is still in beta, and its final form and rollout timeline remain unresolved, though Google’s Project Astra demonstrations signal the company’s broader vision for AI agents navigating digital environments.
