Two former Apple engineers, who previously worked on the ambitious Vision Pro headset, have unveiled a new piece of AI hardware that feels like a nostalgic trip back to the mid-2000s. The device, simply called “Button,” is a generative AI chatbot housed in a wearable case that bears a striking, deliberate resemblance to the original iPod Shuffle.
Created by Chris Nolet and Ryan Burgoyne, the device attempts to carve out a niche in the increasingly crowded and scrutinized market of AI wearables. Whereas the industry has struggled to justify why users need a dedicated piece of hardware when they already carry a powerful smartphone, the ex-Apple engineers are betting that a physical, tactile experience will outweigh the convenience of a mobile app.
The device functions as a bridge to a generative AI chatbot. By pressing the physical button, users prompt the AI to listen, answer questions, or execute commands. Responses are delivered either through a built-in speaker or via Bluetooth to connected smart glasses and earbuds.
Solving the Privacy Paradox
One of the most significant hurdles for first-generation AI wearables has been the “always-on” nature of their microphones. Devices like the Humane AI Pin and the Rabbit R1 faced criticism for their privacy implications, as they often rely on ambient listening or complex gestures to trigger responses.
The Button takes a more traditional approach to user agency. By requiring a physical press to activate the chatbot, Nolet and Burgoyne have effectively sidestepped the privacy concerns associated with devices that constantly monitor their surroundings. This “intent-based” activation ensures the AI listens only when the user explicitly decides it should.
However, this solution introduces a different problem: the “why” of the hardware. In the current tech ecosystem, a button that triggers a chatbot is essentially a physical shortcut for an action that can be performed via a voice command (like “Siri” or “Hey Google”) or a single tap on a smartphone screen.
The Struggle for Hardware Justification
The history of personal electronics is defined by convergence. The smartphone famously absorbed the functionality of the standalone camera, the GPS navigator, and the MP3 player. To many industry observers, the push for dedicated AI hardware feels like an attempt to reverse that trend—essentially trying to invent a standalone device for a service that is already natively integrated into the phone in our pockets.
This sentiment has been echoed in the critical reception of recent AI gadgets. Tech reviewer Marques Brownlee notably described the Humane AI Pin as the worst product he had ever reviewed, while the Rabbit R1 was characterized as barely reviewable. The core issue remains the same: if the device does not offer a fundamental capability that a smartphone lacks, the friction of carrying an extra piece of hardware becomes a deterrent.
When pressed on why the Button isn’t simply an app, Chris Nolet drew a parallel to the evolution of the internet. “You can use the internet on your PC, but it’s better on the phone,” Nolet said. “The new innovation is AI. You can use AI on your PC, you can use it on your phone, but our pitch is that it’s better on the Button.”
Despite the analogy, the specific technical or experiential reasons why it is “better” remain undefined. The creators have yet to explain what specific utility the Button provides that a well-designed mobile application or a smartwatch complication could not replicate.
Comparing the New Wave of AI Wearables
To understand where the Button fits into the current landscape, it is helpful to look at how it differs from its predecessors in terms of interaction and privacy.
| Device | Trigger Mechanism | Primary Privacy Approach | Form Factor |
|---|---|---|---|
| Button | Physical Press | Manual Activation | Clip-on/Pendant |
| Humane AI Pin | Voice/Gesture | LED Indicators | Chest Pin |
| Rabbit R1 | Push-to-Talk Button | App-based Control | Handheld |
The Path Forward for AI Hardware
The Button represents a fascinating experiment in “minimalist” hardware. By stripping away the screen and the complex interfaces of previous attempts, Nolet and Burgoyne are testing whether a single, tactile point of entry is enough to change how we interact with large language models.
For those of us who spent years in software engineering, the instinct is always to ask whether the hardware is a “feature” or a “product.” If the value proposition is solely speed of access, the Button is a feature. If it evolves to handle tasks that require a dedicated physical presence—perhaps through integration with other wearables or specialized sensors—it could grow into a product.
As the industry continues to iterate, the success of the Button will likely depend on whether the creators can move beyond the “it’s just better” pitch and define a specific, indispensable use case that justifies its place on a user’s clothing.
Further updates on the Button’s availability and specific software capabilities are expected as the creators move from the prototype phase toward a broader release. We will continue to monitor official announcements regarding its production timeline.
What do you think? Would you carry a dedicated AI button, or is your smartphone enough? Let us know in the comments.
