Apple AI: Visual Intelligence Powers New AirPods, Smart Glasses & More

by Priyanka Patel, Tech Editor

Apple’s push into artificial intelligence is taking shape, and the company appears to be betting heavily on a technology it calls “Visual Intelligence.” Although the tech giant has teased AI capabilities for its devices for months, a recent report from Bloomberg details how this technology—essentially computer vision—will be central to a range of upcoming products, including the next AirPods, smart glasses, and even an AI pendant. The focus on Visual Intelligence, however, has some observers wondering whether Apple is truly breaking new ground or simply repackaging features already available in competing devices.

The core idea behind Visual Intelligence is to give Apple devices the ability to “see” and interpret the world around them. This isn’t entirely new; similar capabilities are already found in smart glasses like the Ray-Ban Meta AI glasses, which can identify objects, translate text, and provide contextual information. But Apple aims to integrate this technology across a broader spectrum of devices, potentially reaching a much wider audience. The company’s CEO, Tim Cook, has hinted at the importance of this feature, drawing parallels to how he previously emphasized health sensors before the Apple Watch and augmented reality before the Apple Vision Pro, according to Bloomberg’s Mark Gurman.

The applications of Visual Intelligence, as outlined by Bloomberg, range from the mundane to the potentially useful. The technology could identify items on a plate of food, provide detailed turn-by-turn navigation instructions based on landmarks, or even remind users to complete tasks when they approach specific objects. While the navigation use case is noted as potentially novel, many of these features are already present in existing smart glasses and AI-powered devices. The upcoming Apple smart glasses are expected to feature an advanced camera system capable of capturing high-resolution photos and videos, as well as providing visual data to Siri. The AI pendant, reminiscent of the recently discontinued Humane Ai Pin, will use a lower-resolution camera for visual insight but won’t be capable of taking photos or videos. Even the next generation of AirPods is slated to include a camera, though its primary function will be to gather information rather than capture images.

However, the reliability of computer vision remains a significant hurdle. As someone who has tested the Ray-Ban Meta AI glasses, I’ve found that computer vision isn’t always accurate. The technology can misidentify objects and struggle with complex scenarios, making it difficult to fully trust in everyday use. This is a challenge Apple will need to overcome if it wants Visual Intelligence to be more than a gimmick. The technology does hold promise for accessibility, but Apple’s current pitch doesn’t appear to be focused on that area.

The Reliance on Existing AI Models

Apple’s Visual Intelligence isn’t being built in a vacuum. The existing features within iOS already rely heavily on established AI models from other companies. Currently, Visual Intelligence leverages OpenAI’s ChatGPT, and will soon incorporate Google’s Gemini, as reported by Bloomberg. These models, while powerful, are not without their flaws and are known to occasionally produce inaccurate or misleading results. This dependence on external AI raises questions about Apple’s ability to truly differentiate its Visual Intelligence offering.

A Crowded Field of AI Gadgets

Apple isn’t the only company exploring the potential of AI-powered wearables. Meta, with its Ray-Ban Meta AI glasses, has already made significant strides in this area. The failed Humane Ai Pin, recently acquired by HP, serves as a cautionary tale about the challenges of bringing an AI-first device to market. Apple’s success will depend on its ability to offer a compelling user experience that addresses the shortcomings of existing products. The company’s track record of seamless integration and user-friendly design could give it an edge, but it will need to deliver on the promise of reliable and useful AI features.

The current state of AI gadgets suggests a broader struggle to find practical applications for computer vision. While the technology has the potential to be transformative, it’s still unclear whether consumers are ready to embrace devices that constantly analyze their surroundings. Apple’s vision for Visual Intelligence, while ambitious, may not be significantly more useful than OpenAI’s reported plans for a smart speaker with a camera – a concept that has already been met with skepticism.

What’s Next for Apple’s AI Push?

Apple is expected to unveil its AI-centric hardware later this year, with a potential launch event in March alongside a low-end MacBook, according to Bloomberg. The company’s ability to deliver on its promises will be crucial to its success in this rapidly evolving market. For now, it remains to be seen whether Apple can overcome the challenges of computer vision and create AI gadgets that truly enhance our daily lives.

What do you think of Apple’s plans for Visual Intelligence? Share your thoughts in the comments below.
