No matter how much I’ve interacted with various AI chatbots, they all seem to share one trait: a penchant for confident inaccuracies. I recently tested the Ray-Ban Meta smart glasses, equipped with built-in AI image recognition, by asking about my usual, somewhat nerdy hobbies. Unfortunately, the glasses struggled to engage with any of the books, figures, prints, or toys I showcased. The only experience akin to this disconnect was during awkward conversations with my father.
During the Meta Connect conference, CEO Mark Zuckerberg unveiled various updates for the Ray-Ban glasses. Among the new features announced were reminders, allowing users to ask, “Where did I park my car?” Users can now also scan QR codes and easily reply to friends on apps like WhatsApp or Messenger. Future updates promise real-time translation and live video scanning, enabling the AI to comment on what you’re currently seeing.
However, I wouldn’t trust this AI to accurately describe my room’s decor, let alone items in a supermarket. I received a pair of the new Headliner frames with transition lenses, and while they’re far more aesthetically pleasing than my old, yellowed shades, the photo quality is merely decent compared to my iPhone’s. The built-in audio offers a better listening experience than most laptop speakers, making the glasses suitable for personal audio while relaxing at the beach.
While the messaging and music integrations are solid, I aimed to see how well this AI wearable performed, especially given the failures of other devices I’ve tried. I took the Meta Ray-Bans around my apartment, querying them about my collections of tabletop RPGs, wall prints, comic character statues, and my stash of Warhammer 40K fiction. Unfortunately, it felt like conversing with a brick wall—one indifferent to fantasy or science fiction, merely pretending to engage. Unlike my dad, who sometimes makes an effort, these glasses could not disguise their lack of interest.
Does Meta’s AI Lack Nerdy Information in its Training Data?
When I pointed the glasses at my metal print depicting a scene from the 2019 RPG Disco Elysium, they mistakenly identified it as Borderlands. They confused Detective Kim Kitsuragi for Claptrap and thought Harry Dubois—also known as “Tequila Sunset”—was one of the vault hunters. When I asked them about my gaming setup, they confidently informed me that my PlayStation 5 was actually a PlayStation 4.
I tried using the glasses on various pieces of memorabilia, both niche and mainstream. For instance, when I showed them my action figures from Brian K. Vaughan and Fiona Staples’ Saga comics, they inaccurately claimed one was Dr. Strange. Likewise, my statue of Marv from Sin City was identified as The Hulk. It seems as though the glasses’ AI conflates all nerdy content with Marvel characters. When I displayed my prints of Samus Aran from the Metroid series, Meta insisted they resembled Iron Man.
Even when the glasses got something right, the details were shaky. They correctly read the titles of some indie RPG rulebooks, like Deathmatch Island and Lacuna, but suggested these games were related to Warhammer miniature wargaming. Yes, I play Warhammer 40K, but those books have no connection to it.
On the upside, the device recognized who Luigi was, indicating that Nintendo’s influence reaches beyond the confines of my nerd bubble. Still, you’d think an AI could differentiate between a Pokémon and a Korok from The Legend of Zelda.
Despite these shortcomings, the glasses could provide some factual answers, though they often lacked depth. They recognized Dan Abnett as the author, for instance, but when I asked how many works he has produced for Games Workshop’s Black Library, they claimed, “Over 25, but the exact number is unknown.” In reality, the count is significantly higher, closer to 50 once you account for everything.
Meta’s current AI also seems a step behind, still relying on Llama 3.1 70B models, which may not adequately handle everyday queries. The glasses lack access to location data, which might actually be a benefit, but it means they couldn’t tell me where the closest boba tea shop was, despite there being two within three blocks of my location.
I struggled to access QR code scanning and the new reminders feature, despite having the latest update. While reminders seem like a practical feature for the glasses, they also raise privacy concerns: if you capture a photo of something sensitive like your license plate and ask the AI to analyze it, know that Meta collects that data to improve its AI.
Meta has deliberately limited the AI models out of privacy concerns; they won’t identify or comment on faces or individuals. That doesn’t stop you from snapping photos of people discreetly, though, and the Ray-Ban Meta glasses still carry significant privacy implications. A group of university students managed to pair the glasses with facial recognition software, allowing them to pull additional information from the internet, including names and other personal data.
That’s not how the Ray-Ban Metas were intended to be used, and a Meta representative noted that such facial recognition could apply to any camera, not just their glasses. Still, Meta has worked to keep the cameras discreet, designing glasses that appeal primarily to influencers eager to share images on Instagram. As it stands, the AI doesn’t offer much beyond being a quirky addition to social media posts.
Ultimately, these fashionable smart glasses may not cater to enthusiasts who attend events like New York Comic Con, seeking in-depth insights into characters and collectibles. In their current iteration, I wouldn’t rely on their AI capabilities beyond mere entertainment.