
Poll: Would you wear Intel’s new smartglasses in public?

Intel has had numerous AR glasses projects in the works, but now we've finally seen an actual product. It's called Vaunt, and based on a report from The Verge, it might actually do what Google Glass never could.

Vaunt uses an advanced technology that projects a low-power laser directly onto your retina, delivering a running feed of notifications and information without an actual screen. For example, you might get message notifications or map directions, sent via Bluetooth from your phone directly to your eyes. Vaunt doesn't even have touch or voice controls; instead, it relies on eye movement to do things like dismiss notifications or make them disappear from view altogether.

According to Intel, “The design intent was always zero social cost.” But is that really how people feel about AR glasses?

We can all recall Google Glass and the backlash that followed. Remember the coffee shop that banned them? How about the "Stop the Cyborgs" campaign and the proliferation of the term "Glassholes"?

Once word got out, it didn't take long for Google Glass to disappear altogether. The situation may have been overblown, but there's no question the public wasn't ready for wearables that were quite so invasive. Intel appears to have sidestepped the issue by making the glasses fairly nondescript and leaving out a built-in camera entirely. Better still, they come in multiple styles and work with prescription lenses.

Luke Larsen
Luke Larsen is the senior editor of computing, managing all content covering laptops, monitors, PC hardware, Macs, and more.