The latest HoloLens video clarifies its limited field-of-view

Coming out of E3, where we sat down and got hands-on time with the Microsoft HoloLens, one thing was abundantly clear: the technology on offer is as cool as the field of view is small. Now, Microsoft has released a video that gives would-be users a better look at the FOV limitation, so those who can't go hands-on with the device itself can see what they'll actually be working with.

During its press briefing at E3 2015, Microsoft showed off a demo of the HoloLens with a special camera that let us see the world of Minecraft playing out on a table in 3D. Even for the most cynical folks on the Internet, it was hard not to be impressed. Sure, the demo-er looked ridiculous pulling and twisting at his table when the non-HoloLens view was shown, but when we could see what he was seeing, it was awesome.

But we weren’t actually seeing what he was seeing, because for those viewing at home, the HoloLens projection filled the entire screen. In reality, the projection only appears in a fairly small box directly in front of the user’s eyes. As our own Matt Smith put it after his time with the device:

The biggest disappointment is the field of view. What’s not obvious in Microsoft’s demos is that holograms only appear in a box directly before you. Its size is difficult to describe, but it seemed to consume about two-thirds of my field of view.

Understanding this limitation is difficult with imagination alone, but the video makes it a bit clearer. The new clip, which shows how HoloLens technology is being used to teach medical students at Case Western Reserve University, includes only a handful of shots simulating the limited FOV, but you’ll find them at 0:49, 0:58, 1:23, 1:37, and 1:42.

Dave LeClair
Former Digital Trends Contributor