
Ukrainian military testing HoloLens-equipped headsets to improve field of view

As gamers, we’ve been using augmented reality features on the battlefield for decades. Ever since the earliest first-person shooters, we’ve had ammo counters and health readouts.

Being a real soldier isn’t like that, but it could be in the near future — and the Ukrainians are leading the way. A new prototype helmet could be used by tank commanders to give them a wider field of view without exposing them to greater risk.

Developed by a Ukrainian defense contractor, LimpidArmor, the headset — known as the Circular Review System — integrates a traditional protective helmet with Microsoft’s HoloLens. It can give tank commanders a 360-degree view of the battlefield, rather than the limited view available from within a tank.

While camera feeds can of course provide a movable 2D visual, a unified 360-degree perspective is far more versatile and can overlay much more information for the wearer if required.


Shown off at the Arms and Security event in Kiev, Ukraine, in early October (thanks MSPowerUser), the HoloLens-equipped headset would not only offer a wider field of view, it could also highlight enemy and allied troops (“friendlies”). Other features include automated target acquisition and tracking, and it could potentially even be used to call in strikes from drones and similar attack craft.

While trials have taken place in LimpidArmor’s testing facilities, it hasn’t seen any live field testing just yet. However, it is expected to progress quite quickly through the next few phases, with plans to bring it to market as soon as possible.

LimpidArmor is said to also be working on civilian implementations of the headset for use by airline pilots, large industrial vehicle drivers, and drone pilots. Will gamers follow?

Jon Martindale