
NY Times to send out 300,000 Cardboard VR viewers in second giveaway

The New York Times is hooking up with Google again to give away 300,000 Cardboard virtual reality (VR) viewers to many of the news outlet’s subscribers.

Last November, the Times handed out a million of the viewers to print subscribers; the latest giveaway sees the $15 gadget going to its “most loyal” digital subscribers.

It’s all part of the Times’ continued push into VR, which so far includes six mainly news-focused productions for its NYT VR app for iOS and Android, launched last year alongside that first Cardboard giveaway.

Digital subscribers – or at least, the really loyal ones – can expect to receive their free viewer in time for the news outlet’s May 19 launch of its latest VR film, the intriguingly titled Seeking Pluto’s Frigid Heart.

No, it’s not an immersive examination of possible emotional issues affecting one of Disney’s most famous and adored characters, but instead a close-up look at the dwarf planet Pluto that’ll allow viewers to “soar above never-before-seen rugged mountains and bright plains, and stand on Pluto’s unique surface as its largest moon hovers over the horizon,” the Times teased in a release.

The iconic news publication collaborated with the Lunar and Planetary Institute and the Universities Space Research Association to create accurate three-dimensional virtual worlds from data gathered in 2015 by NASA’s New Horizons spacecraft.

Google’s cheap-as-chips VR viewer, which launched in 2014, works with a wide range of smartphones. If you’re not a Times subscriber but want to try Cardboard or an alternative, this page offers a range of (mostly) low-cost viewers, which you can quickly filter by price, smartphone, and material. Cardboard users looking to try VR content other than that offered by the Times can choose from a range of apps by visiting the Play Store’s dedicated section here.

Trevor Mogg
Contributing Editor