
Asus’ Transformer Book is the first Ultrabook you can rip the screen off


In the months since the launch of Windows 8, we’ve seen notebooks with detachable screens and plenty of Ultrabooks, but never one that fills both needs – until now. On Monday, Asus announced its new Transformer Book TX300CA, which it claims is the first-ever detachable-screen Ultrabook. What makes the Transformer Book TX300CA special is that it packs some of the most powerful mobile hardware available into a sleek, slim design that can convert on the fly into a tablet.

The Transformer Book TX300CA features a Core i7 processor backed by Intel HD 4000 integrated graphics and 4GB of DDR3 RAM. It will also feature Asus’ new SonicMaster audio technology, which promises enhanced sound much like HP’s Beats by Dr. Dre technology. The laptop comes with your choice of an SSD or HDD, along with USB 3.0, HD front and rear cameras, and a 13-inch full HD (1920 x 1080) IPS display with multi-touch, and it runs Windows 8. The fully detachable keyboard means you can take it along as a cover or stand, or leave it behind to maximize your mobility.

Asus has yet to detail pricing or availability, but based on the prices of Asus’ other Zenbooks and Transformers, we wouldn’t be surprised to see it sell somewhere in the $1,500 range.

Joshua Sherman
Former Digital Trends Contributor
Joshua Sherman is a contributor for Digital Trends who writes about all things mobile from Apple to Zynga. Josh pulls his…