
Hands-On Video with the Plastic Logic QUE proReader


We get a firsthand look at the QUE proReader’s ultra-thin design and its 11.6-inch touchscreen with iPhone-like functionality. We also find out that QUE owners will have access to the QUE store, which offers publications formatted specifically for the device.

Digital Trends Staff
A dangerous new jailbreak for AI chatbots was just discovered

Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called "Skeleton Key." Using this prompt injection method, malicious users can effectively bypass a chatbot's safety guardrails, the security features that keep ChatGPT from going full Tay.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It's a multi-turn strategy designed to convince an AI model to ignore its ingrained safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.
