
Intel expands 2-in-1 reference design to include plus-sized 12.5-inch displays as sales slump

It seems that dockable 2-in-1 devices may be set to move en masse towards larger designs, if information from DigiTimes is correct.

The publication claims “sources from notebook makers” indicate the 2-in-1 reference design has been updated to fifth-generation Core processors, as expected, and now includes 12.5-inch models, which was not guaranteed.

These larger models will easily accommodate the Intel Core M processor, which offers quicker per-core performance than the Atom found in many 10-inch variants currently on the market.

The report suggests the move may be in response to slumping sales of 2-in-1s. Though the category initially looked strong, demand quickly tapered off, leaving companies like Asus looking for ways to fill the gap.

In fact, Asus has already shown a 12.5-inch model, the Asus Transformer Book Chi T300, which was revealed at CES 2015 and is scheduled for a February 2015 release. That system weighs only one and a half pounds and measures 0.65 inches thick with the keyboard dock attached.

Can these larger 2-in-1 systems help boost sales? That’s hard to say for certain, but they will give consumers a choice that was mostly unavailable before. While a few large 2-in-1s with Core processors arrived in 2014, they were quite bulky and difficult to use as tablets. A new reference design based on Core M can be much thinner, as Asus has already proved, and may lower prices as well.

Matthew S. Smith
Matthew S. Smith is the former Lead Editor, Reviews at Digital Trends. He previously guided the Products Team, which dives…