
Intel is ready to shrug off Broadwell’s delays, pursue 7nm production process

At this week’s International Solid-State Circuits Conference held in San Francisco, representatives from Intel’s chip division laid out the company’s plans to hit the 7 nanometer limit for its processors by the year 2018.

According to Senior Fellow Mark Bohr, Intel plans to break through the 14nm barrier to 10nm sometime in 2016, then keep chugging straight toward 7nm roughly two years after that.


These predictions come on the heels of some well-publicized missteps the company took on its path toward the 14nm boundary. “I think we may have underestimated the learning rate—when you have a technology that adds many more masks, as 14[nm] did, it takes longer to execute experiments in the fab and get information turned, as we describe it,” said Bohr.

The challenge the company has faced over the past few chip cycles has been finding ways to push the theoretical limits of what silicon is capable of without moving on to new materials such as graphene or silicene. With new techniques applied to the light it uses to etch transistors into silicon, Intel believes the problems it encountered with the 14nm Broadwell build have been surmounted, and that the lessons learned there will translate into smoother production ramps for 10nm and 7nm in the future.

Bohr told news outlets on his conference call that Intel’s 10nm development is moving 50 percent faster than Broadwell’s did, which suggests the company is on track to hit its 7nm goal without any additional hiccups from here on out.
