
Intel and Micron crunch flash memory down to 20nm

Chipmaking giant Intel and memory developer Micron have announced a new 20nm process for producing NAND flash memory. Most flash storage on the market today uses either 32nm or 25nm processes: the 20nm process means the folks who make everything from smartphones to tablets to portable game systems to solid-state drives can pack more storage capacity into less space inside the device, and tap into it all using less power.

Intel Micron Flash die size comparison

“Industry-leading NAND gives Intel the ability to provide the highest quality and most cost-effective solutions to our customers, generation after generation,” said Intel’s VP and general manager for non-volatile memory solutions Tom Rampone, in a statement. “The Intel-Micron joint venture is a model for the manufacturing industry as we continue to lead the industry in process technology and make quick transitions of our entire fab network to smaller and smaller lithographies.”

The modules are being produced by a joint venture between Intel and Micron, dubbed IM Flash Technologies.

Intel and Micron have produced an 8 GB sample device that measures just 118mm², a 30 to 40 percent reduction in the board space required for storage compared to current 8 GB 25nm modules. The 8 GB modules also cost about one third less to manufacture than their 25nm counterparts.

The companies are sampling the 8 GB modules now and expect to take the components to mass production in the second half of 2011. Around the same time, Micron and Intel expect to begin sampling 16GB modules built on the same 20nm process; all told, the companies say eight of those 16GB modules could be combined to put 128 GB of storage capacity in an area smaller than a typical U.S. postage stamp. And the shrinkage won't stop there: IM Flash Technologies is already working on a 16nm process for flash storage.

Geoff Duncan
Former Digital Trends Contributor