Hard Disk Tech Earns Physics Nobel Prize

The Royal Swedish Academy of Sciences has awarded the 2007 Nobel Prize in physics to France’s Albert Fert and Germany’s Peter Grünberg for their research into giant magnetoresistance (GMR), which enables greater amounts of data to be packed into ever-smaller areas.

The two scientists discovered the phenomenon independently of each other; they also shared the 2007 Japan Prize for their research.

GMR works by producing a disproportionately large electrical response to a very small magnetic input. The phenomenon emerges from the “spin” of electrons: electrons whose spin is aligned with a magnetic layer’s magnetization scatter less than those whose spin opposes it, producing differences in electrical resistance that, in turn, can be used to represent digital data. Fert and Grünberg discovered ways to exploit the GMR effect by stacking nanometer-thin layers of magnetic and non-magnetic material together. The discovery was unexpected, but it was quickly put to commercial use in areas like data storage, where it has enabled the production of ever-smaller (and ever-denser) hard drives: it’s no exaggeration to say the iPod (and notebook computers) wouldn’t exist without the efforts of Fert and Grünberg.
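
As a rough illustration of how spin-dependent scattering becomes a readable signal, the textbook “two-current” model treats spin-up and spin-down electrons as two parallel conduction channels. The short Python sketch below uses that simplification with arbitrary, made-up resistance values (R_LOW and R_HIGH are illustrative numbers, not measurements from Fert’s or Grünberg’s experiments) to show why flipping one layer’s magnetization produces a large, easily detected resistance change.

def series(*resistances):
    # Resistors in series simply add.
    return sum(resistances)

def parallel(a, b):
    # Two resistors in parallel.
    return a * b / (a + b)

# Assumed per-layer resistances for the two spin channels: electrons whose
# spin matches a layer's magnetization scatter less (illustrative values).
R_LOW = 1.0   # spin aligned with the layer's magnetization
R_HIGH = 5.0  # spin anti-aligned with the layer's magnetization

# Parallel alignment of the two magnetic layers: one spin channel passes
# easily through both layers while the other is strongly scattered in both.
r_parallel = parallel(series(R_LOW, R_LOW), series(R_HIGH, R_HIGH))

# Antiparallel alignment: every electron is strongly scattered in one of
# the two layers, so both channels see a mix of easy and hard passage.
r_antiparallel = parallel(series(R_LOW, R_HIGH), series(R_HIGH, R_LOW))

gmr_ratio = (r_antiparallel - r_parallel) / r_parallel
print(f"Resistance, layers parallel:     {r_parallel:.2f}")
print(f"Resistance, layers antiparallel: {r_antiparallel:.2f}")
print(f"GMR ratio:                       {gmr_ratio:.0%}")

# A read head exploits exactly this toggle: the stray field of a recorded
# bit flips one layer's magnetization, and the resulting jump in resistance
# is read out as a 0 or 1.

With these toy numbers the sketch prints an 80 percent resistance change between the two states; the mechanism, not the exact figure (which depends entirely on the assumed values), is the point.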

Fert and Grünberg independently discovered the GMR phenomenon in 1988. By 1997, scientists were working on the first read-out heads employing the technology in hard drives.

Fert and Grünberg will share a prize of 10 million Swedish kronor (about $1.5 million).

Geoff Duncan