
Gene Roddenberry's floppy disk stash decrypted after 30 years

Files from a collection of nearly 200 floppy disks belonging to Star Trek creator Gene Roddenberry have been recovered after three months of work by data recovery specialist DriveSavers. The floppies reportedly contain notes, story ideas, and even scripts, all thought to have been produced in the 1980s.

While each disk holds just 160 kilobytes of data, they’re likely to be of great interest to fans of the series and of Roddenberry himself. However, there are currently no plans to share the files, as the contents of the disks remain in the possession of the Roddenberry estate.

The floppies were found several years after Roddenberry’s death in 1991. For years, the data contained on the disks was inaccessible thanks to a quirk of their owner’s computing habits; rather than using a standard PC, he wrote on a custom-built computer with a custom-programmed OS, according to a report from Ars Technica.

Roddenberry apparently owned two of these customized computers: one was sold at auction, and the other had since broken down. DriveSavers was given access to the non-functional machine and the collection of disks, and after three months the team had developed software capable of reading the data.

However, this was only the first step in a longer process. Reading through the work proved to be a tedious and time-consuming task in its own right, and it would take almost a year for the team to transform the data into documents that could be read by human eyes.
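DriveSavers has not disclosed how its custom software worked, but a common first step when confronting an unknown disk format is to pull a raw image of each disk and scan it for runs of printable text, much like the Unix `strings` utility does. The sketch below is purely illustrative, assuming a raw byte dump of a disk image; the function name and thresholds are hypothetical, not anything DriveSavers has described.

```python
import re

def extract_text_runs(raw: bytes, min_len: int = 6) -> list[str]:
    """Return runs of printable ASCII at least min_len bytes long,
    skipping filler bytes and binary filesystem structures."""
    pattern = re.compile(rb"[\x20-\x7e]{%d,}" % min_len)
    return [m.group().decode("ascii") for m in pattern.finditer(raw)]

# Example: a fake disk image with a text fragment buried in filler bytes.
image = b"\x00" * 512 + b"STAR TREK: story notes, act one" + b"\xff" * 512
print(extract_text_runs(image))  # -> ['STAR TREK: story notes, act one']
```

Recovering legible fragments this way is only a starting point; mapping them back into complete, ordered documents still requires working out the custom filesystem layout, which is consistent with the year-long effort described above.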

All entities involved are remaining fairly silent about what exactly was found on the disks — but that might not be the case for very long. DriveSavers’ director of engineering Mike Cobb has teased that more details might be on their way, given that 2016 marks the 50th anniversary of the Star Trek franchise.

Brad Jones
Former Digital Trends Contributor
Brad is an English-born writer currently splitting his time between Edinburgh and Pennsylvania. You can find him on Twitter…