Gamers outperform both scientists and algorithms in protein-folding race

Gamers have managed to beat both professional scientists and specially designed AI algorithms in the latest attempt to discover how certain proteins fold. They won by doing what they do best: playing video games.

If you’ve been around the internet for the past couple of decades, there’s a good chance you’ve heard of protein folding. That’s because projects like Folding@Home and Foldit have helped galvanize the public into aiding a complex and time-consuming task: figuring out how proteins do what they do.

Although the project at hand involved a simple puzzle game, the proteins folded by players could have a real bearing on science, and that's exactly what happened. Just shy of 470 gamers took on the task of folding the best proteins they could, going up against 61 undergraduates, a couple of trained crystallographers, a specially designed computer modeling program, and a pair of AI folding algorithms.

The gamers managed to beat them all.

In the course of their victory, the gamers discovered the structure of a protein that may help prevent plaque formation, a key component in Alzheimer's research. In addition, the competitive, gamified model of scientific research has a lot of people excited. Many professors and teachers watching from afar are now considering Foldit and similar applications as potentially effective learning and research tools.

“I’ve seen how much players learn about proteins from playing this game,” said postdoctoral fellow at the University of Michigan, Scott Horowitz (via U-M news). “We spend weeks and weeks trying to jam this into students’ brains and Foldit players learn it naturally because it’s fun.”

Another point taken from this study is that there may be something to the unique "3-D mentality" that gamers have developed from spending so much time projecting themselves into virtual worlds. Other scientific processes could likewise benefit from the attention of some of the world's best digital puzzle solvers.

Jon Martindale