
A software malfunction in Washington let 3,200 inmates out early

That the U.S. prison system is in desperate need of reform seems pretty widely accepted, but releasing inmates early due to a software glitch probably isn’t what activists had in mind as a solution. On Tuesday, Seattle Governor Jay Inslee admitted that since 2002, around 3,200 prisoners have been released by mistake. The problem stemmed from a technological error from the Washington’s Department of Corrections, which resulted in incorrect prison sentences (all in the favor of the incarcerated) for around 3 percent of inmates.

Another 3,100 individuals still in prison have inaccurate anticipated release dates. According to Inslee’s general counsel, Nicholas Brown, many of the errors shifted release dates roughly three months too early, but in some more extreme cases, a release date was off by as much as 600 days.

“That this problem was allowed to continue to exist for 13 years is deeply disappointing,” Inslee said. “It is totally unacceptable, and frankly it is maddening.”

While most of the prisoners who benefited from the state’s mistake won’t have to return to prison, at least seven inmates have been released whose correct release dates have yet to pass. These seven, unfortunately, will have to go back behind bars.

The problem was initially identified in 2012 but, for reasons that remain unclear, was never corrected. The coding fix that should have addressed the issue was reportedly delayed again and again, and the governor himself was not alerted to the situation until last week.

“How [this problem] did not rise up in the agency to the highest levels is not clear to me,” Brown said.

Now, state officials say that the system will be fully rectified by January 7. An external investigation will also be conducted in an attempt to better understand this bizarre malfunction. “I have a lot of questions about how and why this happened, and I understand that members of the public will have those same questions,” said Inslee. “I expect the external investigation will bring the transparency and accountability we need to make sure this issue is resolved.”

Lulu Chang