
PayPal reinstates account for Pfc. Bradley Manning fund, Courage to Resist

PayPal has resumed processing donations made to Courage to Resist, a non-profit fund set up to provide support for accused WikiLeaks whistle-blower US Army Pfc. Bradley Manning. The move comes after the group accused PayPal of imposing a temporary suspension on its account because of anti-WikiLeaks sentiments.

In a statement on the company blog, PayPal says the suspension of Courage to Resist's account “had nothing to do with WikiLeaks.” Instead, it was because the organization “had not complied to our stated policy requiring non profits to associate a bank account with their PayPal account,” which is “not an issue” for “the vast majority of non-profits.”

The PayPal statement follows a press release by Courage to Resist, which says that “evil” PayPal “opted to apply an exceptional hurdle for us to clear in order to continue as a customer, whereas we have clearly provided the legally required information and verification.”

PayPal denies this, saying that after a review of Courage to Resist’s account, they “have decided to lift the temporary restriction placed on their account because we have sufficient information to meet our statutory ‘Know Your Customer’ obligations.”

Last December, PayPal drew the wrath of the pro-WikiLeaks hacktivist group Anonymous after it stopped processing donations made to the anti-secrecy organization, which had just released a massive cache of US embassy cables to the public. The Anonymous DDoS attack successfully disrupted PayPal’s website, so it’s not surprising that the company is doing whatever it can to avoid another confrontation.

US Army Pfc. Bradley Manning is accused of passing classified documents, including the US embassy cables, to WikiLeaks. He is currently being held in solitary confinement in Quantico, Virginia, where he has been since June 2010. His legal costs are an estimated $100,000, which Courage to Resist is hoping to raise.

Andrew Couts
Former Digital Trends Contributor