
Coinbase bans Wikileaks from its currency exchange platform

Engadget reports that Coinbase has banned Wikileaks Shop’s account for violating the exchange’s terms of service. This means the site will no longer be able to convert cryptocurrency payments into fiat money such as dollars or euros. Coinbase did not go into the specifics of why it banned Wikileaks, but it did note that it must honor “regulatory compliance mechanisms” under the U.S. Financial Crimes Enforcement Network.

Wikileaks can still accept payments via Bitcoin and other cryptocurrencies, but it will have to find a new way to convert those tokens into hard currencies. In response to the move, Wikileaks has called for a boycott of the service.

As Bitcoin advocate Andreas Antonopoulos pointed out (via The Verge), Wikileaks started accepting Bitcoin and similar currencies in the first place because traditional financial institutions had turned against the platform.

One of the big appeals of Bitcoin and other cryptocurrencies is that they are largely unregulated and pseudonymous. This makes them appealing to those who, for reasons legitimate and otherwise, are concerned about privacy. In recent months, however, regulatory oversight has been increasing.

Part of this is due to the simple fact that governments are finally starting to catch up to cryptocurrencies. The SEC has recently begun cracking down on scams and frauds operating as initial coin offerings, and the IRS now taxes cryptocurrency as it does other investments, though enforcement there remains murky.

As for Wikileaks, there are other cryptocurrency exchanges it could use. However, it is possible that Coinbase’s actions will set a precedent for other exchanges, forcing Wikileaks to rely on less reputable avenues of exchange.

Eric Brackett