
China shutters 130,000 Internet cafes

Image: Chinese Internet users in an Internet cafe

A new report from the Chinese government indicates authorities have shut down some 130,000 Internet cafes over the last six years for operating illegally; in many cases, the illegal operation was admitting minors. The closures are part of China's ongoing effort to control how its citizens access the Internet and what content is available to them. As part of that effort, the government apparently plans to crack down on independent Internet cafes and promote Internet cafe chains, since chains are easier to control.

About a year ago, China made it illegal for Internet cafes to admit minors, claiming that content found on the Web could be dangerous to them.

China's official Xinhua News Agency (Chinese) indicates the Ministry of Culture claims more than 160 million Chinese access the Internet via Internet cafes. China's Internet user population is estimated at about 450 million and growing, meaning roughly a third of China's Internet users get online via cafes. The Ministry also says there are about 144,000 Internet cafes in the country, about a third of which are operated by chains. China is not only the world's largest market of Internet users but also its largest market of Internet cafes. Recent third-party estimates indicate about half of Chinese Internet cafe users are between the ages of 18 and 25.

China’s Ministry of Culture says it plans to release the text of its report next month.
