Iranian citizens get Google software, government still stonewalled

Google announced today that Google Earth, Picasa, and Chrome are now available to citizens of Iran. The Iranian government is still blocked from using the software.

Due to tense relations with the Middle Eastern nation, the US placed restrictions on software downloads to the country. Google reports that “some” of those restrictions were lifted today, and the Internet titan has therefore made its mapping, photo-sharing, and Web browsing tools available. The company vows it will comply with US export controls and sanctions programs by blocking Iranian government IP addresses.

Last March, the US Treasury Department decided to allow the export of Internet-based communication services to Cuba, Sudan, and Iran to encourage free speech within the notoriously repressive regimes. The amendment came after the aftermath of Iran’s 2009 presidential election demonstrated the far-reaching effects of social media. Twitter and YouTube became crucial tools for Iranians to communicate with the world — in fact, it was reported that the US government requested Twitter reschedule a routine upgrade during the post-election protests.

Under the loosened US sanctions, Google could now also offer chat. However, because of concerns over user privacy from the Iranian government (or, rather, a lack thereof), the search giant’s instant messaging service will not be introduced quite yet. “It’s a balancing act between providing information but doing it in a way that doesn’t compromise people’s safety,” Google director of public policy and communications strategy Scott Rubin told Voice of America News. Google’s export compliance programs manager Neil Martin added, “Any government that wants to might be able to get into those conversations, and we wouldn’t want to provide a tool with the illusion of privacy if it wasn’t completely secure.”

Molly McHugh