
10 minute movie downloads: Google’s high-speed broadband now live in Stanford

Last year we heard that Stanford residents would be getting a first taste of Google’s delectable 1 Gbps fiber network project. Now it seems the beta test has gone live for those near the university.

CNET reports that the service has been live in the area for a month; one of the first to try out the ultra-high-speed connection was Stanford econ professor Martin Conroy, and he’s been putting it to good use. The beta test area, also known as those-who-are-worthy, is located at the university’s Residential Subdivision, which houses a mix of faculty and students.

A Reddit user called The Team wrote in a gloating thread titled “I just got Google Fiber” that residents were given a wireless-N router and told the ultra-high-speed service was free for a year.

Subsequently, The Team downloaded a 1.6GB movie in 10 minutes. Speedtest.net results showed download and upload speeds maxing out at about 151.68 Mb/s and 92.79 Mb/s, respectively. All without a cap. The Reddit user announced he would be going on a downloading spree.
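For the curious, the figures above are easy to sanity-check. Network speeds are quoted in bits per second while file sizes are in bytes, so a quick conversion shows what those numbers mean in practice (this is a back-of-the-envelope sketch, not anything from the Reddit post itself; the 10-minute figure suggests the peak speed wasn't sustained for the whole transfer):

```python
# Back-of-the-envelope: convert a link speed in megabits/s and a file
# size in gigabytes into an ideal transfer time. Remember 1 byte = 8 bits;
# decimal units (1 GB = 1000 MB) are used, as speed-test sites typically do.

def transfer_time_seconds(file_gigabytes: float, speed_mbps: float) -> float:
    """Ideal transfer time for a file at a sustained link speed."""
    file_megabits = file_gigabytes * 1000 * 8  # GB -> megabits
    return file_megabits / speed_mbps

movie_gb = 1.6      # the movie from the Reddit post
peak_mbps = 151.68  # peak download speed reported on Speedtest.net

print(f"{transfer_time_seconds(movie_gb, peak_mbps):.0f} s at peak speed")
print(f"{transfer_time_seconds(movie_gb, 1000):.0f} s at a full 1 Gbps")
```

At the reported peak, the whole movie would take under a minute and a half; the actual 10-minute download works out to a sustained average closer to 21 Mb/s.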

151 Mb/s isn’t 1 Gbps, but it is impressive. The beta is the first stop on the way to Kansas City, Kansas, where Google plans to build a much larger network. Kansas City beat out over 1,100 other applicants for the honor, including hardcore Topeka, Kansas, which renamed itself Google.

The broader Google plan, which is in line with federal goals, is to lift America from its global rank of 15th for broadband access. The hope is that ultra-high-speed connections in homes and businesses will drive innovation.

Alongside Google’s efforts, the 29-university-strong Gig.U project plans to bring 1 Gbps broadband to many college communities in America’s heartland.

Jeff Hughes
Former Digital Trends Contributor
I'm a SF Bay Area-based writer/ninja that loves anything geek, tech, comic, social media or gaming-related.