Google is bringing AI to the classroom — in a big way

Google is already incorporating its Gemini AI assistant into the rest of its product ecosystem to help individuals and businesses streamline their existing workflows. Now, the Silicon Valley titan is looking to bring AI into the classroom.

While we’ve already seen the damage that teens can do when given access to generative AI, Google argues that it is taking steps to ensure the technology is employed responsibly by students and academic faculty alike.

Following last year’s initial rollout of a teen-safe version of Gemini for personal use, the company initially decided not to enable the AI for school-issued accounts. That will change in the coming months as Google makes the AI available free of charge to students in over 100 countries through its Google Workspace for Education accounts and school-issued Chromebooks.

Teens who meet Google’s minimum age requirements — 13 or older in the U.S., 18 or over in the European Economic Area (EEA), Switzerland, Canada, and the U.K. — will be able to converse with Gemini as they would on their personal accounts. That includes access to features like Help me write, Help me read, generative AI backgrounds, and AI-powered noise cancellation. The company was quick to point out that no personal data from this program will be used to train AI models, and that school administrators will be granted admin access to enable or disable features as needed.

What’s more, teens will be able to organize and track their homework assignments through Google Tasks and Calendar integrations, as well as collaborate with their peers using Meet and Assignments.

Google Classroom will also integrate with the school’s Student Information System (SIS), allowing educators to set up classes and import pertinent data such as student lists and grading settings. They’ll also have access to an expanded Google for Education App Hub with 16 new app integrations including Kami, Quizizz, and Screencastify available at launch.

Students will also have access to the Read Along in Classroom feature, which provides them with real-time, AI-based reading help. Meanwhile, educators will receive feedback from the AI on each student’s reading accuracy, speed, and comprehension.

In the coming months, Google also hopes to introduce the ability for teachers to generate personalized stories tailored to each student’s specific educational needs. The feature is currently available in English, with more than 800 books for teachers to choose from, though it will soon support additional languages, starting with Spanish.

Additionally, Google is piloting a suite of Gemini in Classroom tools that will enable teachers to “define groups of students in Classroom to assign different content based on each group’s needs.” The recently announced Google Vids, which helps users quickly and easily cut together engaging video clips, will be coming to the classroom as well. A non-AI version of Vids arrives on Google Workspace for Education Plus later this year, while the AI-enhanced version will only be available as a Workspace add-on.

That said, Google has apparently not forgotten just how emotionally vicious teenagers can be. As such, the company is incorporating a number of safety and privacy tools into the new AI system. For example, school administrators will be empowered to prevent students from initiating direct messages or creating spaces, cutting off potential avenues for bullying.

Admins will also have the option to block access to Classroom from compromised Android and iOS devices, and can require multiparty approval (i.e., sign-off from at least two school officials) before security-sensitive changes, such as turning off two-step verification, can be implemented.

Google is introducing a slew of accessibility features as well. Chromebooks will get a new Read Aloud feature in the Chrome browser, for example. Extract Text from PDF will leverage OCR technology to make PDFs accessible to screen readers through the Chrome browser, while the Files app will soon offer augmented image labels to assist screen readers with relaying the contents of images in Chrome.

Later this year, Google also plans to release a feature that will allow users to control their Chromebooks using only their facial expressions and head movements.

These features all sound impressive and should help bring AI into the classroom in a safe and responsible manner — in theory, at least. Though given how quickly today’s teens can exploit security loopholes to bypass their school’s web filters, Google’s good intentions could ultimately prove insufficient.

Andrew Tarantola
Andrew has spent more than a decade reporting on emerging technologies ranging from robotics and machine learning to space…