
Google: social is only “one chapter” in future plans


Amid reports that Facebook gets more traffic than any other website, Google is calling social networking only “one chapter” of its long-term strategy, reports the AFP. Speaking with Australian public television, Google chief financial officer Patrick Pichette said that the digital economy is exploding and that innovation is crucial to survival.

“The digital world is exploding and it has so many chapters — it has cloud computing, it has mobile, it does have social, it has searches, it has so many elements. Within that… social (networking) is just one chapter,” said Pichette. “Everybody has to take in consideration social signals, but it’s one of so many signals to make the right decision. So yes, absolutely it will be part of our strategy. Yes, it will be embedded in many of our products.”

Good to know. Hopefully Google realizes that a book is unreadable if it’s missing chapters. It’s even worse if those chapters don’t make sense together, as has been the case with many of Google’s social products thus far.

For months, rumors have persisted that Google is working on its own social network, a Facebook killer of sorts, called “Google Me.” The search giant has denied the rumors in recent weeks, but only in cryptic, non-denial ways. Last week Facebook announced an email service that could be seen as a competitor to Gmail.

Jeffrey Van Camp
Former Digital Trends Contributor
As DT's Deputy Editor, Jeff helps oversee editorial operations at Digital Trends. Previously, he ran the site's…