
Oxford Dictionaries admit ‘bloggable’ and other tech terms

It’s becoming increasingly clear how deeply the digital age is ingrained in our lives – and our language. Today, Oxford Dictionaries Online announced it will pick up various Internet-related terms, including “bloggable,” “scareware,” and “cyberbullying.”

These are only a few of the recent additions; the organization’s statement reveals that “hundreds of other new words and phrases” born of technology and texting culture will be recognized. “Sexting,” “clickjacking,” and “feature phone,” for instance, will all be included.

According to The Telegraph, the rise and prevalence of all things digital has been a catalyst in expanding Oxford Dictionaries’ repertoire. “The rapid development of technology creates multiple new products, services, and functionalities, which all need new terms to describe them. We are also seeing the very fast circulation of new vocabulary on a global basis, with the expansion of social media,” a spokeswoman for the organization said.

While some of the entries make sense (“sexting,” for instance, has become a widely used and easily recognizable word), others seem preemptive to say the least. “Fnarr fnarr,” apparently a way of conveying snickering over text and officially defined as “used to represent sniggering, typically at a sexual innuendo,” seems better suited for the likes of Urbandictionary.com.

Molly McHugh
Former Digital Trends Contributor