Skip to main content

Mahalo cuts staff, Google algorithm alteration to blame?

mahaloCult of Mac may be in the clear, but some of us (ahem, Digital Trends included) haven’t quite made it off Google’s black list yet. Search and how-to video site Mahalo was one of the many sites Sistrix found had been punished by Google’s recent algorithm adjustment, and unfortunately the company is immediately reeling from the effects.

According to various reports, CEO Jason Calacanis and president Jason Rapp sent out a mass e-mail yesterday afternoon claiming they would have to enact sweeping cuts because of the change in Google Search. According to Center Networks, which received the memo, Mahalo will be forced to cut 10 percent of its staff, eliminating “a handful of positions in the company.” The e-mail said that non-essential services would have to go, and part of that will alter its freelance content – meaning that for the moment, it’s halted. However, Mahalo’s video content shouldn’t take a hit. According to Calacanis and Rapp, Google’s alterations didn’t impact its rankings with YouTube. Mahalo partners with YouTube, and apparently that collaboration is still going strong.

mahaloPerhaps to stay in those good graces, Calacanis wrote on his Twitter account, “Note: We don’t blame @Google for our problems. We support efforts to make better search results & know they always treat partners fairly.” However, he also posed the question: “Which deserve [sic] #1 @Google Rank: How to Play a Xylophone on eHow or @Mahalodotcom” and posted an instructional video from his own site.

Google engineer Amit Singhal told Wired this morning that the company is in the process of restoring legitimate sites’ page rank.

Molly McHugh
Former Digital Trends Contributor
Before coming to Digital Trends, Molly worked as a freelance writer, occasional photographer, and general technical lackey…
A dangerous new jailbreak for AI chatbots was just discovered
the side of a Microsoft building

Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called "Skeleton Key." Using this prompt injection method, malicious users can effectively bypass a chatbot's safety guardrails, the security features that keeps ChatGPT from going full Taye.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It's a multi-turn strategy designed to essentially convince an AI model to ignore its ingrained safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.

Read more