Groupon apologizes to Japanese customers

Japanese customers received a public apology from Groupon CEO Andrew Mason this morning, in which he admitted his daily deals site had “really messed up.”

A coupon for a traditional New Year’s meal went awry when overwhelming demand resulted in late deliveries and food in “terrible condition,” the AP reports. The meal, called “osechi,” is a symbolic dish known for its aesthetic presentation. The disappointment led many Japanese customers – who had shelled out 10,500 yen, or $127, for the coupon – to take to the Internet, posting complaints as well as photos of the shoddily prepared meal.

According to Groupon, the café that had offered the coupon through its site was overwhelmed by the order volume and unable to meet the demand. In addition to the apology, Groupon refunded unhappy customers and offered them 5,000 yen in vouchers.

Groupon has experienced significant success stateside in recent weeks, particularly with its unprecedented round of fundraising. The $950 million investment has solidified the company’s position and will fund its continued growth. The company has also been extending its reach in Asia: last month it acquired three popular coupon websites in Taiwan, the Philippines, and Singapore.

And while popularity isn’t typically a problem for a growing company, it was in this case. Mason explained that Groupon has created formulas to help the vendors it partners with determine how to meet consumer demand and how many coupons to offer. Unfortunately, those tools had not yet been introduced in Japan. “Basically because the popularity of Groupon Japan has grown so quickly it took us off guard. We weren’t expecting to run into this problem so quickly.”

Mason says capacity planning is now being introduced to the Japanese staff.

Molly McHugh
Former Digital Trends Contributor