
Airline’s website error offers astonishing bargains for flyers

Passengers booking with All Nippon Airways (ANA) scored a bargain recently when a website error allowed them to book seats in first class and business class at prices far below their normal rates.

At least one person paid $890 for a first-class round-trip from Jakarta, Indonesia, to the Caribbean via Tokyo and New York, a seat with ANA that would ordinarily cost around 20 times that amount.

Data obtained by Bloomberg revealed that a Jakarta round-trip ticket in business class flying via Singapore, Tokyo, and New York that would normally cost $10,000 was offered on the website for a mere $300.

The Japanese carrier said that an error on its Vietnamese website temporarily showed the wrong currency conversion. It hasn’t said how many passengers managed to book the cut-rate tickets before they were removed.

An ANA spokesperson reportedly said that the airline would honor the tickets for the lucky passengers who purchased them, but later reports suggested a final decision has yet to be made.

One lucky passenger told Bloomberg that he booked a round-trip ticket in business class, flying from Jakarta to Honolulu via Tokyo, for a mere $550 instead of $8,200.

“I never thought I’d catch such a deal,” he told Bloomberg, adding that he entered his details as quickly as possible, as he knew ANA would take the ticket down as soon as it noticed the mistake.

Such mishaps do occur from time to time. Four years ago, Cathay Pacific messed up twice in a matter of weeks, selling first-class seats for a tenth of their proper price. The Hong Kong-based airline honored the reservations.

However, it can sometimes go the other way, too: one traveler using British Airways’ website was once asked to pay more than $4 million for a family holiday that should have cost around a two-hundredth of that amount.

Trevor Mogg
Contributing Editor