
Apple vs FBI shown in different light as journalist hacked mid-flight

A journalist working on a story about the Apple vs. FBI case and the potential weakening of smartphone security had his laptop hacked during a flight by someone sitting in the row just behind him. At a time when many are wondering why Apple is kicking up such a fuss over the FBI's request, the incident highlights an important point: whenever security is weakened, people take advantage of it.

On the flight, USA Today journalist Steven Petrow sent emails to colleagues, discussed elements of articles he was writing, and even reached out to security experts for comments and quotes. To do all this, though, he used the "open," unencrypted in-flight Wi-Fi from American Airlines partner Gogo. Although passengers are encouraged to use a VPN when accessing sensitive information over such a public connection, Petrow ignored that advice, to his regret.
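Petrow's traffic was exposed for a simple reason: on an open, unencrypted network, anything sent in plaintext can be read by any nearby machine. As a rough illustration, here is a minimal sketch (not from the article) of a passive sniffer, assuming Python with the scapy library and a wireless interface named "wlan0"; both details are assumptions for illustration only.

```python
# Minimal sketch (illustrative only): on an open Wi-Fi network, a passive
# listener can read any plaintext traffic it overhears. Assumes the scapy
# library and an interface named "wlan0"; both are hypothetical details.
from scapy.all import sniff, TCP, Raw

def show_plaintext(pkt):
    # Print the start of any unencrypted HTTP payload that drifts past.
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        if payload.startswith((b"GET ", b"POST ", b"HTTP/")):
            print(payload[:120])

# Capture plain HTTP (port 80) passively; no connection to the victim is made.
sniff(iface="wlan0", filter="tcp port 80", prn=show_plaintext, store=False)
```

A VPN closes this hole by wrapping all of a user's traffic, email included, in an encrypted tunnel before it crosses the open network, which is exactly the precaution Petrow skipped.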

Fortunately, the man sitting just behind him alerted him to the flaw in his practices once the flight landed. He was keen to point out the dangers associated with weakened security, including the fact that Petrow had been working over the flight's open Wi-Fi. The situation may be analogous to what many users could face if Apple loses its case against the FBI.


The argument made by many on the FBI's side is that Apple is being asked to weaken the security of just one smartphone. The work Apple does to open the phone won't be rolled out to other devices as a standard update, and no other users will be immediately affected, we're assured.

However, as Petrow and the white-hat hacker behind him believe, it won't take long before other users are affected. If a precedent is set that the FBI can demand a weakening of security, the agency will likely do so again whenever it has another phone it needs to break into, and governmental agencies in other countries may soon wish to do the same.

If this begins to happen on a larger scale, many believe the FBI will see this as a foot in the door to creating standardized backdoors into hardware. Time will tell.

Jon Martindale
Jon Martindale is the Evergreen Coordinator for Computing, overseeing a team of writers addressing all the latest how to…