IBM wants to change IoT and car security by testing throughout the product's life

IBM Watson
IBM announced the debut of two new security testing initiatives targeting the connected car industry and the Internet of Things (IoT). Building upon the foundation of its penetration testing division, X-Force Red, IBM is looking to instill new standards of security for smart products, as well as drive an industry practice of security testing throughout the life of a product.

With a projected 61 million connected cars and more than 20.4 billion IoT devices expected to be in use by 2020, there is no doubt that smart technology is coming to more products than ever before. As we have seen in a number of instances, though, these devices, often designed and built by industries without deep experience in digital security, can open huge holes for hackers and other bad actors to exploit.

That is where IBM comes in. It already operates a penetration testing branch, known as X-Force Red, which provides both expertise and security testing to its customers, but it wants to take things a step further. Specifically targeting the automotive and IoT industries, IBM is hoping to engender new practices and standards within connected devices and their industries.

With connected cars, IBM believes much more can be done to keep vehicles and their owners safe. It previously raised the issue of a change in ownership potentially leaving powerful applications and connected features accessible to the previous owner, opening up huge holes in the vehicle’s security. With several thousand connected components potentially at risk, IBM is looking to give automakers a comprehensive testing platform to ensure all systems are locked down before sale.

In developing its new automotive testing platform, IBM worked with more than a dozen automotive manufacturers and suppliers, and it now believes X-Force Red can offer comprehensive testing and ongoing recommendations for future car security. It aims to share the best practices it develops in order to standardize security protocols across the industry.

IBM wants to do the same with IoT devices, though it believes that industry presents even greater threats to end-user and enterprise security. It claims that shortened production cycles often see products rushed through the design process, leaving them and their users vulnerable.

To that end, IBM is bringing its Watson computing system to help test the security of IoT devices autonomously and remotely. Combined with the existing X-Force Red testing initiatives, IBM believes this will deliver a next-generation testing platform for IoT devices, helping them launch with better defenses.

However, IBM wants to change the way the industry looks at security — not just as something to get right for the product’s debut, but throughout its life cycle too. It claims that 58 percent of organizations developing IoT products test their applications only during the production phase; IBM wants to extend that testing through the end of the product’s life.

IBM will look to offer customers cloud-based security testing throughout a product’s life, as well as create practices for responding to threats and incidents.

Jon Martindale