
Microsoft Acquires Credentica’s U-Prove


Microsoft is looking to bolster its online privacy portfolio with the acquisition of Credentica’s U-Prove technology. Terms of the deal weren’t disclosed, but Microsoft isn’t just getting U-Prove: it’s also acquiring the underlying patents, and Credentica’s Stefan Brands, Greg Thompson, and Christian Paquin will join Microsoft’s Identity and Access Group.

U-Prove was developed to let users disclose only the minimal amount of personal information required to carry out an electronic transaction. The system also employs encryption to ensure that disparate systems can’t mine transaction data and build aggregate profiles that potentially violate users’ privacy.
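To make the minimal-disclosure idea concrete, here is a toy sketch in Python. It is not U-Prove’s actual cryptography (which relies on blind signatures and zero-knowledge proofs); it simply illustrates the principle with hash commitments, where a user publishes digests of all attributes but opens only the one a transaction requires. All names (`commit`, `open_attribute`, `verify`) are invented for this example.

```python
import hashlib
import secrets

def commit(value: str) -> tuple[str, str]:
    """Commit to an attribute with a random nonce; the digest hides the value."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256(f"{nonce}:{value}".encode()).hexdigest()
    return digest, nonce

# The user commits to each attribute up front; the verifier sees only digests.
attributes = {"name": "Alice", "birth_year": "1980", "member_id": "12345"}
commitments = {k: commit(v) for k, v in attributes.items()}
public_view = {k: digest for k, (digest, _) in commitments.items()}

def open_attribute(key: str) -> tuple[str, str]:
    """Reveal a single attribute and its nonce; everything else stays hidden."""
    _, nonce = commitments[key]
    return attributes[key], nonce

def verify(key: str, value: str, nonce: str) -> bool:
    """Verifier checks the opened value against the previously seen digest."""
    return hashlib.sha256(f"{nonce}:{value}".encode()).hexdigest() == public_view[key]

# A transaction that needs only the membership ID learns nothing else.
value, nonce = open_attribute("member_id")
assert verify("member_id", value, nonce)
```

Real minimal-disclosure credentials go further, letting a user prove a statement about an attribute (for example, that a birth year falls before a cutoff) without revealing the attribute at all.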

Microsoft plans to integrate U-Prove into Windows Communication Foundation and CardSpace, both of which are built on Microsoft’s .Net development framework. U-Prove is of obvious interest to online merchants and ecommerce sites, but it should also appeal to governments, medical applications, and identity verification services. On his Identity Corner site, Brands notes that Credentica has been approached many times in the past about a takeover, but he feels Microsoft is the right company to drive the technology forward because of its presence on both the client and server sides of the process. Brands has been developing U-Prove for 15 years.

Geoff Duncan
Former Digital Trends Contributor
A dangerous new jailbreak for AI chatbots was just discovered

Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called "Skeleton Key." Using this prompt injection method, malicious users can effectively bypass a chatbot's safety guardrails, the security features that keep ChatGPT from going full Tay.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It's a multi-turn strategy designed to essentially convince an AI model to ignore its ingrained safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.
