
Microsoft says it’s addressing ‘misleading’ apps in Windows Store

Microsoft says that it has streamlined the Windows Store app certification requirements to ensure that apps in the store have accurate titles and don’t misrepresent what they do or what they’re for. Microsoft made the announcement in an official blog post.

Microsoft also says that, as part of these tweaked requirements, apps will need to be categorized more accurately to reflect what they do. On top of that, Redmond states that app icons must be accurate as well, and can’t mislead users with an icon that implies a different purpose or use.

Related: Windows 9 public preview likely coming soon

These changes apply both to newly submitted apps from here on out and to updates of existing apps. Microsoft describes this as a first step toward a better Windows Store experience for its users. So far, Microsoft says it has removed over 1,500 apps for such violations, reimbursing people for the cost of any downloads that weren’t free.

Related: Here are the latest Windows 9 rumors

It will be interesting to see what Microsoft decides to do next to combat misleading apps. In the meantime, if you know of or come across an app that qualifies as misleading under Microsoft’s standards, you can let the company know by sending an email to reportapp@microsoft.com.

Konrad Krawczyk
Former Digital Trends Contributor