
Firefox will scan your browsing history to suggest advertiser sites

In an effort to sell advertising space on the new tab page of the Firefox browser, Mozilla is launching a new platform for advertisers called "Suggested Tiles." Much as Google uses your web search history to load related advertisements within Google AdSense placements, Mozilla will look through your visited sites within Firefox to suggest an advertiser's site and display it on the new tab page.

However, there are user protections built into the new feature, as detailed on Mozilla's Advancing Content blog. Users will be able to turn off the Suggested Tiles function by toggling a checkbox within the browser's settings. Users can also completely avoid site suggestions by opting for a blank page when opening a new tab within Firefox.

If a user does opt into the Suggested Tiles feature, personal data won't be supplied to advertisers, only anonymized aggregate data. Mozilla won't be using tracking cookies to keep tabs on users or building profiles based on a user's URL history. Mozilla will have final approval over all sites entered into the Suggested Tiles program and will require visits to at least five similar sites before offering up a related suggestion through the feature.

As a hypothetical example, a user may see an advertisement for The Food Network within the Suggested Tiles section of a new tab page if the user has visited recipe sites, foodie blogs, or other culinary-related sites. Mozilla will include a "Suggested" tag on the tile to differentiate it from the rest.

Mozilla is expected to launch Suggested Tiles within the Beta version of Firefox relatively soon. The full rollout of the feature to the current stable version of Firefox will likely occur later in the summer.

Mike Flacy
By day, I'm the content and social media manager for High-Def Digest, Steve's Digicams and The CheckOut on Ben's Bargains…