
AOL appears to be killing third-party access to AOL Instant Messenger

AOL headquarters
AOL Instant Messenger (AIM) will be familiar to anyone who’s been using PCs for a while. There was a time when AOL was the leading internet service provider, and AIM was among the most popular instant messaging tools for keeping in touch with friends and family.

Fast forward to today, and while AOL still exists, it's a shadow of its former self. AIM has fallen by the wayside as well, enough so that the company is shutting off the service used by third-party clients, as Ars Technica reports.

For quite some time, there have been two ways to use AIM. You could install AOL's own messaging app, which runs on Windows, MacOS, iOS, and Android; that method still exists and likely will for the foreseeable future.

The other method was to install a third-party client for your platform and access the service that way. Several such clients existed, with Adium, Trillian, and Pidgin among the most popular.

Those third-party apps require the use of AOL’s OSCAR chat protocol, however. Without it, there’s no way to pass messages to and from the AIM service. And as one user discovered and posted on Twitter, AOL is starting to cut apps off from its messaging service by turning off OSCAR support.
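For the curious, OSCAR wraps every message a client sends or receives in a simple binary envelope called a FLAP frame: a marker byte (an ASCII asterisk, 0x2A), a channel number, a sequence number, and the payload length, followed by the payload itself. As a rough illustration only (a minimal sketch, not taken from any particular client's source), building such a frame might look like this:

```python
import struct

FLAP_MARKER = 0x2A  # every FLAP frame begins with an asterisk byte


def build_flap_frame(channel: int, sequence: int, payload: bytes) -> bytes:
    """Prefix a payload with the 6-byte FLAP header:
    marker (1 byte), channel (1 byte), sequence (2 bytes, big-endian),
    payload length (2 bytes, big-endian)."""
    header = struct.pack(">BBHH", FLAP_MARKER, channel,
                         sequence & 0xFFFF, len(payload))
    return header + payload


# Channel 1 carries connection negotiation; a client's first frame
# traditionally contains the 4-byte protocol version number.
frame = build_flap_frame(channel=1, sequence=0,
                         payload=struct.pack(">I", 1))
```

When AOL stops speaking FLAP/OSCAR to third-party clients, frames like these simply go unanswered, which is why those apps have no fallback for reaching the AIM service.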

Just got this AIM message. Anyone else still using AIM out there? pic.twitter.com/2WpR1lTwmH

— Cyrus Farivar // @cfarivar@journa.host (@cfarivar) February 28, 2017

Speculation suggests that AOL is cutting its losses due to low usage of AIM, which costs the company real money to maintain. AOL hasn't made any official announcement yet, but we may be witnessing the end of an era if the AIM service is indeed starting to wind down.

Mark Coppock