83-year-old gets hit with $24,289 bill from AT&T by using AOL’s dial-up service

AOL headquarters
According to AOL’s second-quarter results in 2014, 2.3 million people still use its dial-up Internet service. That’s not exactly a small number of people, which is why AT&T subscriber Ron Dorff’s $24,289 bill for using dial-up stands out even more, the LA Times reports.

The 83-year-old California resident explained to the outlet that he was used to paying $51 a month to AT&T for landline service, making him one of the few remaining AOL dial-up users in his neck of the woods. All of a sudden, his AT&T bill jumped to $8,000 in one month. He tried to contact AT&T to no avail, and though a tech was allegedly dispatched, the octogenarian said no one ever arrived at his house.

One month later, the bill had doubled to nearly $16,000. A tech did finally show up at his house, but Dorff was only told that there was something wrong with his modem. Dorff contacted AT&T about the issue, hoping to have the bill reduced, but he said the company wasn’t all ears.

“The woman said they couldn’t make an adjustment,” said Dorff. “I told her I couldn’t possibly afford what they wanted. She just insisted that I had to pay it. She was very blunt about it.” Dorff lives off his Social Security benefits, which come to $1,500 a month.

Eventually, Dorff reached out to David Lazarus over at the LA Times with his issue, and when the outlet contacted AT&T, the company finally explained the root of the enormously high bill. For some reason, Dorff’s modem had started dialing an international number and, as a result, Dorff was charged by the minute.

According to AT&T, it was already in the process of addressing the issue when the LA Times got in touch, and it was well on its way toward dropping the charges. Given Dorff’s experience dealing with the company over the matter, however, it would have been nice if AT&T had told him that earlier.

Williams Pelegrin
Former Digital Trends Contributor