
Comcast Defends Blocking P2P Traffic

In formal comments to the Federal Communications Commission, cable operator Comcast has stated that blocking P2P and file-sharing traffic for some of its users is a justifiable action to keep the performance of its network high for all its customers. Likening its practices to the “busy signal” a telephone or fax machine might generate, Comcast argued that blocking file-sharing and P2P traffic by forging “reset” packets is a valid network management tool where no other alternatives exist.
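To make the “reset” technique concrete: a TCP reset is an ordinary TCP segment with the RST flag set, and a middlebox that forges one in each direction can tear down a connection neither endpoint wanted to close. The sketch below builds a bare 20-byte TCP header with the RST bit set; the port numbers and sequence number are hypothetical, and this is an illustration of the packet format only, not Comcast's actual (undisclosed) tooling.

```python
import struct

def build_tcp_rst_header(src_port: int, dst_port: int, seq: int) -> bytes:
    """Build a bare 20-byte TCP header with the RST flag set.

    Illustrative only: a real forged reset would also need a spoofed IP
    header and a sequence number inside the target connection's window.
    The checksum is left at zero because it depends on an IP
    pseudo-header this sketch does not construct.
    """
    data_offset = 5 << 4   # header length: 5 x 32-bit words, no options
    flags = 0x04           # RST is bit 2 of the flags byte
    window = 0
    checksum = 0
    urgent_ptr = 0
    ack_num = 0
    return struct.pack(
        "!HHIIBBHHH",
        src_port, dst_port, seq, ack_num,
        data_offset, flags, window, checksum, urgent_ptr,
    )

# Hypothetical BitTorrent-style port pair for illustration.
hdr = build_tcp_rst_header(6881, 51413, 123456)
assert len(hdr) == 20
assert hdr[13] & 0x04  # RST flag is set
```

Each endpoint that receives such a segment believes the other side aborted the connection, which is why affected users saw transfers die without either peer having closed them.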

Comcast has not disclosed the details of its network management tools or policies, even to its subscribers, saying only that it engages in “reasonable” actions to ensure the overall performance of its network. Last month, however, the company updated its terms of service to warn that it may arbitrarily terminate file-sharing sessions over congested network segments.

The issue came to a head last October when the Associated Press ran an article detailing apparent filtering and packet forgery being conducted by Comcast systems to block BitTorrent and Gnutella. The findings were confirmed by the EFF and other parties; within weeks, Comcast was sued for blocking P2P traffic on its network, and a formal complaint was filed with the FCC.

Comcast is currently the second-largest Internet service provider in the United States.

Critics argue Comcast’s actions violate the principle of “net neutrality,” whereby all Internet traffic would theoretically be treated with equal priority. The FCC has endorsed principles of net neutrality, but FCC chair Kevin Martin has also spoken in favor of network management policies, so long as providers are transparent about their actions.

Comcast’s 57-page filing can be downloaded from the FCC.

Geoff Duncan
Former Digital Trends Contributor