
Panasonic Announces First 4x Writable BD-R


If you’ve dropped a couple hundred dollars on a 4x Blu-ray burner lately, you may be interested to know that the first 4x-writable Blu-ray discs are available, allowing you to finally exercise your pricy investment. Panasonic announced on Tuesday that it would be the first manufacturer to offer the faster-writing discs.

Single-layer discs with 25GB capacities will be available later this month, while dual-layer 50GB discs will come down the pipe in September. With such large capacities, the write speed of the disc plays a major role in usability. Currently, a Blu-ray burner operating at 1x, or 36 Mbps, takes nearly four minutes to write a 1GB folder, or an hour and a half for an entire 25GB disc. Naturally, a burner operating at four times that speed can do a 1GB folder in just one minute, and an entire disc in 23 minutes. Panasonic’s discs are rated for 18 MBps.
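For the curious, the burn times quoted above follow directly from the bit rates. A minimal sketch, assuming 1x Blu-ray equals 36 Mbps and decimal units (1 GB = 1,000 MB); the function name is ours, not Panasonic's:

```python
def burn_minutes(size_gb, speed_x):
    """Minutes to write size_gb gigabytes at a given Blu-ray speed multiple."""
    mbps = 36 * speed_x      # megabits per second (36 Mbps at 1x)
    mb_per_s = mbps / 8      # megabytes per second (4.5 MB/s at 1x, 18 MB/s at 4x)
    return size_gb * 1000 / mb_per_s / 60

print(round(burn_minutes(1, 1), 1))   # ~3.7 -> "nearly four minutes" for 1GB at 1x
print(round(burn_minutes(25, 1)))     # ~93  -> about an hour and a half for a full disc
print(round(burn_minutes(1, 4), 1))   # ~0.9 -> about one minute for 1GB at 4x
print(round(burn_minutes(25, 4)))     # ~23  -> a full disc in 23 minutes
```

Note that 4x works out to the 18 MBps (megabytes per second) rating Panasonic quotes, since 4 × 36 Mbps ÷ 8 bits = 18 MB/s.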

Panasonic achieved the increase in write speed with a new phase-change technology, which it claims keeps the burning process stable and reliable even at four times the speed. The company did not reveal pricing for the upcoming discs, but with 1x BD-Rs in the $10 to $30 price range, the faster versions will undoubtedly not come cheap.

Nick Mokey
As Digital Trends’ Managing Editor, Nick Mokey oversees an editorial team delivering definitive reviews, enlightening…
A dangerous new jailbreak for AI chatbots was just discovered

Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called "Skeleton Key." Using this prompt injection method, malicious users can effectively bypass a chatbot's safety guardrails, the security features that keep ChatGPT from going full Tay.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It's a multi-turn strategy designed to essentially convince an AI model to ignore its ingrained safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.
