Toshiba Rolls Out Two More SLI Notebooks

Toshiba has added two more SLI-capable notebooks to its Satellite X205 line: the X205-SLi5 and X205-SLi6. Toshiba is positioning the notebooks as the perfect companions for serious gamers, offering 17-inch screens, plenty of RAM and hard drive storage, HDMI and S/PDIF output, and (of course) dual Nvidia GeForce 8600M GT graphics. The new notebooks follow up on Toshiba's first X205 SLI offerings, introduced last September.

The X205-SLi5 sports a 2.4 GHz Intel Core 2 Duo T8300 processor with 3 MB of L2 cache and a 17-inch, 1,440 by 900-pixel display, while the X205-SLi6 packs a 2.5 GHz Core 2 Duo T9300 with 6 MB of L2 cache and a 17-inch, 1,680 by 1,050-pixel display. Both ship with dual Nvidia GeForce 8600M GT graphics controllers in SLI configuration with 512 MB of dedicated video memory, four built-in Harman Kardon speakers, Bluetooth 2.0, an integrated 1.3-megapixel webcam, 3 GB of RAM (expandable to 4 GB), 802.11a/g/n Wi-Fi networking, and a double-layer DVD±R burner. Both systems support dual hard drives: the SLi5 can ship with two 160 GB drives, while the SLi6 can pack two 200 GB drives.

The Toshiba Satellite X205-SLi5 and X205-SLi6 are available now, with prices starting at $1,999.99 and $2,499.99, respectively.

Geoff Duncan