
The future of fast PC graphics? Connecting directly to SSDs

Performance boosts are expected with each new generation of the best graphics cards, but it seems that Nvidia and IBM have their sights set on greater changes.

The companies teamed up to work on Big accelerator Memory (BaM), a technology that connects graphics cards directly to superfast SSDs. This could result in greater effective GPU memory capacity and higher storage bandwidth while reducing the CPU's involvement.

A chart breaks down Nvidia and IBM's BaM technology. Image source: Arxiv

Similar ideas have been explored before. Microsoft's DirectStorage application programming interface (API) works in a somewhat similar way, speeding up data transfers between the SSD and the GPU. However, it relies on external software, applies only to games, and works only on Windows. Nvidia and IBM researchers are working together on a solution that removes the need for a proprietary API while still connecting GPUs directly to SSDs.

The method, amusingly named BaM, is described in a paper written by the team that designed it. Connecting a GPU directly to an SSD could deliver a worthwhile performance boost, especially for resource-heavy tasks such as machine learning. As such, it would mostly be used in professional high-performance computing (HPC) scenarios.


Current approaches to such heavy workloads require the graphics card either to rely on large amounts of special-purpose memory, such as HBM2, or to be given efficient access to SSD storage. With datasets only growing in size, optimizing the connection between the GPU and storage is essential for efficient data transfers. This is where BaM comes in.

“BaM mitigates the I/O traffic amplification by enabling the GPU threads to read or write small amounts of data on-demand, as determined by the compute,” said the researchers in their paper, first cited by The Register. “The goal of BaM is to extend GPU memory capacity and enhance the effective storage access bandwidth while providing high-level abstractions for the GPU threads to easily make on-demand, fine-grain access to massive data structures in the extended memory hierarchy.”
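To make the quoted idea concrete, here is a rough CUDA-style sketch of the difference between CPU-orchestrated transfers and the GPU-initiated, fine-grain access the researchers describe. All names here (`bam_array_t`, `bam_read`) are illustrative placeholders, not the paper's actual API:

```cuda
// Traditional model: the CPU stages data and issues one bulk transfer
// before the kernel runs.
//   cudaMemcpy(dev_buf, host_staging, huge_size, cudaMemcpyHostToDevice);
//   kernel<<<blocks, threads>>>(dev_buf);

// BaM-style model (hypothetical API): each GPU thread requests only the
// bytes it needs, on demand, as determined by the computation.
__global__ void gather(bam_array_t<float> data, const long *indices,
                       float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // On a miss, the runtime would issue a fine-grain NVMe read
        // from the GPU itself -- no per-access CPU round trip.
        out[i] = bam_read(data, indices[i]);
    }
}
```

The sketch is not runnable code; it only illustrates the control-flow change the quote describes, with storage reads initiated by GPU threads rather than scheduled up front by the host.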


For those who don't work in this area, the details may seem complicated, but the gist is that Nvidia wants the GPU to rely less on the processor and connect directly to the source of the data. This would both make the process more efficient and free up the CPU, leaving the graphics card far more self-sufficient. The researchers claim the design could compete with DRAM-based solutions while remaining cheaper to implement.

Although Nvidia and IBM may be breaking new ground with BaM, AMD worked in this area first: In 2016, it unveiled the Radeon Pro SSG, a workstation GPU with integrated M.2 SSDs. However, the Radeon Pro SSG was intended strictly as a graphics solution, and Nvidia is taking the idea a few steps further, aiming at complex and heavy compute workloads.

The team working on BaM plans to release the details of its software and hardware optimizations as open source, allowing others to build on its findings. There's no word on when, if ever, BaM might be implemented in future Nvidia products.

Monica J. White