
Nvidia Workbench lets anyone train an AI model

Nvidia CEO showing the RTX 4060 Ti at Computex 2023.
Nvidia

Nvidia has just announced the AI Workbench, which promises to make creating generative AI a lot easier and more manageable. The workspace will allow developers to develop and deploy such models on various Nvidia AI platforms, including PCs and workstations. Are we about to be flooded with even more AI content? Perhaps not, but it certainly sounds like the AI Workbench will make the whole process significantly more approachable.

In the announcement, Nvidia notes that there are hundreds of thousands of pretrained models currently available; however, customizing them takes time and effort. This is where the Workbench comes in, simplifying the process. Developers will be able to customize and run generative AI with minimal effort, with access to all the necessary enterprise-grade models. The Workbench tool supports various frameworks, libraries, and SDKs from Nvidia's own AI platform, as well as open-source repositories like GitHub and Hugging Face.

Once customized, the models can be shared across multiple platforms with ease. Devs running a PC or workstation with an Nvidia RTX graphics card will be able to work with these generative models on their local systems, and then scale up to data center and cloud computing resources when necessary.

“Nvidia AI Workbench provides a simplified path for cross-organizational teams to create the AI-based applications that are increasingly becoming essential in modern business,” said Manuvir Das, Nvidia’s vice president of enterprise computing.

Nvidia has also announced the fourth iteration of its Nvidia AI Enterprise software platform, which is aimed at offering the tools required to adopt and customize generative AI. This breaks down into multiple tools, including Nvidia NeMo, which is a cloud-native framework that lets users build and deploy large language models (LLMs) like ChatGPT or Google Bard.


Nvidia is tapping into the AI market more and more at just the right time, and not just with the Workbench, but also with tools like Nvidia ACE for games. With generative AI models like ChatGPT being all the rage right now, it's safe to assume that many developers will be interested in Nvidia's one-stop-shop solution. Whether that's a good thing for the rest of us remains to be seen, as some people use generative AI for questionable purposes.

Let’s not forget that AI can get pretty unhinged all on its own, like in the early days of Bing Chat, and the more people who start creating and training these various models, the more instances of problematic or crazy behavior we’re going to see out in the wild. But assuming everything goes well, Nvidia’s AI Workbench could certainly simplify the process of deploying new generative AI for a lot of companies.

Editors' Recommendations

Monica J. White
Nvidia’s AI game demo puts endless dialogue trees everywhere
An AI game demo produced by Nvidia.

Nvidia did what we all knew was coming -- it made an AI-driven game demo. In Covert Protocol, you play as a detective trying to track down a particular subject at a high-end hotel. The promise is sleuthing through conversations with non-playable characters (NPCs) to get what you need. Except in this demo, you use your microphone and voice to ask questions instead of choosing from a list of preset options.

I saw the demo with a few other journalists in a small private showing. As the demo fired up and Nvidia's Seth Schneider, senior product manager for ACE, took the reins, I was filled with excitement. We could ask anything; we could do anything. This is the dream for this type of detective game. You don't get to play the role of a detective with a preset list of dialogue options. You get to ask what you want, when you want.

All RTX GPUs now come with a local AI chatbot. Is it any good?
A window showing Nvidia's Chat with RTX.

It's been difficult to justify packing dedicated AI hardware in a PC. Nvidia is trying to change that with Chat with RTX, which is a local AI chatbot that leverages the hardware on your Nvidia GPU to run an AI model.

It provides a few unique advantages over something like ChatGPT, but the tool still has some strange problems. There are the typical quirks you get with any AI chatbot here, but also larger issues that prove Chat with RTX needs some work.
Meet Chat with RTX
Here's the most obvious question about Chat with RTX: How is this different from ChatGPT? Chat with RTX is a local large language model (LLM). It's using TensorRT-LLM compatible models -- Mistral and Llama 2 are included by default -- and applying them to your local data. In addition, the actual computation is happening locally on your graphics card, rather than in the cloud. Chat with RTX requires an Nvidia RTX 30-series or 40-series GPU and at least 8GB of VRAM.
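Nvidia hasn't published Chat with RTX's internals, but "applying them to your local data" describes a retrieval step: before the LLM running on your GPU answers, the tool finds the local document most relevant to your question and feeds it in as context. Here's a toy sketch of that idea -- the function names are illustrative, and naive word-overlap scoring stands in for the real vector index and TensorRT-LLM backend:

```python
import re

def tokenize(text):
    """Lowercase the text and split it into a set of alphanumeric words."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, documents):
    """Return the local document sharing the most words with the query."""
    query_words = tokenize(query)
    return max(documents, key=lambda doc: len(query_words & tokenize(doc)))

def build_prompt(query, documents):
    """Prepend the best-matching local document as context for the LLM."""
    context = retrieve(query, documents)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

# Example: point the retriever at some local notes.
notes = [
    "Chat with RTX needs an RTX 30- or 40-series GPU and at least 8GB of VRAM.",
    "Bananas are a good source of potassium.",
]
print(build_prompt("How much VRAM does Chat with RTX need?", notes))
```

The prompt built this way is what a local model like Mistral or Llama 2 would then complete, entirely on the graphics card -- which is why the answers stay on your machine instead of going to the cloud.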

Windows 11 will use AI to automatically upscale games
Person using Windows 11 laptop on their lap by the window.

Microsoft appears to have decided to jump on the upscaling train in a big way. The latest Windows 11 24H2 Insider build just showed up, sporting a new feature: AI-powered automatic super resolution tech. While the blurb underneath the feature indicates that it was made for games, it might be even more useful outside of them. However, there's a major downside -- it won't be as widely available as it may seem.

The feature was first spotted by PhantomOcean3 on X (formerly Twitter), and it was quite a significant find, considering that Microsoft is apparently keeping this one pretty well hidden. To enable it, users have to go through the following path: Settings > System > Display > Graphics. While it's perhaps not very intuitive to find, the feature itself could turn out to be quite promising.
