Nvidia Workbench lets anyone train an AI model

Nvidia CEO showing the RTX 4060 Ti at Computex 2023. (Image: Nvidia)

Nvidia has just announced the AI Workbench, which promises to make creating generative AI a lot easier and more manageable. The workspace will let developers build and deploy such models across various Nvidia AI platforms, including PCs and workstations. Are we about to be flooded with even more AI content? Perhaps not, but the AI Workbench certainly sounds like it will make the whole process significantly more approachable.

In the announcement, Nvidia notes that hundreds of thousands of pretrained models are currently available, but customizing them takes time and effort. This is where the Workbench comes in, simplifying the process: developers will be able to customize and run generative AI with minimal effort, drawing on enterprise-grade models. The Workbench tool supports various frameworks, libraries, and SDKs from Nvidia's own AI platform, as well as open-source repositories like GitHub and Hugging Face.


Once customized, the models can be shared across multiple platforms with ease. Devs running a PC or workstation with an Nvidia RTX graphics card will be able to work with these generative models on their local systems and scale up to data center and cloud computing resources when necessary.


“Nvidia AI Workbench provides a simplified path for cross-organizational teams to create the AI-based applications that are increasingly becoming essential in modern business,” said Manuvir Das, Nvidia’s vice president of enterprise computing.

Nvidia has also announced the fourth iteration of its Nvidia AI Enterprise software platform, which is aimed at offering the tools required to adopt and customize generative AI. This breaks down into multiple tools, including Nvidia NeMo, which is a cloud-native framework that lets users build and deploy large language models (LLMs) like ChatGPT or Google Bard.

A MacBook Pro on a desk with ChatGPT's website showing on its display. (Image: Hatice Baran / Unsplash)

Nvidia is tapping into the AI market at just the right time, and not just with the Workbench, but also with tools like Nvidia ACE for games. With generative AI models like ChatGPT all the rage right now, it's safe to assume that many developers will be interested in Nvidia's one-stop solution. Whether that's a good thing for the rest of us remains to be seen, as some people use generative AI for questionable purposes.

Let’s not forget that AI can get pretty unhinged all on its own, like in the early days of Bing Chat, and the more people who start creating and training these various models, the more instances of problematic or crazy behavior we’re going to see out in the wild. But assuming everything goes well, Nvidia’s AI Workbench could certainly simplify the process of deploying new generative AI for a lot of companies.

Monica J. White
Former Digital Trends Contributor