Vulkan, VR and DirectX 12 all getting new additions in Futuremark's benchmarks

With more and more hardware now supporting the DirectX 12 graphics application programming interface, Futuremark, developer of the 3D-benchmarking software 3DMark, has pledged to add a new DirectX 12 test to its suite of testing tools. It will be less taxing than the original and will be joined by a new Vulkan benchmark, too.

The idea behind the new DirectX 12 benchmark is to offer a test for contemporary chips that are not necessarily designed for top-tier gaming. Notebooks with onboard graphics and systems with entry-level add-in cards are worth benchmarking too, though the original 3DMark DirectX 12 test would likely bring them to their knees.

To that end, the next DirectX 12 test that Futuremark plans to add to its suite of tools is a less demanding one that will still offer varied scores, even for those running budget hardware. For those with much more powerful dedicated graphics cards and processors, the original test will likely still suffice.

Alongside this new DirectX 12 benchmark, though, will be a much more impressive Vulkan test, aimed at both Windows and Android platforms, according to TechPowerUp. Details about what sort of hardware it will target remain elusive for now, though much like the Mantle API that Vulkan is based on, the benchmark will be able to take advantage of the low-level hardware access that has us so excited for the future.

To round out its pre-Christmas announcement, Futuremark also noted that it is continuing development of its virtual reality benchmark, VRMark. The company expects to launch new tests within that tool in 2017, with support for both PC and mobile platforms.

Expect to get our first look at those at the Consumer Electronics Show in early January.

Jon Martindale