Close to the Metal Ep 35: We’re not over the moon about Mass Effect: Andromeda

There have been a lot of exciting open-world RPGs lately, but unfortunately, neither of the big names has made its way to the PC world. That's left enthusiasts with a lot of idle hardware on their hands, and we hoped Mass Effect: Andromeda might be just the game to solve that issue.

That hope was fanned by extensive teasing and hype-building surrounding the release, as well as some hot new GPUs entering the market around the same time. So we were disappointed to find that the game doesn't actually look all that nice, despite running on the mature, well-optimized Frostbite engine EA has been using for years.

So what's causing the issues? Part of it has to do with how the game handles resolution. Users can set the display resolution wherever they like, but on the medium and low presets the game automatically applies render scaling, dropping the resolution it actually renders at. The result is a blurry, blocky image, albeit one that delivers a considerable performance boost.
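To make that tradeoff concrete, here's a minimal sketch of how a preset-driven render scale maps the display resolution to the resolution actually rendered. The per-preset scale factors are illustrative assumptions, since the game doesn't publish its exact values.

```python
# Sketch of preset-driven render scaling. The scale factors below are
# assumptions for illustration, not values published for the game.
PRESET_RENDER_SCALE = {"low": 0.7, "medium": 0.85, "high": 1.0}

def internal_resolution(display_width, display_height, preset):
    """Return the resolution the game would actually render before upscaling."""
    scale = PRESET_RENDER_SCALE[preset]
    return round(display_width * scale), round(display_height * scale)

# At a 1920x1080 display setting, a 0.7 scale renders roughly 1344x756 and
# stretches it back up to 1080p: that's where the blur comes from, and the
# speedup, since pixel count falls with the square of the scale factor.
print(internal_resolution(1920, 1080, "low"))    # (1344, 756)
print(internal_resolution(1920, 1080, "medium")) # (1632, 918)
```

Because the GPU only shades the scaled-down image, the work saved grows roughly with the square of the scale reduction, which is why the gain is substantial even though the upscaled result looks soft.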

Beyond that, the usual culprits yield the largest performance gains. Dropping ambient occlusion and lighting quality on their own can raise your framerate without introducing many strange visual artifacts (beyond the weirdness already inherent to this game), although realism suffers as lighting quality drops.

With the game rolling out the very same day we record our weekly computing podcast, Close to the Metal, it’s the perfect time to discuss our results and find out what went wrong.

Close to the Metal is a podcast from Digital Trends that takes a single topic from the computing world and dives deep into it, whether that's GPU performance, new software, or a lively hardware discussion, until we've covered every corner. Please subscribe, share, and send your questions to podcast@digitaltrends.com. We broadcast the show live on YouTube every Tuesday at 1pm EST/10am PST.

Brad Bourque
Former Digital Trends Contributor