
Amazon's Lumberyard demo shows off direct support for fancy anti-aliasing effects

"Bistro" Demo - Amazon Lumberyard
In the world of high-end game engines, Amazon might not be the at the top of your list of developers, but perhaps it should be. In its latest video showcase of the Lumberyard engine and its virtual reality integration, Amazon has shown off a beautifully designed bistro, complete with drinks on tap, rainy cobblestones, and warm and rich lighting.

With engines such as Unity, Unreal Engine, and Cryengine dominating much of the world of top-tier and indie game development, Amazon is looking to break into a busy market with Lumberyard. Being free from the get-go is a good start, and in this latest engine demonstration we can see technologies such as specular anti-aliasing, temporal anti-aliasing, and order-independent transparency in action.
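To give a sense of what temporal anti-aliasing does, here is a minimal, hypothetical sketch of its core idea: each frame, the camera is jittered by a sub-pixel offset, and the new sample is blended into a running history buffer so that jagged edges smooth out over several frames. This is only an illustration of the general technique, not Lumberyard's actual implementation, and all names below are invented for the example.

```python
def taa_accumulate(history, new_sample, blend=0.1):
    """Exponentially blend the new jittered sample into the history value."""
    return history * (1.0 - blend) + new_sample * blend

# Simulate a pixel sitting on a hard edge: jittered sub-pixel samples
# alternate between covered (1.0) and uncovered (0.0) from frame to frame.
samples = [1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0]

pixel = samples[0]
for s in samples[1:]:
    pixel = taa_accumulate(pixel, s)

# The accumulated value settles toward the pixel's true partial coverage
# instead of flickering between fully on and fully off each frame.
print(round(pixel, 3))
```

The low blend factor is the usual trade-off in this kind of scheme: a smaller value gives smoother, more stable edges but makes the image slower to respond when the scene changes.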

In fact, Amazon is going so far as to claim that Lumberyard is the first engine to directly integrate such features, as per Road to VR. While that claim may be debatable, the demo, shown off as part of Nvidia's showcase at this year's Game Developers Conference, does suggest that Lumberyard has some real visual power behind it.

With support for the Oculus Rift, HTC Vive, and OSVR headsets and their accompanying software, that's good news for those looking to break into VR game development during its nascent stages.

Feedback on the engine seems rather good so far, though commenters on this particular video did note that because Lumberyard is built from an offshoot of the most recent Cryengine release from Crytek, it does carry over some hang-ups from older development standards. They also point out that the engine has advanced by leaps and bounds over the past year, so it is likely to see continued, fast-paced improvement in the years to come.

We’ll have to wait and see whether Lumberyard can compete visually with some of the more established engines.

Jon Martindale
Jon Martindale is the Evergreen Coordinator for Computing, overseeing a team of writers addressing all the latest how to…