
L.A. Noire interactive crime map highlights inspirations for the game’s cases


Here’s a cool thing to play around with while you wait for May 17 to arrive, and with it, Rockstar Games’ long-awaited open-world period police procedural (no, really… that’s basically what it is), L.A. Noire. The game’s developer, Team Bondi, teamed up with The Los Angeles Times, specifically its Archives group, to create a special interactive map of L.A., the L.A. Noire 1947 Edition Crime Map.

The map is actually an offshoot of the LA Times’ own Crime Map project, which reports on present-day crimes and related statistics using data from the Los Angeles Police Department and Los Angeles County Sheriff’s Department. The L.A. Noire map is quite a bit cooler, largely because it doesn’t present such a chilling picture of certain neighborhoods in present-day Los Angeles.

The idea is the same, though: using an interactive map, you explore 1947 L.A. and learn about various crimes committed in the city during that time. Team Bondi conducted extensive research in putting together the story for the upcoming game, and many of the crimes protagonist Cole Phelps investigates are based on real-life events. The map even links to scans of the original stories as they appeared in the paper at the time, a very nice touch. There’s some wild stuff in there, like the story of acrobat burglars who made off with $2,500 and “several hundred pounds of meat,” but not before drinking down several quarts of milk.

Expect more stories to be added to the site as the game’s May 17 release draws closer.
