This is what happens when A.I. tries to reimagine Stanley Kubrick’s films

Thanks to his classic sci-fi movie 2001: A Space Odyssey, filmmaker Stanley Kubrick helped introduce the general public to the topic of artificial intelligence. Almost 50 years on, that movie’s HAL 9000 character continues to be one of the most enduring representations of A.I. in entertainment — and has helped inform everything from the design of smart assistants like Siri and Google Assistant to debates about the perils of machine intelligence. But what would modern-day A.I. make of Kubrick’s work?

That slightly offbeat premise is the basis of an intriguing project — called Neural Kubrick — from researchers at the U.K.’s Interactive Architecture Lab. The idea behind the project is to look at how artificial intelligence can impact filmmaking, a question that speaks to the larger debate over whether A.I. can be considered creative.

The exhibition, created by researchers Anirudhan Iyengar, Ioulia Marouda, and Hesham Hattab, involves a multi-screen installation and deep neural networks that reinterpret scenes from 2001 and two other celebrated Kubrick movies: A Clockwork Orange and The Shining.

“Three machine learning algorithms take up the most significant roles in [our] A.I. film crew — that of art director, film editor, and director of photography,” Iyengar told Digital Trends. “There is a Generative Adversarial Network (GAN) that reimagines new cinematic compositions, based on the features it interprets from the input dataset of movie frames. There is a Convolutional Neural Network (CNN) that classifies visual similarities between inputted scenes and a dataset of hundreds of different movies, used to mimic the kind of decision making a film editor makes. And there is a Recurrent Neural Network (RNN), that analyzes the camera path coordinates of a cinematic sequence, and generates new camera paths to reshoot the original input sequence in virtual space — mimicking the role of a director of photography.”
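To give a flavor of the third role Iyengar describes, here is a minimal sketch of how a recurrent network can consume a camera path and propose a next position. This is not the lab's actual code; the layer sizes, weights, and the `predict_next_point` function are illustrative assumptions, and a real system would be trained on extracted camera trajectories rather than using random weights.

```python
import numpy as np

# Hypothetical toy example: a vanilla RNN cell that reads a sequence of
# 3D camera-path coordinates and emits a predicted next camera position.
# All weights are random stand-ins for what training would produce.
rng = np.random.default_rng(0)
hidden_size, coord_size = 16, 3  # hidden state size; (x, y, z) coordinates

W_xh = rng.normal(scale=0.1, size=(coord_size, hidden_size))   # input-to-hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden-to-hidden
W_hy = rng.normal(scale=0.1, size=(hidden_size, coord_size))   # hidden-to-output

def predict_next_point(path):
    """Run the RNN over a (T, 3) camera path; return a (3,) predicted next point."""
    h = np.zeros(hidden_size)
    for point in path:
        # Standard recurrent update: mix the current coordinate with prior state.
        h = np.tanh(point @ W_xh + h @ W_hh)
    return h @ W_hy

path = rng.normal(size=(20, 3))      # a 20-step virtual camera trajectory
next_point = predict_next_point(path)
print(next_point.shape)              # (3,) — one new (x, y, z) position
```

Generating a full replacement camera path would mean feeding each predicted point back in as the next input, step by step, which is how sequence models of this kind typically "reshoot" a scene in virtual space.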

The results of the Neural Kubrick experiment can be seen on the project’s website. It’s conceptual stuff, but it’s interesting because of the questions it poses about A.I. For instance, who is the author of a piece of work designed by an A.I.: the algorithm or its original programmer? Does any trace of Kubrick’s (very human) mastery of cinema remain when you’re trying to train a machine to replicate some of his decisions?

“It was intriguing for us to compare what meaning the machine makes of the given scene when all it interprets is features, patterns, zeroes, and ones,” Marouda told us.

The scenes generated by Neural Kubrick aren’t exactly entertaining in the classic sense, but they’re definitely interesting. At the very least, it’s difficult to imagine that Kubrick — a filmmaker known for pushing the technological limits of filmmaking — wouldn’t have been intrigued by the results!
