
Human Screenome Project wants you to share everything you do on your smartphone


You’ve almost certainly seen them on YouTube. “Noah takes a photo of himself every day for 20 years” (5 million views). “Portrait of Lotte, 0 to 20 years” (10.9 million views). “Age 12 to married — I took a photo every day” (an astonishing 110 million views). Heck, even Homer Simpson and Family Guy’s Peter Griffin have parodied the format. In an age of selfies and ubiquitous smartphone cameras, this increasingly popular genre of time-lapse videos depicting the aging process lets people self-chronicle their lived experiences in a quintessentially modern way — one that would have been all but impossible just a couple of decades ago.

But what if the bigger story wasn’t some YouTube star’s changing facial features, but rather the fact that tens of millions of us would dedicate minutes of our day to watching them? And, maybe after that, tweeted out a link to the video we’d just watched. Or sent it to a buddy on WhatsApp. Or fired up the camera app on our own smartphone and started making our own version. Or just forgot about what we’d watched entirely, and played a quick game of Mario Kart Tour.


In a world in which we live digitally, the way we consume media on our screens (and, particularly, on our smartphones) may just turn out to be the most profound way of documenting life in 2020. At least, that’s the idea of an ambitious new initiative called the Human Screenome Project. Created by researchers at Stanford and Penn State University, it’s a new mass data collection exercise that asks users to agree to share information about every single thing they do on their smartphones.

Special software developed by the project’s creators will take screenshots of these mobile devices every five seconds they’re active, encrypt them, send them off to a research server, and then use artificial intelligence algorithms to analyze exactly what it is that’s being looked at. In the process, the researchers want to create a multidimensional map of people’s changing digital lives in the twenty-first century, providing a moment-by-moment look at changes over the course of days, weeks, months, and potentially even years and decades.
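The project’s actual collection software isn’t public, but the pipeline described above — a screenshot every five seconds while the screen is active, encrypted and queued for upload — can be sketched roughly as follows. Everything here is hypothetical illustration: the function names are invented, the toy XOR cipher merely stands in for real authenticated encryption, and a list stands in for the upload queue.

```python
import itertools

CAPTURE_INTERVAL_SECONDS = 5  # the project's stated sampling rate

def capture_screenshot(tick: int) -> bytes:
    # Placeholder: a real client would grab the device's framebuffer here.
    return f"screenshot-{tick}".encode()

def encrypt(data: bytes, key: bytes) -> bytes:
    # Toy XOR stand-in ONLY — a real client would apply proper
    # authenticated encryption before anything leaves the device.
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

def collect(num_ticks: int, key: bytes) -> list[bytes]:
    # Simulate the capture loop: one encrypted frame per five-second
    # tick while the screen is active (here, every tick).
    queue = []  # stands in for the upload queue to the research server
    for tick in range(num_ticks):
        frame = capture_screenshot(tick)
        queue.append(encrypt(frame, key))
    return queue

frames = collect(num_ticks=3, key=b"secret")
# XOR is symmetric, so applying encrypt() again recovers the frame:
assert encrypt(frames[0], b"secret") == b"screenshot-0"
```

The design point worth noting is that encryption happens on-device, before upload — the researchers’ server only ever receives ciphertext.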

“The digital media environment has [advanced] so much in the last few years,” Nilam Ram, professor of human development and psychology at Penn State University, told Digital Trends. “We don’t really have a good idea of how people are using their devices, and what it is that they’re being exposed to. Typically, research studies about screen time will rely on self-reports about how long people engaged with social media over the past week. That’s a really complicated question for people to answer. The evidence suggests that people are over- or underestimating their own engagement by as much as a few hours.”

A resource for researchers everywhere

According to Ram, the project traces back seven years to a chance meeting between himself and Byron Reeves, a professor of communication at Stanford. Reeves was interested in media and its effects on people. Ram was interested in behavioral time series data, a type of behavioral analytics that works with regular data points gathered in chronological order. This can be used to study — and predict — things about the behavior of individuals.

At first, the pair set out to explore multitasking. They developed software that they could use to see how rapidly student participants switched between tasks while they worked. They discovered that participants switched windows approximately every 20 seconds. “That was faster than anyone at that time thought anyone was switching from task to task,” Ram said. “From there, we developed software to do it on a smartphone.”
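That 20-second figure falls out of exactly the kind of behavioral time series Ram describes: time-stamped window-focus events, reduced to the gaps between consecutive switches. A minimal sketch of the calculation (the log format and numbers are invented for illustration):

```python
from statistics import median

# Hypothetical window-focus log: (timestamp in seconds, focused window)
events = [
    (0, "editor"), (18, "browser"), (41, "email"),
    (59, "editor"), (83, "browser"),
]

def switch_intervals(log):
    # Seconds elapsed between consecutive focus changes — the raw
    # material of behavioral time-series analysis.
    times = [t for t, _ in log]
    return [b - a for a, b in zip(times, times[1:])]

intervals = switch_intervals(events)
print(intervals)          # → [18, 23, 18, 24]
print(median(intervals))  # → 20.5
```

With real data the same interval series can feed standard time-series tools — the chronological ordering, not any single summary statistic, is what makes prediction possible.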

A Screenome Sample

They figured that this would be a natural extension of their work with multitasking. But when the initial flow of data, from a small group of students, came in, they realized they had tapped a far deeper well than they thought. “Once we started watching the time-lapse footage of what people were actually doing on their phone, we realized that, wow, there are so many different types of human behaviors that are expressed here,” Ram said. “That could be engagement with politics, mental health issues, social media issues, interpersonal relations, climate change. We can see things like the gender distribution of faces that people look at on their phones, the racial distribution of those faces — there’s so much richness in it.”

If this sounds like it’s too much for one pair of researchers to look at, you’d be absolutely correct. The hope is that the Human Screenome Project — whose name is a nod to the previous Human Genome Project — will create a vast sharable database of information that will be available for other researchers to explore as well. This will be part-ongoing user survey (albeit without users having to actively answer questions) and part-historical artifact, like a digital Mass Observation Project. The potential value of such an archive could be immense. Some researchers might use it to track the rise and fall of memes as they appear, flourish, and disappear into the cybernetic ether. Students of design could use it to look at how changing app user interfaces reflect transitions in that particular field. Others may use it, alongside cross-referenced information, to study the potential health impacts of social media. Or how screen time impacts concentration.

“The idea of the Human Genome Project was that, if we could map the human genome, it would change the way we approach disease and the treatment of disease,” Ram said. “I think it did that. Here, we’re in some ways trying to take the same kind of theoretical leap, saying that if we can map out the Human Screenome it will change the way we think about digital media and how it’s affecting people.”

It’s like mass surveillance but… good?

But is a project like this workable? The same thing that makes it so tantalizing from a research point of view is what also raises challenges. Simply put, as Apple co-founder and former CEO Steve Jobs predicted way back in 2007, the smartphone has become a consolidation of all the separate devices we once carried around. It is our laptop, our personal organizer, our portable music player, our GPS system, and more.


With the requirement of physical user interaction and millions of available apps, a smartphone is a far more dynamic media environment than its antecedent: the living room television, with its handful of channels to pick from. As windows into our interests go, smartphones are the epitome of what media theorist Marshall “the medium is the message” McLuhan would have called an “extension of ourselves.” However, that makes them personal in a way that few other devices are. Allowing researchers to see everything you do on your smartphone will, for some users, simply be a step too far.

Still, Ram is confident this will not hold true for everyone. “Generally we find in our conversations with participants that they are well aware that their data is being collected by the big data companies on a very regular basis,” he said. “It’s being used in ways that they have no control over. They seem aware of that, and excited about the possibility that those data might instead be used for research purposes to understand human behavior.”

As of now, the Stanford Screenomics Lab has collected over 30 million data points from more than 600 participants. While it has yet to open up its platform to whoever wants to get involved, Ram hopes participation will eventually scale to far larger numbers, with contributions from users spanning multiple years.

And what about when smartphones finally give way to some other dominant technology? “[This is something that] could go on forever,” Ram said. “[That will mean that it has to] transform in different ways as screens move from being separate devices to ones that are embedded somehow, whether it’s a chip or a Google Glass-style evolution. We want to evolve our data collection paradigm along with the emergence of those technologies.”

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…