Nissan’s engineers are working on more than just cars. At CES 2019, the Japanese automaker unveiled “invisible-to-visible” (I2V) tech that connects cars to a virtual world it calls the “Metaverse.” It lets drivers “see” inside buildings to find parking spots or get driving lessons from virtual avatars. It may sound like pure science fiction, but Nissan recently began testing I2V at its Japanese proving grounds.
Digital Trends recently caught up with Roel de Vries, Nissan’s global head of marketing and brand strategy, at the 2019 New York Auto Show to get the full story on I2V.
Digital Trends: Where did the idea to do this come from?
Roel de Vries: The way we are structured, and I think the way many car companies are structured, is that we have what we call advanced engineering, we have engineering, and then we have product planning.
The way this normally works is that in advanced engineering you have this super-advanced stage, which is where people are dealing with questions like “OK, what is the future of mobility? What is the future of technology?” We have people with PhDs, what we call fellows, who are pretty free to do whatever they want. They get quite a bit of money to really get out there and just explore and do things.
“If our car gets more and more autonomous, it needs to interact more and more with the outside world.”
Where I2V specifically comes from is this constant development around how we make cars safer, how we make the experience of people inside cars better, and how we can use technology and data to do that. If our car gets more and more autonomous, it needs to interact more and more with the outside world. Otherwise, it cannot be autonomous.
Sometimes you say you have one level of autonomy, which is “I have my car, I have lots of sensors and radar and sonar,” and this machine constantly checks its surroundings.
But it’s not really aware of its environment.
It’s not really aware of what is happening around the corner. The engineers are constantly busy with how to go beyond [that]. So, how do I look around the corner? Well, I don’t know, because this thing can’t see through a building. So now you need to connect with something other than yourself.
Where does I2V come from? It’s purely from that. To be truly autonomous, the car needs to be able to do this type of thing, for safety and for autonomous-technology development. Then you have the story around “How do I make driving fun and entertaining?”
There are so many data points out there, and you can connect these data points and almost create a world that’s out there, and create that world anywhere. So I can bring what’s around the corner into my car. That’s where the name comes from, because the invisible becomes visible.
“There are so many data points out there, and you can connect these data points and almost create a world.”
For the engineers, it’s pretty straightforward. So when I said I wanted to take it to CES and make it come alive, that wasn’t their intent. They do this serious development because they believe that this will have a real impact on cars in the future.
So the concept of the Metaverse is basically just harnessing data that’s out there already?
Basically, yes, but at a massive scale. There are three main elements. What are the data points that are out there? How do I integrate all of that into something that makes sense? How can I get this at speed, real time, in a usable format, into the car?
Now, how do we get all of that at speed? I think all of the 5G developments are playing a role in that, where you start to make things possible that have just never been possible. That’s where these things come together, and where I2V was born.
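Nissan hasn’t published how the I2V pipeline is actually built, but the three elements de Vries lists map onto a simple collect-integrate-deliver loop. The Python sketch below is purely illustrative: every feed name, data shape, and function in it is an assumption, not Nissan’s implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of the three stages de Vries describes: collect
# external data points, integrate them into one picture of the car's
# surroundings, and deliver that picture to the car in near real time.
# None of these names come from Nissan.

@dataclass
class DataPoint:
    source: str      # e.g. "roadside_camera", "connected_car", "weather"
    location: tuple  # (latitude, longitude)
    payload: dict    # whatever the feed reports

def collect(feeds):
    """Stage 1: pull the latest data points from every external feed."""
    points = []
    for feed in feeds:
        points.extend(feed())
    return points

def integrate(points, around, radius_km=1.0):
    """Stage 2: merge points into one view of the car's surroundings,
    keeping only what is near the car's current position."""
    def close_enough(p):
        dlat = abs(p.location[0] - around[0])
        dlon = abs(p.location[1] - around[1])
        return max(dlat, dlon) < radius_km / 111.0  # rough degrees per km
    return [p for p in points if close_enough(p)]

def deliver(view, push_to_car):
    """Stage 3: push the integrated view to the in-car display. Over a
    real link, this is where low-latency transport such as 5G matters."""
    push_to_car(view)

# Toy usage: one fake feed reporting traffic hidden behind a building.
def fake_feed():
    return [DataPoint("roadside_camera", (35.68, 139.69),
                      {"hidden_traffic": True})]

deliver(integrate(collect([fake_feed]), around=(35.68, 139.69)),
        push_to_car=lambda view: print("car display <-", view))
```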
Where did the idea to use avatars, and bring other people virtually into the car, come from?
I’ll be frank with you: that was not the main thing I wanted to show. Because if you bring the avatar in the car, then probably a better application for that is for a company like Skype to do it. I said [to the engineers] “that’s fascinating, but what’s the car story?” We spent a lot of time debating, “Okay, if that is possible, what is the benefit in the car?”
“For me what’s actually more intriguing in terms of the future is how you can see what you cannot see.”
Then we came up with the thing we demonstrated at CES, which is, for somebody who loves driving, to have a professional driver with you that can teach you, coach you, on how to make your driving more exciting. If I’m driving on a winding road or on a racetrack, I’d love to have [five-time Formula One champion] Lewis Hamilton sit next to me and tell me how to do this. It’s possible, because we can put this professional driver in a place with virtual reality. He sees everything I see on the track.
For me what’s actually more intriguing in terms of the future is how you can see what you cannot see, which is basically, I’m driving on this winding road, there’s traffic coming from the other side that I can’t see, so I drive much slower, much more conservatively, because I don’t know what’s going to happen. I don’t know where there’s a pothole in the road. I don’t know if, around the corner, the weather’s not so good. So to bring all of that information to me as a driver, that’s what I found most fascinating. But I also know that the avatar is the shiny, fun story of what you can do at CES.
You’ve mentioned uses for I2V that involve both human-driven and autonomous cars. In a human-driven car, are you concerned that throwing all of this information at the driver might be too distracting?
This is why it’s still advanced engineering. Those really are the things that we need to figure out. But that’s a constant debate, because on all new cars the screens are getting so big, and the amount of stuff that you can put on them is getting bigger. So we still need to figure out how to create what we call an HMI, a human-machine interface, that makes sense and is still simple and intuitive.
Are there any other uses you view as being more on the practical side, more likely to get put into production?
The biggest applications are to make driving better. There are other fun things you can do. Like, if I drive in my car and it’s foggy, we can make the road look beautiful. You go on a holiday to Scotland and you want to see this beautiful scenery, but three out of four days it’s rainy in Scotland. In the future, you [could] drive along the road and it would be like you’re driving on a sunny day.
I think there are unlimited possibilities. Which ones will become reality, and which ones won’t? I don’t know, but the fun part of these things is that you need to play with the imagination. Then something’s going to pop, and we’ll realize we can actually do it, and commercialize it. What that will be, I don’t know yet.
Nissan is testing I2V with people in a moving vehicle. Have you received any feedback from that yet?
People love it. It speaks to the imagination. Five or six years ago, people spoke about self-driving cars; now it’s invisible-to-visible. The trick now is to go from that to a real application that we can actually commercialize.
What steps does Nissan still need to take to make that happen?
It needs to come from advanced engineering to a vehicle application. For that cycle, we’re looking at between five and 10 years, at least, before we do that. But often, in our industry, elements of [new technology] come much earlier.
“I think there are unlimited possibilities. Which ones will become reality, and which ones won’t? I don’t know.”
The use of data outside the car, far away, is going to be used for many things. Not so much to put an avatar next to you, but to know which parking spot is free in the building down the road. What we did as a demonstration at CES was that you could see the building, it became transparent, and [it showed you] the parking spot. What will happen before that is that, probably, your navigation system will say, “parking lot 5B, fifth floor, second on the right is available,” without trying to create a virtual reality image of that building. That will probably come much earlier, and it’s using the same base technology.
That’s how you’ll see these types of developments coming to your car. People don’t realize how it is coming to the car, because it’s coming in these small applications.
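De Vries’ parking example suggests how modest that first application could be. As a purely hypothetical illustration (Nissan hasn’t published any I2V data formats, so the garage feed below is invented), turning an availability feed into the spoken prompt he describes takes only a few lines:

```python
# Hypothetical sketch of the "small application" version: instead of a
# transparent-building AR view, the navigation system just reads out
# which spot in a nearby garage is free. The feed format is invented
# for illustration.

def free_spot_prompt(garage):
    """Turn a garage's availability feed into a plain navigation prompt."""
    for spot in garage["spots"]:
        if spot["free"]:
            return (f"Parking lot {garage['lot']}, "
                    f"{spot['floor']} floor, "
                    f"{spot['position']} is available")
    return f"Parking lot {garage['lot']} is full"

garage = {
    "lot": "5B",
    "spots": [
        {"floor": "fifth", "position": "second on the right", "free": True},
    ],
}
print(free_spot_prompt(garage))
# -> Parking lot 5B, fifth floor, second on the right is available
```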
So it will be more like a gradual addition of new features over time?
Yes, and it’s the same, by the way, with autonomous driving. People say, “OK, when is the car self-driving?” It’s happening every day. Every day, you get more technology that helps you drive your car. ABS [anti-lock brakes] helps you brake, and now you have cars that help you stay in the lane, stop before the car in front of you stops, and drive themselves in a single lane. These things happen all the time, so autonomous driving is happening all the time.
Some people tell us, with autonomous driving, “I don’t want to give up control.” But then you tell them about forward emergency braking and they say, “Yeah that’s good.” It’s basically doing the same thing.
Another thing people ask about is e-commerce: are people going to buy cars online? The image is that I take my credit card, I take my computer screen, I spend five minutes, and tomorrow the car’s in front of my door.
My answer to them is the same: e-commerce is happening every day. Twenty years ago, you had to get a car magazine to read about the specifications; then you went to the website. Now you can go to the website and actually configure your car. Because it’s linked to your credit rating, you can check how much the monthly payment is, and you can book a test drive. Over a period of time, you look back and realize we are doing things completely differently from the way we did them 10 years ago.
As tech features like I2V potentially become more commonplace, do you think they will overtake the importance of the cars themselves? Will people still care about the actual, tactile feel of the car, or will it just be how well it presents this virtual environment and connectivity to them?
First of all, what cars look like 30 or 40 years from now, I have no idea. Maybe there will be flying cars, I don’t know. But I think these things don’t always move as fast as people make them out to.
People say, with car sharing, that no one wants to own a car anymore. You could say, with I2V, I can live in the virtual world, so I don’t need the tactile [experience] anymore. But there is another dimension of owning something that I think we underestimate, and to some extent it is coming back.
The physical, the ownership, is still a very big part of the experience of people. A watch that you own, the phone that you own, the car that you have, the interior in your house, what it feels like, how you experience it, how you live it, how you make it yours, I don’t think we should underestimate that. People don’t want to be in a world where everything is virtual. You want to have your stuff. That might change, but I don’t think it’s going to change as fast as some people make it out to be.