After an unprecedented level of leaks, Google’s New York event was light on surprises. The new Pixel 3 and Pixel 3 XL are exactly what we expected to see. Few phones have suffered so much criticism in the build up to an unveiling. The deep notch gouged out of the top of the Pixel 3 XL’s OLED screen seems universally reviled, and our Pixel 3 XL hands-on did not change our mind – it’s ugly.
We’re glad the Pixel 3 doesn’t have a matching notch, though its big bezels do make it look dated. But there’s more to a phone than its looks. We’re often told that it’s what’s on the inside that counts, and Google’s groundbreaking artificial intelligence (A.I.) advancements remind us of that. The dull design definitely disappoints, but we’re genuinely excited about what the A.I. smarts inside can do for us, including Google Duplex, which is rolling out to some Pixel phones in Atlanta, New York City, Phoenix, and the San Francisco Bay area.
“The big breakthroughs you’re going to see are not in hardware alone, they come at the intersection of AI, software, and hardware,” Rick Osterloh, senior vice president of Hardware at Google, said on stage at the event, in front of a huge screen bearing the words “AI + Software + Hardware.”
We can’t help feeling that the order is no accident. Google is all-in on AI, and it has always prized the software experience above its hardware. This is what sets Google apart from competitors like Apple and Samsung. If you want a beautifully crafted piece of hardware above all else, they will happily sell you one; Google is betting that what its software and A.I. can do for you matters more.
The A.I. hype train
Everyone has been hyping artificial intelligence for so long now that it’s easy to get weary of the hyperbole. The sad truth is that the experience of using A.I. on most phones doesn’t come close to matching the promise.
Without naming names, we’ve seen A.I. in cameras consistently misidentify subjects and scenes, producing shots that are clearly worse than the normal auto mode. We’ve had countless suggestions for news stories or places to visit that the most cursory understanding of our tastes or location would reveal as erroneous. We’ve been repeatedly misunderstood when attempting to issue simple voice commands.
It feels as though, as soon as Google started to see real returns from its investment in A.I., everyone else wanted to jump on the bandwagon, but they’re years behind and there’s no shortcut to catching up.
Pre-emptive help
We’re used to things like predictive text, but the first time we can remember being offered some helpful information by our phone without asking for it was in 2012 when Google Now rolled out. Looking at your phone first thing in the morning, you’d see a card displaying your optimum commute. Pulling your phone from your pocket at a bus stop or train station would throw up a timetable.
It didn’t do a great deal else beyond alerting you to the latest sports scores for the teams you supported, but it was an exciting first step. Being able to see at a glance if there was a traffic delay, or knowing exactly when the next bus would arrive, made life a little easier.
Google Now has since been folded into the Google app and Google Assistant, which take that idea of anticipating your needs much further.
Google Assistant handling calls
When we saw the Google Duplex demo earlier this year, we were blown away. This is A.I. conducting a natural-sounding conversation to book a restaurant reservation or schedule a haircut appointment. It works within the parameters you set, so you can stipulate that you’d prefer a table at 8 p.m. but that anything between 8 p.m. and 9 p.m. is fine, and it will go ahead and book for you, automatically adding the reservation to your Google Calendar once it’s confirmed.
If it didn’t tell you it wasn’t human, you wouldn’t guess. We don’t doubt it could pass the Turing test, provided the topic didn’t stray too far from booking your appointment.
This functionality starts rolling out to Google’s Pixel phones next month on a city-by-city basis, starting with New York City. Though it’s fairly limited in scope right now, we can see it growing into something we use often in our daily lives.
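Google hasn’t published an API for Duplex, but the booking-within-a-window behavior is easy to picture in code. Here’s a minimal Kotlin sketch of the kind of request you would hand off: a preferred time plus a latest acceptable time, with the confirmed slot echoed back for your calendar. Every name in it (ReservationRequest, negotiate, and so on) is our own illustration, not anything Google exposes.

```kotlin
import java.time.LocalDate
import java.time.LocalTime

// Hypothetical types for illustration only; Google has not published a Duplex API.
data class ReservationRequest(
    val venue: String,
    val date: LocalDate,
    val preferred: LocalTime,
    val latestAcceptable: LocalTime,
    val partySize: Int
)

data class Booking(val venue: String, val date: LocalDate, val time: LocalTime)

// Stand-in for the restaurant's side of the call: offer the earliest free slot
// inside the caller's acceptable window, or nothing at all.
fun negotiate(request: ReservationRequest, freeSlots: List<LocalTime>): Booking? =
    freeSlots
        .filter { it >= request.preferred && it <= request.latestAcceptable }
        .minOrNull()
        ?.let { Booking(request.venue, request.date, it) }

fun main() {
    val request = ReservationRequest(
        venue = "Trattoria Example",
        date = LocalDate.of(2018, 11, 3),
        preferred = LocalTime.of(20, 0),        // we'd like 8 p.m. ...
        latestAcceptable = LocalTime.of(21, 0), // ...but anything up to 9 p.m. is fine
        partySize = 2
    )
    val restaurantAvailability = listOf(LocalTime.of(19, 30), LocalTime.of(20, 30))

    when (val booking = negotiate(request, restaurantAvailability)) {
        null -> println("No table in that window; ask the user for new times.")
        else -> println("Booked ${booking.venue} at ${booking.time}; adding it to Google Calendar.")
    }
}
```

The point of the sketch is the constraint handling: the assistant only ever commits to a slot that satisfies the window you gave it, and anything outside that window gets bounced back to you.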
Another exciting exclusive for Pixel phones is the Call Screen feature. If you get an incoming call you can’t or don’t want to take, then you can tap screen call and the caller will hear this:
“Hi, the person you’re calling is using a screening service from Google, and will get a copy of this conversation. Go ahead and say your name, and why you’re calling.”
As the caller explains, the transcribed text pops up on your screen in real time, and you can choose to pick up, send a quick reply, or mark it as spam. If you do mark it as spam, the service will automatically say:
“Please remove the number from your mailing and contact list. Thanks, and goodbye.”
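Under the hood, the user-facing flow amounts to a small loop: play the greeting, stream the transcript to the screen, and act on whatever you tap. Here’s a rough Kotlin sketch of that flow, using the two canned messages above; the types and function names are ours, not part of Android or Google’s software.

```kotlin
// Illustrative sketch of the Call Screen flow; these types are ours, not Android's.
enum class ScreeningAction { PICK_UP, QUICK_REPLY, MARK_AS_SPAM }

const val GREETING =
    "Hi, the person you're calling is using a screening service from Google, " +
    "and will get a copy of this conversation. Go ahead and say your name, and why you're calling."

const val SPAM_RESPONSE =
    "Please remove the number from your mailing and contact list. Thanks, and goodbye."

fun screenCall(transcript: Sequence<String>, decide: (String) -> ScreeningAction?) {
    println("Bot: $GREETING")
    var soFar = ""
    for (chunk in transcript) {             // transcription arrives in real time
        soFar += chunk
        println("Caller: $soFar")
        when (decide(soFar)) {              // the user can act at any point
            ScreeningAction.PICK_UP -> { println("Connecting the call..."); return }
            ScreeningAction.QUICK_REPLY -> { println("Bot: They'll get back to you."); return }
            ScreeningAction.MARK_AS_SPAM -> { println("Bot: $SPAM_RESPONSE"); return }
            null -> Unit                    // no decision yet, keep listening
        }
    }
}

fun main() {
    val chunks = sequenceOf("Hi, this is ", "an extended car warranty ", "offer...")
    screenCall(chunks) { text -> if ("warranty" in text) ScreeningAction.MARK_AS_SPAM else null }
}
```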
We think the immediacy and convenience of this beats visual voicemail, which, it’s worth remembering, is carrier-specific and not available everywhere right now.
Amazing camera performance
One of the biggest arms races for smartphones in the last couple of years, and easily the biggest area of improvement, has been the camera. We’ve seen more and more dual-lens cameras and even triple-lens cameras as manufacturers struggle to outdo each other.
If you’re seeking proof of Google prioritizing A.I. over hardware, look no further than the Pixel camera. Google has stuck with a single-lens main camera, even reducing the megapixel count slightly from the original Pixel to the Pixel 2, and yet it continues to outperform most of the competition.
The Pixel 2 is our reigning camera phone champion, because it most often takes the photos we want to keep or share.
“That’s not a fluke,” said Osterloh at the Google event. “We spent years researching computer vision technologies, analyzing hundreds of millions of photos.”
Google is getting better, more consistent results by employing artificial intelligence and computational photography than its competitors are getting by packing in more lenses. The Pixel 2 turns out awesome portrait shots with that coveted bokeh background blur, estimating depth computationally from a single lens rather than relying on a second camera. With the Pixel 3, Google is doubling down on that approach, adding camera features like Top Shot, Super Res Zoom, and Night Sight.
All these A.I. features make your camera easier to use and help you get better photos. You may be able to achieve something technically better with the latest triple-lens camera from a competitor, but it will often require a little planning or tweaking. Google’s Pixel cameras are designed to be quick and easy, so you can just point and shoot, which is how most people really use their phone cameras.
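Google hasn’t shared the details of its portrait-mode pipeline, but the basic idea behind single-lens bokeh is simple to sketch: estimate a depth value for every pixel (the Pixel leans on its dual-pixel sensor plus machine learning for that), keep the subject sharp, and blur whatever the depth map says is background. The Kotlin toy below fakes both the image and the depth map as plain arrays; it’s a sketch of the concept, not Google’s pipeline.

```kotlin
// Toy single-lens "portrait mode": blur the pixels the depth map marks as background.
// Real pipelines estimate depth from dual-pixel data and ML; here depth is simply given.
fun syntheticBokeh(
    image: Array<FloatArray>,   // grayscale pixel values in [0, 1]
    depth: Array<FloatArray>,   // estimated depth per pixel, larger = farther away
    focusDepth: Float,          // anything deeper than this gets blurred
    radius: Int = 2
): Array<FloatArray> {
    val h = image.size
    val w = image[0].size
    return Array(h) { y ->
        FloatArray(w) { x ->
            if (depth[y][x] <= focusDepth) {
                image[y][x]                       // subject stays sharp
            } else {
                var sum = 0f; var count = 0       // simple box blur for the background
                for (dy in -radius..radius) for (dx in -radius..radius) {
                    val ny = y + dy; val nx = x + dx
                    if (ny in 0 until h && nx in 0 until w) { sum += image[ny][nx]; count++ }
                }
                sum / count
            }
        }
    }
}

fun main() {
    val image = Array(4) { FloatArray(4) { x -> if (x < 2) 1f else 0f } }
    val depth = Array(4) { FloatArray(4) { x -> if (x < 2) 1f else 5f } }   // right half = background
    val result = syntheticBokeh(image, depth, focusDepth = 2f)
    result.forEach { row -> println(row.joinToString(" ") { "%.2f".format(it) }) }
}
```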
Really useful A.I.
The A.I. innovation in the new Pixel continues with Smart Compose in Gmail, which offers to finish your sentences with contextual phrases, cutting down on repetitive typing like addresses. It’s like a super-charged version of predictive text that could genuinely save you a lot of time.
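Smart Compose itself runs on a neural language model, but the behavior you see, an offered completion whenever the prefix you’ve typed strongly predicts the rest, can be mimicked with something as crude as a lookup over phrases you’ve typed before. The snippet below is that toy version, and nothing like Gmail’s actual model; the class and function names are ours.

```kotlin
// Toy Smart Compose: suggest the rest of a phrase you've typed often before.
// Gmail uses a neural language model; this lookup table only demonstrates the behavior.
class PhraseSuggester(history: List<String>) {
    private val phrases = history.groupingBy { it }.eachCount()

    fun suggest(typed: String): String? =
        phrases.entries
            .filter { it.key.startsWith(typed, ignoreCase = true) && it.key.length > typed.length }
            .maxByOrNull { it.value }          // prefer what you type most often
            ?.key?.drop(typed.length)          // return only the part left to type
}

fun main() {
    val suggester = PhraseSuggester(
        listOf(
            "Let me know if that works for you.",
            "Let me know if that works for you.",
            "Let me know when you're free.",
            "My address is 123 Example Street, Springfield."
        )
    )
    println(suggester.suggest("Let me know if"))   // -> " that works for you."
    println(suggester.suggest("My address"))       // -> " is 123 Example Street, Springfield."
}
```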
We know that a lot of people find this stuff creepy or have legitimate concerns about privacy, but for us the utility eclipses our disquiet.
From warnings about train delays and reminders of where you parked to capturing the best possible photo, Google is doing things with A.I. that no one else can right now.