Hi Google, what’s new this year?

Quentin Braet
5 min read · May 9, 2018

Spring is in the air, so while everyone is hoping to get a tan on the terrace, we at In The Pocket, geeks as we are, gathered in our arena for one of the annual highlights of the tech conference season: Google I/O. A few hours before Sundar Pichai (Google’s CEO) took the stage, we made our own predictions for the event.

This year it proved very difficult to know what Google was up to, since very little information had leaked. Would it be a very surprising I/O edition, or just a 2.0 version of last year? The only thing we could predict with decent certainty was that artificial intelligence would again play a key role throughout the whole presentation. And that is exactly how it went. It all started with a showcase of Google’s efforts in healthcare, and from there they moved on to how they have improved their own products and services with AI.

One of Google’s latest goals is to give us all a personal assistant. With the latest iteration of Google Assistant, it has become clear that voice alone is not the holy grail, but that we will often need a mixture of visual elements and voice commands to really speed up how we reach our intent. That doesn’t mean they didn’t improve how the system understands voice commands; in fact, the progress is really impressive. With what we saw yesterday, it is clear that we have moved on from voice commands to real conversations with Google Assistant. You no longer have to say ‘Hi Google’ before every separate command, and more impressively: Google can now distinguish multiple actions within a single sentence. This is something that is easy for humans, but rocket science for computers.

But this was not the end of the demo. The real surprise was the moment when the Assistant called a restaurant to make a reservation, completely on its own, with only your calendar and your intent to make a reservation as a starting point. The future is real, people: bots are making phone calls now. Next year, maybe a bot will write my emails. Oh wait, Google already implemented this and made an SDK available to do the same in your apps.

Demonstration of Google Assistant making phone calls in the background (a.k.a. Google Duplex)

This year’s Android P update (coming this fall) seems to have a lot more body than the previous one. It really focuses on intelligence, simplicity and digital wellbeing. AI is becoming a core Android feature that shows itself in a lot of small things, ranging from brightness control to adaptive battery usage. But it is not limited to OS features: with the newly introduced Slices and App Actions, your app can expose intents and shortcuts that the system can use to recommend app features or actions in the launcher or while searching.
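
To make the Slices part a bit more concrete, here is a minimal sketch of a SliceProvider, roughly as the androidx.slice builders looked around this release. The class name, the /ride URI, BookRideActivity and the icon are purely hypothetical, and a real app would also need to declare the provider in its manifest.

```kotlin
import android.app.PendingIntent
import android.content.Intent
import android.net.Uri
import androidx.core.graphics.drawable.IconCompat
import androidx.slice.Slice
import androidx.slice.SliceProvider
import androidx.slice.builders.ListBuilder
import androidx.slice.builders.SliceAction

// Hypothetical provider exposing a "book a ride" shortcut as a Slice,
// so the system can surface it in search or next to Assistant results.
class RideSliceProvider : SliceProvider() {

    override fun onCreateSliceProvider(): Boolean = true

    override fun onBindSlice(sliceUri: Uri): Slice? {
        val context = context ?: return null
        if (sliceUri.path != "/ride") return null

        // Tapping the row opens the (hypothetical) booking screen of the app.
        val openBooking = SliceAction.create(
            PendingIntent.getActivity(
                context, 0, Intent(context, BookRideActivity::class.java), 0
            ),
            IconCompat.createWithResource(context, R.drawable.ic_ride),
            ListBuilder.ICON_IMAGE,
            "Book a ride"
        )

        return ListBuilder(context, sliceUri, ListBuilder.INFINITY)
            .addRow(
                ListBuilder.RowBuilder()
                    .setTitle("Book a ride home")
                    .setSubtitle("Pickup in about 5 minutes")
                    .setPrimaryAction(openBooking)
            )
            .build()
    }
}
```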

Next to that, Google also announced ML Kit, a cross-platform machine learning toolset that enables you to use top-notch machine learning models, some of them even completely on-device. For the people who suffer from FOMO but hate themselves for it, Google also has your back: they envisioned a new way of muting and delaying notifications so that you can live in the moment and are not distracted by notifications. It sounds like the people that made us addicted to our phones have seen the light for our social health. Remember our mobile DNA app?
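
To give the ML Kit announcement a bit of substance, this is a minimal sketch of on-device text recognition with the ML Kit for Firebase SDK shown at I/O; the function name and callback are just illustrative, and you would still need to add the Firebase ML dependencies to your project.

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Runs ML Kit's on-device text recognition on a bitmap and hands the
// recognized text to the caller; no network round trip is needed for
// the on-device model.
fun recognizeText(bitmap: Bitmap, onResult: (String) -> Unit) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer

    recognizer.processImage(image)
        .addOnSuccessListener { result -> onResult(result.text) }
        .addOnFailureListener { error -> error.printStackTrace() }
}
```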

Okay, so now let’s move on to Google Maps. Remember the moment when Google introduced Street View? We all thought it was awesome, but why would Google make such an investment to capture all these photos? Just because it’s cool? Maybe, but do you also remember how last year they teased their idea for a VPS, or Visual Positioning System? This year they put the puzzle pieces together and added a layer of AR on top. Until now, you had to look at the blue dot on the map to see where you are, and then often had to start walking randomly just to see where you were moving on the map before you could really orientate yourself and start navigating in the right direction. That first world problem is a thing of the past now: just turn on your camera and Google will use the Street View data to fix the map in the right direction and will project an AR layer of information on top of your view to indicate your route, just like in the science fiction movies.

Talking of AR: Google is also working very hard on something they call Cloud Anchors, a system they will use for AR interactions across multiple devices. This could become a real game changer.
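
For the developers among us, the Cloud Anchors flow boils down to hosting an anchor from one device and resolving it on another. A minimal sketch, assuming an ARCore Session that is already set up and some backend of your own to pass the anchor id between devices:

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Config
import com.google.ar.core.Session

// Enable Cloud Anchors on an existing ARCore session.
fun enableCloudAnchors(session: Session) {
    val config = Config(session)
    config.cloudAnchorMode = Config.CloudAnchorMode.ENABLED
    session.configure(config)
}

// Host a local anchor in the cloud; share anchor.cloudAnchorId with the other
// device (over your own backend) once cloudAnchorState reports success.
fun hostSharedAnchor(session: Session, localAnchor: Anchor): Anchor =
    session.hostCloudAnchor(localAnchor)

// On the second device, resolve the same anchor from the shared id so both
// users see virtual content at the same physical spot.
fun resolveSharedAnchor(session: Session, cloudAnchorId: String): Anchor =
    session.resolveCloudAnchor(cloudAnchorId)
```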

For the people who haven’t seen enough AI magic by now, just remember: 2018 was also the year that self-driving cars had their 15 minutes of fame during the Google I/O keynote, and I must say, it looks like we are really close to a complete mind shift in transportation.

It turns out it was an iteration on last year’s event, but that doesn’t mean my developer heart wasn’t impressed by the improvements the engineers made and the new ideas that were pitched on stage. Google certainly has a plan and has been laying out the puzzle pieces since the first I/O event ten years ago. For those of you who aren’t familiar with it: I/O stands for Input/Output, a term typically used in data processing, and that is exactly what Google has been doing for years. They collected input, and today, we really start to see the output.

Originally published at inthepocket.com on May 9, 2018.
