Google I/O 2018: The Biggest Highlights From This Year’s Developer Conference
Google I/O wrapped up last week in Mountain View, California, and, like always, this year’s conference revealed several exciting announcements about the year ahead for Google. This year, the company shared its plans for Android P, Google Assistant, Google Maps, and a few other updates poised to impact the mobile app development landscape.
Notably, this year’s I/O emphasized the role of artificial intelligence (AI), specifically machine learning, and the part it plays in personalizing Android for its users. At last year’s I/O, the company announced Google AI, along with its plans to bring the benefits of AI to everyone. That mission shows no sign of slowing, and this year it’s obvious that Google is determined to become the largest AI company in the world.
“I feel very fortunate as a company to have a timeless mission that feels as relevant today as when we started,” says Sundar Pichai, CEO of Google.
Pichai spoke about the excitement of driving technology forward in his opening keynote but stressed the importance of reflecting on Google’s responsibility for the impact of these advances and the ways they will influence day-to-day life.
AI will dramatically affect the way people experience the world through technology. Every highlight from I/O 2018 incorporated some aspect of AI and the greater effort to make information more useful, accessible, and beneficial to society.
The P is for Personalization
There was a lot of talk about the new Android operating system, Android P, and Google has already released a beta version for users to download. Most of the operating system updates incorporate AI and focus on personalization. Android P is a crucial first step toward putting AI at the core of mobile operating systems.
Generally speaking, almost every smartphone user will say battery life is a top priority when choosing a device. Android P is designed to deliver more consistent, long-lasting battery life through a feature Google calls Adaptive Battery. With on-device machine learning, Android P learns from usage patterns and adapts to deliver a better experience. Specifically, it learns which apps a user opens frequently, which apps they are likely to open in the next few hours, and which ones they won’t use until later in the day, if at all. Using this knowledge, Android P spends battery only on the apps and services a user cares most about.
Most mobile operating systems already adjust screen brightness to the current lighting conditions, but Android P takes this feature one step further with Adaptive Brightness, which accounts for the user’s personal preferences as well as the current environment. Again, with on-device machine learning, the device learns how a particular user likes to adjust their brightness settings and makes the necessary changes automatically, in a power-efficient way.
Last year, Google announced a predictive app feature for Android; this year’s operating system goes beyond predicting which app a user wants to launch to predicting which specific action they want to take. The feature is called App Actions, and actions are predicted based on usage patterns. For example, if a user connects their headphones, Android will surface an action to resume the album they were last listening to.
This feature is aimed at supporting re-engagement. To support App Actions, developers only need to add an actions.xml file to their app. Including this file allows actions to surface not only in the launcher, but also in smart text selection, the Play Store, Google Search, and Google Assistant. This is a great way to provide deep links into an app given the right context.
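For illustration, a minimal actions.xml might look something like the sketch below. This is based on the App Actions developer preview; the intent name, deep-link template, and parameter names are assumptions for illustration, not a definitive schema.

```xml
<!-- res/xml/actions.xml — illustrative sketch only; intent and parameter
     names are assumptions based on the App Actions developer preview -->
<actions>
    <!-- Declare a built-in intent this app can fulfill -->
    <action intentName="actions.intent.PLAY_MUSIC">
        <!-- Deep link Android opens when the suggested action is tapped -->
        <fulfillment urlTemplate="myapp://play{?albumId}">
            <parameter-mapping
                intentParameter="album.identifier"
                urlParameter="albumId" />
        </fulfillment>
    </action>
</actions>
```

With a declaration like this, the system can match a predicted user intent (here, resuming music) to a deep link inside the app and surface it in the launcher or Search.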
The main focus for Android P is driving app engagement, and a new feature called Slices will dramatically change how users interact with apps while providing more real estate for developers looking to direct users into their apps.
Slices provide users with a small, interactive excerpt from an app that surfaces in different areas of the operating system. Android P is laying the foundation primarily in search. For example, if a user is looking for a ride home and searches Lyft in Google Search, the user will see a slice from the Lyft app installed on their phone. Developers can use a number of UI templates to render a part of their app in the context of search.
The UI templates for Slices are multi-faceted, giving developers access to feature-rich media that lets users make purchases, watch videos, or book hotel rooms directly from the Slice snippet. Developers have the option of using the provided templates, which are modeled after Android Notifications, or building their Slices completely from scratch. An early access program will be available for developers starting in June 2018. Introducing Slices will transform app development by creating a dynamic experience where app UI features can intelligently and intuitively show up in context for users.
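As a rough sketch of the template-based approach, a Slice provider built with the androidx.slice preview library might look like the following. Class and method names follow the early preview API announced at I/O and should be treated as illustrative, and the ride-sharing strings are hypothetical content, not part of any real app.

```kotlin
// Illustrative sketch based on the early androidx.slice preview API;
// exact class and method signatures may differ in the shipping library.
import android.net.Uri
import androidx.slice.Slice
import androidx.slice.SliceProvider
import androidx.slice.builders.ListBuilder

class RideSliceProvider : SliceProvider() {

    override fun onCreateSliceProvider(): Boolean = true

    // Called when a surface (e.g. Google Search) needs to render the Slice.
    override fun onBindSlice(sliceUri: Uri): Slice {
        val listBuilder = ListBuilder(context, sliceUri, ListBuilder.INFINITY)
        // A row template modeled after Android Notifications: title + subtitle.
        val row = ListBuilder.RowBuilder(listBuilder)
            .setTitle("Ride home")               // hypothetical content
            .setSubtitle("Pickup in 4 minutes")  // hypothetical content
        return listBuilder.addRow(row).build()
    }
}
```

The provider is registered in the app manifest like any ContentProvider, and the system requests a fresh Slice each time it needs to display one, so the snippet always reflects current app state.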
Improvements to Google Assistant
Google has been making significant strides with Google Assistant over the past year, but the company is just scratching the surface. Google Assistant is now available on a total of 500 million devices, works with 5,000 connected home devices from dishwashers to doorbells, supports 30 languages across 80 countries, and is offered by 40 different auto brands.
A key theme for Google Assistant this year is making it more naturally conversational, to the point where it understands the social dynamics of conversation. The goal is to make Google Assistant more natural and comfortable to talk to, and that starts with the foundation of the Assistant: its voice.
Using WaveNet speech synthesis, Google was able to create models that work from the underlying raw audio to produce a more natural-sounding voice, one that more closely resembles how humans speak, down to the pitch, pace, and every pause that conveys meaning.
Continued Conversation has been a highly requested feature for Google Assistant. With Continued Conversation, users don’t need to say “Hey, Google” to initiate every follow-up request. More importantly, Google Assistant is able to recognize when a user is talking to it versus other people in the room.
With Multiple Actions, users can now make multiple, and sometimes conflicting, requests of their Assistant at once. While this is very natural and intuitive for humans, it’s a difficult concept for computers to grasp; however, Google Assistant is able to break down compound requests to deliver the best user experience.
Combining Voice and Visual Experiences
At I/O this year, Google announced that it will be focusing on providing a new, visual canvas for Google Assistant. Until now, the primary focus for Google Assistant has been verbal conversation, but now Google is merging the simplicity of voice commands with a rich, visual experience. This summer, several smart displays will be available with Google Assistant built in.
Google has also reimagined the way users connect with Google Assistant on their smartphones. Google Assistant for mobile is rapidly becoming more immersive, interactive, and proactive. Users can control their smart home devices with their voice and get access to UI controls right at their fingertips. Rich, interactive responses to voice requests are incredibly helpful for users. Google Assistant will also be available in Google Maps and Navigation this summer.
Google is also working on Google Duplex, which ties together all of Google’s AI efforts. Although this feature has no release date yet, the demonstration at I/O this year was something to get excited about.
The vision for the Google Assistant is to help users get things done, and this includes making phone calls. Even in 2018, more than half of businesses in the U.S. don’t have an online booking system set up; this is where AI comes in. Users can ask their Google Assistant to make appointments for them, and the assistant will make the phone call in the background. With all the advancements made to Google Assistant this year, it’s almost impossible to tell that it isn’t a real person making the phone call.
Digital Wellbeing
It’s clear from this year’s keynote that Google is making the health and wellbeing of digital users a priority. With the new Android operating system, Google has added key capabilities to help users strike the right balance between life and technology.
Android P comes equipped with a dashboard that shows users exactly how they use their device. Users can see how much time they spend in apps, how many times they’ve unlocked their phone, and how many notifications they’ve received, and they can drill down into the data to see when and how they’re engaging with their devices. Developers will also have access to a more detailed breakdown of how users are spending time in their app, which they can use to find ways to drive more meaningful engagement.
Users can now set time limits on certain apps and receive a notification when they’re approaching the limit, nudging them to think about doing something else. After a user has reached their daily limit, the app icon is grayed out on the home screen to remind them of their goal.
Notifications can be distracting, and it’s difficult at times to stay fully present during important events or occasions. With Android P, improvements have been made to Do Not Disturb mode. These changes silence not only phone calls and texts, but notifications and visual interruptions as well. A new gesture, Shush, lets users turn their phone face down to automatically invoke Do Not Disturb mode.
Google Maps and Augmented Reality
Google also revealed an augmented reality version of Google Maps, which addresses one of the biggest complaints about the service: which direction is the blue dot headed? Google solved this by integrating Google Lens with Google Maps. This version of Maps automatically loads Street View data when the device camera is open; when users point their camera down the street, an AR overlay appears and guides them in the right direction, while the original Maps UI stays at the bottom of the screen.
Google also added a shortlist feature to Maps, which lets users bookmark locations and share them with friends.
Smart Compose for Gmail
As the world already knows, Google rolled out a massive redesign for Gmail only two weeks before this year’s I/O conference. Aside from the redesign, Google also announced Smart Compose, which is an autocomplete feature for email.
Today, we are at a critical inflection point for technology. AI is going to have a significant effect on the way people interact with technology and engage with brands on a global scale. This year’s Google I/O conference highlighted the ways the company plans to make AI more accessible and mainstream. As user experience becomes more predictive and proactive, it’s clear that AI will be the central point for all technologies in the years to come.
As a full-service custom mobile app development company, Clearbridge Mobile handles the entire lifecycle of your product, from Planning and Strategy, UX/UI Design, and App Development to QA/User Acceptance Testing and Technical Delivery. We use a unique agile development process that gives you control over scope, reduces your risk, and provides predictable velocity. Start a conversation today to get started on your mobile project.