Google I/O wraps this week, leaving developers to digest a wide variety of new product, service and feature announcements, ranging from a managed machine learning platform and other artificial intelligence-fuelled advances to Google Workspace updates to new Multitask Unified Model technology.

The livestreamed conference, held virtually after being cancelled last year amid coronavirus restrictions, fielded registrations from some 200,000 developers from 181 countries, with 80 percent from outside the United States.

“I/O has always been a celebration of technology and its ability to improve lives, and I remain optimistic that technology can help us address the challenges we face together,” Alphabet and Google CEO Sundar Pichai said. “At Google, the past year has given renewed purpose to our mission to organize the world’s information and make it universally accessible and useful. We continue to approach that mission with a singular goal: Building a more helpful Google for everyone. That means being helpful in moments that matter, and it means giving you the tools to increase your knowledge, success, health and happiness.”

Here’s a look at 11 of the new product, service and feature announcements made during the Google I/O conference.

Vertex AI

Vertex AI, now generally available, is designed to help developers build, deploy and scale machine learning (ML) models faster, using pre-trained and custom tooling within a unified artificial intelligence (AI) platform.

The managed ML platform brings together AutoML and AI Platform into a unified API, client library and user interface. Training a model requires almost 80 percent fewer lines of code than on competing cloud providers’ platforms, according to Google Cloud.

Google Cloud has spent two years building the new product so that data scientists and ML engineers of all skill levels can implement MLOps to build and manage ML projects throughout the development lifecycle. The objective of Vertex AI is to accelerate the time to ROI for Google Cloud customers, according to Craig Wiley, director of product management for Google Cloud AI.

“Two years ago, we realized there were two significant problems we had,” Wiley told CRN USA. “The first issue we had was that we had dozens of machine learning services for customers…and none of them worked together. We’ve been running so fast to build them all, that there was no internal compatibility. If you used product A to build something, you couldn’t then take that and do more on top of it with product B, and we realized that was a huge problem. So we committed internally that we would all go back, and we would define one set of common nouns and verbs, and we would refactor all of these services so that they were internally compatible.”

The second issue was that cloud providers and other ML platform companies were telling developers they had everything they needed to be successful, while not providing the important MLOps tools necessary for success.

In addition to completely refactoring all of its former services, Google Cloud has launched a series of new applications or capabilities around MLOps. Most are in preview, with the expectation that they’ll move into general availability in about 90 days.

Developers can more quickly deploy useful AI applications with new MLOps features such as Vertex Vizier, a black-box optimization service that helps tune hyperparameters in complex ML models; Vertex Feature Store, which provides a centralized repository for organizing, storing and serving ML features; and Vertex Experiments to accelerate the deployment of models into production with faster model selection.
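Vertex Vizier itself is a managed Google Cloud service, but the "black-box" idea behind it can be illustrated in a few lines: probe an objective you cannot differentiate, and keep the best hyperparameter configuration seen so far. A minimal random-search sketch in plain Python, where `train_and_score` is a hypothetical stand-in for a real training run (not part of any Google API):

```python
import random

def train_and_score(learning_rate, batch_size):
    """Hypothetical stand-in for a real training run: returns a
    validation score for one hyperparameter configuration.
    (Made-up objective that peaks near lr=0.1, batch_size=32.)"""
    return -((learning_rate - 0.1) ** 2) - ((batch_size - 32) ** 2) / 1000

def random_search(trials=50, seed=0):
    """Black-box optimization: evaluate the objective at sampled
    points, with no gradients, and track the best result."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(trials):
        params = {
            "learning_rate": rng.uniform(0.001, 0.5),
            "batch_size": rng.choice([8, 16, 32, 64, 128]),
        }
        score = train_and_score(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best, score = random_search()
print(best, score)
```

Vizier applies far more sophisticated search strategies than this, but the contract is the same: the optimizer only sees (parameters in, score out), which is why it can tune models it knows nothing about internally.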

Other MLOps tools include Vertex Continuous Monitoring and Vertex Pipelines to streamline the end-to-end ML workflow.

“We also are launching a series of Google ‘secret sauce’ pieces, a series of capabilities that are part of how Google is able to produce some of the output and capabilities it does -- so things like neural architecture search and our nearest neighbors matching engine and things of that nature,” Wiley said.

Multitask Unified Model (MUM)

In 2018, Google introduced Bidirectional Encoder Representations from Transformers (BERT), a neural network-based technique for natural language processing pre-training that allows anyone to train their own state-of-the-art question-answering system.
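The "bidirectional" in BERT means each token's representation is computed from context on both sides at once, via self-attention. A stdlib-only toy sketch of scaled dot-product self-attention (random placeholder embeddings, not a trained model, and a single attention head rather than BERT's full architecture):

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(X):
    """Scaled dot-product self-attention over a toy token sequence.
    Each position attends to every position, left and right --
    the bidirectional context BERT's encoder is built on."""
    d = len(X[0])
    out = []
    for q in X:
        # Similarity of this token to every token in the sequence.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in X]
        weights = softmax(scores)  # attention weights sum to 1
        # Context-mixed embedding: weighted sum of all token vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, X))
                    for j in range(d)])
    return out

random.seed(0)
X = [[random.gauss(0, 1) for _ in range(8)] for _ in range(5)]  # 5 tokens, 8 dims
out = self_attention(X)
print(len(out), len(out[0]))  # 5 8
```

Because every output row mixes information from the whole sequence at once, the model is not limited to left-to-right context -- the property that made BERT-style pre-training effective for question answering.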

With Multitask Unified Model (MUM), Google said it has reached its next AI milestone in understanding information, an advance that will be applied to Google Search.