Vertex AI is Google Cloud’s end-to-end machine learning (ML) platform that helps data scientists and ML engineers accelerate experimentation and deployment. The platform unifies Google Cloud’s existing ML offerings into a single environment for efficiently building and managing the lifecycle of ML projects.

Vertex AI brings AutoML and AI Platform together into a unified API, client library, and user interface. AutoML lets you train models on image, tabular, text, and video datasets without writing code, while custom training (from AI Platform) lets you run your own training code. With Vertex AI, both AutoML training and custom training are available options. Whichever option you choose, you can save models, deploy models, and request predictions with Vertex AI.
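
Both paths converge on the same model and endpoint resources. As a rough sketch with the Python client library (`google-cloud-aiplatform`), where the project, dataset paths, display names, and container image tags are placeholder assumptions:

```python
def automl_image_flow():
    """Sketch: train an image classifier with AutoML, then deploy it.

    All names, GCS paths, and budgets below are illustrative placeholders.
    """
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")
    dataset = aiplatform.ImageDataset.create(
        display_name="flowers",
        gcs_source="gs://my-bucket/flowers.csv",  # placeholder import file
        import_schema_uri=(
            aiplatform.schema.dataset.ioformat.image.single_label_classification
        ),
    )
    job = aiplatform.AutoMLImageTrainingJob(
        display_name="flowers-automl", prediction_type="classification"
    )
    model = job.run(dataset=dataset, budget_milli_node_hours=8000)
    endpoint = model.deploy(machine_type="n1-standard-4")
    return endpoint


def custom_training_flow():
    """Sketch: run your own training script, then deploy the result."""
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")
    job = aiplatform.CustomTrainingJob(
        display_name="my-custom-job",
        script_path="train.py",  # your local training script
        # Prebuilt training/serving images; exact tags vary by release.
        container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-8:latest",
        model_serving_container_image_uri=(
            "us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest"
        ),
    )
    model = job.run(replica_count=1, machine_type="n1-standard-4")
    endpoint = model.deploy(machine_type="n1-standard-4")
    return endpoint
```

Either flow ends with an endpoint that you can call with `endpoint.predict(instances=[...])`.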

Where Vertex AI fits in the ML workflow

You can use Vertex AI to manage the following stages in the ML workflow:

  • Create a dataset and upload data.
  • Train an ML model on your data:
    • Train the model
    • Evaluate model accuracy
    • Tune hyperparameters (custom training only)
  • Upload and store your model in Vertex AI.
  • Deploy your trained model to an endpoint for serving predictions.
  • Send prediction requests to your endpoint.
  • Specify a prediction traffic split in your endpoint.
  • Manage your models and endpoints.
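
The traffic-split step is worth a closer look: an endpoint maps each deployed model to a percentage of prediction traffic, and the percentages must total 100. A minimal validation sketch (the model IDs are made up):

```python
def validate_traffic_split(traffic_split: dict[str, int]) -> bool:
    """Return True if the split is a valid Vertex AI-style traffic split:
    every share is a percentage in [0, 100] and the shares total 100."""
    return (
        all(0 <= share <= 100 for share in traffic_split.values())
        and sum(traffic_split.values()) == 100
    )

# Example: canary 10% of traffic to a new model version.
split = {"model-v1": 90, "model-v2": 10}
print(validate_traffic_split(split))  # True
```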

Components of Vertex AI

This section describes the pieces that make up Vertex AI and the primary purpose of each piece.

Model training

You can train models on Vertex AI by using AutoML or, if you need the wider range of customization options formerly available in AI Platform Training, by using custom training.

In custom training, you can select from among many different machine types to power your training jobs, enable distributed training, use hyperparameter tuning, and accelerate with GPUs.
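
These options come together in a custom job’s worker pool specification. The sketch below builds a REST-style request body with the standard library; the machine type, accelerator, and container image are placeholder choices:

```python
import json

# Sketch of a CustomJob spec in the REST API's shape: one chief replica and
# two workers, each with a GPU, for distributed training.
# Machine and accelerator types here are placeholder choices.
worker_pool_specs = [
    {
        "machineSpec": {
            "machineType": "n1-standard-8",
            "acceleratorType": "NVIDIA_TESLA_T4",
            "acceleratorCount": 1,
        },
        "replicaCount": 1,  # chief
        "containerSpec": {"imageUri": "gcr.io/my-project/trainer:latest"},
    },
    {
        "machineSpec": {
            "machineType": "n1-standard-8",
            "acceleratorType": "NVIDIA_TESLA_T4",
            "acceleratorCount": 1,
        },
        "replicaCount": 2,  # workers
        "containerSpec": {"imageUri": "gcr.io/my-project/trainer:latest"},
    },
]

custom_job = {
    "displayName": "my-distributed-job",
    "jobSpec": {"workerPoolSpecs": worker_pool_specs},
}
print(json.dumps(custom_job, indent=2))
```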

Deploying models for prediction

You can deploy models on Vertex AI and get an endpoint to serve predictions. You can deploy a model whether or not it was trained on Vertex AI.
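
For example, with the Python client library you might import a model trained elsewhere from its saved artifacts and then deploy it; the bucket path and serving image below are placeholder assumptions:

```python
def upload_and_deploy():
    """Sketch: import an externally trained model into Vertex AI and deploy it.

    The GCS path and serving container image are illustrative placeholders.
    """
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")

    # Upload: point Vertex AI at the saved model artifacts and a serving image.
    model = aiplatform.Model.upload(
        display_name="externally-trained-model",
        artifact_uri="gs://my-bucket/model/",  # e.g. a TensorFlow SavedModel dir
        serving_container_image_uri=(
            "us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest"
        ),
    )

    # Deploy: creates an endpoint that serves online predictions.
    endpoint = model.deploy(machine_type="n1-standard-2")
    return endpoint
```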

Data labeling

Data labeling jobs let you request human labeling for a dataset that you plan to use to train a custom machine learning model. You can submit a request to label your video, image, or text data.

To submit a labeling request, you provide a representative sample of labeled data, specify all the possible labels for your dataset, and provide some instructions for how to apply those labels. The human labelers follow your instructions, and when the labeling request is complete, you get your annotated dataset that you can use to train a machine learning model.
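
The three inputs map naturally onto a small request structure. The sketch below is a local illustration of that shape only, not the actual data labeling API:

```python
from dataclasses import dataclass, field

@dataclass
class LabelingRequest:
    """Illustrative only: the three ingredients of a labeling request."""
    dataset_uri: str            # data to be labeled, e.g. a GCS path
    possible_labels: list[str]  # every label annotators may apply
    instructions: str           # how annotators should apply the labels
    # (item_uri, label) pairs: a small representative labeled sample.
    labeled_examples: list[tuple[str, str]] = field(default_factory=list)

request = LabelingRequest(
    dataset_uri="gs://my-bucket/images/",  # placeholder path
    possible_labels=["cat", "dog", "other"],
    instructions="Label each image with the single animal it mainly shows.",
    labeled_examples=[("gs://my-bucket/images/001.jpg", "cat")],
)
print(len(request.possible_labels))  # 3
```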

Vertex AI Feature Store

Vertex AI Feature Store is a fully managed repository where you can ingest, serve, and share ML feature values within your organization. Vertex AI Feature Store manages all of the underlying infrastructure for you. For example, it provides storage and compute resources for you and can easily scale as needed.
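
Conceptually, a feature store keys each value by entity, feature, and timestamp, and serving returns the latest value of each requested feature. A toy in-memory sketch of that idea (not the Vertex AI API):

```python
from collections import defaultdict

class ToyFeatureStore:
    """Toy illustration of feature-store semantics: ingest timestamped
    feature values per entity, serve the most recent value of each."""

    def __init__(self):
        # (entity_id, feature) -> list of (timestamp, value)
        self._values = defaultdict(list)

    def ingest(self, entity_id, feature, value, timestamp):
        self._values[(entity_id, feature)].append((timestamp, value))

    def serve(self, entity_id, features):
        """Return the latest value for each requested feature."""
        result = {}
        for feature in features:
            history = self._values.get((entity_id, feature), [])
            if history:
                result[feature] = max(history)[1]  # newest timestamp wins
        return result

store = ToyFeatureStore()
store.ingest("user-123", "avg_purchase", 20.5, timestamp=1)
store.ingest("user-123", "avg_purchase", 23.1, timestamp=2)
print(store.serve("user-123", ["avg_purchase"]))  # {'avg_purchase': 23.1}
```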

Vertex AI Workbench

Vertex AI Workbench is a Jupyter notebook-based development environment for the entire data science workflow. Vertex AI Workbench lets you access data, process data in a Dataproc cluster, train a model, share your results, and more, all without leaving the JupyterLab interface.

Tools to interact with Vertex AI

This section describes the tools that you use to interact with Vertex AI.

Google Cloud Console

You can deploy models to the cloud and manage your datasets, models, endpoints, and jobs in the Cloud Console. This option gives you a user interface for working with your machine learning resources. As part of Google Cloud, your Vertex AI resources are connected to useful tools like Cloud Logging and Cloud Monitoring. The best place to start using the Cloud Console is the Dashboard page of the Vertex AI section.

Cloud Client Libraries

Vertex AI provides client libraries for some languages to help you make calls to the Vertex AI API. The client libraries provide an optimized developer experience by using each supported language’s natural conventions and styles. For more information about the supported languages and how to install them, see Installing the client libraries.

Alternatively, you can use the Google API Client Libraries to access the Vertex AI API by using other languages, such as Dart. When using the Google API Client Libraries, you build representations of the resources and objects used by the API. This is easier and requires less code than working directly with HTTP requests.

REST API

The Vertex AI REST API provides RESTful services for managing jobs, models, and endpoints, and for making predictions with hosted models on Google Cloud.
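
For example, an online prediction is a POST to an endpoint’s `:predict` method. The sketch below builds the URL and JSON body with the standard library; the project, region, endpoint ID, and instance shape are placeholders, and the actual call would also need an OAuth 2.0 access token:

```python
import json

# Placeholder identifiers: substitute your own project, region, endpoint ID.
project, region, endpoint_id = "my-project", "us-central1", "1234567890"

url = (
    f"https://{region}-aiplatform.googleapis.com/v1/"
    f"projects/{project}/locations/{region}/endpoints/{endpoint_id}:predict"
)

# The request body carries one or more instances in the model's input format.
body = json.dumps({"instances": [{"values": [1.0, 2.0, 3.0]}]})

print(url)
print(body)
```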

Deep Learning VM Images

Deep Learning VM Images is a set of virtual machine images optimized for data science and machine learning tasks. All images come with key ML frameworks and tools pre-installed. You can use them out of the box on instances with GPUs to accelerate your data processing tasks.

Deep Learning VM images are available to support many combinations of framework and processor. There are currently images supporting TensorFlow Enterprise, TensorFlow, PyTorch, and generic high-performance computing, with versions for both CPU-only and GPU-enabled workflows.

To see a list of frameworks available, see Choosing an image.

For more information, see Using Deep Learning VM Images and Deep Learning Containers with Vertex AI.

Deep Learning Containers

Deep Learning Containers are a set of Docker containers with key data science frameworks, libraries, and tools pre-installed. These containers provide you with performance-optimized, consistent environments that can help you prototype and implement workflows quickly.

For more information, see Using Deep Learning VM Images and Deep Learning Containers with Vertex AI.
