For generative AI to achieve its immense potential, it needs to be broadly accessible and easy to integrate into a range of services. That’s why we offer our customers Vertex AI Search and Conversation, both generally available today, to abstract away the complexity of creating generative search and chat applications. These products enable even developers with little machine learning expertise to build and deploy intelligent apps in as little as a few hours.

Unveiled in preview earlier this year as Enterprise Search on Generative AI App Builder and Conversational AI on Generative AI App Builder, respectively, Vertex AI Search and Conversation offer a simple orchestration layer that combines enterprise data with generative foundation models, as well as with conversational AI and information retrieval technologies.

Rather than spending months building gen AI apps, enterprise developers can quickly ingest data, add customization, and, with a few clicks, build a search engine or chatbot that can interact with customers and answer questions grounded in their enterprise website and in specified structured and unstructured data sources. This ability to quickly prototype generative apps lets enterprises pursue a range of use cases, from food ordering to banking assistance to customer service.
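
For instance, loading unstructured documents into a Vertex AI Search data store takes only a handful of API calls. The sketch below is a minimal example using the Python client for the Discovery Engine API that underpins Vertex AI Search; the project ID, data store ID, and Cloud Storage bucket are placeholders, and it assumes the data store has already been created.

```python
# A minimal sketch: import unstructured documents from Cloud Storage into an
# existing Vertex AI Search data store. Resource names below are placeholders.
from google.cloud import discoveryengine_v1 as discoveryengine

PROJECT_ID = "my-project"        # placeholder: your Google Cloud project
DATA_STORE_ID = "support-docs"   # placeholder: an existing data store

client = discoveryengine.DocumentServiceClient()

# Documents are imported into the data store's default branch.
parent = (
    f"projects/{PROJECT_ID}/locations/global/"
    f"dataStores/{DATA_STORE_ID}/branches/default_branch"
)

request = discoveryengine.ImportDocumentsRequest(
    parent=parent,
    gcs_source=discoveryengine.GcsSource(
        input_uris=["gs://my-bucket/docs/*.pdf"],  # placeholder bucket and path
        data_schema="content",  # unstructured content such as PDFs or HTML
    ),
    reconciliation_mode=(
        discoveryengine.ImportDocumentsRequest.ReconciliationMode.INCREMENTAL
    ),
)

# import_documents is a long-running operation; block until it completes.
operation = client.import_documents(request=request)
print(operation.result())
```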

In addition to general availability, today we’re also introducing new features so developers can build even more compelling apps that not only let users find important information through natural language but also take actions on their behalf. These new features include:

  • Multi-turn search, which supports follow-up questions without starting the interaction over; conversation and search summarization, which delivers crisp summaries for search results and chat conversations; and tools that let developers pre-program prompts and responses for specific queries in natural language just like they would give instructions to a human.
  • Vertex AI extensions, which can retrieve information in real time and act on behalf of users across Google and third-party applications like Datastax, MongoDB, and Redis, and Vertex AI data connectors, which help ingest data from enterprise and third-party applications like Salesforce, Confluence, and JIRA, connecting generative applications to commonly used enterprise systems.
  • Grounding, which can increase confidence in your generative AI search and conversational applications by rooting generative outputs in your enterprise data. Organizations can flexibly decide if they want this data to be supplemented with the foundation model’s training data. And they can use helpful features like citations to boost user confidence in the quality of the returned results, as the sketch just after this list illustrates.
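
To make the grounding and citation capabilities above more concrete, here is a minimal sketch of a query against a Vertex AI Search data store that requests a summary grounded in the matched documents, with citations. It uses the Discovery Engine Python client; the project and data store IDs are placeholders, and field names may vary slightly across client versions.

```python
# A minimal sketch: search a Vertex AI Search data store and request a
# grounded, cited summary of the results. Resource IDs are placeholders.
from google.cloud import discoveryengine_v1 as discoveryengine

PROJECT_ID = "my-project"        # placeholder
DATA_STORE_ID = "support-docs"   # placeholder

client = discoveryengine.SearchServiceClient()

serving_config = (
    f"projects/{PROJECT_ID}/locations/global/collections/default_collection/"
    f"dataStores/{DATA_STORE_ID}/servingConfigs/default_search"
)

request = discoveryengine.SearchRequest(
    serving_config=serving_config,
    query="How do I reset my account password?",
    page_size=5,
    # Ask for a generated summary grounded in the retrieved documents,
    # with citations back to the sources.
    content_search_spec=discoveryengine.SearchRequest.ContentSearchSpec(
        summary_spec=discoveryengine.SearchRequest.ContentSearchSpec.SummarySpec(
            summary_result_count=3,
            include_citations=True,
        )
    ),
)

response = client.search(request=request)
print(response.summary.summary_text)   # grounded summary with citations
for result in response.results:
    print(result.document.id)          # underlying source documents
```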

Let’s take a closer look at the capabilities of Vertex AI Search and Conversation.

Building personalized, compelling generative apps with Vertex AI

Vertex AI Search lets organizations set up Google Search-quality, multimodal, multi-turn search applications powered by foundation models, including the ability to ground outputs in enterprise data alone or use enterprise data to supplement the foundation model’s initial training. It will soon support enterprise access controls to ensure information is surfaced only to appropriate users, and features like citations, relevance scores, and summarization to encourage confidence in results and make them more useful.

With Vertex AI Search, you can offer your customers and employees personalized, immersive search experiences similar to the Search Generative Experience in Google Search.

Organizations with more complex use cases can combine LLM embeddings with vector search to power a wide range of generative AI apps, such as semantic search, personalized recommendations, chat, multi-modal search, and more. Vector search gives organizations access to an easy-to-use vector similarity search solution built on the same technology Google uses to power major services, such as Google Search and YouTube, at massive scale.
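
As a rough illustration of that pattern, the snippet below embeds a query with a Vertex AI text embedding model and then looks up nearest neighbors in a deployed Vector Search (formerly Matching Engine) index. The project, index endpoint, and deployed index IDs are placeholders, and it assumes the index has already been built and deployed.

```python
# A minimal sketch: semantic retrieval with Vertex AI embeddings plus a
# deployed Vector Search index. All resource names are placeholders.
import vertexai
from google.cloud import aiplatform
from vertexai.language_models import TextEmbeddingModel

vertexai.init(project="my-project", location="us-central1")  # placeholder

# 1. Turn the query into an embedding with a Vertex AI foundation model.
model = TextEmbeddingModel.from_pretrained("textembedding-gecko@001")
query_embedding = model.get_embeddings(["trail running shoes"])[0].values

# 2. Find the nearest neighbors in the deployed Vector Search index.
endpoint = aiplatform.MatchingEngineIndexEndpoint(
    index_endpoint_name=(
        "projects/my-project/locations/us-central1/indexEndpoints/1234567890"
    )  # placeholder
)
neighbors = endpoint.find_neighbors(
    deployed_index_id="products_deployed_index",  # placeholder
    queries=[query_embedding],
    num_neighbors=10,
)
for neighbor in neighbors[0]:
    print(neighbor.id, neighbor.distance)
```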

Vertex AI Conversation facilitates the creation of natural-sounding, human-like chatbots and voicebots, powered by foundation models with support for both audio and text. With it, developers can build a chatbot based on a website or collection of documents with just a few clicks. For further customization, Vertex AI Conversation lets developers combine deterministic workflows with generative outputs, pairing rules-based processes with dynamic AI to create apps that are engaging but reliable—including transaction abilities so users can prompt AI agents to, for example, book appointments or make purchases. Organizations can tune chats with a variety of data from websites, documents, FAQs, emails, and agent conversation histories, and they can generate interaction summaries, citations, and other data to facilitate handoffs between AI apps and human agents.
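
Because Vertex AI Conversation agents are built on Dialogflow CX, a deployed agent can also be queried programmatically. The sketch below sends a single text turn with the Dialogflow CX Python client; the agent resource name is a placeholder, and it assumes the agent was already created from your website or document data.

```python
# A minimal sketch: send one text turn to a Vertex AI Conversation agent
# through the Dialogflow CX API. The agent resource name is a placeholder.
import uuid

from google.cloud import dialogflowcx_v3 as dialogflow

AGENT = "projects/my-project/locations/global/agents/my-agent-id"  # placeholder

# Each end user gets a session so the agent can keep conversational context.
session = f"{AGENT}/sessions/{uuid.uuid4()}"

client = dialogflow.SessionsClient()

request = dialogflow.DetectIntentRequest(
    session=session,
    query_input=dialogflow.QueryInput(
        text=dialogflow.TextInput(text="What are your store hours this weekend?"),
        language_code="en",
    ),
)

response = client.detect_intent(request=request)
for message in response.query_result.response_messages:
    if message.text:
        print(" ".join(message.text.text))
```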

Vertex AI Conversation’s playbook feature (in preview) lets you use natural language to define the responses and transactions you want your voicebots and chatbots to perform, similar to how you would instruct a human agent to handle tasks.

With the power of Google’s foundation models and user-friendly developer tools, organizations can: 

  • build immersive, personalized experiences for customers and employees, turning tasks that used to take hours into quick searches or conversational explorations. From surfacing the right information in the right context to executing actions, generating images, noting citations, and producing recommendations, apps can respond naturally to user inputs while giving organizations control over tone, conversation flows, data access, and more.
  • remove the need for chunking data, generating embeddings, or managing indices and conversational decision trees. Instead, developers can leverage a straightforward orchestration interface to quickly build apps, with little or no coding and no prior machine learning experience.
  • help maintain control over grounding, application actions, and data. Developers can improve the relevance of outputs and reduce hallucinations by grounding outputs in enterprise data. They can also control what data apps can access via connectors, and which actions apps can take on a user’s behalf. Organizations benefit from access controls that help keep data secure, along with support for compliance and security requirements like HIPAA, DRZ, and CMEK. Data is stored in an enterprise’s own Google Cloud instance, and Google does not access this data or use it to train our models.

Improving search and conversational use cases with generative AI is one of the most widely applicable gen AI projects we see organizations pursuing, and with Vertex AI Search and Conversation, significantly faster time-to-deployment and time-to-value are within reach. Early adopters are already seeing success:

  • Omar Omran, Six Flags’ Chief Digital Officer, emphasized the transformative potential of collaborating with Google Cloud and using Vertex AI Conversation. “This partnership signifies a monumental leap in our strategic direction, ushering in a new era of technological empowerment for Six Flags. With Google Cloud’s technology, we are committed to not only enhancing park operations, but also creating unparalleled, personalized guest experiences. We will bring an increased level of agility and responsiveness to our operations, redefining the way we serve our guests and setting new benchmarks in the amusement park industry.”
  • According to Harsh Kumar Sarohi, Senior Vice President of Technology at TradeIndia.com, one of India’s largest B2B portals, “We have been able to reduce our search drop-off rate by 50% and increase existing user engagement by 6% using Vertex AI Search from Google Cloud. We are also leveraging Vertex AI Search’s analytics features to understand gaps in our current portfolio by using metrics like top search queries with no results.”
  • C1, a leading provider of contact center solution technologies, is using Vertex AI Conversation to support their customers as well as their employees. Their agent assist bot, C1 Auto Pilot, can help agents improve customer experience by providing real-time customer interaction suggestions, opportunities to cross-sell and upsell, sentiment analysis, and automatic summarization with call insights. According to Mark Langanki, CTO of C1, “The overall goal with the C1 Auto Pilot solution is to help agents improve customer experience by reducing the time they spend on administrative tasks, such as note-taking and research, and by providing them with the information they need to have a more productive and engaging conversation with the customer. We’ve seen call handling times decrease by providing agents with customer-specific value-add information. We expect to see continuing overall improvement in both the agent experience and customer satisfaction.”
  • What stands out for Dan Burgin, Sr. Director of AI Automation at C1, is how quickly and easily teams can build with Vertex AI Conversation. “We were so impressed with how quickly we can build and deploy chatbots with Vertex AI Conversation in a manner that protects our clients’ and end customers’ data. We’re also in the final testing stage to deploy a generative AI-powered chatbot for our own employees that can answer frequently asked HR questions on our Intranet site in a timely and secure manner.”

Start building

Search and conversation use cases provide a clear opportunity for organizations to quickly gain experience with and benefit from generative AI technologies. We look forward to seeing more of our customers leverage Vertex AI Search and Conversation to delight their customers and employees.

These products are just one facet of our goal of serving organizations across the spectrum of AI needs and expertise levels—if you’re a machine learning engineer or a data scientist looking to build customized applications, check out updates to Vertex AI with Model Garden, foundation models, and tuning options, as well as our news about Colab Enterprise.