Google Cloud’s approach to trust and transparency in AI
Generative artificial intelligence has emerged as a disruptive technology that presents tremendous potential to revolutionize and transform the way we do business. It has the power to unlock opportunities for communities, companies, and countries around the world, bringing meaningful change that could improve billions of lives.
The challenge is to do so in a way that is proportionately tailored to mitigate risks and promote reliable, robust, and trustworthy gen AI applications, while still enabling innovation and delivering on the promise of AI for societal benefit.
We believe that the only way to be truly bold in the long term is to be responsible from the start.
“We are convinced that the AI-enabled innovations we are focused on developing and delivering boldly and responsibly are useful, compelling, and have the potential to assist and improve the lives of people everywhere — this is what compels us,” said James Manyika, Google’s senior vice president for research, technology and society.
We put that philosophy to work with a holistic approach to building enterprise-grade AI responsibly, taking into consideration a wide range of disciplines including data governance, privacy, security, and compliance. We detail how we apply this foundation in our AI development in a new paper, “Google Cloud’s Approach to Trust in AI.”
As we discuss in the paper, our approach is fundamentally informed by Google’s AI Principles. We were one of the first companies to introduce and advance responsible AI practices, and these principles serve as an ongoing commitment to our customers worldwide who rely on our products to build and grow their businesses safely.
Our AI products are built atop a scalable technical infrastructure underpinned by a secure-by-design foundation and supported by robust logical, operational, and physical controls to achieve defense in depth, at scale, and by default. We’ve taken a three-pronged approach to the security ecosystem: securing it by helping organizations deploy AI systems on Google Cloud, scaling it by continuing to launch cutting-edge, AI-powered products and services that help organizations achieve better security outcomes, and evolving it by continuously staying ahead of threats.
In addition to our focus on security, our approach incorporates privacy design principles, architectures built with privacy safeguards, and appropriate transparency and control over the use of data. When bringing new offerings to market, we apply these principles throughout the product lifecycle and design architectures with comprehensive privacy safeguards, such as data encryption and the ability to turn relevant features on or off.
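One concrete example of such a safeguard is customer-managed encryption keys (CMEK) for Vertex AI resources. The sketch below, which assumes a Cloud KMS key, project, and storage path that you would replace with your own (all names are placeholders, not values from the paper), shows how a key can be set as the default through the Vertex AI SDK so that resources created afterward are encrypted under the customer's control.

```python
# Minimal sketch: protecting Vertex AI resources with a customer-managed
# encryption key (CMEK). Project, region, key, and data paths below are
# illustrative placeholders.
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",          # placeholder project ID
    location="us-central1",
    # Resources created after init() use this Cloud KMS key by default,
    # keeping encryption keys under the customer's control.
    encryption_spec_key_name=(
        "projects/my-project/locations/us-central1/"
        "keyRings/my-keyring/cryptoKeys/my-key"
    ),
)

# A managed resource created now inherits the CMEK setting, e.g. a dataset:
dataset = aiplatform.TabularDataset.create(
    display_name="cmek-protected-dataset",
    gcs_source="gs://my-bucket/data.csv",   # placeholder source path
)
```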
One of the questions frequently posed to us is whether Google’s foundation models are trained on customer data, and by extension, whether customer data may as a result be exposed to Google, to Google’s other customers, or to the public. The answer is that, by default, Google Cloud does not use Customer Data to train its foundation models. In the paper, we outline key aspects of our model tuning, deployment, and data governance practices in our Vertex AI offerings.
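To illustrate the tuning pattern this implies, a supervised tuning job can be run over a customer's own dataset, with the resulting tuned model living in the customer's project rather than being folded back into the shared foundation model. The snippet below is a hedged sketch based on the Vertex AI SDK's text-model tuning interface; the model name, dataset URI, and parameter values are placeholders and may vary by SDK version.

```python
# Hedged sketch of supervised tuning in Vertex AI: the training data and the
# tuned model remain scoped to the customer's project, and by default the
# data is not used to train the shared foundation model. All names and
# parameter values are illustrative placeholders.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-project", location="us-central1")  # placeholders

base_model = TextGenerationModel.from_pretrained("text-bison@001")

# Launch a tuning job over a JSONL dataset of prompt/response pairs stored
# in the customer's own Cloud Storage bucket.
base_model.tune_model(
    training_data="gs://my-bucket/tuning_data.jsonl",  # placeholder dataset
    train_steps=100,
)
```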
Lastly, AI regulation is a dynamic, rapidly evolving space. We believe that AI is too important not to regulate, and too important not to regulate well, so we advocate for risk-based frameworks that reflect the complexity of the AI ecosystem by building on existing general concepts. Our teams closely monitor and analyze new and updated regulations, and we regularly engage with regulators. Google Cloud also makes compliance documentation, certifications, control attestations, and independent audit reports readily available to satisfy regional and industry-specific requirements, supporting customers both as they validate their compliance on Google Cloud’s platform and as they assess Vertex AI’s compliance and security controls.