For the fifth consecutive year, Gartner® has named Google a Leader in the 2022 Gartner Magic Quadrant™ for Cloud Infrastructure and Platform Services.
Over the past year, we have continued to invest in platform and go-to-market capabilities that have an immediate impact on the speed and simplicity of building solutions on Google Cloud. We are focused on designing and delivering workload-optimized infrastructure that enables ‘golden paths’ for our customers.
We have 35 cloud regions and 173 network edge locations around the globe today, and we continue to focus on growing our partner ecosystem worldwide. Here are a few examples of recent product releases that support our progress:
Building solutions that are optimized for what matters most to you
- Many of our customers are looking for high compute performance at a competitive price. To meet this need, in 2021 we launched Tau VMs with AMD-based T2D VMs that deliver 42% better price-performance than other leading clouds. In July, we announced the expansion of the Tau VM family with our Arm-based machines. Powered by Ampere® Altra® Arm-based processors, these VMs deliver single-threaded performance at a compelling price, making them ideal for scale-out, cloud-native workloads. Developers now have even more options when choosing the optimal architecture to test, develop, and run their workloads (see the sketch after this list).
- We are enabling customers to deliver immersive experiences through Media CDN, a modern, extensible platform for streaming media that augments our 173 network edge locations. On top of that, we have an additional 1,300 CDN caches deployed worldwide. Media CDN leverages the same infrastructure that YouTube uses to deliver content to over 2 billion users around the world. Built with AI/ML technologies, Media CDN enables capabilities such as ad insertion, virtual billboards, and real-time stats and analytics for live sports, helping customers transform the media experience.
- We’re also reaching beyond the cloud with Google Distributed Cloud, which extends Google Cloud infrastructure and services to different physical locations (or distributed environments), including on-premises or co-location data centers and a variety of edge environments. Anthos powers all Google Distributed Cloud offerings, delivering a common control plane for building, deploying and running modern, containerized applications at scale, wherever you choose.
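To make the choice between the x86 and Arm Tau families concrete, here is a minimal sketch of launching a Tau VM with the google-cloud-compute Python client. It is an illustration only, not taken from the announcement; the project, zone, instance name, and Debian image family are assumptions.

```python
# Minimal sketch: launching a Tau VM with the google-cloud-compute client.
# Project, zone, instance name, and image family below are illustrative
# assumptions, not values from the announcement.
from google.cloud import compute_v1


def create_tau_vm(project: str, zone: str, name: str, use_arm: bool = True) -> None:
    # Choose the Arm-based T2A or the AMD-based T2D Tau machine family.
    machine_type = "t2a-standard-4" if use_arm else "t2d-standard-4"
    # Arm machines need an Arm image; x86 machines need an x86 image.
    image_family = "debian-11-arm64" if use_arm else "debian-11"

    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image=f"projects/debian-cloud/global/images/family/{image_family}"
        ),
    )
    instance = compute_v1.Instance(
        name=name,
        machine_type=f"zones/{zone}/machineTypes/{machine_type}",
        disks=[boot_disk],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default")
        ],
    )

    client = compute_v1.InstancesClient()
    operation = client.insert(project=project, zone=zone, instance_resource=instance)
    operation.result()  # Wait for the create operation to finish.


# Example with hypothetical project and zone:
# create_tau_vm("my-project", "us-central1-a", "tau-arm-test", use_arm=True)
```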
Helping you drive innovation with AI and ML
- We recently made Cloud TPU v4 pods available to all our customers. Our machine learning cluster powered by Cloud TPU v4 pods offers 9 exaflops of peak aggregate performance and operates on 90% carbon-free energy, making it one of the fastest, most efficient, and most sustainable ML infrastructure hubs in the world.
- On the security front, growing cybersecurity threats have every company rethinking its security posture. We invest in a planet-scale network that is secure, performant, and reliable, and we match that by defining industry-wide frameworks and standards. For example, to help customers better secure their software supply chain, we introduced SLSA (Supply-chain Levels for Software Artifacts), an end-to-end framework for ensuring the integrity of artifacts throughout the software supply chain and an open-source equivalent of many of the processes we have been implementing internally at Google. We also blocked the largest known Layer 7 DDoS attack, which peaked at 46 million requests per second, using Cloud Armor Adaptive Protection: its ML-based technologies detected and analyzed the attack and recommended protective rules that stopped it without any service impact.
Delivering continental-scale availability with Cloud Storage
- This year we expanded the number of supported regions for our industry-leading Cloud Storage dual-region buckets. Dual-region buckets give organizations a single, continental-scale bucket: they can select from nine regions across three continents to build a business continuity architecture with a Recovery Time Objective (RTO) of zero. In the event of an outage, applications seamlessly access the data in the alternate region; there is no failover and failback process. Organizations requiring ultra-high availability can use turbo replication with dual-region buckets, backed by a 15-minute Recovery Point Objective (RPO) SLA (see the sketch after this list).
- This year we are introducing Cloud Storage Autoclass, which lets customers easily optimize costs by placing objects across storage classes automatically. Enabled at the bucket level, Autoclass automatically moves objects to colder storage classes based on last access time. With Autoclass, there are no early deletion or retrieval fees, and no class-transition charges for accessing objects in colder storage classes (a sketch of enabling it follows this list).
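As an illustration of the dual-region configuration described above, here is a minimal sketch using the google-cloud-storage Python client to create a dual-region bucket with turbo replication. The project, bucket name, and region pair are placeholders, and the call assumes a client-library version that supports custom dual-region placement (the data_locations argument) and the RPO setting.

```python
# Minimal sketch: a dual-region bucket with turbo replication enabled.
# Project, bucket name, and regions are illustrative assumptions.
from google.cloud import storage
from google.cloud.storage.constants import RPO_ASYNC_TURBO

client = storage.Client(project="my-project")  # hypothetical project

bucket = client.bucket("my-dual-region-bucket")  # hypothetical bucket name
bucket.rpo = RPO_ASYNC_TURBO  # turbo replication, backed by the 15-minute RPO SLA

# Place one bucket across two specific regions on the same continent.
bucket = client.create_bucket(
    bucket,
    location="US",
    data_locations=["US-EAST1", "US-WEST1"],
)

print(bucket.rpo, bucket.data_locations)
```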
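Similarly, here is a minimal sketch of enabling Autoclass at bucket creation, assuming a google-cloud-storage version that exposes the autoclass_enabled bucket property; the bucket name and location are illustrative.

```python
# Minimal sketch: enabling Autoclass on a new bucket.
# Assumes a google-cloud-storage version that exposes `autoclass_enabled`;
# bucket name and location are illustrative.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-autoclass-bucket")  # hypothetical bucket name
bucket.autoclass_enabled = True  # let the service move objects between classes

bucket = client.create_bucket(bucket, location="US")
print(f"Autoclass enabled: {bucket.autoclass_enabled}")
```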
Reducing your operational burden with an easy-to-use platform
- Created by the same developers who built Kubernetes, Google Kubernetes Engine (GKE) makes it easy for customers to realize the benefits of their innovation initiatives without getting bogged down troubleshooting infrastructure issues and managing the day-to-day operations of enterprise-scale container deployments. The fully managed Autopilot mode of operation, combined with multi-dimensional autoscaling, automates most of the work of operating customers' applications efficiently (see the sketch after this list). Only GKE can run 15,000-node clusters, outscaling other cloud providers by up to 10X and letting customers run applications effectively and reliably at scale.
- To help customers proactively prevent downtime caused by the most common misconfigurations and suboptimal configurations, we recently launched Network Analyzer, a new addition to the Network Intelligence Center suite of observability modules. The tool is useful, for example, for detecting invalid routing policies, incorrect IP addresses, and inconsistent firewall rules and load balancing policies. It proactively surfaces insights and network failures and provides root-cause information.
- The less time developers and architects spend managing simple, routine jobs, the more time they can spend on meaningful work. One example of how we're reducing operational burden is Batch, a fully managed job scheduler that helps customers run thousands of batch jobs with a single command. It's easy to set up, and jobs run on auto-scalable resources, giving you more time to work on the areas of greatest value (a programmatic sketch follows this list).
- Why spend time building things that others have already built and tested? Our new HPC Toolkit is an open source tool that lets you create repeatable, turnkey HPC clusters in minutes, based on proven best practices. It comes with several blueprints and broad support for third-party components such as the Slurm scheduler, Intel DAOS, and DDN Lustre storage.
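To illustrate the Autopilot mode mentioned above, here is a minimal sketch of requesting an Autopilot cluster with the google-cloud-container Python client. The project, region, and cluster name are assumptions, and a production cluster would typically also specify networking and release-channel settings.

```python
# Minimal sketch: requesting a GKE Autopilot cluster with the
# google-cloud-container client. Project, region, and cluster name are
# illustrative assumptions.
from google.cloud import container_v1


def create_autopilot_cluster(project: str, region: str, name: str):
    client = container_v1.ClusterManagerClient()
    request = container_v1.CreateClusterRequest(
        parent=f"projects/{project}/locations/{region}",
        cluster=container_v1.Cluster(
            name=name,
            # Autopilot mode: Google manages nodes, scaling, and upgrades.
            autopilot=container_v1.Autopilot(enabled=True),
        ),
    )
    # Returns a long-running GKE operation describing cluster creation.
    return client.create_cluster(request=request)


# Example with hypothetical values:
# create_autopilot_cluster("my-project", "us-central1", "autopilot-demo")
```

And as a rough sketch of the Batch workflow described above: the announcement highlights the single-command gcloud experience, but the same kind of job can also be expressed programmatically with the google-cloud-batch Python client. The project, region, job ID, machine type, and task count here are all placeholders.

```python
# Minimal sketch: submitting a simple script job to Batch with the
# google-cloud-batch client. Project, region, job ID, machine type, and
# task count are illustrative assumptions.
from google.cloud import batch_v1


def submit_batch_job(project: str, region: str, job_id: str) -> batch_v1.Job:
    client = batch_v1.BatchServiceClient()

    # Each task runs a short shell script; Batch fans this out across tasks.
    runnable = batch_v1.Runnable(
        script=batch_v1.Runnable.Script(text="echo Hello from task ${BATCH_TASK_INDEX}")
    )
    task = batch_v1.TaskSpec(runnables=[runnable], max_retry_count=2)
    group = batch_v1.TaskGroup(task_count=100, task_spec=task)

    # Let Batch provision auto-scalable VMs of the requested shape.
    allocation = batch_v1.AllocationPolicy(
        instances=[
            batch_v1.AllocationPolicy.InstancePolicyOrTemplate(
                policy=batch_v1.AllocationPolicy.InstancePolicy(
                    machine_type="e2-standard-4"
                )
            )
        ]
    )
    job = batch_v1.Job(
        task_groups=[group],
        allocation_policy=allocation,
        logs_policy=batch_v1.LogsPolicy(
            destination=batch_v1.LogsPolicy.Destination.CLOUD_LOGGING
        ),
    )
    return client.create_job(
        batch_v1.CreateJobRequest(
            parent=f"projects/{project}/locations/{region}",
            job_id=job_id,
            job=job,
        )
    )


# Example with hypothetical values:
# submit_batch_job("my-project", "us-central1", "hello-batch")
```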
Building for our customers first
Most importantly, our field and partner organizations work with a singular focus on ensuring customer success. We have seen strong growth in our partner and ISV ecosystem, so that together we can comprehensively meet customer needs. This has made Google Cloud the fastest-growing hyperscaler, with a rapidly expanding customer base across the globe.
We are committed to sustaining and accelerating the pace of customer-centric innovation. You can download a complimentary copy of the 2022 Magic Quadrant for Cloud Infrastructure and Platform Services on our website.
For more on our industry-leading infrastructure, please view our recent Google Cloud Next ‘22 sessions on demand. Enjoy!