In 2014, the first echoes of the word Kubernetes were heard throughout the tech industry. Back then, the first thing that usually came to mind was “How do you even pronounce it?” Fast forward seven years and it’s become one of the largest open source projects in the world. One of the early stewards of Kubernetes was Google Fellow Eric Brewer. For over a decade, Eric has taken the driver’s seat in advocating for, building, and externalizing technologies at Google. Though he now focuses on a broad set of Google Cloud services—think Kubernetes, serverless, DevOps, and Istio—he previously led groundbreaking efforts to separate storage from compute, drive the use of VM live migration at scale, and shape the use of appliances for disaggregation. I had the chance to sit down with him over a series of sessions to learn from his years of experience and dig into the four Kubernetes and open source insights that Eric says have defined the future of cloud computing.

1. Kubernetes became central to cloud native computing because it was open sourced, and we must continue to invest in open source technologies.

When Eric joined the UC Berkeley faculty, he focused on what later became cloud computing – a model based on clusters of commodity servers running many processes, services, and APIs. When he came to Google in 2011, he brought this view with him to develop a new kind of cloud centered on a higher level of abstraction. This meshed well with the early prototypes that led to Kubernetes, an open-source system for automating deployment, scaling, and management of containerized applications.

While cloud was still forming in the early 2010s, Eric knew that Google’s internal container-based approach would lead to a more powerful cloud than one built on just VMs and disks. Though it was relatively easy to attract a group of supporters at Google, widespread industry adoption is often slower for novel and unproven ideas. With that foresight, Eric knew right away that open sourcing the project would be the only viable way to realize the potential Kubernetes held to revolutionize cloud computing.

Of course, he faced some resistance. By 2012, Google Cloud already had App Engine and VMs available. The common question from critics was, “Why do we need a third way to do computing?” Well, Google was already running billions of containers per week prior to the emergence of Kubernetes, and Eric saw massive value in further developing the technology for the rest of the industry. Kubernetes’ automation and flexibility make it much easier to operate compared to raw VMs or raw disks.

After years of open source support, Kubernetes has become the de facto way to run applications in the cloud, with more and more opinionated and vertically oriented services that run on top of it, like Knative and Kubeflow. The project is still maturing, even as we now face another pivotal shift in cloud computing. Eric is currently spearheading efforts to combine the philosophy that underpins Kubernetes with the strict protection needed by security-sensitive industries. His focus is on open source and software supply chain security, with a goal of creating more opinionated tooling from source code to deployment in order to minimize attack points. 

2. As the number of dependencies used in software development grows, the security risks multiply. Investing in software supply chain security is imperative, and a move towards managed services is actually safer than self-managed solutions.

Recent attacks, like those on SolarWinds and Codecov, have shown that the industry’s increasing reuse of dependencies and rising development velocity have created more openings for attacks. Eric is laying the groundwork to address a challenge that he believes should be a P0 for the entire planet. “99% of our vulnerabilities are not in the code you write in your application. They’re in a very deep tree of dependencies, some of which you may know about, some of which you may not know about”. -Eric Brewer

Because of the growing use of open source software and dependencies in software development, it’s critical for organizations to understand what pieces of software they want to bet on and why. Instead of including unvetted software dependencies in code, organizations must take time to evaluate this software and identify the elements that are either not quite up to par or poorly maintained. 

When asked about how Google is investing in Kubernetes (which has several hundred software dependencies), Eric explained that Google Cloud helped form the Cloud Native Computing Foundation (CNCF) in 2015 to serve as the vendor-neutral home for many of the fastest-growing open source projects, including Kubernetes, Prometheus, and Envoy. The foundation’s mission is to make cloud native computing ubiquitous and foster the growth of the ecosystem. Under the auspices of the CNCF, Google has made over 680,000 additional contributions to the project, including over 123,000 contributions in 2020. 

Google has a long history of committing to open source. In fact, Google recently committed another $100M to third-party foundations supporting open source security. In addition, Eric helped found the Open Source Security Foundation (OpenSSF), which focuses on open source security tooling and best practices so that those responsible for their organization’s security are able to understand and verify the security of open source dependency chains. Eric sees this work as absolutely essential in order to set a precedent. Though it will require lots of largely mundane work to get open source to be as secure as possible, this work is necessary and requires financial support. “Open source is a public infrastructure also. And like all public infrastructures, it needs maintenance and support”. -Eric Brewer

As services continue to move to higher levels of abstraction, managed services set a robust foundation for secure software delivery. Managed services allow providers to enable automatic security preventative controls and attestations. GKE Autopilot, for example, provisions and manages the cluster’s underlying infrastructure, including nodes and node pools, giving you an optimized cluster with a hands-off experience. It follows Google Kubernetes Engine (GKE) best practices and recommendations for cluster and workload setup and security, while also enforcing settings that provide enhanced isolation for your containers. In Eric’s view, this model will continue as a dominant trend moving forward: Providers will manage more features (like security) over time, taking responsibility for features that you don’t want to manage yourself while making the most of the proven protocols and best practices they have built up over years.
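
To make that concrete, here is a minimal sketch (all names and the image path are hypothetical) of what running on Autopilot looks like in practice: you declare a standard Kubernetes workload with resource requests, and Autopilot provisions and scales the underlying nodes for you.

```yaml
# Minimal Deployment sketch for a GKE Autopilot cluster. Applying this
# manifest is all that's needed; Autopilot provisions and scales nodes
# based on the resource requests declared below. Names are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: us-docker.pkg.dev/my-project/my-repo/hello-app:v1  # hypothetical image
        resources:
          requests:        # Autopilot sizes (and bills) capacity from these requests
            cpu: "250m"
            memory: "512Mi"
```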

3. Platform operators should run GKE as a general purpose platform while imposing guidelines the enterprise cares about.

A common question Eric has gotten over the years is how an enterprise should use a managed Kubernetes platform, like GKE. The first thing to remember is that a cloud provider offers more levers, options, and features to tinker with than you really want your developers to use. These levers, however, give platform owners the ability to create secure and maintainable platforms to power their modern apps. For example, it’s wise to apply backups by default and policies to prevent root file system access or creation of public IPs for backend systems. If you’re processing credit card transactions, you don’t want to give your internal developers free rein; instead you want to give them a platform where the transactions they execute are guaranteed by the structure of services to be compliant with the regulations where you operate.
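
As a sketch of what such a default might look like (the workload and image names here are hypothetical), the pod-level settings below block writes to the root filesystem and prevent privilege escalation; in practice a platform team would enforce them fleet-wide through an admission policy rather than per pod.

```yaml
# Hedged example: hardening settings a platform owner might require by default.
apiVersion: v1
kind: Pod
metadata:
  name: payments-worker                 # hypothetical workload
spec:
  containers:
  - name: worker
    image: us-docker.pkg.dev/my-project/my-repo/worker:v1  # hypothetical image
    securityContext:
      readOnlyRootFilesystem: true      # no writes to the root file system
      allowPrivilegeEscalation: false
      runAsNonRoot: true
```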

Think of Kubernetes as the way to build customized platforms that enforce rules your enterprise cares about through controls over project creation, the nodes you use, and the libraries and repositories you pull from. These background controls are not typically managed by app developers; rather, they provide developers with a governed and secure framework to operate within.
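
For instance, restricting which repositories images may be pulled from can be expressed as an admission constraint. The sketch below uses the K8sAllowedRepos constraint from the open-source OPA Gatekeeper policy library (which Anthos Policy Controller also ships); it assumes the corresponding ConstraintTemplate is already installed, and the registry path is illustrative.

```yaml
# Reject any Pod whose images come from outside the approved registry.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: only-approved-registries
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
  parameters:
    repos:
    - "us-docker.pkg.dev/my-project/approved/"   # hypothetical approved registry
```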

Managed services often provide or support automated policy controls and best practices that platform operators can easily leverage. Anthos Service Mesh, for example, helps control traffic flows and API calls between services. With the ability to automatically and declaratively secure your services, your developers become more productive, and the organization benefits from faster feature delivery. At the same time, you are protected from shipping features that violate company policies or government regulations.
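
As one illustration of that declarative model, Anthos Service Mesh builds on the Istio APIs, so requiring mutual TLS for all service-to-service traffic in a namespace is a single short resource, sketched below (the namespace name is hypothetical).

```yaml
# Require mTLS for every workload in the namespace; plaintext is rejected.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: require-mtls
  namespace: payments        # hypothetical namespace
spec:
  mtls:
    mode: STRICT
```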

Google Cloud supports buildpacks—an open-source technology that makes it fast and easy for you to create secure, production-ready container images from source code, without a Dockerfile. Artifact Registry lets you set up secure private-build artifact storage on Google Cloud so you can maintain control over who can access, view, or download artifacts. Container Analysis provides vulnerability scanning for images in Artifact Registry and Container Registry.
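
Tying those together, here is a sketch of a Cloud Build config (cloudbuild.yaml) that uses buildpacks to turn source into a container image—no Dockerfile—and publishes it to Artifact Registry, where Container Analysis can scan it. The project, repository, and image names are hypothetical.

```yaml
# Build from source with buildpacks and publish to Artifact Registry.
steps:
- name: gcr.io/k8s-skaffold/pack            # community pack CLI builder image
  entrypoint: pack
  args:
  - build
  - --builder=gcr.io/buildpacks/builder:v1  # Google Cloud's buildpacks builder
  - --publish
  - us-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:latest
```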

4. Kubernetes will continue to expand to the edge, leverage coprocessors, and run effectively across public and private clouds.

In our final episode of the series, we collected questions from the field where a few themes emerged, including Kubernetes at the edge, Kubernetes on coprocessors, and finding the right balance between public and private clouds.

Kubernetes at the edge

We’re already seeing the potential of Kubernetes being realized at the edge—for example, in the telecommunications and retail spaces. In response to concerns about edge security, Eric explained that Kubernetes can be effectively secured, but it comes down to the full stack: security can be strengthened from a hardware root of trust all the way up through the software running on it.

This is an area Google Cloud continues to invest in. At Next 2021, we announced Google Distributed Cloud, a portfolio of fully managed hardware and software solutions that extends Google Cloud’s infrastructure and services to the edge. It’s enabled by Anthos, which GKE is a major component of, and is ideal for local data processing, edge computing, on-premises modernization, and meeting requirements for sovereignty, strict data security, and privacy. To use Kubernetes at the edge securely, Distributed Cloud provides centralized configuration and control over clusters at Google’s edge network, the operator edge (5G and LTE services offered by our communication service provider partners), or your own edge like retail stores, factory floors, or branch offices.


Kubernetes running on coprocessors

We are also partnering with NVIDIA to deliver GPU-accelerated computing and networking solutions for running Anthos at the edge. This speaks to the potential of coprocessors for Kubernetes. Eric believes that coprocessors are an important part of the computing future. We’re reaching the end of Moore’s Law, and to make up for it, the industry is adopting domain-specific hardware accelerators for use cases like graphics processing (with GPUs) and machine learning (with TPUs).
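
On the Kubernetes side, coprocessors surface as extended resources that the scheduler matches to accelerator-equipped nodes. A minimal sketch (the workload name and image tag are illustrative, and the node is assumed to run the NVIDIA device plugin):

```yaml
# Request one GPU; the scheduler places the pod on a GPU-equipped node.
apiVersion: v1
kind: Pod
metadata:
  name: cuda-inference            # hypothetical workload
spec:
  containers:
  - name: inference
    image: nvidia/cuda:11.0-base  # public CUDA base image
    resources:
      limits:
        nvidia.com/gpu: 1         # extended resource exposed by the device plugin
```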

The right balance between public and private clouds

Even with all this rapid innovation, companies still face difficult questions in balancing operating in the public cloud versus private sovereign clouds. Eric lays out clear reasons why a public cloud can offer more advantages: “You’d be better off with an open public cloud pretty much all the time if you can use one, because it will have better cost efficiency. It will have a higher rate of innovation. It can do more things over time”. -Eric Brewer

That being said, using a public cloud provider means you must trust your cloud provider and the government in which your provider is based (today, this is usually the US). If you don’t trust those or think they are too risky, you may want to run in your own country, on a private, sovereign cloud. The great thing is that Kubernetes is well-suited to run on a private cloud. Anthos (which Eric helped build) lets you run Kubernetes on GKE for hybrid and multicloud environments, and on bare metal. For those worried about vendor lock-in, you can move off of Anthos and continue to run your applications on Kubernetes on-premises.

Take the next step

Start building on Google Cloud with $500 in free credits and 20+ always free products.