Cloud Run is a fully managed container runtime that automatically scales your containerized code from zero to as many instances as needed to handle incoming requests. Previously, every instance in a Cloud Run service ran exactly one container. Today, we are introducing Cloud Run sidecars, allowing you to start independent sidecar containers that run alongside the main container serving web requests.
Here are a few examples of how you might use Cloud Run sidecars:
- Run application monitoring, logging and tracing
- Use Nginx, Envoy or Apache2 as a proxy in front of your application container
- Add authentication and authorization filters (e.g., Open Policy Agent)
- Run outbound connection proxies such as the AlloyDB Auth Proxy
All containers within an instance share the same network namespace and can communicate with each other over localhost, on whichever port each container is listening on. Containers can also share files via shared volumes.
Cloud Run sidecars unlock several new patterns and use cases around custom monitoring, logging, networking and security:
Application monitoring, logging and tracing sidecars
A sidecar is an additional container that runs alongside your main container. You can now instrument your Cloud Run service with custom agents like OpenTelemetry to export logs, metrics and traces to the backend of your choice. Here’s an example of deploying a Cloud Run service with an OpenTelemetry sidecar for custom logs, metrics and traces.
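The sketch below shows what such a service.yaml might look like. The collector image and resource limits are assumptions; in practice you would typically bake your exporter configuration into the collector image or mount it from a secret, and have the application export OTLP to the collector over localhost.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: otel-example
  annotations:
    run.googleapis.com/launch-stage: BETA
spec:
  template:
    spec:
      containers:
      # Main application container serving web requests
      - image: us-docker.pkg.dev/cloudrun/container/hello
        name: app
        ports:
        - name: http1
          containerPort: 8080
      # OpenTelemetry Collector sidecar (illustrative image); the app sends
      # OTLP data to it over localhost since both containers share the
      # instance's network namespace
      - image: otel/opentelemetry-collector-contrib:latest
        name: otel-collector
        resources:
          limits:
            cpu: 500m
            memory: 256Mi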
Proxy
You can also run a container in front of your main container to proxy requests. For example, you can use the official Nginx image from Docker Hub, as shown below. Such a proxy adds a layer of abstraction between clients and your application: it intercepts incoming requests and forwards them to the appropriate endpoint, enabling a more efficient flow of traffic.
Here’s a service.yaml that includes an nginx sidecar:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: nginx-example
  annotations:
    run.googleapis.com/launch-stage: BETA
spec:
  template:
    metadata:
      annotations:
        run.googleapis.com/container-dependencies: "{hello: [nginx]}"
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - name: http1
          containerPort: 8080
        resources:
          limits:
            cpu: 500m
            memory: 256Mi
        volumeMounts:
        - name: nginx-conf-secret
          readOnly: true
          mountPath: /etc/nginx/conf.d/
      - image: us-docker.pkg.dev/cloudrun/container/hello
        name: hello
        env:
        - name: PORT
          value: '8888'
        resources:
          limits:
            cpu: 1000m
            memory: 512Mi
      volumes:
      - name: nginx-conf-secret
        secret:
          secretName: nginx_config
          items:
          - key: latest
            path: default.conf
Here’s an nginx configuration that enables gzip compression, stored in a secret named “nginx_config” in Secret Manager:
server {
    listen 8080;
    server_name _;
    gzip on;

    location / {
        proxy_pass http://127.0.0.1:8888;
    }
}
In the above example, you deploy two containers:
- A hello container serving a web page
- An nginx container that proxies incoming requests to the hello container.
To pass the nginx config, we store it in Secret Manager and mount it at a specific location in the nginx container. Optionally, you can use container ordering via the run.googleapis.com/container-dependencies annotation to make sure the nginx container starts before the hello container, so that traffic always flows through the nginx proxy.
Networking and security
You can run sidecars that handle advanced networking scenarios, such as Envoy proxies for traffic routing and filtering, or security-hardening sidecars that intercept traffic and block attacks through continuous detection and prevention.
One example of this pattern comes from Nasdaq, which is transforming its Data Ingestion tool using sidecars:
“We faced a challenge where we hit the 32 MB size limit for non-chunked HTTP1 requests. To circumvent this, we wanted to accept HTTP2 requests; however, that involved serious code refactoring. In order to minimize code changes to our frontend and backend code bases, we decided to leverage Envoy to rewrite incoming HTTP2 requests to HTTP1 and forward them directly to our backend service. Cloud Run’s sidecar feature helped us successfully achieve this and we were able to redirect incoming HTTP2 traffic with request payloads greater than 32MB directly to our application in HTTP1 using an Envoy sidecar with no code changes, saving us significant engineering costs.” – Philippe Trembley, Software Engineering Director, Nasdaq
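As an illustration of this pattern (a sketch, not Nasdaq’s actual setup), an Envoy sidecar configuration along the following lines accepts HTTP/1.1 or HTTP/2 on the serving port and forwards plain HTTP/1.1 to the main container on localhost. The port numbers, names, and timeout are assumptions:

static_resources:
  listeners:
  - name: ingress
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }  # Cloud Run serving port
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO  # accepts both HTTP/1.1 and HTTP/2 from downstream
          route_config:
            name: local_route
            virtual_hosts:
            - name: app
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: app }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: app
    type: STATIC
    connect_timeout: 5s
    # Upstream traffic defaults to HTTP/1.1, so HTTP/2 requests are rewritten
    # before being forwarded to the main container on localhost:8888
    load_assignment:
      cluster_name: app
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 8888 }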
Database connection proxies
You can use sidecars to run database client proxies such as the Cloud SQL Auth Proxy or AlloyDB Auth Proxy alongside your application, giving you more secure connections, simpler authorization, and IAM-based authentication to these managed database services.
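As a sketch, the containers section of such a service.yaml might pair your application with the Cloud SQL Auth Proxy roughly as follows; the image tag, port, and instance connection name are placeholders you would replace with your own:

      containers:
      # Main application container; connects to the database at 127.0.0.1:5432
      - image: us-docker.pkg.dev/cloudrun/container/hello
        name: app
        ports:
        - name: http1
          containerPort: 8080
      # Cloud SQL Auth Proxy sidecar; authenticates with the service's IAM
      # identity and listens on localhost only (pin a specific version tag in practice)
      - image: gcr.io/cloud-sql-proxy/cloud-sql-proxy:latest
        name: cloud-sql-proxy
        args:
        - "--port=5432"
        - "PROJECT:REGION:INSTANCE"  # placeholder instance connection name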
Get started today
To add proxies or sidecars alongside your main container, simply edit the YAML of your Cloud Run service using the command line or the Cloud Console; you can read more in the documentation. You can also create in-memory volumes that are shared between multiple containers, allowing them to exchange data through the filesystem.
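For example, an in-memory volume shared between an application container and a sidecar can be declared roughly like this; the volume name, mount path, and size limit are illustrative:

spec:
  template:
    spec:
      containers:
      - image: us-docker.pkg.dev/cloudrun/container/hello
        name: app
        ports:
        - name: http1
          containerPort: 8080
        volumeMounts:
        - name: shared-scratch
          mountPath: /scratch
      - image: nginx
        name: sidecar
        volumeMounts:
        - name: shared-scratch
          mountPath: /scratch
      volumes:
      # In-memory volume visible to both containers; contents count against
      # the instance's memory and are lost when the instance shuts down
      - name: shared-scratch
        emptyDir:
          medium: Memory
          sizeLimit: 128Mi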
Cloud Run makes it super easy to run your services. With sidecars and proxies, Cloud Run now provides the extensibility needed to accomplish much more.