Kubernetes example
In this example you’ll spin up a local Kubernetes cluster with NativeLink and run some Bazel builds against it.
Requirements
- An x86_64-linux system. Either “real” Linux or WSL2.
- A functional local Docker setup.
- A recent version of Nix with flake support, for instance installed via the next-gen Nix installer.
☁️ Prepare the cluster
First, enter the NativeLink development environment:
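Assuming you have cloned the NativeLink repository, entering the development environment looks roughly like this (the Nix dev shell is defined by the repository's flake; check the repository docs if your version differs):

```shell
# Clone the NativeLink repository and enter its Nix dev shell.
git clone https://github.com/TraceMachina/nativelink
cd nativelink

# Spawn a shell with Bazel and the cloud tooling on PATH.
nix develop
```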
This environment contains Bazel and some cloud tooling, so you don’t need to set up any Kubernetes-related software yourself.
Now, start the development cluster:
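In recent NativeLink checkouts this step is driven by a `native` helper CLI available inside the dev shell; treat the exact command as an assumption and verify it against your checkout:

```shell
# Create the local development cluster with the in-cluster
# services (Cilium, Tekton, Flux) preconfigured.
native up
```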
Next, deploy NativeLink to the cluster:
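A sketch of the deployment step, assuming the example manifests live in a Kubernetes example directory of the repository (the path below is a placeholder, not the actual location):

```shell
# Apply the example Kustomization; Flux then reconciles the
# NativeLink CAS, scheduler, and worker deployments.
kubectl apply -k <path-to-kubernetes-example>
```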
🔭 Explore deployments
The deployment might take a while to boot up. You can monitor progress via the dashboards that come with the development cluster:
- localhost:8080: Cilium’s Hubble UI to view the cluster topology. NativeLink will be deployed into the default namespace.
- localhost:8081: The Tekton Dashboard to view the progress of the in-cluster pipelines. You’ll find the pipelines under the PipelineRuns tab.
- localhost:9000: The Capacitor Dashboard to view Flux Kustomizations. You can view NativeLink’s logs here once it’s fully deployed.
In terminals, the following commands can be helpful to view deployment progress:
- tkn pr logs -f to view the logs of a PipelineRun in the terminal.
- flux get all -A to view the state of the NativeLink deployments.
Once NativeLink is deployed:
- kubectl logs deploy/nativelink-cas for the CAS (cache) logs.
- kubectl logs deploy/nativelink-scheduler for the scheduler logs.
- kubectl logs deploy/nativelink-worker for the worker logs.
🏗️ Build against NativeLink
The demo setup creates gateways to expose the cas and scheduler deployments via your local Docker network. You can pass the gateway addresses to Bazel invocations to make builds run against the cluster:
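A sketch of such an invocation. The target label, ports, and gateway IPs below are placeholders; substitute the addresses your cluster’s gateways actually expose:

```shell
# Point Bazel at the in-cluster CAS (cache) and scheduler
# (remote executor) via the exposed gateway addresses.
bazel build \
  --remote_cache=grpc://<cas-gateway-ip>:50051 \
  --remote_executor=grpc://<scheduler-gateway-ip>:50052 \
  //your:target
```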
The crucial part is this bit:
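The bit in question is Bazel’s process summary at the end of the build. An illustrative example (the numbers are made up) might look like:

```
INFO: 12 processes: 2 internal, 10 remote.
```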
It tells us that the compilation ran against the cluster. Let’s clean the Bazel cache and run the build again:
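For instance, assuming the same placeholder flags as before:

```shell
# Drop local build outputs so the next build must consult the cache.
bazel clean

# Re-run the identical build; the actions now resolve from the cache.
bazel build \
  --remote_cache=grpc://<cas-gateway-ip>:50051 \
  --remote_executor=grpc://<scheduler-gateway-ip>:50052 \
  //your:target
```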
The build now shows cache hits instead of remote actions:
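Illustratively, the process summary now reports cache hits rather than remote executions (numbers made up):

```
INFO: 12 processes: 2 internal, 10 remote cache hit.
```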
🚀 Bonus: Local Remote Execution
The worker deployment in this example leverages Local Remote Execution.
Local Remote Execution mirrors toolchains for remote execution in your local development environment. This lets you reuse build artifacts with virtually perfect cache hit rate across different repositories, developers, and CI.
To test LRE in the cluster, clean the local cache and invoke another build against the cluster, but this time omit the remote_executor flag. This way you’ll use remote caching without remote execution:
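A sketch with the same placeholder address as above; note that only --remote_cache is passed:

```shell
# Clear local outputs, then build with remote caching only:
# without --remote_executor, actions run locally but read from
# (and write to) the in-cluster CAS.
bazel clean
bazel build \
  --remote_cache=grpc://<cas-gateway-ip>:50051 \
  //your:target
```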
You’ll get remote cache hits as if your local machine were a nativelink-worker:
🧹 Clean up
When you’re done testing, delete the cluster:
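Assuming the same `native` helper used to create the cluster provides a teardown counterpart (verify against your checkout before relying on it):

```shell
# Tear down the development cluster and its resources.
native down
```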