
Kubernetes example

In this example you’ll spin up a local Kubernetes cluster with NativeLink and run some Bazel builds against it.

Requirements

  • An x86_64-linux system. Either “real” Linux or WSL2.
  • A functional local Docker setup.
  • A recent version of Nix with flake support, for instance installed via the next-gen Nix installer.
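
To sanity-check these requirements before you start, a few version commands are enough (this assumes nix and docker are already on your PATH):

Terminal window
# Should print x86_64
uname -m
# Check the Nix version; flake support must be enabled via the `nix-command flakes` experimental features
nix --version
# Confirms that the Docker daemon is reachable
docker info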

☁️ Prepare the cluster

First, enter the NativeLink development environment:

Terminal window
git clone https://github.com/TraceMachina/nativelink && \
cd nativelink && \
nix develop

This environment contains Bazel and some cloud tooling, so you don’t need to set up any Kubernetes-related software yourself.
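
If you want to confirm that the shell provides the tooling used in the rest of this example, the following commands should all resolve to binaries from the Nix environment (exact versions will vary):

Terminal window
bazel --version
kubectl version --client
tkn version
flux --version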

Now, start the development cluster:

Terminal window
native up
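
Once native up returns, the cluster should be reachable through kubectl. A quick check:

Terminal window
kubectl cluster-info
kubectl get nodes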

Next, deploy NativeLink to the cluster:

Terminal window
kubectl apply -k \
https://github.com/TraceMachina/nativelink//deploy/kubernetes-example

🔭 Explore deployments

The deployment might take a while to boot up. You can monitor progress via the dashboards that come with the development cluster:

  • localhost:8080: Cilium’s Hubble UI to view the cluster topology. NativeLink will be deployed into the default namespace.
  • localhost:8081: The Tekton Dashboard to view the progress of the in-cluster pipelines. You’ll find the pipelines under the PipelineRuns tab.
  • localhost:9000: The Capacitor Dashboard to view Flux Kustomizations. You can view NativeLink’s logs here once it’s fully deployed.

In the terminal, the following commands can help you track deployment progress:

  • tkn pr logs -f to view the logs of a PipelineRun in the terminal.
  • flux get all -A to view the state of the NativeLink deployments.
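
If you prefer plain kubectl, you can also watch the pods come up directly; NativeLink is deployed into the default namespace, so no namespace flag should be needed:

Terminal window
kubectl get pods --watch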

Once NativeLink is deployed:

  • kubectl logs deploy/nativelink-cas for the CAS (cache) logs.
  • kubectl logs deploy/nativelink-scheduler for the scheduler logs.
  • kubectl logs deploy/nativelink-worker for the worker logs.
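
These accept the usual kubectl logs flags, so appending -f streams the logs instead of printing a one-off snapshot, for example:

Terminal window
kubectl logs -f deploy/nativelink-worker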

The demo setup creates Gateways to expose the CAS and scheduler deployments via your local Docker network. You can pass the Gateway addresses to Bazel invocations to run builds against the cluster:

Terminal window
CACHE=$(kubectl get gtw cache-gateway -o=jsonpath='{.status.addresses[0].value}')
SCHEDULER=$(kubectl get gtw scheduler-gateway -o=jsonpath='{.status.addresses[0].value}')
echo "Cache IP: $CACHE"
echo "Scheduler IP: $SCHEDULER"
bazel build \
--remote_cache=grpc://$CACHE \
--remote_executor=grpc://$SCHEDULER \
//local-remote-execution/examples:hello_lre

The crucial part of the output is this line:

INFO: 11 processes: 9 internal, 2 remote.

It tells us that the compilation ran against the cluster. Let’s clean the Bazel cache and run the build again:

Terminal window
bazel clean && bazel build \
--remote_cache=grpc://$CACHE \
--remote_executor=grpc://$SCHEDULER \
//local-remote-execution/examples:hello_lre

The build now shows cache hits instead of remote actions:

INFO: 11 processes: 2 remote cache hit, 9 internal.
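
If you find yourself repeating the two remote flags, one option is to wrap them in a named Bazel config. This is only a sketch, not part of the example repository: the config name k8s-example is arbitrary, and the snippet reuses the CACHE and SCHEDULER variables defined above:

Terminal window
# Append a "k8s-example" config to the workspace .bazelrc
cat >> .bazelrc <<EOF
build:k8s-example --remote_cache=grpc://$CACHE
build:k8s-example --remote_executor=grpc://$SCHEDULER
EOF
# Later builds can then select the config instead of spelling out the flags
bazel build --config=k8s-example //local-remote-execution/examples:hello_lre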

🚀 Bonus: Local Remote Execution

The worker deployment in this example leverages Local Remote Execution.

Local Remote Execution mirrors the toolchains used for remote execution in your local development environment. This lets you reuse build artifacts with a virtually perfect cache hit rate across different repositories, developers, and CI.

To test LRE in the cluster, clean the local cache and run another build against the cluster, but this time omit the --remote_executor flag. This way you use remote caching without remote execution:

Terminal window
bazel clean && bazel build \
--remote_cache=grpc://$CACHE \
//local-remote-execution/examples:hello_lre

You’ll get remote cache hits as if your local machine were a nativelink-worker:

INFO: 11 processes: 2 remote cache hit, 9 internal.

🧹 Clean up

When you’re done testing, delete the cluster:

Terminal window
# Delete the kind cluster
native down
# Remove the container registry and loadbalancer
docker container stop kind-registry | xargs docker rm
docker container stop kind-loadbalancer | xargs docker rm
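
Afterwards you can double-check that nothing from the example is left behind. Assuming the kind CLI is available in the development shell, both commands should report that no clusters or matching containers remain:

Terminal window
kind get clusters
docker ps --filter name=kind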