Kubernetes example
In this example you’ll spin up a local Kubernetes cluster with NativeLink and run some Bazel builds against it.
Requirements
- An `x86_64-linux` system. Either “real” Linux or WSL2.
- A functional local Docker setup.
- A recent version of Nix with flake support, for instance installed via the next-gen Nix installer.
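If you’re unsure whether your Nix installation has flakes enabled, the sketch below shows a quick version check and one common way to enable them; the `nix.conf` path and the `experimental-features` line are standard Nix settings, but adjust them to how Nix was installed on your system.

```bash
# Print the installed Nix version.
nix --version

# One way to enable flakes for the current user, if they aren't enabled yet
# (the next-gen installer typically enables them by default).
mkdir -p ~/.config/nix
echo "experimental-features = nix-command flakes" >> ~/.config/nix/nix.conf
```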
First, enter the NativeLink development environment:
```bash
git clone https://github.com/TraceMachina/nativelink && \
  cd nativelink && \
  nix develop
```

This environment contains Bazel and some cloud tooling, so you don’t need to set up any Kubernetes-related software yourself.
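To confirm that the development shell provides the expected tooling, you can print a few versions; the exact set of tools in the shell may differ between NativeLink revisions.

```bash
# These should all resolve to binaries provided by the Nix dev shell.
bazel --version
kubectl version --client
flux --version
tkn version
```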
Now, start the development cluster:
```bash
native up
```
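Before deploying, you can verify that the local cluster is reachable. These are generic kubectl checks; the exact context and node names depend on your local setup.

```bash
# The current kubectl context should point at the development cluster.
kubectl config current-context

# All nodes should eventually report a Ready status.
kubectl get nodes
```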
Next, deploy NativeLink to the cluster:

```bash
kubectl apply -k \
  https://github.com/TraceMachina/nativelink//deploy/kubernetes-example
```
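To see what the Kustomization created, you can watch pods come up across all namespaces; this is a generic check, not specific to NativeLink.

```bash
# Watch pods in all namespaces come up; press Ctrl+C to stop watching.
kubectl get pods -A --watch
```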
Once the infrastructure is ready, trigger the pipelines that build the images:

```bash
cat > nativelink-repo.yaml << EOF
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: nativelink
  namespace: default
spec:
  interval: 2m
  url: https://github.com/TraceMachina/nativelink
  ref:
    branch: main
EOF

kubectl apply -f nativelink-repo.yaml
```

The deployment might take a while to boot up. You can monitor progress via the dashboards that come with the development cluster:
- localhost:8080: Cilium’s Hubble UI to view the cluster topology. NativeLink will be deployed into the `default` namespace.
- localhost:8081: The Tekton Dashboard to view the progress of the in-cluster pipelines. You’ll find the pipelines under the `PipelineRuns` tab.
- localhost:9000: The Capacitor Dashboard to view Flux Kustomizations. You can view NativeLink’s logs here once it’s fully deployed.
In a terminal, the following commands can be helpful for viewing deployment progress:
- `tkn pr logs -n ci -f` to view the logs of a `PipelineRun` in the terminal.
- `flux get all -A` to view the state of the NativeLink deployments.
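If you prefer to block until the source repository has been picked up, something like the sketch below should work; the resource name and namespace match the `GitRepository` manifest above, while the timeout is an arbitrary choice.

```bash
# Wait until Flux marks the GitRepository as Ready (timeout is arbitrary).
kubectl wait gitrepository/nativelink \
  --for=condition=Ready \
  --timeout=5m

# Ask Flux to reconcile immediately instead of waiting for the 2m interval.
flux reconcile source git nativelink
```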
Once NativeLink is deployed:
- `kubectl logs deploy/nativelink-cas` for the CAS (cache) logs.
- `kubectl logs deploy/nativelink-scheduler` for the scheduler logs.
- `kubectl logs deploy/nativelink-worker` for the worker logs.
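To wait for the deployments instead of polling their logs, a standard rollout check works; the deployment names are taken from the list above.

```bash
# Block until each NativeLink deployment has finished rolling out.
kubectl rollout status deploy/nativelink-cas
kubectl rollout status deploy/nativelink-scheduler
kubectl rollout status deploy/nativelink-worker
```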
The demo setup creates a gateway to expose the CAS and scheduler deployments via your local Docker network. You can pass the gateway address to Bazel invocations to make builds run against the cluster:
```bash
NATIVELINK=$(kubectl get gtw nativelink-gateway -o=jsonpath='{.status.addresses[0].value}')

echo "NativeLink IP: $NATIVELINK"
```
```bash
bazel build \
  --remote_cache=grpc://$NATIVELINK \
  --remote_executor=grpc://$NATIVELINK \
  //local-remote-execution/examples:lre-cc
```

The crucial part is this bit:
```
INFO: 11 processes: 9 internal, 2 remote.
```

It tells us that the compilation ran against the cluster. Let’s clean the Bazel cache and run the build again:
```bash
bazel clean && bazel build \
  --remote_cache=grpc://$NATIVELINK \
  --remote_executor=grpc://$NATIVELINK \
  //local-remote-execution/examples:lre-cc
```

The build now shows cache hits instead of remote actions:
```
INFO: 11 processes: 2 remote cache hit, 9 internal.
```

The worker deployment in this example leverages Local Remote Execution (LRE).
Local Remote Execution mirrors toolchains for remote execution in your local development environment. This lets you reuse build artifacts with virtually perfect cache hit rate across different repositories, developers, and CI.
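If you want these flags applied by default rather than passed on every invocation, one option (not part of this example) is a `.bazelrc` entry. A minimal sketch, assuming the gateway address resolved above stays stable:

```bash
# Persist the remote cache/executor flags for this workspace (sketch only;
# update the address if your gateway IP changes).
cat >> .bazelrc << EOF
build --remote_cache=grpc://$NATIVELINK
build --remote_executor=grpc://$NATIVELINK
EOF
```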
To test LRE in the cluster, clean the local cache and invoke another build against the cluster, but this time omit the `--remote_executor` flag. This way you’ll use remote caching without remote execution:
```bash
bazel clean && bazel build \
  --remote_cache=grpc://$NATIVELINK \
  //local-remote-execution/examples:lre-cc
```

You’ll get remote cache hits as if your local machine were a nativelink-worker:
```
INFO: 11 processes: 2 remote cache hit, 9 internal.
```

When you’re done testing, delete the cluster:
```bash
# Delete the kind cluster
native down

# Remove the container registry and loadbalancer
docker container stop kind-registry | xargs docker rm
docker container stop kind-loadbalancer | xargs docker rm
```
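To double-check that nothing is left behind, the following generic checks should come back empty; availability of the `kind` CLI in the development shell is an assumption.

```bash
# No kind clusters should remain.
kind get clusters

# No leftover kind-related containers should still be running.
docker ps --filter "name=kind"
```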