Classic remote execution examples
Debugging remote builds can be tricky. These examples provide builds that you can use to test the remote execution capabilities of your worker image.
Getting the test sources
All examples are in a single Bazel module at `nativelink/toolchain-examples`.

If you haven't set up Nix yet, consider consulting the local development setup guide. Then move to the `toolchain-examples` directory:
```bash
git clone https://github.com/TraceMachina/nativelink
cd nativelink/toolchain-examples

# If you haven't set up `direnv`, remember to activate the Nix flake
# manually via `nix develop`.
```
If you're running outside of Nix, install bazelisk. Then move to the `toolchain-examples` directory:

```bash
git clone https://github.com/TraceMachina/nativelink
cd nativelink/toolchain-examples
```
Preparing the remote execution infrastructure
Port-forward your NativeLink cas/scheduler service to `localhost:50051`:

```bash
kubectl port-forward svc/YOURSERVICE 50051
```
Likely the most straightforward way to test a remote execution image is by creating a custom “test image” that you debug locally. If you have an existing Dockerfile, here is what you need to adjust to test remote execution against a locally running worker:
```dockerfile
# INSERT YOUR EXISTING DOCKERFILE CONTENTS HERE.
# ...

# Append something similar to the section below. We assume that you've built
# nativelink at a recent commit via
# `nix build github:Tracemachina/nativelink`.
#
# Then copy the executable and the `nativelink-config.json` from the
# `toolchain-examples` directory into the image and set the entrypoint to
# nativelink with that config.

COPY nativelink /usr/bin/nativelink
COPY nativelink-config.json /etc/nativelink-config.json

RUN chmod +x /usr/bin/nativelink

ENTRYPOINT ["/usr/bin/nativelink"]
CMD ["/etc/nativelink-config.json"]
```
Then build your image locally:

```bash
docker build . \
  -t rbetests:local
```
You can now run the remote execution image locally and run builds against it:

```bash
docker run \
  -e RUST_LOG=info \
  -p 50051:50051 \
  rbetests:local
```
All future invocations may now use the `--remote_cache=grpc://localhost:50051` and `--remote_executor=grpc://localhost:50051` flags to send builds to the running container.
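To avoid retyping these flags on every invocation, you could append a shortcut config to the `.bazelrc` in `toolchain-examples`. This is just a sketch: the config name `remote` is our own and isn't defined by the examples module.

```
# Hypothetical shortcut config; the name `remote` isn't part of the module.
build:remote --remote_cache=grpc://localhost:50051
build:remote --remote_executor=grpc://localhost:50051
```

With this in place, `bazel build //cpp --config=zig-cc --config=remote` is equivalent to passing both flags explicitly.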
Available toolchain configurations
This Bazel module comes with some commonly used toolchains that you can enable via `--config` flags. See the `.bazelrc` file in the `toolchain-examples` directory for details. Here are your options:
| Config | Hermetic | Size | Description |
|---|---|---|---|
| `zig-cc` | yes | ~100MB | Hermetic, but slow. The intended use for this toolchain is projects that need a baseline C++ toolchain but aren't "real" C++ projects, such as Go projects with a limited number of C FFIs. |
| `llvm` | no | ~1.5GB | Not hermetic, but fast and standardized. This toolchain tends to be safe to use for C++ projects as long as you don't require full hermeticity. Your remote execution image needs to bundle glibc <= 2.34 for this toolchain to work. |
| `java` | yes | ? | This sets the JDK to use a remote JDK. Use this one for Java. |
Notes on how to register your toolchains
Toolchains tend to be complex dependencies, and you'll almost always have bugs in your toolchain that are build-breaking for some users. If you register your toolchain in your `MODULE.bazel`, such bugs turn into hard errors that might require deep incisions into your toolchain configuration to fix.

Instead, register platforms and toolchains in your `.bazelrc` file. This way you give your users the option to opt out of your default toolchain and provide their own. For instance:
```
build:sometoolchain --platforms @sometoolchain//TODO
build:sometoolchain --extra_toolchains @sometoolchain//TODO
```
Now `--config=sometoolchain` is your happy path, but you keep the ability to omit the flag, so that if the happy path doesn't work you can still build with "unsupported" toolchains.
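On the user side, opting out could look like the following `.bazelrc` fragment in their own workspace. This is a sketch: the `@my_toolchain//:toolchain` label is a hypothetical stand-in for whatever toolchain they provide.

```
# User-side override (hypothetical label): skip --config=sometoolchain
# entirely and register a different toolchain instead.
build --extra_toolchains=@my_toolchain//:toolchain
```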
All examples below require some sort of `--config` flag to work with remote execution.
Minimal example targets
Examples to test whether your worker can function at all.
Since the toolchains used here tend to focus on ease of use rather than performance, expect build times of several minutes even for a small “Hello World” program.
Keep in mind that some remote toolchains first fetch tools to the executor. This can take several minutes and might look like a slow compile action.
C and C++
```bash
bazel build //cpp \
  --config=zig-cc \
  --remote_cache=grpc://localhost:50051 \
  --remote_executor=grpc://localhost:50051
```

```bash
bazel build //cpp \
  --config=llvm \
  --remote_cache=grpc://localhost:50051 \
  --remote_executor=grpc://localhost:50051
```
Python
```bash
bazel test //python \
  --remote_cache=grpc://localhost:50051 \
  --remote_executor=grpc://localhost:50051
```
Go
```bash
bazel test //go \
  --config=zig-cc \
  --remote_cache=grpc://localhost:50051 \
  --remote_executor=grpc://localhost:50051
```
Rust
```bash
bazel test //rust \
  --config=zig-cc \
  --remote_cache=grpc://localhost:50051 \
  --remote_executor=grpc://localhost:50051

# Should raise an error like this if your toolchain is correctly hermetic:
#
# error: the self-contained linker was requested, but it wasn't found in the
# target's sysroot, or in rustc's sysroot
```
Java
```bash
bazel test //java:HelloWorld \
  --config=java \
  --remote_cache=grpc://localhost:50051 \
  --remote_executor=grpc://localhost:50051
```
All at once
```bash
bazel test //... \
  --config=java \
  --config=zig-cc \
  --remote_cache=grpc://localhost:50051 \
  --remote_executor=grpc://localhost:50051 \
  --keep_going
```
Larger builds
These builds can help fine-tune larger deployments.
Curl (C)
```bash
bazel build @curl//... \
  --config=zig-cc \
  --remote_cache=grpc://localhost:50051 \
  --remote_executor=grpc://localhost:50051
```
Zstandard (C)
```bash
bazel build @zstd//... \
  --config=zig-cc \
  --remote_cache=grpc://localhost:50051 \
  --remote_executor=grpc://localhost:50051
```
Abseil-cpp (C++)
```bash
bazel test @abseil-cpp//... \
  --config=zig-cc \
  --remote_cache=grpc://localhost:50051 \
  --remote_executor=grpc://localhost:50051 \
  --keep_going
```
Abseil-py (Python)
```bash
bazel test @abseil-py//... \
  --remote_cache=grpc://localhost:50051 \
  --remote_executor=grpc://localhost:50051
```
CIRCL (Go)
```bash
bazel test @circl//... \
  --config=zig-cc \
  --remote_cache=grpc://localhost:50051 \
  --remote_executor=grpc://localhost:50051 \
  --keep_going
```