How to configure authentication and authorization on your Bacalhau node.
Bacalhau includes a flexible auth system that supports multiple authentication methods appropriate for different deployment environments.
With no specific authentication configuration supplied, Bacalhau runs in "anonymous mode", which allows unidentified users limited control over the system. Anonymous mode is only appropriate for testing or evaluation setups.
In anonymous mode, Bacalhau will allow:
Users identified by a self-generated private key to submit any job and cancel their own jobs.
Users not identified by any key to access other read-only endpoints, such as to read job lists, describe jobs, and query node or agent information.
Bacalhau auth is controlled by policies; configuring the auth system means supplying a different policy file.
Restricting API access to only authenticated users requires specifying a new authorization policy. You can download a policy that restricts anonymous access and install it by using:
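A minimal sketch of wiring the downloaded policy into the node configuration, assuming the policy file has been saved locally (the path is illustrative):

```yaml
# config.yaml — point the node at a custom authorization policy file.
Node:
  Auth:
    AccessPolicyPath: /home/user/.bacalhau/policy_ns_no_anon.rego
```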
Once the node is restarted, accessing the node APIs will require the user to be authenticated, but by default will still allow users with a self-generated key to authenticate themselves.
Restricting the list of keys that can authenticate to only a known set requires specifying a new authentication policy. You can download a policy that restricts key-based access and install it by using:
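A sketch of registering the downloaded policy as an authentication method via the configuration file; the method name and path are illustrative:

```yaml
# config.yaml — register a named authentication method backed by a policy file.
Node:
  Auth:
    Methods:
      ClientKey:
        Type: challenge
        PolicyPath: /home/user/.bacalhau/challenge_ns_no_anon.rego
```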
Then, modify the allowed_clients variable in challenge_ns_no_anon.rego to include acceptable client IDs, which you can find by running bacalhau id.
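For example, if bacalhau id reports a ClientID like the hex string below, the updated variable might look like this (the ID shown is illustrative):

```rego
# In challenge_ns_no_anon.rego
allowed_clients := [
    "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef",
]
```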
Once the node is restarted, only keys in the allowed list will be able to access any API.
Users can authenticate using a username and password instead of specifying a private key for access. Again, this requires installation of an appropriate policy on the server.
Passwords are not stored in plaintext and are salted. The downloaded policy expects password hashes and salts generated by scrypt. To generate a salted password, the helper script in pkg/authn/ask/gen_password can be used:
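A sketch of invoking the helper from a checkout of the Bacalhau source tree:

```bash
# Run from the root of the bacalhau repository; prompts for a password
# and prints a salt and hash suitable for the policy file.
go run ./pkg/authn/ask/gen_password
```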
This will ask for a password and generate a salt and hash to authenticate with it. Add the encoded username, salt and hash into ask_ns_password.rego.
In principle, Bacalhau can implement any auth scheme that can be described in a structured way by a policy file.
Policies are written in a language called Rego, also used by Kubernetes. Users who want to write their own policies should get familiar with the Rego language.
Bacalhau will pass information pertinent to the current request into every authentication policy query as a field on the input variable. The exact information depends on the type of authentication used.
challenge authentication
challenge authentication identifies the user by the presence of a private key. The user is asked to sign an input phrase to prove they have the key they are identifying with.
Policies used for challenge authentication do not need to implement the challenge verification logic themselves, as this is handled by the core code. Instead, they will only be invoked if this verification passes.
Policies for this type will need to implement these rules:
bacalhau.authn.token: if the user should be authenticated, an access token they should use in subsequent requests; if the user should not be authenticated, undefined.
They should expect as fields on the input variable:
clientId: an ID derived from the user's private key that identifies them uniquely
nodeId: the ID of the requester node that this user is authenticating with
signingKey: the private key (as a JWK) that should be used to sign any access tokens to be returned
The simplest possible policy might therefore be this policy that returns the same opaque token for all users:
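A sketch of such a policy, assuming a fixed opaque token is acceptable:

```rego
package bacalhau.authn

# Every user that passes challenge verification receives the same token.
token := "anything"
```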
A more realistic example that returns a signed JWT is in challenge_ns_anon.rego.
ask authentication
ask authentication uses credentials supplied manually by the user as identification. For example, an ask policy could require a username and password as input and check these against a known list. ask policies do all the verification of the supplied credentials.
Policies for this type will need to implement these rules:
bacalhau.authn.token: if the user should be authenticated, an access token they should use in subsequent requests; if the user should not be authenticated, undefined.
bacalhau.authn.schema: a static JSON schema that should be used to collect information about the user. The type of declared fields may be used to pick the input method, and if a field is marked as writeOnly then it will be collected in a secure way (e.g. not shown on screen). The schema rule does not receive any input data.
They should expect as fields on the input variable:
ask: a map of field names from the JSON schema to strings supplied by the user. The policy should validate these credentials.
nodeId: the ID of the requester node that this user is authenticating with
signingKey: the private key (as a JWK) that should be used to sign any access tokens to be returned
The simplest possible policy might therefore be one that asks for no data and returns the same opaque token for every user:
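A sketch of such a policy, assuming an empty schema and a fixed opaque token:

```rego
package bacalhau.authn

# Ask for no fields at all.
schema := {"type": "object", "properties": {}}

# Every user receives the same token.
token := "anything"
```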
A more realistic example that returns a signed JWT is in ask_ns_example.rego.
Authorization policies do not vary depending on the type of authentication used – Bacalhau uses one authz policy for all API requests.
Authz policies are invoked for every API request. They should check the validity of any supplied access tokens and issue an authz decision for the requested API endpoint. Authz policies are not required to enforce that an access token is present – they may choose to grant access to unauthenticated users.
Policies will need to implement these rules:
bacalhau.authz.token_valid: true if the access token in the request is "valid" (but does not necessarily grant access for this request), or false if it is invalid for every request (e.g. because it has expired) and should be discarded.
bacalhau.authz.allow: true if the user should be permitted to carry out the input request, false otherwise.
They should expect as fields on the input variable for both rules:
http: details of the user's HTTP request:
host: the hostname used in the HTTP request
method: the HTTP method (e.g. GET, POST)
path: the path requested, as an array of path components without slashes
query: a map of URL query parameters to their values
headers: a map of HTTP header names to arrays representing their values
body: a blob of any content submitted as the body
constraints: details about the receiving node that should be used to validate any supplied tokens:
cert: keys that the input token should have been signed with
iss: the name of a node that this node will recognize as the issuer of any signed tokens
aud: the name of this node that is receiving the request
Notably, the constraints data is suitable for passing directly to the Rego io.jwt.decode_verify method, which will validate the access token as a JWT against the given constraints.
The simplest possible authz policy might be this one that allows all users to access all endpoints:
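A sketch of such a policy:

```rego
package bacalhau.authz

# Treat every token as valid and permit every request.
token_valid := true

allow := true
```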
A more realistic example (which is the Bacalhau "anonymous mode" default) is in policy_ns_anon.rego.
How to enable GPU support on your Bacalhau node.
Bacalhau supports GPUs out of the box and defaults to allowing execution on all GPUs installed on the node.
Bacalhau makes the assumption that you have installed all the necessary drivers and tools on your node host and have appropriately configured them for use by Docker.
For NVIDIA GPUs, the Bacalhau client requires nvidia-smi installed and functional.
You can test whether you have a working GPU setup with the following command:
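For example, a Docker-based check along these lines (the image tag is illustrative) should print a table of your NVIDIA GPUs:

```bash
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```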
Bacalhau requires AMD drivers to be appropriately installed and access to the rocm-smi tool.
You can test whether you have a working GPU setup with the following command, which should print details of your GPUs:
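For example:

```bash
rocm-smi
```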
Bacalhau requires appropriate Intel drivers to be installed and access to the xpu-smi tool.
You can test whether you have a working GPU setup with the following command, which should print details of your GPUs:
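For example, assuming the discovery subcommand of xpu-smi:

```bash
xpu-smi discovery
```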
Access to GPUs can be controlled using resource limits. To limit the number of GPUs that can be used per job, set a job resource limit. To limit access to GPUs from all jobs, set a total resource limit.
How to configure your Bacalhau node.
Bacalhau employs the viper and cobra libraries for configuration management. Users can configure their Bacalhau node through a combination of command-line flags, environment variables, and the dedicated configuration file.
Bacalhau manages its configuration, metadata, and internal state within a specialized repository named .bacalhau. Serving as the heart of the Bacalhau node, this repository holds the data and settings that determine node behavior. It's located on the filesystem, and by default, Bacalhau initializes this repository at $HOME/.bacalhau, where $HOME is the home directory of the user running the bacalhau process.
To customize this location, users can:
Set the BACALHAU_DIR environment variable to specify their desired path.
Utilize the --repo command line flag to specify their desired path.
Upon executing a Bacalhau command for the first time, the system will initialize the .bacalhau repository. If such a repository already exists, Bacalhau will seamlessly access its contents.
Structure of a Newly Initialized .bacalhau Repository
This repository comprises four directories and seven files:
user_id.pem: Houses the Bacalhau node user's cryptographic private key, used for signing requests sent to a Requester Node. Format: PEM.
repo.version: Indicates the version of the Bacalhau node's repository. Format: JSON, e.g. {"Version":1}.
libp2p_private_key: Stores the Bacalhau node's libp2p private key, essential for its network identity. The NodeID of a Bacalhau node is derived from this key. Format: Base64-encoded RSA private key.
config.yaml: Contains configuration settings for the Bacalhau node. Format: YAML.
update.json: Contains the date/time when the last version check was made. Format: JSON, e.g. {"LastCheck":"2024-01-24T11:06:14.631816Z"}.
tokens.json: Contains the tokens obtained through authenticating with Bacalhau clusters.
QmdGUjsMHEgtAfdtw7U62yPEcAZFtA33tKMsczLToegZtv-compute: Contains the BoltDB executions.db database, which aids the Compute node in state persistence. Additionally, the jobStats.json file records the Compute node's completed jobs tally. Note: the segment QmdGUjsMHEgtAfdtw7U62yPEcAZFtA33tKMsczLToegZtv is the unique NodeID of each Bacalhau node, derived from the libp2p_private_key.
QmdGUjsMHEgtAfdtw7U62yPEcAZFtA33tKMsczLToegZtv-requester: Contains the BoltDB jobs.db database for the Requester node's state persistence. Note: NodeID derivation is the same as for the compute directory.
executor_storages: Storage for data handled by Bacalhau storage drivers.
plugins: Houses binaries that allow the Compute node to execute specific tasks. Note: this feature is currently experimental and isn't active during standard node operations.
Within a .bacalhau repository, a config.yaml file may be present. This file serves as the configuration source for the Bacalhau node and adheres to the YAML format.
Although the config.yaml file is optional, its presence allows Bacalhau to load custom configurations; otherwise, Bacalhau is configured with built-in default values, environment variables and command line flags.
Modifications to the config.yaml file will not be dynamically loaded by the Bacalhau node; a restart of the node is required for any changes to take effect. Bacalhau determines its configuration based on the following precedence order, with each item superseding the subsequent:
Command-line Flag
Environment Variable
Config File
Defaults
config.yaml and Bacalhau Environment Variables
Bacalhau establishes a direct relationship between the value-bearing keys within the config.yaml file and corresponding environment variables. For keys that have no further sub-keys, the environment variable name is constructed by capitalizing each segment of the key and joining them with underscores, prefixed with BACALHAU_.
For example, a YAML key with the path Node.IPFS.Connect translates to the environment variable BACALHAU_NODE_IPFS_CONNECT and is represented in a file like:
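For example (the multiaddress value is illustrative):

```yaml
Node:
  IPFS:
    Connect: /ip4/127.0.0.1/tcp/5001
```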
There is no corresponding environment variable for either Node or Node.IPFS. Config values may also have other environment variables that set them, for simplicity or to maintain backwards compatibility.
Bacalhau leverages the BACALHAU_ENVIRONMENT environment variable to determine the specific environment configuration when initializing a repository. Notably, if a .bacalhau repository has already been initialized, the BACALHAU_ENVIRONMENT setting will be ignored.
By default, if the BACALHAU_ENVIRONMENT variable is not explicitly set by the user, Bacalhau will adopt the production environment settings.
Below is a breakdown of the configurations associated with each environment:
1. Production (public network)
Environment Variable: BACALHAU_ENVIRONMENT=production
Configurations:
Node.ClientAPI.Host: "bootstrap.production.bacalhau.org"
Node.ClientAPI.Port: 1234
...other configurations specific to this environment...
2. Staging (staging network)
Environment Variable: BACALHAU_ENVIRONMENT=staging
Configurations:
Node.ClientAPI.Host: "bootstrap.staging.bacalhau.org"
Node.ClientAPI.Port: 1234
...other configurations specific to this environment...
3. Development (development network)
Environment Variable: BACALHAU_ENVIRONMENT=development
Configurations:
Node.ClientAPI.Host: "bootstrap.development.bacalhau.org"
Node.ClientAPI.Port: 1234
...other configurations specific to this environment...
4. Local (private or local networks)
Environment Variable: BACALHAU_ENVIRONMENT=local
Configurations:
Node.ClientAPI.Host: "0.0.0.0"
Node.ClientAPI.Port: 1234
...other configurations specific to this environment...
Note: The above configurations provided for each environment are not exhaustive. Consult the specific environment documentation for a comprehensive list of configurations.
These are the flags that control the capacity of the Bacalhau node, and the limits for jobs that might be run.
The --limit-total-* flags control the total system resources you want to give to the network. If left blank, the system will attempt to detect these values automatically.
The --limit-job-* flags control the maximum amount of resources a single job can consume for it to be selected for execution.
Resource limits are not supported for Docker jobs running on Windows. Resource limits will be applied at the job bid stage based on reported job requirements but will be silently unenforced. Jobs will be able to access as many resources as requested at runtime.
Bacalhau is a peer-to-peer network of computing providers that will run jobs submitted by users. A Compute Provider (CP) is anyone who is running a Bacalhau compute node participating in the Bacalhau compute network, regardless of whether they are hosting any Filecoin data.
This section will show you how to configure and run a Bacalhau node and start accepting and running jobs.
To bootstrap your node and join the network as a CP you can leap right into the Ubuntu 22.04 quick start below, or find more setup details in these guides:
If you run on a different system than Ubuntu, drop us a message on Slack! We'll add instructions for your favorite OS.
Estimated time for completion: 10 min. Tested on: Ubuntu 22.04 LTS (x86/64) running on a GCP e2-standard-4 (4 vCPU, 16 GB memory) instance with 40 GB disk size.
Docker Engine - to take on Docker workloads
Connection to a storage provider - for storing job results
Firewall - to ensure your node can communicate with the rest of the network
Physical hardware, Virtual Machine, or cloud-based host. A Bacalhau compute node is not intended to be run from within a Docker container.
To run Docker-based workloads, you should have Docker installed and running.
If you already have it installed, you can configure the connection to Docker with the following environment variables:
DOCKER_HOST: set the URL to the Docker server.
DOCKER_API_VERSION: set the version of the API to reach; leave empty for "latest".
DOCKER_CERT_PATH: load the TLS certificates from here.
DOCKER_TLS_VERIFY: enable or disable TLS verification; off by default.
If you do not have Docker on your system, you can follow the official docker installation instructions or just use the snippet below:
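For example, using Docker's convenience script (review it before running on production hosts):

```bash
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```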
Now make Docker manageable by a non-root user:
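For example:

```bash
sudo groupadd docker || true   # the group usually exists already
sudo usermod -aG docker $USER
newgrp docker                  # pick up the new group in the current shell
```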
We need to connect our Bacalhau node to a storage server so we can run jobs that consume CIDs as inputs; for this we will use an IPFS server.
You can either install and run it locally or connect to a remote IPFS server.
In both cases, we need a multiaddress for our IPFS server that looks something like this:
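An illustrative multiaddress (the peer ID here is a placeholder):

```
/ip4/10.1.10.10/tcp/4001/p2p/QmcjvDbHbQYZ3EnGKnrNmgQvfL1TsNh6cGhGW2AeYkLDFX
```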
The multiaddress above is just an example - you need to get the multiaddress of the server you want to connect to.
To install a single IPFS node locally on Ubuntu you can follow the official instructions, or follow the steps below. We advise running the same IPFS version as the Bacalhau main network.
Using the command below:
Install:
Configure:
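A sketch of the install and configure steps (the kubo version and repository path are illustrative; pick the version that matches the network):

```bash
# Install:
wget https://dist.ipfs.tech/kubo/v0.18.1/kubo_v0.18.1_linux-amd64.tar.gz
tar -xvzf kubo_v0.18.1_linux-amd64.tar.gz
cd kubo && sudo bash install.sh

# Configure: keep the repository in a dedicated location and initialise it.
export IPFS_PATH=/data/ipfs
ipfs init
```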
Now launch the IPFS daemon in a separate terminal (make sure to export the IPFS_PATH environment variable there as well):
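For example, assuming the same IPFS_PATH as above:

```bash
export IPFS_PATH=/data/ipfs
ipfs daemon
```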
If you want to run the IPFS daemon as a systemd service, here's an example systemd service file.
Don't forget we need to fetch an IPFS multiaddress pointing to our local node.
Use the following command to print out a number of addresses.
I pick the record that combines 127.0.0.1 and tcp, but replace port 4001 with 5001:
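A sketch of doing this with ipfs id and jq (the filtering is illustrative):

```bash
export SWARM_ADDR=$(ipfs id | jq -r '.Addresses[]' | grep '/ip4/127.0.0.1/tcp/' | sed 's/4001/5001/')
echo $SWARM_ADDR
```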
To ensure that our node can communicate with other nodes on the network, we need to make sure port 1235 is open.
(Optional) To ensure the CLI can communicate with our node directly (i.e. bacalhau --api-host <MY_NODE_PUBLIC_IP> version), we need to make sure port 1234 is open.
Firewall configuration is very specific to your network and we can't provide generic instructions for this step but if you need any help feel free to reach out on Slack!
Install the bacalhau binary to run bacalhau serve.
If you want to run Bacalhau as a systemd service, here's an example systemd service file.
Now we can run our bacalhau node:
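For example, assuming the multiaddress collected above:

```bash
bacalhau serve --ipfs-connect $SWARM_ADDR
```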
Alternatively, you can run the following Docker command:
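A sketch, with the image tag and mounts as assumptions:

```bash
docker run -d --name bacalhau --network host \
  -v /var/run/docker.sock:/var/run/docker.sock \
  ghcr.io/bacalhau-project/bacalhau:latest \
  serve --ipfs-connect $SWARM_ADDR
```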
Even though the CLI submits jobs by default, each node listens for events on the global network and may bid to take on a job; your logs should therefore show the activity of your node bidding for incoming jobs.
To quickly check your node runs properly, let's submit the following dummy job:
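For example:

```bash
bacalhau docker run ubuntu echo Hello
```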
If you see logs of your compute node bidding for the job above it means you've successfully joined Bacalhau as a Compute Provider!
At this point, you probably have a number of questions for us. What incentive should you expect for running a public Bacalhau node? Please contact us on Slack to further discuss this topic and for sharing your valuable feedback.
When running a node, you can choose which jobs you want to run by using configuration options, environment variables or flags to specify a job selection policy:

| Config property | serve flag | Default value | Meaning |
|---|---|---|---|
| Node.Compute.JobSelection.Locality | --job-selection-data-locality | Anywhere | Only accept jobs that reference data we have locally ("local") or anywhere ("anywhere"). |
| Node.Compute.JobSelection.ProbeExec | --job-selection-probe-exec | unused | Use the result of an external program to decide if we should take on the job. |
| Node.Compute.JobSelection.ProbeHttp | --job-selection-probe-http | unused | Use the result of a HTTP POST to decide if we should take on the job. |
| Node.Compute.JobSelection.RejectStatelessJobs | --job-selection-reject-stateless | False | Reject jobs that don't specify any input data. |
| Node.Compute.JobSelection.AcceptNetworkedJobs | --job-selection-accept-networked | False | Accept jobs that require network connections. |
If you want more control over making the decision to take on jobs, you can use the --job-selection-probe-exec and --job-selection-probe-http flags.
These are external programs that are passed the following data structure so that they can make a decision about whether or not to take on a job:
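Illustratively, the document delivered to the probe looks something like the following; the exact field set is defined by the Bacalhau source and the names here are assumptions:

```json
{
  "node_id": "QmXaXu9N5GNetatsvwnTfQqNtSeKAD6uCmarbh3LMRYAcF",
  "job_id": "9304c616-291f-41ad-b862-54e133c0149e",
  "spec": { "...": "the job spec as submitted" }
}
```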
The exec probe is a script to run that will be given the job data on stdin, and must exit with status code 0 if the job should be run.
The http probe is a URL to POST the job data to. The job will be rejected if the HTTP request returns an error status code (e.g. >= 400).
If the HTTP response is a JSON blob, it should match the following schema and will be used to respond to the bid directly:
For example, the following response will reject the job:
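Assuming the schema's shouldBid/reason field names, a rejection might look like:

```json
{
  "shouldBid": false,
  "reason": "not enough capacity to run this job"
}
```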
If the HTTP response is not a JSON blob, the content is ignored and any non-error status code will accept the job.
Good news everyone! You can now run your Bacalhau-IPFS stack in Docker.
This page describes several ways in which to operate Bacalhau. You can choose the method that best suits your needs. The methods are:
This guide works best on a Linux machine. If you're trying to run this on a Mac, you may encounter issues. Remember that network host mode doesn't work.
You need to have Docker installed. If you don't have it, you can follow the official installation instructions.
This method is appropriate for those who:
Provide compute resources to the public Bacalhau network
This is not appropriate for:
Testing and development
Running a private network
This will start a local IPFS node and connect it to the public DHT. If you already have an IPFS node running, then you can skip this step.
Some notes about this command:
It wipes the $(pwd)/ipfs directory to make sure you have a clean slate
It runs the IPFS container in the specified Docker network
It exposes the IPFS API port to the world on port 4002, to avoid clashes with Bacalhau
It exposes the admin RPC API to the local host only, on port 5001
We are not specifying or removing the bootstrap nodes, so it will default to connecting to public machines
Bacalhau consists of two parts: a "requester" that is responsible for operating the API and managing jobs, and a "compute" element that is responsible for executing jobs. In a public context, you'd typically just run a compute node, and allow the public requesters to handle the traffic.
Notes about the command:
It runs the Bacalhau container in "host" mode. This means that the container will use the same network as the host.
It uses the root user, which is the default system user that has access to the Docker socket on a Mac. You may need to change this to suit your environment.
It mounts the Docker socket
It mounts the /tmp directory
It exposes the Bacalhau API ports to the world
The container version should match that of the current release
The IPFS connect string points to the RPC port of the IPFS node in Docker. Because Bacalhau is running in the same network, it can use DNS to find the IPFS container IP. If you're running your own node, replace it
The --node-type flag is set to compute because we only want to run a compute node
The --labels flag is used to set a human-readable label for the node, so we can run jobs on our machine later
We specify the --peer env flag so that it uses the environment specified by BACALHAU_ENVIRONMENT=production and therefore connects to the public network peers
There are several ways to ensure that the Bacalhau compute node is connected to the network.
First, check that the Bacalhau libp2p port is open and connected. On Linux you can run lsof and it should look something like this:
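One way to check (port 1235 is the default libp2p port):

```bash
sudo lsof -iTCP:1235 -sTCP:ESTABLISHED
```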
Note the three established connections at the bottom. These are the production bootstrap nodes that Bacalhau is now connected to.
You can also check that the node is connected by listing the current network peers and grepping for your IP address or node ID. The node ID can be obtained from the Bacalhau logs. It will look something like this:
Finally, submit a job with the label you specified when you ran the compute node. If this label is unique, there should be only one node with this label. The job should succeed. Run the following:
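A sketch, assuming the node was started with --labels owner=me and that the selector flag is available in your CLI version:

```bash
bacalhau docker run --selector owner=me ubuntu echo hello
```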
If instead, your job fails with the following error, it means that the compute node is not connected to the network:
This method is insecure. It does not lock down the IPFS node. Anyone connected to your network can access the IPFS node and read/write data. This is not recommended for production use.
This method is appropriate for:
Testing and development
Evaluating the Bacalhau platform before scaling jobs via the public network
This method is useful for testing and development. It's easier to use because it doesn't require a secret IPFS swarm key -- this is essentially an authentication token that allows you to connect to the node.
This method is not appropriate for:
Secure, private use
Production use
To run an insecure, private node, you need to initialize your IPFS configuration by removing all of the default public bootstrap nodes. Then we run the node in the normal way, without the special LIBP2P_FORCE_PNET flag that checks for a secure private connection.
Some notes about this command:
It wipes the $(pwd)/ipfs directory to make sure you have a clean slate
It removes the default bootstrap nodes
It runs the IPFS container in the specified Docker network
It exposes the IPFS API to the local host only on port 4002 (to avoid clashes with Bacalhau), to prevent accidentally exposing the IPFS node
It exposes the admin RPC API to the local host only, on port 5001
Bacalhau consists of two parts: a "requester" that is responsible for operating the API and managing jobs, and a "compute" element that is responsible for executing jobs. In a public context, you'd typically just run a compute node, and allow the public requesters to handle the traffic. But in a private context, you'll want to run both.
Notes about the command:
It runs the Bacalhau container in the specified Docker network
It uses the root user, which is the default system user that has access to the Docker socket on a Mac. You may need to change this to suit your environment
It mounts the Docker socket
It mounts the /tmp directory and specifies this as the location where Bacalhau will write temporary execution data (BACALHAU_NODE_COMPUTESTORAGEPATH)
It exposes the Bacalhau API ports to the local host only, to prevent accidentally exposing the API to the public internet
The container version should match that of the Bacalhau installed on your system
The IPFS connect string points to the RPC port of the IPFS node. Because Bacalhau is running in the same network, it can use DNS to find the IPFS container IP.
The --node-type flag is set to requester,compute because we want to run both a requester and a compute node
Now it's time to run a job. Recall that you exposed the Bacalhau API on the default ports to the local host only, so you'll need to use the --api-host flag to tell Bacalhau where to find the API. Everything else is a standard part of the Bacalhau CLI.
The job should succeed. Run it again but this time capture the job ID to make it easier to retrieve the results.
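For example, using the --id-only flag:

```bash
JOB_ID=$(bacalhau --api-host 127.0.0.1 docker run --id-only ubuntu echo hello)
echo $JOB_ID
```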
To retrieve the results using the Bacalhau CLI, you need to know the p2p swarm multiaddress of the IPFS node, because you don't want to connect to the public global IPFS network. To get that, you can run the IPFS id command (and parse out just the address we need):
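A sketch using jq (the filtering is illustrative):

```bash
export SWARM_ADDR=$(ipfs id | jq -r '.Addresses[]' | grep '/ip4/127.0.0.1/tcp/' | sed 's/4001/4002/')
echo $SWARM_ADDR
```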
Note that the command above changes the reported port from 4001 to 4002. This is because the IPFS node is running on port 4002, but the IPFS id command reports the port as 4001.
Now get the results:
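For example, assuming the JOB_ID and SWARM_ADDR variables from the previous steps:

```bash
bacalhau get $JOB_ID --ipfs-swarm-addrs $SWARM_ADDR
```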
Alternatively, you can use the Docker container, mount the results volume, and change the --api-host to the name of the Bacalhau container and the --ipfs-swarm-addrs back to port 4001:
Running a private secure network is useful in a range of scenarios, including:
Running a private network for a private project
You need two things. A private IPFS node to store data and a Bacalhau node to execute over that data. To keep the nodes private you need to tell the nodes to shush and use a secret key. This is a bit harder to use, and a bit more involved than the insecure version.
First, you need to bootstrap a new IPFS cluster for your own private use. This consists of a process of generating a swarm key, removing any bootstrap nodes, and then starting the IPFS node.
Some notes about this command:
It wipes the $(pwd)/ipfs directory to make sure you have a clean slate
It generates a new swarm key -- this is the token that is required to connect to this node
It removes the default bootstrap nodes
It runs the IPFS container in the specified Docker network
It exposes the IPFS API to the local host only on port 4002 (to avoid clashes with Bacalhau), to prevent accidentally exposing the IPFS node
It exposes the admin RPC API to the local host only, on port 5001
The same process as above can be used to retrieve results from the IPFS node, as long as the bacalhau get command has access to the IPFS swarm key.
Running the Bacalhau binary from outside of Docker:
Alternatively, you can use the Docker container, mount the results volume, and change the --api-host to the name of the Bacalhau container and the --ipfs-swarm-addrs back to port 4001:
Without this, inter-container DNS will not work, and internet access may not work either.
Double check that this network can access the internet (so Bacalhau can call external URLs).
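For example (the network name is an assumption; use the Docker network you created for the stack):

```bash
docker run --rm --network bacalhau-network alpine ping -c 2 8.8.8.8
```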
This should be successful. If it is not, then please troubleshoot your Docker networking. For example, on my Mac, I had to totally uninstall Docker, restart the computer, and then reinstall Docker before it worked. Also check https://docs.docker.com/desktop/troubleshoot/known-issues/ - apparently "ping from inside a container to the Internet does not work as expected". No idea what that means. How do you break ping?
You can now browse the IPFS web UI at http://127.0.0.1:5001/webui.
Ensure that the Bacalhau logs (docker logs bacalhau) have no errors.
Check that your Bacalhau installation is the same version:
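For example:

```bash
bacalhau version
```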
The versions should match. Alternatively, you can use the Docker container:
Perform a list command to ensure you can connect to the Bacalhau API.
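A sketch (remember the API is exposed to the local host only):

```bash
bacalhau --api-host 127.0.0.1 list
```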
It should return empty.
Should you wish to authenticate with Docker Hub when pulling images, you can do so by specifying credentials as environment variables wherever your compute node is running.
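For example (the values are placeholders):

```bash
export DOCKER_USERNAME=mydockerusername
export DOCKER_PASSWORD=mydockeraccesstoken
```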
Bacalhau can limit the total time a job spends executing. A job that spends too long executing will be cancelled and no results will be published.
By default, a Bacalhau node does not enforce any limit on job execution time. Both node operators and job submitters can supply a maximum execution time limit. If a job submitter asks for a longer execution time than permitted by a node operator, their job will be rejected.
Applying job timeouts allows node operators to more fairly distribute the work submitted to their nodes. It also protects users from transient errors that result in their jobs waiting indefinitely.
Job submitters can pass the --timeout flag to any Bacalhau job submission CLI to set a maximum job execution time. The supplied value should be a whole number of seconds with no unit.
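For example, to give a job ten minutes of execution time:

```bash
bacalhau docker run --timeout 600 ubuntu sleep 300
```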
The timeout can also be added to an existing job spec by adding the Timeout property to the Spec.
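For example, a fragment of a job spec with a 30-minute timeout:

```yaml
Spec:
  Timeout: 1800
```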
Node operators can pass the --max-job-execution-timeout flag to bacalhau serve to configure the maximum job time limit. The supplied value should be a numeric value followed by a time unit (one of s for seconds, m for minutes or h for hours).
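For example:

```bash
bacalhau serve --max-job-execution-timeout 30m
```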
Node operators can also use configuration properties to configure execution limits.
Compute nodes will use the properties:

| Config property | Meaning |
|---|---|
|  | The minimum acceptable value for a job timeout. A job will only be accepted if it is submitted with a timeout of longer than this value. |
|  | The maximum acceptable value for a job timeout. A job will only be accepted if it is submitted with a timeout of shorter than this value. |
|  | The job timeout that will be applied to jobs that are submitted without a timeout value. |
Requester nodes will use the properties:

| Config property | Meaning |
|---|---|
|  | If a job is submitted with a timeout less than this value, the default job execution timeout will be used instead. |
|  | The timeout to use in the job if a timeout is missing or too small. |
Bacalhau has two ways to make use of external storage providers:
Inputs: storage resources consumed as inputs to jobs
Publishers: storage resources created with the results of jobs
To start, you'll need to connect the Bacalhau node to an IPFS server so that you can run jobs that consume CIDs as inputs.
You can either install IPFS and run it locally, or you can connect to a remote IPFS server.
In both cases, you should have a multiaddress for the IPFS server that looks something like this:
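An illustrative multiaddress (the peer ID is a placeholder):

```
/ip4/10.1.10.10/tcp/4001/p2p/QmcjvDbHbQYZ3EnGKnrNmgQvfL1TsNh6cGhGW2AeYkLDFX
```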
The multiaddress above is just an example - you'll need to get the multiaddress of the IPFS server you want to connect to.
You can then configure your Bacalhau node to use this IPFS server by passing the --ipfs-connect argument to the serve command:
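For example (the multiaddress is illustrative):

```bash
bacalhau serve --ipfs-connect /ip4/127.0.0.1/tcp/5001
```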
Or, set the Node.IPFS.Connect property in the Bacalhau configuration file.
The IPFS publisher works using the same setup as above - you'll need an IPFS server running and a multiaddress for it. You'll then pass that multiaddress using the --ipfs-connect argument to the serve command.
If you are publishing to a public IPFS node, you can use bacalhau get with no further arguments to download the results. However, you may experience a delay in results becoming available as indexing of new data by public nodes takes time.
To speed up the download or to retrieve results from a private IPFS node, pass the swarm multiaddress to bacalhau get to download results.
Pass the swarm key to bacalhau get if the IPFS swarm is a private swarm.
How to configure TLS for the requester node APIs
By default, the requester node APIs used by the Bacalhau CLI are accessible over HTTP, but it is possible to configure it to use Transport Level Security (TLS) so that they are accessible over HTTPS instead. There are several ways to obtain the necessary certificates and keys, and Bacalhau supports obtaining them via ACME and Certificate Authorities or even self-signing them.
Once configured, you must ensure that instead of using http://IP:PORT you use https://IP:PORT to access the Bacalhau API
Automatic Certificate Management Environment (ACME) is a protocol that allows for automating the deployment of Public Key Infrastructure, and is the protocol used to obtain a free certificate from a Certificate Authority such as Let's Encrypt.
Using the --autocert [hostname] parameter to the CLI (in the serve and devstack commands), a certificate is obtained automatically from Let's Encrypt. The provided hostname should be a comma-separated list of hostnames, and they should all be publicly resolvable, as Let's Encrypt will attempt to connect to the server to verify ownership (using an ACME challenge). On the very first request this can take a short time whilst the first certificate is issued, but afterwards certificates are cached in the bacalhau repository.
Alternatively, you may set these options via the environment variable BACALHAU_AUTO_TLS. If you are using a configuration file, you can set the values in Node.ServerAPI.TLS.AutoCert instead.
As a result of the Let's Encrypt verification step, it is necessary for the server to be able to handle requests on port 443. This typically requires elevated privileges; rather than obtaining these through a privileged account (such as root), you should instead use setcap to grant the executable the right to bind to ports below 1024.
A cache of ACME data is held in the config repository, by default ~/.bacalhau/autocert-cache, and this will be used to manage renewals and avoid rate limits.
Obtaining a TLS certificate from a Certificate Authority (CA) without using the Automated Certificate Management Environment (ACME) protocol involves a manual process that typically requires the following steps:
Choose a Certificate Authority: First, you need to select a trusted Certificate Authority that issues TLS certificates. Popular CAs include DigiCert, GlobalSign, Comodo (now Sectigo), and others. You may also consider whether you want a free or paid certificate, as CAs offer different pricing models.
Generate a Certificate Signing Request (CSR): A CSR is a text file containing information about your organization and the domain for which you need the certificate. You can generate a CSR using various tools or directly on your web server. Typically, this involves providing details such as your organization's name, common name (your domain name), location, and other relevant information.
Submit the CSR: Access your chosen CA's website and locate their certificate issuance or order page. You'll typically find an option to "Submit CSR" or a similar option. Paste the contents of your CSR into the provided text box.
Verify Domain Ownership: The CA will usually require you to verify that you own the domain for which you're requesting the certificate. They may send an email to one of the standard domain-related email addresses (e.g., admin@yourdomain.com, webmaster@yourdomain.com). Follow the instructions in the email to confirm domain ownership.
Complete Additional Verification: Depending on the CA's policies and the type of certificate you're requesting (e.g., Extended Validation or EV certificates), you may need to provide additional documentation to verify your organization's identity. This can include legal documents or phone calls from the CA to confirm your request.
Payment and Processing: If you're obtaining a paid certificate, you'll need to make the payment at this stage. Once the CA has received your payment and completed the verification process, they will issue the TLS certificate.
Once you have obtained your certificates, you will need to put two files in a location that bacalhau can read. You need the server certificate, often called something like server.cert or server.cert.pem, and the server key, often called something like server.key or server.key.pem.
Once you have these two files available, you must start bacalhau serve with two new flags, --tlscert and --tlskey, whose arguments should point to the relevant files. An example of how they are used is:
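For example (file locations are illustrative):

```bash
bacalhau serve --tlscert=/etc/config/server.cert --tlskey=/etc/config/server.key
```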
Alternatively, you may set these options via the environment variables BACALHAU_TLS_CERT and BACALHAU_TLS_KEY. If you are using a configuration file, you can set the values in Node.ServerAPI.TLS.ServerCertificate and Node.ServerAPI.TLS.ServerKey instead.
Once you have generated the necessary files, the steps are much like above: you must start bacalhau serve with the --tlscert and --tlskey flags, whose arguments should point to the relevant files. An example of how they are used is:
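For example, assuming the generated files are named as below:

```bash
bacalhau serve --tlscert=/etc/config/my_node.crt --tlskey=/etc/config/my_node.key
```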
Alternatively, you may set these options via the environment variables BACALHAU_TLS_CERT and BACALHAU_TLS_KEY. If you are using a configuration file, you can set the values in Node.ServerAPI.TLS.ServerCertificate and Node.ServerAPI.TLS.ServerKey instead.
If you use self-signed certificates, it is unlikely that any clients will be able to verify the certificate when connecting to the Bacalhau APIs. There are three options available to work around this problem:
Provide a CA certificate file of trusted certificate authorities, which many software libraries support in addition to system authorities.
Install the CA certificate file in the system keychain of each machine that needs access to the Bacalhau APIs.
Instruct the software library you are using not to verify HTTPS requests.
Before you join the main Bacalhau network, you can test locally.
To test, you can use the bacalhau devstack command, which offers a way to get a 3 node cluster running locally.
By setting PREDICTABLE_API_PORT=1, the first node of our 3 node cluster will always listen on port 20000.
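For example:

```bash
PREDICTABLE_API_PORT=1 bacalhau devstack
```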
In another window, export the following environment variables so that the Bacalhau client binary connects to our local development cluster:
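A sketch, assuming the predictable port from the previous step:

```bash
export BACALHAU_API_HOST=127.0.0.1
export BACALHAU_API_PORT=20000
```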
You can now interact with Bacalhau - all jobs are run by the local devstack cluster.
How to configure compute/requester persistence
Both compute and requester nodes maintain state. How that state is maintained is configurable, although the defaults are likely adequate for most use-cases. This page describes how to configure the persistence of compute and requester nodes should the defaults not be suitable.
The compute nodes maintain information about the work that has been allocated to them, including:
The current state of the execution, and
The original job that resulted in this allocation
This information is used by the compute and requester nodes to ensure allocated jobs are completed successfully. By default, compute nodes store their state in a bolt-db database located in the bacalhau repository along with configuration data. For a compute node whose ID is "abc", the database can be found in ~/.bacalhau/abc-compute/executions.db.
In some cases, it may be preferable to maintain the state in memory, with the caveat that should the node restart, all state will be lost. This can be configured using the environment variables in the table below.
| Environment Variable | Flag alternative | Value | Effect |
|---|---|---|---|
| BACALHAU_COMPUTE_STORE_TYPE | --compute-execution-store-type | boltdb | Uses the bolt db execution store (default) |
| BACALHAU_COMPUTE_STORE_PATH | --compute-execution-store-path | A path (inc. filename) | Specifies where the boltdb database should be stored. Defaults to ~/.bacalhau/{NODE-ID}-compute/executions.db if not set. |
When running a requester node, it maintains state about the jobs it has been requested to orchestrate and schedule, the evaluation of those jobs, and the executions that have been allocated. By default, this state is stored in a bolt db database that, with a node ID of "xyz", can be found in ~/.bacalhau/xyz-requester/jobs.db.
| Environment Variable | Flag alternative | Value | Effect |
|---|---|---|---|
| BACALHAU_JOB_STORE_TYPE | --requester-job-store-type | boltdb | Uses the bolt db job store (default) |
| BACALHAU_JOB_STORE_PATH | --requester-job-store-path | A path (inc. filename) | Specifies where the boltdb database should be stored. Defaults to ~/.bacalhau/{NODE-ID}-requester/jobs.db if not set. |
Bacalhau supports the three main 'pillars' of observability - logging, metrics, and tracing. Bacalhau uses OpenTelemetry for metrics and tracing, which can be configured using the standard OpenTelemetry environment variables. Exporting metrics and traces can be as simple as setting the OTEL_EXPORTER_OTLP_PROTOCOL and OTEL_EXPORTER_OTLP_ENDPOINT environment variables. Custom code is used for logging.
Logging in Bacalhau outputs in a human-friendly format to stderr at INFO level by default, but this can be changed by two environment variables:
LOG_LEVEL - Can be one of trace, debug, error, warn or fatal to output more or fewer logging messages as required
LOG_TYPE - Can be one of the following values:
default - output logs to stderr in a human-friendly format
json - output log messages to stdout in JSON format
combined - log JSON formatted messages to stdout and human-friendly format to stderr
Log statements should include the relevant trace, span and job ID so they can be tracked back to the work being performed.
Bacalhau produces a number of different metrics including those around the libp2p resource manager (rcmgr), performance of the requester HTTP API and the number of jobs accepted/completed/received.
Traces are produced for all major pieces of work when processing a job, although the naming of some spans is still being worked on. You can find relevant traces covering work on a job by searching for the jobid attribute.
As we use OpenTelemetry, the metrics and traces can easily be forwarded to a variety of different services, such as Honeycomb or Datadog.
To view the data locally, or simply to avoid using a SaaS offering, you can start up Jaeger and Prometheus: place these three files into a directory, run docker compose start, and run Bacalhau with the OTEL_EXPORTER_OTLP_PROTOCOL=grpc and OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 environment variables.
These commands join this node to the public Bacalhau network, congrats!
Private IPFS nodes are experimental; see the IPFS documentation for more information.
The instructions to run a secure private Bacalhau network are the same as for the insecure version described above.
The instructions to run a job are the same as for the insecure version described above.
Read more about the IPFS Docker image in its official documentation.
As described in the IPFS Docker documentation, never expose the RPC API port (port 5001) to the public internet.
If you are retrieving and running images from Docker Hub, you may encounter issues with rate-limiting. Docker provides higher limits when authenticated; the size of the limit is based on the type of your account.

| Environment variable | Description |
|---|---|
| DOCKER_USERNAME | The username with which you are registered at Docker Hub |
| DOCKER_PASSWORD | A read-only access token, generated from the Docker Hub settings page |

Currently, this authentication is only available (and required) by the Docker executor.
If you wish, it is possible to use Bacalhau with a self-signed certificate which does not rely on an external Certificate Authority. This is an involved process and so is not described in detail here, although there is an example available which should provide a good starting point.
Running a Windows-based node is not officially supported, so your mileage may vary. Some features (like resource limits) are not present in Windows-based nodes.
Bacalhau currently makes the assumption that all containers are Linux-based. Users of the Docker executor will need to manually ensure that their Docker engine is running and configured appropriately to support Linux containers, e.g. using the WSL-based backend.
How to run the WebUI.
The Bacalhau WebUI offers an intuitive interface for interacting with the Bacalhau network. This guide provides comprehensive instructions for setting up, deploying, and utilizing the WebUI.
For contributing to the WebUI's development, please refer to the Bacalhau WebUI GitHub Repository.
Ensure you have Bacalhau v1.1.7 or later installed.
To launch the WebUI locally, execute the following command:
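A sketch, assuming the --web-ui flag available in recent releases:

```bash
bacalhau serve --node-type requester,compute --web-ui
```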
This command initializes a requester and compute node, configured to listen on HOST=0.0.0.0 and PORT=1234.
Once started, the WebUI is accessible at http://127.0.0.1/. This local instance allows you to interact with your local Bacalhau network setup.
For observational purposes, a development version of the WebUI is available at bootstrap.development.bacalhau.org. This instance displays jobs from the development server.
N.B. The development version of the WebUI is for observation only and may not reflect the latest changes or features available in the local setup.