This directory contains examples relating to performing common tasks with Bacalhau.
This directory contains examples relating to data engineering workloads. The goal is to provide a range of examples that show you how to work with Bacalhau in a variety of use cases.
Bacalhau is a platform for fast, cost-efficient, and secure computation that runs jobs where the data is generated and stored. With Bacalhau, you can streamline your existing workflows without the need for extensive rewriting by running arbitrary Docker containers and WebAssembly (WASM) images as tasks. This architecture is also referred to as Compute Over Data (or CoD). The name Bacalhau comes from the Portuguese word for salted codfish.
Bacalhau seeks to transform data processing for large-scale datasets to improve cost and efficiency, and to open up data processing to larger audiences. Our goal is to create an open, collaborative compute ecosystem that enables unparalleled collaboration. We (Expanso.io) offer a demo network so you can try out jobs without installing anything. Give it a shot!
⚡️ Bacalhau simplifies the process of managing compute jobs by providing a unified platform for managing jobs across different regions, clouds, and edge devices.
🤝 Bacalhau provides reliable and network-partition resistant orchestration, ensuring that your jobs will complete even if there are network disruptions.
🚨 Bacalhau provides a complete and permanent audit log of exactly what happened, so you can be confident that your jobs are being executed securely.
🔐 You can run private workloads to reduce the chance of leaking private information or inadvertently sharing your data outside of your organization.
💸 Bacalhau reduces ingress/egress costs since jobs are processed closer to the source.
🤓 You can mount your data anywhere on your machine, and Bacalhau will be able to run against that data.
💥 You can integrate with services running on nodes to run jobs, such as DuckDB.
📚 Bacalhau operates at scale over parallel jobs. You can batch process petabytes (quadrillion bytes) of data.
Bacalhau consists of a peer-to-peer network of nodes that enables decentralized communication between computers. The network consists of two types of nodes:
Requester Node: responsible for handling user requests, discovering and ranking compute nodes, forwarding jobs to compute nodes, and monitoring the job lifecycle.
Compute Node: responsible for executing jobs and producing results. Different compute nodes can be used for different types of jobs, depending on their capabilities and resources.
For a more detailed tutorial, check out our Getting Started Tutorial.
The goal of the Bacalhau project is to make it easy to perform distributed computation next to where the data resides. In order to do this, first you need to ingest some data.
Data is identified by its content identifier (CID) and can be accessed by anyone who knows the CID. Here are some options that can help you mount your data:
The options are not limited to those listed above. You can mount your data anywhere on your machine, and Bacalhau will be able to run against that data.
All workloads run under restricted Docker or WASM permissions on the node. Additionally, you can use existing (locked down) binaries that are pre-installed through Pluggable Executors.
Best practice in 12-factor apps is to use environment variables to store sensitive data such as access tokens, API keys, or passwords. These variables can be accessed by Bacalhau at runtime and are not visible to anyone who has access to the code or the server.
Alternatively, you can pre-provision credentials to the nodes and access those on a node by node basis.
Finally, endpoints (such as vaults) can also be used to provide secure access to Bacalhau. This way, the client can authenticate with Bacalhau using the token without exposing their credentials.
Bacalhau can be used for a variety of data processing workloads, including machine learning, data analytics, and scientific computing. It is well-suited for workloads that require processing large amounts of data in a distributed and parallelized manner.
Once you have more than 10 devices generating or storing around 100GB of data, you're likely to face challenges with processing that data efficiently. Traditional computing approaches may struggle to handle such large volumes, and that's where distributed computing solutions like Bacalhau can be extremely useful. Bacalhau can be used in various industries, including security, web serving, financial services, IoT, Edge, Fog, and multi-cloud. Bacalhau shines when it comes to data-intensive applications like data engineering, model training, model inference, molecular dynamics, etc.
Here are some example tutorials on how you can process your data with Bacalhau:
For more tutorials, visit our example page
Bacalhau has a very friendly community and we are always happy to help you get started:
GitHub Discussions – ask anything about the project, give feedback or answer questions that will help other users.
Join the Slack Community and go to the #bacalhau channel – it is the easiest way to engage with other members in the community and get help.
Contributing – learn how to contribute to the Bacalhau project.
👉 Continue with Bacalhau Getting Started guide to learn how to install and run a job with the Bacalhau client.
👉 Or jump directly to try out the different Examples that showcase Bacalhau's abilities.
How to use docker containers with Bacalhau
Bacalhau executes jobs by running them within containers. Bacalhau employs a syntax closely resembling Docker, allowing you to utilize the same containers. The key distinction lies in how input and output data are transmitted to the container via IPFS, enabling scalability on a global level.
This section describes how to migrate a workload based on a Docker container into a format that will work with the Bacalhau client.
You can check out this example tutorial on how to work with custom containers in Bacalhau to see how we used all these steps together.
Here are a few things to note before getting started:
Container Registry: Ensure that the container is published to a public container registry that is accessible from the Bacalhau network.
Architecture Compatibility: Bacalhau supports only images that match the host node's architecture. Typically, most nodes run on `linux/amd64`, so containers built only for `arm64` will not run.
Input Flags: The `--input ipfs://...` flag supports only directories and does not support CID subpaths. The `--input https://...` flag supports only single files and does not support URL directories. The `--input s3://...` flag supports S3 keys and prefixes. For example, `s3://bucket/logs-2023-04*` includes all logs for April 2023.
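For illustration, here is a hedged sketch of the three input forms; the CID, URL, bucket, and `ubuntu` command are placeholders rather than values taken from this guide:

```bash
# Each --input source is mounted into the job (at /inputs by default).
bacalhau docker run --input ipfs://QmExampleCID ubuntu ls /inputs
bacalhau docker run --input https://example.com/data.csv ubuntu ls /inputs
bacalhau docker run --input "s3://bucket/logs-2023-04*" ubuntu ls /inputs
```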
You can check out the list of example public containers used by the Bacalhau team.
Note: Only about a third of examples have their containers here. If you can't find one, feel free to contact the team.
To help provide a safe, secure network for all users, we add the following runtime restrictions:
Limited Ingress/Egress Networking:
All ingress/egress networking is limited as described in the networking documentation. You won't be able to pull data/code/weights, etc. from an external source.
Data Passing with Docker Volumes:
A job includes the concept of input and output volumes, and the Docker executor implements support for these. This means you can specify your CIDs, URLs, and/or S3 objects as `input` paths and also write results to an `output` volume. This can be seen in the following example:
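The following is a minimal sketch of such a job, assuming the `-i` and `-o` flag forms described below; the image and command are illustrative:

```bash
# Mount the matching S3 objects as input and collect results from /output_folder
# into a named output volume called "apples".
bacalhau docker run \
  -i "s3://mybucket/logs-2023-04*" \
  -o apples:/output_folder \
  ubuntu \
  -- bash -c 'ls /input > /output_folder/file-list.txt'
```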
The above example demonstrates an input volume flag `-i s3://mybucket/logs-2023-04*`, which mounts all S3 objects in bucket `mybucket` with the `logs-2023-04` prefix within the Docker container at location `/input` (root).
Output volumes are mounted to the Docker container at the location specified. In the example above, any content written to `/output_folder` will be made available within the `apples` folder in the job results CID.
Once the job has run on the executor, the contents of `stdout` and `stderr` will be added to any named output volumes the job has used (in this case `apples`), and all those entities will be packaged into the results folder, which is then published to a remote location by the publisher.
If you need to pass data into your container you will do this through a Docker volume. You'll need to modify your code to read from a local directory.
We make the assumption that you are reading from a directory called `/inputs`, which is set as the default. You can specify which directory the data is mounted to with the `--input` CLI flag.
If you need to return data from your container you will do this through a Docker volume. You'll need to modify your code to write to a local directory.
We make the assumption that you are writing to a directory called `/outputs`, which is set as the default. You can specify which directory the data is written to with the `--output-volumes` CLI flag.
At this step, you create (or update) a Docker image that Bacalhau will use to perform your task. You build your image from your code and dependencies, then push it to a public registry so that Bacalhau can access it. This is necessary for other Bacalhau nodes to run your container and execute the task.
Most Bacalhau nodes are of an `x86_64` architecture, therefore containers should be built for `x86_64` systems.
For example:
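A hedged sketch using Docker Buildx; the image name is a placeholder:

```bash
# Build the image for linux/amd64 and push it to a public registry.
docker buildx build --platform linux/amd64 -t myuser/myjob:latest --push .
```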
To test your docker image locally, you'll need to execute the following command, changing the environment variables as necessary:
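A sketch of what such a local test might look like, assuming the variables and command described next; the image name is a placeholder:

```bash
# Bind the current directory as both the input and output mounts, then run the test command.
export LOCAL_INPUT_DIR=$PWD
export LOCAL_OUTPUT_DIR=$PWD
CMD=(sh -c 'ls /inputs; echo "Test run" > /outputs/stdout')
docker run --rm \
  -v "$LOCAL_INPUT_DIR:/inputs" \
  -v "$LOCAL_OUTPUT_DIR:/outputs" \
  myuser/myjob:latest "${CMD[@]}"
```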
Let's see what each command will be used for:
Exports the current working directory of the host system to the `LOCAL_INPUT_DIR` variable. This variable will be used for binding a volume and transferring data into the container.
Exports the current working directory of the host system to the LOCAL_OUTPUT_DIR variable. Similarly, this variable will be used for binding a volume and transferring data from the container.
Creates an array of commands CMD that will be executed inside the container. In this case, it is a simple command executing 'ls' in the /inputs directory and writing text to the /outputs/stdout file.
Launches a Docker container using the specified variables and commands. It binds volumes to facilitate data exchange between the host and the container.
Bacalhau will use the default ENTRYPOINT if your image contains one. If you need to specify another entrypoint, use the `--entrypoint` flag to `bacalhau docker run`.
For example:
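A hedged example; the image and entrypoint are placeholders:

```bash
# Override the image's default ENTRYPOINT for this job.
bacalhau docker run --entrypoint /bin/sh myuser/myjob:latest -- -c "ls /inputs"
```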
The result of the commands' execution is shown below:
Data is identified by its content identifier (CID) and can be accessed by anyone who knows the CID. You can use either of these methods to upload your data:
You can choose from several data onboarding options. You can mount your data anywhere on your machine, and Bacalhau will be able to run against that data.
To launch your workload in a Docker container, using the specified image and working with `input` data specified via IPFS CID, run the following command.
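A hedged sketch; the CID and image are placeholders:

```bash
# Run the container against data addressed by an IPFS CID (mounted at /inputs by default).
bacalhau docker run --input ipfs://QmExampleCID myuser/myjob:latest
```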
To check the status of your job, get more information about it, and download the results, you can use the `bacalhau list`, `bacalhau describe`, and `bacalhau get` commands respectively, as sketched below.
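A hedged sketch of those three steps; the job ID is illustrative:

```bash
bacalhau list --id-filter=JOB_ID   # check the job status
bacalhau describe JOB_ID           # get detailed information about the job
bacalhau get JOB_ID                # download the job results
```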
Putting this all together, you can submit the job, wait for it to reach the Completed state, and then download the results.
The `--input` flag does not support CID subpaths for `ipfs://` content.
Alternatively, you can run your workload with a publicly accessible http(s) URL, which will download the data temporarily into your public storage:
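For example, with a placeholder URL and image:

```bash
# The URL is fetched into the job's input storage before execution.
bacalhau docker run --input https://example.com/data.csv myuser/myjob:latest
```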
The `--input` flag does not support URL directories.
If you run into a compute error while running your Docker image, it can often be resolved by re-tagging your Docker image and pushing it again.
If you have questions or need support or guidance, please reach out to the Bacalhau team via Slack (#general channel).
In this tutorial, you'll learn how to install and run a job with the Bacalhau client using the Bacalhau CLI or Docker.
The Bacalhau client is a command-line interface (CLI) that allows you to submit jobs to the Bacalhau network. The client is available for Linux, macOS, and Windows. You can also run the Bacalhau client in a Docker container.
By default, you will submit to the Bacalhau public network, but the same CLI can be configured to submit to a private Bacalhau network. For more information, please read Running Bacalhau on a Private Network.
You can install or update the Bacalhau CLI by running the commands below in a terminal. You may need sudo mode or the root password to install the local Bacalhau binary to `/usr/local/bin`:
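The usual one-line installer looks like this; check the Bacalhau documentation for the current URL:

```bash
# Download and run the Bacalhau install script.
curl -sL https://get.bacalhau.org/install.sh | bash
```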
Windows users can download the latest release tarball from GitHub and extract `bacalhau.exe` to any location available in the PATH environment variable.
To run a specific version of Bacalhau using Docker, use the command `docker run -it ghcr.io/bacalhau-project/bacalhau:v1.0.3`, where `v1.0.3` is the version you want to run; note that the `latest` tag will not re-download the image if you have an older version. For more information on running the Docker image, check out the Bacalhau docker image example.
To verify installation and check the version of the client and server, use the `version` command. To run a Bacalhau client command with Docker, prefix it with `docker run ghcr.io/bacalhau-project/bacalhau:latest`.
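For example, either of the following should work:

```bash
# Check versions with the locally installed CLI...
bacalhau version
# ...or through the Docker image.
docker run -it ghcr.io/bacalhau-project/bacalhau:latest version
```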
If you're wondering which server is being used, the Bacalhau Project has a demo network that's shared with the community. This network allows you to familiarize yourself with Bacalhau's capabilities and launch jobs from your computer without maintaining a compute cluster of your own.
To submit a job in Bacalhau, we will use the `bacalhau docker run` command. The command runs a job using the Docker executor on the node. Let's take a quick look at its syntax:
We will use the command to submit a Hello World job that runs an echo program within an Ubuntu container.
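A minimal form of that submission looks like this:

```bash
# Submit a Hello World job using the Docker executor and an Ubuntu image.
bacalhau docker run ubuntu echo Hello World
```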
Let's take a look at the results of the command execution in the terminal:
After the above command is run, the job is submitted to the public network, which processes the job and Bacalhau prints out the related job id:
The `job_id` above is shown in its full form. For convenience, you can use the shortened version, in this case: `9d20bbad`.
While this command is designed to resemble Docker's run command which you may be familiar with, Bacalhau introduces a whole new set of flags to support its computing model.
Let's take a look at the results of the command execution in the terminal:
After having deployed the job, we can now use the CLI to interact with the network. The job was sent to the public demo network, where it was processed, and we can call the following functions. The `job_id` will differ for every submission.
You can check the status of the job using the `bacalhau list` command, adding the `--id-filter` flag and specifying your job id.
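For example, using the shortened job id from above:

```bash
bacalhau list --id-filter=9d20bbad
```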
Let's take a look at the results of the command execution in the terminal:
When it says `Completed`, that means the job is done, and we can get the results.
For a comprehensive list of flags you can pass to the list command, check out the related CLI Reference page.
You can find out more information about your job by using `bacalhau describe`.
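For example:

```bash
bacalhau describe 9d20bbad
```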
Let's take a look at the results of the command execution in the terminal:
This outputs all information about the job, including stdout, stderr, where the job was scheduled, and so on.
You can download your job results directly by using `bacalhau get`.
This results in
In the command below, we create a directory called `myfolder` and download our job output to be stored in that directory.
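A sketch of that command, assuming the `--output-dir` flag:

```bash
mkdir -p myfolder
bacalhau get 9d20bbad --output-dir myfolder
```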
While executing this command, you may encounter warnings regarding receive and send buffer sizes: `failed to sufficiently increase receive buffer size`. These warnings can arise due to limitations in the UDP buffer used by Bacalhau to process tasks. Additional information can be found at https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes.
After the download has finished you should see the following contents in the results directory.
That should print out the string `Hello World`.
Here are a few resources that provide a deeper dive into running jobs with Bacalhau:
How Bacalhau works, Setting up Bacalhau, Examples & Use Cases
If you have questions or need support or guidance, please reach out to the Bacalhau team via Slack (#general channel).
In this tutorial we will go over the components and the architecture of Bacalhau. You will learn how it is built, what components are used, and how you can interact with and use Bacalhau.
Bacalhau is a peer-to-peer network of nodes that enables decentralized communication between computers. The network consists of two types of nodes, which can communicate with each other.
The requester and compute nodes together form a p2p network and use gossiping to discover each other, and to share information about node capabilities, available resources and health status.
Requester Node: responsible for handling user requests, discovering and ranking compute nodes, forwarding jobs to compute nodes, and monitoring the job lifecycle.
Compute Node: responsible for executing jobs and producing results. Different compute nodes can be used for different types of jobs, depending on their capabilities and resources.
To interact with the Bacalhau network, users can use the Bacalhau CLI (command-line interface) to send requests to a requester node in the network. These requests are sent using the JSON format over HTTP, a widely-used protocol for transmitting data over the internet. Bacalhau's architecture involves two main sections which are the core components and interfaces.
The core components are responsible for handling requests and connecting different nodes. The network includes two different components:
The interfaces handle the distribution, execution, storage and publishing of jobs. In the following sections, all the different components are described and their respective protocols are shown.
Bacalhau provides an interface to interact with the server via a REST API. Bacalhau uses 127.0.0.1 as the localhost and 1234 as the port by default.
When a job is submitted to a requester node, it selects compute nodes that are capable and suitable to execute the job, and communicates with them directly. The compute node has a collection of named executors, storage sources, and publishers, and it will choose the most appropriate ones based on the job specifications.
When the Compute node completes the job, it publishes the results to remote storage, such as S3 or IPFS.
The Bacalhau client receives updates on the task execution status and results. A user can access the results and manage tasks through the command line interface.
To get the results of a job, you can run the following command.
One can choose from a wide range of flags, from which a few are shown below.
To describe a specific job, pass its ID to the CLI or API to get back an overview of the job.
If you run more than one job, or you want to find a specific job ID:
To list executions, use the following commands.
How to configure authentication and authorization on your Bacalhau node.
Bacalhau includes a flexible auth system that supports multiple methods of auth that are appropriate for different deployment environments.
With no specific authentication configuration supplied, Bacalhau runs in "anonymous mode" – which allows unidentified users limited control over the system. "Anonymous mode" is only appropriate for testing or evaluation setups.
In anonymous mode, Bacalhau will allow:
Users identified by a self-generated private key to submit any job and cancel their own jobs.
Users not identified by any key to access other read-only endpoints, such as to read job lists, describe jobs, and query node or agent information.
Bacalhau auth is controlled by policies. Configuring the auth system is done by supplying a different policy file.
Restricting API access to only users that have authenticated requires specifying a new authorization policy. You can download a policy that restricts anonymous access and install it by using:
Once the node is restarted, accessing the node APIs will require the user to be authenticated, but by default will still allow users with a self-generated key to authenticate themselves.
Restricting the list of keys that can authenticate to only a known set requires specifying a new authentication policy. You can download a policy that restricts key-based access and install it by using:
Then, modify the `allowed_clients` variable in `challange_ns_no_anon.rego` to include acceptable client IDs, found by running `bacalhau id`.
Once the node is restarted, only keys in the allowed list will be able to access any API.
Users can authenticate using a username and password instead of specifying a private key for access. Again, this requires installation of an appropriate policy on the server.
Passwords are not stored in plaintext and are salted. The downloaded policy expects password hashes and salts generated by `scrypt`. To generate a salted password, the helper script in `pkg/authn/ask/gen_password` can be used:
This will ask for a password and generate a salt and hash to authenticate with it. Add the encoded username, salt and hash into `ask_ns_password.rego`.
In principle, Bacalhau can implement any auth scheme that can be described in a structured way by a policy file.
Bacalhau will pass information pertinent to the current request into every authentication policy query as a field on the `input` variable. The exact information depends on the type of authentication used.
challenge authentication
`challenge` authentication identifies the user by the presence of a private key. The user is asked to sign an input phrase to prove they have the key they are identifying with.
Policies used for `challenge` authentication do not need to actually implement the challenge verification logic, as this is handled by the core code. Instead, they will only be invoked if this verification passes.
Policies for this type will need to implement these rules:
`bacalhau.authn.token`: if the user should be authenticated, an access token they should use in subsequent requests. If the user should not be authenticated, this should be undefined.
They should expect as fields on the `input` variable:
`clientId`: an ID derived from the user's private key that identifies them uniquely
`nodeId`: the ID of the requester node that this user is authenticating with
`signingKey`: the private key (as a JWK) that should be used to sign any access tokens to be returned
The simplest possible policy might therefore be this policy that returns the same opaque token for all users:
ask authentication
`ask` authentication uses credentials supplied manually by the user as identification. For example, an `ask` policy could require a username and password as input and check these against a known list. `ask` policies do all the verification of the supplied credentials.
Policies for this type will need to implement these rules:
`bacalhau.authn.token`: if the user should be authenticated, an access token they should use in subsequent requests. If the user should not be authenticated, this should be undefined.
`bacalhau.authn.schema`: a static JSON schema that should be used to collect information about the user. The `type` of declared fields may be used to pick the input method, and if a field is marked as `writeOnly` then it will be collected in a secure way (e.g. not shown on screen). The `schema` rule does not receive any `input` data.
They should expect as fields on the `input` variable:
`ask`: a map of field names from the JSON schema to strings supplied by the user. The policy should validate these credentials.
`nodeId`: the ID of the requester node that this user is authenticating with
`signingKey`: the private key (as a JWK) that should be used to sign any access tokens to be returned
The simplest possible policy might therefore be one that asks for no data and returns the same opaque token for every user:
Authorization policies do not vary depending on the type of authentication used – Bacalhau uses one authz policy for all API requests.
Authz policies are invoked for every API request. Authz policies should check the validity of any supplied access tokens and issue an authz decision for the requested API endpoint. It is not required that authz policies enforce that an access token is present – they may choose to grant access to unauthorized users.
Policies will need to implement these rules:
`bacalhau.authz.token_valid`: true if the access token in the request is "valid" (but does not necessarily grant access for this request), or false if it is invalid for every request (e.g. because it has expired) and should be discarded.
`bacalhau.authz.allow`: true if the user should be permitted to carry out the input request, false otherwise.
They should expect as fields on the `input` variable for both rules:
`http`: details of the user's HTTP request:
`host`: the hostname used in the HTTP request
`method`: the HTTP method (e.g. `GET`, `POST`)
`path`: the path requested, as an array of path components without slashes
`query`: a map of URL query parameters to their values
`headers`: a map of HTTP header names to arrays representing their values
`body`: a blob of any content submitted as the body
`constraints`: details about the receiving node that should be used to validate any supplied tokens:
`cert`: keys that the input token should have been signed with
`iss`: the name of a node that this node will recognize as the issuer of any signed tokens
`aud`: the name of this node that is receiving the request
Notably, the `constraints` data is appropriate to be passed directly to the Rego `io.jwt.decode_verify` method, which will validate the access token as a JWT against the given constraints.
The simplest possible authz policy might be this one that allows all users to access all endpoints:
When running a node, you can choose which jobs you want to run by using configuration options, environment variables or flags to specify a job selection policy.
If you want more control over making the decision to take on jobs, you can use the `--job-selection-probe-exec` and `--job-selection-probe-http` flags.
These are external programs that are passed the following data structure so that they can make a decision about whether or not to take on a job:
The `exec` probe is a script to run that will be given the job data on `stdin`, and must exit with status code 0 if the job should be run.
The `http` probe is a URL to POST the job data to. The job will be rejected if the HTTP request returns an error status code (e.g. >= 400).
For example, the following response will reject the job:
If the HTTP response is not a JSON blob, the content is ignored and any non-error status code will accept the job.
Different jobs may require different amounts of resources to execute. Some jobs may have specific hardware requirements, such as GPU. This page describes how to specify hardware requirements for your job.
Please bear in mind that each executor is implemented independently and these docs might be slightly out of date. Double check the man page for the executor you are using with `bacalhau [executor] --help`.
The following table describes how to specify hardware requirements for the Docker executor.
| Flag | Default | Description |
|---|---|---|
| `--cpu` | 500m | Job CPU cores (e.g. 500m, 2, 8) |
| `--memory` | 1Gb | Job Memory requirement (e.g. 500Mb, 2Gb, 8Gb). |
| `--gpu` | 0 | Job GPU requirement (e.g. 1). |
When you specify hardware requirements, the job will be offered out to the network to see if there are any nodes that can satisfy the requirements. If there are, the job will be scheduled on the node and the executor will be started.
Bacalhau supports GPU workloads. Learn how to run a job using GPU workloads with the Bacalhau client.
The Bacalhau network must have an executor node with a GPU exposed
Your container must include the CUDA runtime (cudart) and must be compatible with the CUDA version running on the node
Use the following command to see the amount of available resources:
To submit a request for a job that requires more than the standard set of resources, add the `--cpu` and `--memory` flags. For example, for a job that requires 2 CPU cores and 4Gb of RAM, use `--cpu=2 --memory=4Gb`, e.g.:
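A sketch of a full command; the image and command are placeholders:

```bash
bacalhau docker run --cpu=2 --memory=4Gb ubuntu echo "resource-limited job"
```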
To submit a GPU job request, use the `--gpu` flag under the `docker run` command to select the number of GPUs your job requires. For example:
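A single-GPU job might look like the sketch below; the CUDA image tag is illustrative:

```bash
# Request one GPU and print the GPU details from inside the container.
bacalhau docker run --gpu=1 nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
```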
The following limitations currently exist within Bacalhau.
Maximum CPU and memory limits depend on the participants in the network
For GPU:
NVIDIA, Intel or AMD GPUs only
Only the Docker Executor supports GPUs
Bacalhau supports running programs that are compiled to WebAssembly (WASM). With the Bacalhau client, you can upload WASM programs, retrieve data from public storage, read and write data, receive program arguments, and access environment variables.
Supported WebAssembly System Interface (WASI): Bacalhau can run compiled WASM programs that expect the WebAssembly System Interface (WASI) Snapshot 1. Through this interface, WebAssembly programs can access data, environment variables, and program arguments.
Networking Restrictions: All ingress/egress networking is disabled – you won't be able to pull data/code/weights, etc. from an external source. WASM jobs can say what data they need using URLs or CIDs (Content IDentifiers) and can then access the data by reading from the filesystem.
Single-Threading: There is no multi-threading, as WASI does not expose any interface for it.
If your program typically involves reading from and writing to network endpoints, follow these steps to adapt it for Bacalhau:
Replace Network Operations: Instead of making HTTP requests to external servers (e.g., example.com), modify your program to read data from the local filesystem.
Input Data Handling: Specify the input data location in Bacalhau using the `--input` flag when running the job. For instance, if your program used to fetch data from `example.com`, read from the `/inputs` folder locally instead, and provide the URL as input when executing the Bacalhau job, for example with `--input http://example.com`.
Output Handling: Adjust your program to output results to the standard output (`stdout`) or standard error (`stderr`) pipes. Alternatively, you can write results to the filesystem, typically into an output mount. In the case of WASM jobs, a default folder at `/outputs` is available, ensuring that data written there will persist after the job concludes.
By making these adjustments, you can effectively transition your program to operate within the Bacalhau environment, utilizing filesystem operations instead of traditional network interactions.
You can specify additional or different output mounts using the `-o` flag.
You will need to compile your program to WebAssembly that expects WASI. Check the instructions for your compiler to see how to do this.
Data is identified by its content identifier (CID) and can be accessed by anyone who knows the CID. You can use either of these methods to upload your data:
You can mount your data anywhere on your machine, and Bacalhau will be able to run against that data
You can run a WebAssembly program on Bacalhau using the `bacalhau wasm run` command.
Run Locally Compiled Program:
If your program is locally compiled, specify it as an argument. For instance, running the following command will upload and execute the `main.wasm` program:
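A minimal form of that command:

```bash
bacalhau wasm run main.wasm
```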
The program you specify will be uploaded to a Bacalhau storage node and will be publicly available.
Alternative Program Specification:
You can use a Content IDentifier (CID) for a specific WebAssembly program.
Input Data Specification:
Make sure to specify any input data using the `--input` flag.
This ensures the necessary data is available for the program's execution.
You can give the WASM program arguments by specifying them after the program path or CID. If the WASM program is already compiled and located in the current directory, you can run it by adding arguments after the file name:
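For example, with illustrative argument values:

```bash
bacalhau wasm run main.wasm arg-one arg-two
```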
For a specific WebAssembly program, run:
Write your program to use program arguments to specify input and output paths. This makes your program more flexible in handling different configurations of input and output volumes.
For example, instead of hard-coding your program to read from `/inputs/data.txt`, accept a program argument that should contain the path, and then specify the path as an argument to `bacalhau wasm run`:
Your language of choice should contain a standard way of reading program arguments that will work with WASI.
You can also specify environment variables using the `-e` flag.
With that, you have just successfully run a job on Bacalhau!
You can create jobs in the Bacalhau network using the various job types introduced in version 1.2. Each job may need specific variables, resource requirements and data details that are described in the Job Specification.
Prepare data with Bacalhau by , or . Mount data anywhere for Bacalhau to run against. Refer to , , and Source Specifications for data source usage.
Optimize workflows without completely redesigning them. Run arbitrary tasks using Docker containers and WebAssembly images. Follow the Onboarding guides for and workloads.
Explore GPU workload support with Bacalhau. Learn how to run a job using GPU workloads with the Bacalhau client in the GPU Workloads section. Integrate Python applications with Bacalhau using the Bacalhau Python SDK.
For node operation, refer to the section for configuring and running a Bacalhau node. If you prefer an isolated environment, explore the for performing tasks without connecting to the main Bacalhau network.
You should use the Bacalhau client to send a task to the network. The client transmits the job information to the Bacalhau network via established protocols and interfaces. Jobs submitted via the Bacalhau CLI are forwarded to a Bacalhau network node at `bootstrap.production.bacalhau.org` via port `1234` by default. This Bacalhau node will act as the requester node for the duration of the job lifecycle.
You can use the command with to create a job in Bacalhau using JSON and YAML formats.
You can use to submit a new job for execution.
You can use the `bacalhau docker run` command to start a job in a Docker container. Below, you can see an excerpt of the commands:
You can also use the `bacalhau wasm run` command to run a job compiled into the WebAssembly (WASM) format. Below, you can find an excerpt of the commands in the Bacalhau CLI:
The selected compute node receives the job and starts its execution inside a container. The container can use different executors to work with the data and perform the necessary actions. A job can use the Docker executor, the WASM executor, or library storage volumes. Use the Docker Engine Specification to view the parameters for configuring the Docker Engine. If you want tasks to be executed in a WebAssembly environment, pay attention to the WASM Engine Specification.
Bacalhau's seamless integration with IPFS ensures that users have a decentralized option for publishing their task results, enhancing accessibility and resilience while reducing dependence on a single point of failure. View to get the detailed information.
Bacalhau's S3 Publisher provides users with a secure and efficient method to publish task results to any S3-compatible storage service. This publisher supports not just AWS S3, but other S3-compatible services offered by cloud providers like Google Cloud Storage and Azure Blob Storage, as well as open-source options like MinIO. View to get the detailed information.
You can use the command with to get a full description of a job in yaml format.
You can use to retrieve the specification and current status of a particular job.
You can use the command with to list jobs on the network in yaml format.
You can use to retrieve a list of jobs.
You can use the command with to list all executions associated with a job, identified by its ID, in yaml format.
You can use to retrieve all executions for a particular job.
The Bacalhau client provides the user with tools to monitor and manage the execution of jobs. You can get information about status, progress and decide on next steps. View the if you want to know the node's health, capabilities, and deployed Bacalhau version. To get information about the status and characteristics of the nodes in the cluster use .
You can use the command with to cancel a job that was previously submitted and stop it running if it has not yet completed.
You can use to terminate a specific job asynchronously.
You can use the command with to enumerate the historical events related to a job, identified by its ID.
You can use to retrieve historical events for a specific job.
You can use this to retrieve the log output (stdout, and stderr) from a job. If the job is still running it is possible to follow the logs after the previously generated logs are retrieved.
To familiarize yourself with all the commands used in Bacalhau, please view
Policies are written in a language called Rego, also used by Kubernetes. Users who want to write their own policies should get familiar with the Rego language.
A more realistic example that returns a signed JWT is in .
A more realistic example that returns a signed JWT is in .
A more realistic example (which is the Bacalhau "anonymous mode" default) is in .
If the HTTP response is a JSON blob, it should match the and will be used to respond to the bid directly:
For example, Rust users can specify the `wasm32-wasi` target to `rustup` and `cargo` to get programs compiled for WASI WebAssembly. See for more information on this.
See for a workload that leverages WebAssembly support.
If you have questions or need support or guidance, please reach out to the (#general channel).
How to enable GPU support on your Bacalhau node.
Bacalhau supports GPUs out of the box and defaults to allowing execution on all GPUs installed on the node.
Bacalhau makes the assumption that you have installed all the necessary drivers and tools on your node host and have appropriately configured them for use by Docker.
In general for GPUs from any vendor, the Bacalhau client requires:
`nvidia-smi` installed and functional
You can test whether you have a working GPU setup with the following command:
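A hedged check, verifying both the host tool and Docker's GPU access; the CUDA image tag is illustrative:

```bash
# Check the host driver/tool...
nvidia-smi
# ...and that Docker can pass the GPU through to containers.
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
```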
Bacalhau requires AMD drivers to be appropriately installed and access to the `rocm-smi` tool.
You can test whether you have a working GPU setup with the following command, which should print details of your GPUs:
Bacalhau requires appropriate Intel drivers to be installed and access to the `xpu-smi` tool.
You can test whether you have a working GPU setup with the following command, which should print details of your GPUs:
Access to GPUs can be controlled using resource limits. To limit the number of GPUs that can be used per job, set a job resource limit. To limit access to GPUs from all jobs, set a total resource limit.
How to configure your Bacalhau node.
Bacalhau employs the viper and cobra libraries for configuration management. Users can configure their Bacalhau node through a combination of command-line flags, environment variables, and the dedicated configuration file.
Bacalhau manages its configuration, metadata, and internal state within a specialized repository named `.bacalhau`. Serving as the heart of the Bacalhau node, this repository holds the data and settings that determine node behavior. It's located on the filesystem, and by default, Bacalhau initializes this repository at `$HOME/.bacalhau`, where `$HOME` is the home directory of the user running the bacalhau process.
To customize this location, users can:
Set the `BACALHAU_DIR` environment variable to specify their desired path.
Utilize the `--repo` command line flag to specify their desired path.
Upon executing a Bacalhau command for the first time, the system will initialize the `.bacalhau` repository. If such a repository already exists, Bacalhau will seamlessly access its contents.
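For example, either of the following approaches (the path is a placeholder) points Bacalhau at a custom repository location:

```bash
# Via the environment variable...
export BACALHAU_DIR=/data/bacalhau
bacalhau version

# ...or via the --repo flag.
bacalhau --repo /data/bacalhau version
```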
Structure of a Newly Initialized `.bacalhau` Repository
This repository comprises four directories and seven files:
`user_id.pem`: This file houses the Bacalhau node user's cryptographic private key, used for signing requests sent to a Requester Node. Format: PEM.
`repo.version`: Indicates the version of the Bacalhau node's repository. Format: JSON, e.g., `{"Version":1}`.
`libp2p_private_key`: Stores the Bacalhau node's libp2p private key, essential for its network identity. The NodeID of a Bacalhau node is derived from this key. Format: Base64 encoded RSA private key.
`config.yaml`: Contains configuration settings for the Bacalhau node. Format: YAML.
`update.json`: A file containing the date/time when the last version check was made. Format: JSON, e.g., `{"LastCheck":"2024-01-24T11:06:14.631816Z"}`.
`tokens.json`: A file containing the tokens obtained through authenticating with bacalhau clusters.
`QmdGUjsMHEgtAfdtw7U62yPEcAZFtA33tKMsczLToegZtv-compute`: Contains the BoltDB `executions.db` database, which aids the Compute node in state persistence. Additionally, the `jobStats.json` file records the Compute Node's completed jobs tally. Note: The segment `QmdGUjsMHEgtAfdtw7U62yPEcAZFtA33tKMsczLToegZtv` is a unique NodeID for each Bacalhau node, derived from the `libp2p_private_key`.
`QmdGUjsMHEgtAfdtw7U62yPEcAZFtA33tKMsczLToegZtv-requester`: Contains the BoltDB `jobs.db` database for the Requester node's state persistence. Note: NodeID derivation is similar to the Compute directory.
`executor_storages`: Storage for data handled by Bacalhau storage drivers.
`plugins`: Houses binaries that allow the Compute node to execute specific tasks. Note: This feature is currently experimental and isn't active during standard node operations.
Within a `.bacalhau` repository, a `config.yaml` file may be present. This file serves as the configuration source for the bacalhau node and adheres to the YAML format.
Although the `config.yaml` file is optional, its presence allows Bacalhau to load custom configurations; otherwise, Bacalhau is configured with built-in default values, environment variables and command line flags.
Modifications to the `config.yaml` file will not be dynamically loaded by the Bacalhau node. A restart of the node is required for any changes to take effect. Bacalhau determines its configuration based on the following precedence order, with each item superseding the subsequent:
Command-line Flag
Environment Variable
Config File
Defaults
`config.yaml` and Bacalhau Environment Variables
Bacalhau establishes a direct relationship between the value-bearing keys within the `config.yaml` file and corresponding environment variables. For keys that have no further sub-keys, the environment variable name is constructed by capitalizing each segment of the key and joining them with underscores, prefixed with `BACALHAU_`.
For example, a YAML key with the path `Node.IPFS.Connect` translates to the environment variable `BACALHAU_NODE_IPFS_CONNECT` and is represented in a file like:
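A sketch of what that looks like; the multiaddress value is a placeholder:

```bash
# config.yaml entry for Node.IPFS.Connect (value is illustrative)...
cat > "$HOME/.bacalhau/config.yaml" <<'EOF'
Node:
  IPFS:
    Connect: /ip4/127.0.0.1/tcp/5001
EOF

# ...and the equivalent environment variable.
export BACALHAU_NODE_IPFS_CONNECT=/ip4/127.0.0.1/tcp/5001
```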
There is no corresponding environment variable for either `Node` or `Node.IPFS`. Config values may also have other environment variables that set them, for simplicity or to maintain backwards compatibility.
Bacalhau leverages the `BACALHAU_ENVIRONMENT` environment variable to determine the specific environment configuration when initializing a repository. Notably, if a `.bacalhau` repository has already been initialized, the `BACALHAU_ENVIRONMENT` setting will be ignored.
By default, if the `BACALHAU_ENVIRONMENT` variable is not explicitly set by the user, Bacalhau will adopt the `production` environment settings.
Below is a breakdown of the configurations associated with each environment:
1. Production (public network)
Environment Variable: `BACALHAU_ENVIRONMENT=production`
Configurations:
`Node.ClientAPI.Host`: "bootstrap.production.bacalhau.org"
`Node.ClientAPI.Port`: 1234
...other configurations specific to this environment...
2. Staging (staging network)
Environment Variable: `BACALHAU_ENVIRONMENT=staging`
Configurations:
`Node.ClientAPI.Host`: "bootstrap.staging.bacalhau.org"
`Node.ClientAPI.Port`: 1234
...other configurations specific to this environment...
3. Development (development network)
Environment Variable: `BACALHAU_ENVIRONMENT=development`
Configurations:
`Node.ClientAPI.Host`: "bootstrap.development.bacalhau.org"
`Node.ClientAPI.Port`: 1234
...other configurations specific to this environment...
4. Local (private or local networks)
Environment Variable: `BACALHAU_ENVIRONMENT=local`
Configurations:
`Node.ClientAPI.Host`: "0.0.0.0"
`Node.ClientAPI.Port`: 1234
...other configurations specific to this environment...
Note: The above configurations provided for each environment are not exhaustive. Consult the specific environment documentation for a comprehensive list of configurations.
Bacalhau supports the three main 'pillars' of observability – logging, metrics, and tracing. Bacalhau uses the OpenTelemetry Go SDK for metrics and tracing, which can be configured using the standard environment variables. Exporting metrics and traces can be as simple as setting the `OTEL_EXPORTER_OTLP_PROTOCOL` and `OTEL_EXPORTER_OTLP_ENDPOINT` environment variables. Custom code is used for logging, as the OpenTelemetry Go SDK currently doesn't support logging.
Logging in Bacalhau is output in a human-friendly format to stderr at `INFO` level by default, but this can be changed by two environment variables:
`LOG_LEVEL` – can be one of `trace`, `debug`, `error`, `warn` or `fatal` to output more or fewer logging messages as required
`LOG_TYPE` – can be one of the following values:
`default` – output logs to stderr in a human-friendly format
`json` – log messages are output to stdout in JSON format
`combined` – log JSON formatted messages to stdout and human-friendly format to stderr
Log statements should include the relevant trace, span and job ID so they can be tracked back to the work being performed.
Bacalhau produces a number of different metrics, including those around the libp2p resource manager (`rcmgr`), performance of the requester HTTP API and the number of jobs accepted/completed/received.
Traces are produced for all major pieces of work when processing a job, although the naming of some spans is still being worked on. You can find relevant traces covering work on a job by searching for the `jobid` attribute.
Because we use OpenTelemetry, the metrics and traces can easily be forwarded to a variety of different services, such as Honeycomb or Datadog.
To view the data locally, or simply to not use a SaaS offering, you can start up Jaeger and Prometheus by placing these three files into a directory and then running `docker compose start`, while running Bacalhau with the `OTEL_EXPORTER_OTLP_PROTOCOL=grpc` and `OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317` environment variables.
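A sketch of that setup, assuming a local OTLP collector (such as the Jaeger/Prometheus stack mentioned above) is listening on port 4317:

```bash
# Point the OpenTelemetry SDK at a local OTLP gRPC endpoint, then start the node.
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
bacalhau serve
```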
How to configure TLS for the requester node APIs
By default, the requester node APIs used by the Bacalhau CLI are accessible over HTTP, but it is possible to configure them to use Transport Layer Security (TLS) so that they are accessible over HTTPS instead. There are several ways to obtain the necessary certificates and keys, and Bacalhau supports obtaining them via ACME and Certificate Authorities or even self-signing them.
Once configured, you must ensure that instead of using http://IP:PORT you use https://IP:PORT to access the Bacalhau API
Automatic Certificate Management Environment (ACME) is a protocol that allows for automating the deployment of Public Key Infrastructure, and is the protocol used to obtain a free certificate from the Let's Encrypt Certificate Authority.
Using the `--autocert [hostname]` parameter to the CLI (in the `serve` and `devstack` commands), a certificate is obtained automatically from Let's Encrypt. The provided hostname should be a comma-separated list of hostnames, and they should all be publicly resolvable, as Let's Encrypt will attempt to connect to the server to verify ownership (using the ACME HTTP-01 challenge). On the very first request this can take a short time whilst the first certificate is issued, but afterwards certificates are cached in the bacalhau repository.
Alternatively, you may set this option via the environment variable `BACALHAU_AUTO_TLS`. If you are using a configuration file, you can set the value in `Node.ServerAPI.TLS.AutoCert` instead.
As a result of the Lets Encrypt verification step, it is necessary for the server to be able to handle requests on port 443. This typically requires elevated privileges, and rather than obtain these through a privileged account (such as root), you should instead use setcap to grant the executable the right to bind to ports <1024.
A cache of ACME data is held in the config repository, by default `~/.bacalhau/autocert-cache`, and this will be used to manage renewals to avoid rate limits.
Obtaining a TLS certificate from a Certificate Authority (CA) without using the Automated Certificate Management Environment (ACME) protocol involves a manual process that typically requires the following steps:
Choose a Certificate Authority: First, you need to select a trusted Certificate Authority that issues TLS certificates. Popular CAs include DigiCert, GlobalSign, Comodo (now Sectigo), and others. You may also consider whether you want a free or paid certificate, as CAs offer different pricing models.
Generate a Certificate Signing Request (CSR): A CSR is a text file containing information about your organization and the domain for which you need the certificate. You can generate a CSR using various tools or directly on your web server. Typically, this involves providing details such as your organization's name, common name (your domain name), location, and other relevant information.
Submit the CSR: Access your chosen CA's website and locate their certificate issuance or order page. You'll typically find an option to "Submit CSR" or a similar option. Paste the contents of your CSR into the provided text box.
Verify Domain Ownership: The CA will usually require you to verify that you own the domain for which you're requesting the certificate. They may send an email to one of the standard domain-related email addresses (e.g., admin@yourdomain.com, webmaster@yourdomain.com). Follow the instructions in the email to confirm domain ownership.
Complete Additional Verification: Depending on the CA's policies and the type of certificate you're requesting (e.g., Extended Validation or EV certificates), you may need to provide additional documentation to verify your organization's identity. This can include legal documents or phone calls from the CA to confirm your request.
Payment and Processing: If you're obtaining a paid certificate, you'll need to make the payment at this stage. Once the CA has received your payment and completed the verification process, they will issue the TLS certificate.
Once you have obtained your certificates, you will need to put the two files in a location that bacalhau can read them from. You need the server certificate, often called something like `server.cert` or `server.cert.pem`, and the server key, which is often called something like `server.key` or `server.key.pem`.
Once you have these two files available, you must start `bacalhau serve` with two new flags. These are the `tlscert` and `tlskey` flags, whose arguments should point to the relevant file. An example of how it is used is:
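A sketch of that invocation; the certificate and key paths are placeholders:

```bash
bacalhau serve --tlscert=/etc/bacalhau/server.cert --tlskey=/etc/bacalhau/server.key
```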
Alternatively, you may set these options via the environment variables `BACALHAU_TLS_CERT` and `BACALHAU_TLS_KEY`. If you are using a configuration file, you can set the values in `Node.ServerAPI.TLS.ServerCertificate` and `Node.ServerAPI.TLS.ServerKey` instead.
If you wish, it is possible to use Bacalhau with a self-signed certificate which does not rely on an external Certificate Authority. This is an involved process and so is not described in detail here although there is a helpful script in the Bacalhau github repository which should provide a good starting point.
Once you have generated the necessary files, the steps are much like the above: you must start `bacalhau serve` with the same two flags, `tlscert` and `tlskey`, whose arguments should point to the relevant files, as in the example above.
Alternatively, you may set these options via the environment variables `BACALHAU_TLS_CERT` and `BACALHAU_TLS_KEY`. If you are using a configuration file, you can set the values in `Node.ServerAPI.TLS.ServerCertificate` and `Node.ServerAPI.TLS.ServerKey` instead.
If you use self-signed certificates, it is unlikely that any clients will be able to verify the certificate when connecting to the Bacalhau APIs. There are three options available to work around this problem:
Provide a CA certificate file of trusted certificate authorities, which many software libraries support in addition to system authorities.
Install the CA certificate file in the system keychain of each machine that needs access to the Bacalhau APIs.
Instruct the software library you are using not to verify HTTPS requests.
This directory contains instructions on how to set up networking in Bacalhau.
| Config property | Default value | Meaning |
|---|---|---|
| `Node.Compute.JobSelection.Locality` | Anywhere | Only accept jobs that reference data we have locally ("local") or anywhere ("anywhere"). |
| `Node.Compute.JobSelection.ProbeExec` | unused | Use the result of an external program to decide if we should take on the job. |
| `Node.Compute.JobSelection.ProbeHttp` | unused | Use the result of a HTTP POST to decide if we should take on the job. |
| `Node.Compute.JobSelection.RejectStatelessJobs` | False | |
| `Node.Compute.JobSelection.AcceptNetworkedJobs` | False | |
Good news everyone! You can now run your Bacalhau-IPFS stack in Docker.
This page describes several ways in which to operate Bacalhau. You can choose the method that best suits your needs. The methods are:
This guide works best on a Linux machine. If you're trying to run this on a Mac, you may encounter issues. Remember that network host mode doesn't work.
You need to have Docker installed. If you don't have it, you can install it here.
This method is appropriate for those who:
Provide compute resources to the public Bacalhau network
This is not appropriate for:
Testing and development
Running a private network
This will start a local IPFS node and connect it to the public DHT. If you already have an IPFS node running, then you can skip this step.
Some notes about this command:
It wipes the `$(pwd)/ipfs` directory to make sure you have a clean slate
It runs the IPFS container in the specified Docker network
It exposes the IPFS API port to the world on port 4002, to avoid clashes with Bacalhau
It exposes the admin RPC API to the local host only, on port 5001
We are not specifying or removing the bootstrap nodes, so it will default to connecting to public machines
You can now test that the IPFS node is working.
Bacalhau consists of two parts: a "requester" that is responsible for operating the API and managing jobs, and a "compute" element that is responsible for executing jobs. In a public context, you'd typically just run a compute node, and allow the public requesters to handle the traffic.
Notes about the command:
It runs the Bacalhau container in "host" mode. This means that the container will use the same network as the host.
It uses the `root` user, which is the default system user that has access to the Docker socket on a Mac. You may need to change this to suit your environment.
It mounts the Docker Socket
It mounts the `/tmp` directory
It exposes the Bacalhau API ports to the world
The container version should match that of the current release
The IPFS connect string points to the RPC port of the IPFS node in Docker. Because Bacalhau is running in the same network, it can use DNS to find the IPFS container IP. If you're running your own node, replace it
The `--node-type` flag is set to `compute` because we only want to run a compute node
The `--labels` flag is used to set a human-readable label for the node, so that we can run jobs on our machine later
We specify the `--peer env` flag so that it uses the environment specified by `BACALHAU_ENVIRONMENT=production` and therefore connects to the public network peers
There are several ways to ensure that the Bacalhau compute node is connected to the network.
First, check that the Bacalhau libp2p port is open and connected. On Linux you can run `lsof` and it should look something like this:
Note the three established connections at the bottom. These are the production bootstrap nodes that Bacalhau is now connected to.
You can also check that the node is connected by listing the current network peers and grepping for your IP address or node ID. The node ID can be obtained from the Bacalhau logs. It will look something like this:
Finally, submit a job with the label you specified when you ran the compute node. If this label is unique, there should be only one node with this label. The job should succeed. Run the following:
If instead, your job fails with the following error, it means that the compute node is not connected to the network:
This method is insecure. It does not lock down the IPFS node. Anyone connected to your network can access the IPFS node and read/write data. This is not recommended for production use.
This method is appropriate for:
Testing and development
Evaluating the Bacalhau platform before scaling jobs via the public network
This method is useful for testing and development. It's easier to use because it doesn't require a secret IPFS swarm key -- this is essentially an authentication token that allows you to connect to the node.
This method is not appropriate for:
Secure, private use
Production use
To run an insecure, private node, you need to initialize your IPFS configuration by removing all of the default public bootstrap nodes. Then we run the node in the normal way, without the special LIBP2P_FORCE_PNET
flag that checks for a secure private connection.
Some notes about this command:
It wipes the `$(pwd)/ipfs` directory to make sure you have a clean slate
It removes the default bootstrap nodes
It runs the IPFS container in the specified Docker network
It exposes the IPFS API port to the local host only, to prevent accidentally exposing the IPFS node, on 4002, to avoid clashes with Bacalhau
It exposes the admin RPC API to the local host only, on port 5001
You can now test that the IPFS node is working.
Bacalhau consists of two parts: a "requester" that is responsible for operating the API and managing jobs, and a "compute" element that is responsible for executing jobs. In a public context, you'd typically just run a compute node, and allow the public requesters to handle the traffic. But in a private context, you'll want to run both.
Notes about the command:
It runs the Bacalhau container in the specified Docker network
It uses the `root` user, which is the default system user that has access to the Docker socket on a Mac. You may need to change this to suit your environment
It mounts the Docker socket
It mounts the `/tmp` directory and specifies this as the location where Bacalhau will write temporary execution data (`BACALHAU_NODE_COMPUTESTORAGEPATH`)
It exposes the Bacalhau API ports to the local host only, to prevent accidentally exposing the API to the public internet
The container version should match that of the Bacalhau installed on your system
The IPFS connect string points to the RPC port of the IPFS node. Because Bacalhau is running in the same network, it can use DNS to find the IPFS container IP.
The `--node-type` flag is set to `requester,compute` because we want to run both a requester and a compute node
You can now test that Bacalhau is working.
Now it's time to run a job. Recall that you exposed the Bacalhau API on the default ports to the local host only, so you'll need to use the `--api-host` flag to tell Bacalhau where to find the API. Everything else is a standard part of the Bacalhau CLI.
The job should succeed. Run it again but this time capture the job ID to make it easier to retrieve the results.
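A sketch of that flow, assuming the API is listening on the local host's default port:

```bash
# Submit a trivial job and keep only the job ID
JOB_ID=$(bacalhau --api-host 127.0.0.1 docker run --wait --id-only ubuntu echo hello)
echo $JOB_ID
```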
To retrieve the results using the Bacalhau CLI, you need to know the p2p swarm multiaddress of the IPFS node, because you don't want to connect to the public global IPFS network. To do that, you can run the IPFS id command and parse the output for the local TCP address:
Note that the command above changes the reported port from 4001 to 4002. This is because the IPFS node is running on port 4002, but the IPFS id command reports the port as 4001.
Now get the results:
Alternatively, you can use the Docker container, mount the results volume, change the `--api-host` to the name of the Bacalhau container, and change the `--ipfs-swarm-addrs` back to port 4001:
Running a private secure network is useful in a range of scenarios, including:
Running a private network for a private project
You need two things. A private IPFS node to store data and a Bacalhau node to execute over that data. To keep the nodes private you need to tell the nodes to shush and use a secret key. This is a bit harder to use, and a bit more involved than the insecure version.
Private IPFS nodes are experimental. See the IPFS documentation for more information.
First, you need to bootstrap a new IPFS cluster for your own private use. This consists of a process of generating a swarm key, removing any bootstrap nodes, and then starting the IPFS node.
Some notes about this command:
It wipes the `$(pwd)/ipfs` directory to make sure you have a clean slate
It generates a new swarm key -- this is the token that is required to connect to this node
It removes the default bootstrap nodes
It runs the IPFS container in the specified Docker network
It exposes the IPFS API port to the local host only, to prevent accidentally exposing the IPFS node, on 4002, to avoid clashes with Bacalhau
It exposes the admin RPC API to the local host only, on port 5001
The instructions to run a secure private Bacalhau network are the same as the insecure version, please follow those instructions.
The instructions to run a job are the same as the insecure version, please follow those instructions.
The same process as above can be used to retrieve results from the IPFS node as long as the Bacalhau get
command has access to the IPFS swarm key.
Running the Bacalhau binary from outside of Docker:
Alternatively, you can use the Docker container, mount the results volume, and change the --api-host
to the name of the Bacalhau container and the --ipfs-swarm-addrs
back to port 4001:
Without this, inter-container DNS will not work, and internet access may not work either.
Double check that this network can access the internet (so Bacalhau can call external URLs).
This should be successful. If it is not, troubleshoot your Docker networking. For example, on a Mac it has sometimes taken a full uninstall of Docker, a restart, and a reinstall to fix. Also check https://docs.docker.com/desktop/troubleshoot/known-issues/, where Docker notes that "ping from inside a container to the Internet does not work as expected."
You can now browse the IPFS web UI at http://127.0.0.1:5001/webui.
Read more about the IPFS docker image here.
As described in their documentation, never expose the RPC API port (port 5001) to the public internet.
Ensure that the Bacalhau logs (`docker logs bacalhau`) have no errors.
Check that your Bacalhau installation is the same version:
The versions should match. Alternatively, you can use the Docker container:
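For example (substitute the image tag you actually run):

```bash
bacalhau version
docker run --rm ghcr.io/bacalhau-project/bacalhau:latest version
```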
Perform a list command to ensure you can connect to the Bacalhau API.
It should return empty.
If you are retrieving and running images from Docker Hub you may encounter issues with rate-limiting. Docker provides higher limits when you are authenticated; the size of the limit depends on the type of your account.
Should you wish to authenticate with Docker Hub when pulling images, you can do so by specifying credentials as environment variables wherever your compute node is running.
Currently, this authentication is only available for (and only required by) Docker Hub.
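A sketch of exporting the credentials before starting the compute node (the values are placeholders):

```bash
export DOCKER_USERNAME=your-docker-hub-username
export DOCKER_PASSWORD=your-read-only-access-token
bacalhau serve --node-type compute
```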
How to configure compute/requester persistence
Both compute nodes, and requester nodes, maintain state. How that state is maintained is configurable, although the defaults are likely adequate for most use-cases. This page describes how to configure the persistence of compute and requester nodes should the defaults not be suitable.
The compute nodes maintain information about the work that has been allocated to them, including:
The current state of the execution, and
The original job that resulted in this allocation
This information is used by the compute and requester nodes to ensure allocated jobs are completed successfully. By default, compute nodes store their state in a bolt-db database and this is located in the bacalhau repository along with configuration data. For a compute node whose ID is "abc", the database can be found in `~/.bacalhau/abc-compute/executions.db`.
In some cases, it may be preferable to maintain the state in memory, with the caveat that should the node restart, all state will be lost. This can be configured using the environment variables in the table below.
Environment Variable | Flag alternative | Value | Effect |
---|---|---|---|
BACALHAU_COMPUTE_STORE_TYPE | --compute-execution-store-type | boltdb | Uses the bolt db execution store (default) |
BACALHAU_COMPUTE_STORE_PATH | --compute-execution-store-path | A path (inc. filename) | Specifies where the boltdb database should be stored. Default is ~/.bacalhau/{NODE-ID}-compute/executions.db if not set |
When running a requester node, it maintains state about the jobs it has been requested to orchestrate and schedule, the evaluation of those jobs, and the executions that have been allocated. By default, this state is stored in a bolt db database that, with a node ID of "xyz", can be found in `~/.bacalhau/xyz-requester/jobs.db`.
Environment Variable | Flag alternative | Value | Effect |
---|---|---|---|
BACALHAU_JOB_STORE_TYPE | --requester-job-store-type | boltdb | Uses the bolt db job store (default) |
BACALHAU_JOB_STORE_PATH | --requester-job-store-path | A path (inc. filename) | Specifies where the boltdb database should be stored. Default is ~/.bacalhau/{NODE-ID}-requester/jobs.db if not set |
These are the flags that control the capacity of the Bacalhau node, and the limits for jobs that might be run.
The `--limit-total-*` flags control the total system resources you want to give to the network. If left blank, the system will attempt to detect these values automatically.
The `--limit-job-*` flags control the maximum amount of resources a single job can consume for it to be selected for execution.
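For example, a sketch of a node that offers 8 CPUs and 16GB of RAM in total but caps any single job at 2 CPUs and 4GB (check the exact flag suffixes and unit spellings against `bacalhau serve --help` for your version):

```bash
bacalhau serve \
  --limit-total-cpu 8 \
  --limit-total-memory 16Gb \
  --limit-job-cpu 2 \
  --limit-job-memory 4Gb
```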
Resource limits are not supported for Docker jobs running on Windows. Resource limits will be applied at the job bid stage based on reported job requirements but will be silently unenforced. Jobs will be able to access as many resources as requested at runtime.
Bacalhau is a peer-to-peer network of computing providers that will run jobs submitted by users. A Compute Provider (CP) is anyone who is running a Bacalhau compute node participating in the Bacalhau compute network, regardless of whether they are hosting any Filecoin data.
This section will show you how to configure and run a Bacalhau node and start accepting and running jobs.
To bootstrap your node and join the network as a CP you can leap right into the below, or find more setup details in these guides:
If you run on a different system than Ubuntu, drop us a message on ! We'll add instructions for your favorite OS.
Estimated time for completion: 10 min. Tested on: Ubuntu 22.04 LTS (x86/64) running on a GCP e2-standard-4 (4 vCPU, 16 GB memory) instance with 40 GB disk size.
Docker Engine - to take on Docker workloads
Connection to storage provider - for storing job's results
Firewall - to ensure your node can communicate with the rest of the network
Physical hardware, Virtual Machine, or cloud-based host. A Bacalhau compute node is not intended to be run from within a Docker container.
To run docker-based workloads, you should have docker installed and running.
If you already have it installed, you can configure the connection to Docker with the following environment variables:
`DOCKER_HOST` to set the URL to the docker server.
`DOCKER_API_VERSION` to set the version of the API to reach; leave empty for "latest".
`DOCKER_CERT_PATH` to load the TLS certificates from.
`DOCKER_TLS_VERIFY` to enable or disable TLS verification; off by default.
We will need to connect our Bacalhau node to a storage server. For this, we will be using an IPFS server so we can run jobs that consume CIDs as inputs.
You can either install and run it locally or you can connect to a remote IPFS server.
The multiaddress above is just an example - you need to get the multiaddress of the server you want to connect to.
Using the command below:
Install:
Configure:
Now launch the IPFS daemon in a separate terminal (make sure to export the IPFS_PATH
environment variable there as well):
Use the following command to print out a number of addresses.
I pick the record that combines `127.0.0.1` and `tcp`, but I replace port `4001` with `5001`:
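Assuming a local Kubo daemon, something like this prints the addresses to choose from:

```bash
ipfs id
# Pick a record such as /ip4/127.0.0.1/tcp/4001/p2p/<your-peer-id>
# and change 4001 to 5001 before handing it to Bacalhau.
```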
To ensure that our node can communicate with other nodes on the network - we need to make sure the 1235 port is open.
(Optional) To ensure the CLI can communicate with our node directly (i.e. bacalhau --api-host <MY_NODE_PUBLIC_IP> version
) - we need to make sure the 1234 port is open.
Now we can run our bacalhau node:
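A sketch, assuming the IPFS multiaddress fetched above is the local RPC endpoint:

```bash
bacalhau serve \
  --node-type compute \
  --peer env \
  --ipfs-connect /ip4/127.0.0.1/tcp/5001
```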
Alternatively, you can run the following Docker command:
Even though the CLI (by default) submits jobs, each node listens for events on the global network and possibly bids for taking a job: your logs should therefore show the activity of your node bidding for incoming jobs.
To quickly check your node runs properly, let's submit the following dummy job:
If you see logs of your compute node bidding for the job above it means you've successfully joined Bacalhau as a Compute Provider!
Bacalhau can limit the total time a job spends executing. A job that spends too long executing will be cancelled and no results will be published.
By default, a Bacalhau node does not enforce any limit on job execution time. Both node operators and job submitters can supply a maximum execution time limit. If a job submitter asks for a longer execution time than permitted by a node operator, their job will be rejected.
Applying job timeouts allows node operators to more fairly distribute the work submitted to their nodes. It also protects users from transient errors that result in their jobs waiting indefinitely.
Job submitters can pass the `--timeout` flag to any Bacalhau job submission CLI to set a maximum job execution time. The supplied value should be a whole number of seconds with no unit.
The timeout can also be added to an existing job spec by adding the `Timeout` property to the `Spec`.
Node operators can pass the `--max-job-execution-timeout` flag to `bacalhau serve` to configure the maximum job time limit. The supplied value should be a numeric value followed by a time unit (one of `s` for seconds, `m` for minutes or `h` for hours).
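For example (the 300-second job limit and the 30-minute node limit are arbitrary values):

```bash
# Job submitter: cap this job at 300 seconds
bacalhau docker run --timeout 300 ubuntu sleep 600

# Node operator: refuse any job asking for more than 30 minutes
bacalhau serve --max-job-execution-timeout 30m
```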
Node operators can also use configuration properties to configure execution limits.
Compute nodes will use the properties:
Config property | Meaning |
---|
Requester nodes will use the properties:
Config property | Meaning |
---|
Before you join the main Bacalhau network, you can test locally.
To test, you can use the `bacalhau devstack` command, which offers a way to get a 3 node cluster running locally.
By setting `PREDICTABLE_API_PORT=1`, the first node of our 3 node cluster will always listen on port `20000`.
In another window, export the following environment variables so that the Bacalhau client binary connects to our local development cluster:
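A sketch of the environment variables to export (assuming the predictable port configuration above):

```bash
export BACALHAU_API_HOST=127.0.0.1
export BACALHAU_API_PORT=20000
```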
You can now interact with Bacalhau - all jobs are running by the local devstack cluster.
Bacalhau has two ways to make use of external storage providers:
Inputs: storage resources consumed as inputs to jobs
Publishers: storage resources created with the results of jobs
To start, you'll need to connect the Bacalhau node to an IPFS server so that you can run jobs that consume CIDs as inputs.
You can either install IPFS and run it locally, or you can connect to a remote IPFS server.
In both cases, you should have a multiaddress for the IPFS server that should look something like this:
The multiaddress above is just an example - you'll need to get the multiaddress of the IPFS server you want to connect to.
You can then configure your Bacalhau node to use this IPFS server by passing the `--ipfs-connect` argument to the `serve` command:
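For example, assuming a local IPFS RPC endpoint:

```bash
bacalhau serve --ipfs-connect /ip4/127.0.0.1/tcp/5001
```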
Or, set the `Node.IPFS.Connect` property in the Bacalhau configuration file.
The IPFS publisher works using the same setup as above - you'll need to have an IPFS server running and a multiaddress for it. You'll then pass that multiaddress using the `--ipfs-connect` argument to the `serve` command.
If you are publishing to a public IPFS node, you can use `bacalhau get` with no further arguments to download the results. However, you may experience a delay in results becoming available as indexing of new data by public nodes takes time.
To speed up the download or to retrieve results from a private IPFS node, pass the swarm multiaddress to `bacalhau get` to download results.
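A sketch, with placeholder address values:

```bash
bacalhau get $JOB_ID \
  --ipfs-swarm-addrs /ip4/<ipfs-node-ip>/tcp/4001/p2p/<peer-id>
```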
Pass the swarm key to `bacalhau get` if the IPFS swarm is a private swarm.
How to run the WebUI.
The Bacalhau WebUI offers an intuitive interface for interacting with the Bacalhau network. This guide provides comprehensive instructions for setting up, deploying, and utilizing the WebUI.
For contributing to the WebUI's development, please refer to the .
Ensure you have Bacalhau v1.1.7 or later installed.
To launch the WebUI locally, execute the following command:
This command initializes a requester and compute node, configured to listen on `HOST=0.0.0.0` and `PORT=1234`.
Once started, the WebUI is accessible at . This local instance allows you to interact with your local Bacalhau network setup.
N.b. The development version of the WebUI is for observation only and may not reflect the latest changes or features available in the local setup.
Jupyter Notebooks have become an essential tool for data scientists, researchers, and developers for interactive computing and the development of data-driven projects. They provide an efficient way to share code, equations, visualizations, and narrative text with support for multiple programming languages. In this tutorial, we will introduce you to running Jupyter Notebooks on Bacalhau, a powerful and flexible container orchestration platform. By leveraging Bacalhau, you can execute Jupyter Notebooks in a scalable and efficient manner using Docker containers, without the need for manual setup or configuration.
In the following sections, we will explore two examples of executing Jupyter Notebooks on Bacalhau:
Executing a Simple Hello World Notebook: We will begin with a basic example to familiarize you with the process of running a Jupyter Notebook on Bacalhau. We will execute a simple "Hello, World!" notebook to demonstrate the steps required for running a notebook in a containerized environment.
Notebook to Train an MNIST Model: In this section, we will dive into a more advanced example. We will execute a Jupyter Notebook that trains a machine-learning model on the popular MNIST dataset. This will showcase the potential of Bacalhau to handle more complex tasks while providing you with insights into utilizing containerized environments for your data science projects.
To get started, you need to install the Bacalhau client, see more information
There are no external dependencies that we need to install. All dependencies are already there in the container.
`/inputs/hello.ipynb`: This is the path of the input Jupyter Notebook inside the Docker container.
`-i`: This flag stands for "input" and is used to provide the URL of the input Jupyter Notebook you want to execute.
`https://raw.githubusercontent.com/js-ts/hello-notebook/main/hello.ipynb`: This is the URL of the input Jupyter Notebook.
`jsacex/jupyter`: This is the name of the Docker image used for running the Jupyter Notebook. It is a minimal Jupyter Notebook stack based on the official Jupyter Docker Stacks.
`--`: This double dash is used to separate the Bacalhau command options from the command that will be executed inside the Docker container.
`jupyter nbconvert`: This is the primary command used to convert and execute Jupyter Notebooks. It allows for the conversion of notebooks to various formats, including execution.
`--execute`: This flag tells `nbconvert` to execute the notebook and store the results in the output file.
`--to notebook`: This option specifies the output format. In this case, we want to keep the output as a Jupyter Notebook.
`--output /outputs/hello_output.ipynb`: This option specifies the path and filename for the output Jupyter Notebook, which will contain the results of the executed input notebook.
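Putting those pieces together, the full submission looks roughly like this:

```bash
bacalhau docker run \
  -i https://raw.githubusercontent.com/js-ts/hello-notebook/main/hello.ipynb \
  jsacex/jupyter \
  -- jupyter nbconvert --execute --to notebook \
     --output /outputs/hello_output.ipynb /inputs/hello.ipynb
```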
Job status: You can check the status of the job using `bacalhau list`.
When it says `Published` or `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau describe`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory (`results`) and downloaded our job output to be stored in that directory.
After the download has finished you can see the contents in the `results` directory by running the command below:
Install Docker on your local machine.
Sign up for a DockerHub account if you don't already have one.
Step 1: Create a Dockerfile
Create a new file named Dockerfile in your project directory with the following content:
This Dockerfile creates a Docker image based on the official `TensorFlow` GPU-enabled image, sets the working directory to the root, updates the package list, and copies an IPython notebook (`mnist.ipynb`) and a `requirements.txt` file. It then upgrades `pip` and installs Python packages from the `requirements.txt` file, along with `scikit-learn`. The resulting image provides an environment ready for running the `mnist.ipynb` notebook with `TensorFlow` and `scikit-learn`, as well as other specified dependencies.
Step 2: Build the Docker Image
In your terminal, navigate to the directory containing the Dockerfile and run the following command to build the Docker image:
Replace "your-dockerhub-username" with your actual DockerHub username. This command will build the Docker image and tag it with your DockerHub username and the name "your-dockerhub-username/jupyter-mnist-tensorflow".
Step 3: Push the Docker Image to DockerHub
Once the build process is complete, push the Docker image to DockerHub using the following command:
Again, replace "your-dockerhub-username" with your actual DockerHub username. This command will push the Docker image to your DockerHub repository.
`--gpu 1`: Flag to specify the number of GPUs to use for the execution. In this case, 1 GPU will be used.
`-i gitlfs://huggingface.co/datasets/VedantPadwal/mnist.git`: The `-i` flag is used to clone the MNIST dataset from Hugging Face's repository using Git LFS. The files will be mounted inside the container.
`jsacex/jupyter-tensorflow-mnist:v02`: The name and the tag of the Docker image.
`--`: This double dash is used to separate the Bacalhau command options from the command that will be executed inside the Docker container.
`jupyter nbconvert --execute --to notebook --output /outputs/mnist_output.ipynb mnist.ipynb`: The command to be executed inside the container. In this case, it runs the `jupyter nbconvert` command to execute the `mnist.ipynb` notebook and save the output as `mnist_output.ipynb` in the `/outputs` directory.
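Assembled from the flags above, the full command is roughly:

```bash
bacalhau docker run \
  --gpu 1 \
  -i gitlfs://huggingface.co/datasets/VedantPadwal/mnist.git \
  jsacex/jupyter-tensorflow-mnist:v02 \
  -- jupyter nbconvert --execute --to notebook \
     --output /outputs/mnist_output.ipynb mnist.ipynb
```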
Job status: You can check the status of the job using `bacalhau list`.
When it says `Published` or `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau describe`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory (`results`) and downloaded our job output to be stored in that directory.
After the download has finished you can see the contents in the `results` directory by running the command below:
The outputs include our trained model and the Jupyter notebook with the output cells.
Bacalhau allows you to easily execute batch jobs via the CLI. But sometimes you need to do more than that. You might need to execute a script that requires user input, or you might need to execute a script that requires a lot of parameters. In any case, you probably want to execute your jobs in a repeatable manner.
This example demonstrates a simple Python script that is able to orchestrate the execution of lots of jobs in a repeatable manner.
To get started, you need to install the Bacalhau client, see more information
To demonstrate this example, I will use the data generated from an Ethereum example. This produced a list of hashes that I will iterate over and execute a job for each one.
Now let's create a file called `bacalhau.py`. The script below automates the submission, monitoring, and retrieval of results for multiple Bacalhau jobs in parallel. It is designed to be used in a scenario where there are multiple hash files, each representing a job, and the script manages the execution of these jobs using Bacalhau commands.
This code has a few interesting features:
Change the value in the `main` call (`main("hashes.txt", 10)`) to change the number of jobs to execute.
Because all jobs complete at different times, there's a loop to check that all jobs have been completed before downloading the results. If you don't do this, you'll likely see an error when trying to download the results. The `while True` loop is used to monitor the status of jobs and wait for them to complete.
When downloading the results, the IPFS get often times out, so it is wrapped in a retry loop. The `for i in range(0, 5)` loop in the `getResultsFromJob` function retries the `bacalhau get` operation if it fails to complete successfully.
Let's run it!
Hopefully, the `results` directory contains all the combined results from the jobs we just executed. Here we're expecting to see CSV files:
Success! We've now executed a bunch of jobs in parallel using Python. This is a great way to execute lots of jobs in a repeatable manner. You can alter the file above for your purposes.
You might also be interested in the following examples:
How to use the Bacalhau Docker image
This documentation explains how to use the Bacalhau Docker image to run tasks and manage them using the Bacalhau client.
To get started, you need to install the Bacalhau client (see more information ) and Docker.
The first step is to pull the Bacalhau Docker image from the .
You can also pull a specific version of the image, e.g.:
:::warning Remember that the "latest" tag is just a string. It doesn't refer to the latest version of the Bacalhau client; it refers to an image that has the "latest" tag. Therefore, if your machine has already downloaded the "latest" image, it won't download it again. To force a download, you can use the `--no-cache` flag. :::
To check the version of the Bacalhau client, run:
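For example:

```bash
docker run --rm ghcr.io/bacalhau-project/bacalhau:latest version
```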
In the example below, an Ubuntu-based job runs to print the message 'Hello from Docker Bacalhau':
`ghcr.io/bacalhau-project/bacalhau:latest`: Name of the Bacalhau Docker image
`--id-only`: Output only the job id
`--wait`: Wait for the job to finish
`ubuntu:latest`: Ubuntu container
`--`: Separate Bacalhau parameters from the command to be executed inside the container
`sh -c 'uname -a && echo "Hello from Docker Bacalhau!"'`: The command executed inside the container
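Putting those pieces together, the command looks roughly like this:

```bash
docker run -t ghcr.io/bacalhau-project/bacalhau:latest \
  docker run \
    --id-only \
    --wait \
    ubuntu:latest \
    -- sh -c 'uname -a && echo "Hello from Docker Bacalhau!"'
```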
Let's have a look at the command execution in the terminal:
The output you're seeing is in two parts. The first line, `13:53:46.478 | INF pkg/repo/fs.go:81 > Initializing repo at '/root/.bacalhau' for environment 'production'`, is an informational message indicating the initialization of a repository at the specified directory (`/root/.bacalhau`) for the `production` environment. The second line, `ab95a5cc-e6b7-40f1-957d-596b02251a66`, is a job ID, which represents the result of executing a command inside a Docker container. It can be used to obtain additional information about the executed job or to access the job's results. We store that in an environment variable so that we can reuse it later on (env: `JOB_ID=ab95a5cc-e6b7-40f1-957d-596b02251a66`).
To print out the content of the Job ID, run the following command:
One inconvenience that you'll see is that you'll need to mount directories into the container to access files. This is because the container is running in a separate environment from your host machine. Let's take a look at the example below:
The first part of the example should look familiar, except for the Docker commands.
When a job is submitted, Bacalhau prints out the related `job_id` (`a46a9aa9-63ef-486a-a2f8-6457d7bafd2e`):
Job status: You can check the status of the job using `bacalhau list`.
When it says `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau describe`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in the `result` directory.
After the download has finished you should see the following contents in the results directory.
Running a Windows-based node is not officially supported, so your mileage may vary. Some features (like ) are not present in Windows-based nodes.
Bacalhau currently makes the assumption that all containers are Linux-based. Users of the Docker executor will need to manually ensure that their Docker engine is running and configured to support Linux containers, e.g. using the WSL-based backend.
How to use docker containers with Bacalhau
Bacalhau executes jobs by running them within containers. Bacalhau employs a syntax closely resembling Docker, allowing you to utilize the same containers. The key distinction lies in how input and output data are transmitted to the container via IPFS, enabling scalability on a global level.
This section describes how to migrate a workload based on a Docker container into a format that will work with the Bacalhau client.
You can check out this example tutorial on to see how we used all these steps together.
Here are few things to note before getting started:
Container Registry: Ensure that the container is published to a public container registry that is accessible from the Bacalhau network.
Architecture Compatibility: Bacalhau supports only images that match the host node's architecture. Typically, most nodes run on `linux/amd64`, so containers in `arm64` format are not able to run.
Input Flags: The `--input ipfs://...` flag supports only directories and does not support CID subpaths. The `--input https://...` flag supports only single files and does not support URL directories. The `--input s3://...` flag supports S3 keys and prefixes. For example, `s3://bucket/logs-2023-04*` includes all logs for April 2023.
You can check to see a used by the Bacalhau team
To help provide a safe, secure network for all users, we add the following runtime restrictions:
Limited Ingress/Egress Networking: All ingress/egress networking is limited as described in the documentation. You won't be able to pull data/code/weights/etc. from an external source.
Data Passing with Docker Volumes: A job includes the concept of input and output volumes, and the Docker executor implements support for these. This means you can specify your CIDs, URLs, and/or S3 objects as `input` paths and also write results to an `output` volume. This can be seen in the following example:
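A sketch of such a job (the `ubuntu` image and the `ls` command are placeholders; `-o` names an output volume and the path it is mounted at):

```bash
bacalhau docker run \
  -i "s3://mybucket/logs-2023-04*:/input" \
  -o apples:/output_folder \
  ubuntu \
  -- bash -c 'ls /input > /output_folder/file_list.txt'
```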
The above example demonstrates an input volume flag `-i s3://mybucket/logs-2023-04*`, which mounts all S3 objects in bucket `mybucket` with the `logs-2023-04` prefix within the docker container at location `/input` (root).
Output volumes are mounted to the Docker container at the location specified. In the example above, any content written to `/output_folder` will be made available within the `apples` folder in the job results CID.
Once the job has run on the executor, the contents of `stdout` and `stderr` will be added to any named output volumes the job has used (in this case `apples`), and all those entities will be packaged into the results folder, which is then published to a remote location by the publisher.
If you need to pass data into your container you will do this through a Docker volume. You'll need to modify your code to read from a local directory.
We make the assumption that you are reading from a directory called `/inputs`, which is set as the default.
If you need to return data from your container you will do this through a Docker volume. You'll need to modify your code to write to a local directory.
We make the assumption that you are writing to a directory called `/outputs`, which is set as the default.
For example:
To test your docker image locally, you'll need to execute the following command, changing the environment variables as necessary:
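A sketch of that local test, where the image name, command, and directories are all placeholders for your own values:

```bash
export LOCAL_INPUT_DIR=$PWD/inputs
export LOCAL_OUTPUT_DIR=$PWD/outputs
export IMAGE=your-registry/your-image:latest
export CMD="python main.py"

docker run --rm \
  -v "$LOCAL_INPUT_DIR:/inputs" \
  -v "$LOCAL_OUTPUT_DIR:/outputs" \
  $IMAGE $CMD
```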
Let's see what each command will be used for:
For example:
The result of the commands' execution is shown below:
Data is identified by its content identifier (CID) and can be accessed by anyone who knows the CID. You can use either of these methods to upload your data:
You can mount your data anywhere on your machine, and Bacalhau will be able to run against that data
To launch your workload in a Docker container, using the specified image and working with `input` data specified via IPFS CID, run the following command:
To check the status of your job, run the following command:
To get more information on your job, run:
To download your job, run:
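For reference, assuming the job ID is stored in the `JOB_ID` environment variable, the corresponding commands are:

```bash
bacalhau list                # check the status of your jobs
bacalhau describe $JOB_ID    # get more information on a job
bacalhau get $JOB_ID         # download the job results
```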
For example, running:
outputs:
The --input
flag does not support CID subpaths for ipfs://
content.
Alternatively, you can run your workload with a publicly accessible http(s) URL, which will download the data temporarily into your public storage:
The --input
flag does not support URL directories.
If you run into this compute error while running your docker image
This can often be resolved by re-tagging your docker image
Bacalhau supports running programs that are compiled to . With the Bacalhau client, you can upload WASM programs, retrieve data from public storage, read and write data, receive program arguments, and access environment variables.
Supported WebAssembly System Interface (WASI) Bacalhau can run compiled WASM programs that expect the WebAssembly System Interface (WASI) Snapshot 1. Through this interface, WebAssembly programs can access data, environment variables, and program arguments.
Networking Restrictions: All ingress/egress networking is disabled; you won't be able to pull data/code/weights etc. from an external source. WASM jobs can say what data they need using URLs or CIDs (Content IDentifiers) and can then access the data by reading from the filesystem.
Single-Threading There is no multi-threading as WASI does not expose any interface for it.
If your program typically involves reading from and writing to network endpoints, follow these steps to adapt it for Bacalhau:
Replace Network Operations: Instead of making HTTP requests to external servers (e.g., example.com), modify your program to read data from the local filesystem.
Input Data Handling: Specify the input data location in Bacalhau using the `--input` flag when running the job. For instance, if your program used to fetch data from `example.com`, read from the `/inputs` folder locally, and provide the URL as input when executing the Bacalhau job. For example, `--input http://example.com`.
Output Handling: Adjust your program to output results to standard output (`stdout`) or standard error (`stderr`) pipes. Alternatively, you can write results to the filesystem, typically into an output mount. In the case of WASM jobs, a default folder at `/outputs` is available, ensuring that data written there will persist after the job concludes.
By making these adjustments, you can effectively transition your program to operate within the Bacalhau environment, utilizing filesystem operations instead of traditional network interactions.
You can specify additional or different output mounts using the -o
flag.
You will need to compile your program to WebAssembly that expects WASI. Check the instructions for your compiler to see how to do this.
Data is identified by its content identifier (CID) and can be accessed by anyone who knows the CID. You can use either of these methods to upload your data:
You can mount your data anywhere on your machine, and Bacalhau will be able to run against that data
You can run a WebAssembly program on Bacalhau using the bacalhau wasm run
command.
Run Locally Compiled Program:
If your program is locally compiled, specify it as an argument. For instance, running the following command will upload and execute the `main.wasm` program:
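For example:

```bash
bacalhau wasm run main.wasm
```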
The program you specify will be uploaded to a Bacalhau storage node and will be publicly available.
Alternative Program Specification:
You can use a Content IDentifier (CID) for a specific WebAssembly program.
Input Data Specification:
Make sure to specify any input data using --input
flag.
This ensures the necessary data is available for the program's execution.
You can give the WASM program arguments by specifying them after the program path or CID. If the WASM program is already compiled and located in the current directory, you can run it by adding arguments after the file name:
For a specific WebAssembly program, run:
Write your program to use program arguments to specify input and output paths. This makes your program more flexible in handling different configurations of input and output volumes.
For example, instead of hard-coding your program to read from /inputs/data.txt
, accept a program argument that should contain the path and then specify the path as an argument to bacalhau wasm run
:
Your language of choice should contain a standard way of reading program arguments that will work with WASI.
You can also specify environment variables using the -e
flag.
This example will walk you through building Time Series Forecasting using Prophet. Prophet is a forecasting procedure implemented in R and Python. It is fast and provides completely automated forecasts that can be tuned by hand by data scientists and analysts.
:::info Quick script to run custom R container on Bacalhau:
:::
To get started, you need to install the Bacalhau client, see more information
Open R studio or R-supported IDE. If you want to run this on a notebook server, then make sure you use an R kernel. Prophet is a CRAN package, so you can use install.packages to install the prophet package:
After installation is finished, you can download the example data that is stored in IPFS:
The code below instantiates the library and fits a model to the data.
Create a new file called `Saturating-Forecasts.R` and paste the following script into it:
This script performs time series forecasting using the Prophet library in R, taking input data from a CSV file, applying the forecasting model, and generating plots for analysis.
Let's have a look at the command below:
This command uses Rscript to execute the script that was created and written to the `Saturating-Forecasts.R` file.
The input parameters provided in this case are the names of input and output files:
`example_wp_log_R.csv` - the example data that was previously downloaded.
`outputs/output0.pdf` - the name of the file to save the first forecast plot.
`outputs/output1.pdf` - the name of the file to save the second forecast plot.
To build your own docker container, create a `Dockerfile`, which contains instructions to build your image.
These commands specify how the image will be built, and what extra requirements will be included. We use `r-base` as the base image and then install the `prophet` package. We then copy the `Saturating-Forecasts.R` script into the container and set the working directory to the `R` folder.
We will run the `docker build` command to build the container:
Before running the command replace:
`repo-name` with the name of the container; you can name it anything you want
`tag` - this is not required, but you can use the `latest` tag
In our case:
Next, upload the image to the registry. This can be done by using the Docker hub username, repo name, or tag.
In our case:
The following command runs the forecasting script on Bacalhau and generates the results in the outputs directory. It takes approximately 2 minutes to run.
`bacalhau docker run`: call to Bacalhau
`-i ipfs://QmY8BAftd48wWRYDf5XnZGkhwqgjpzjyUG3hN1se6SYaFt:/example_wp_log_R.csv`: mounts the uploaded dataset in the execution. It takes two arguments: the first is the IPFS CID (`QmY8BAftd48wWRYDf5XnZGkhwqgjpzjyUG3hN1se6SYaFt`) and the second is the path the file is mounted at (`/example_wp_log_R.csv`)
`ghcr.io/bacalhau-project/examples/r-prophet:0.0.2`: the name and the tag of the docker image we are using
`/example_wp_log_R.csv`: path to the input dataset
`/outputs/output0.pdf`, `/outputs/output1.pdf`: paths to the output
`Rscript Saturating-Forecasts.R`: execute the R script
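Assembled from the pieces above, the full command is roughly:

```bash
bacalhau docker run \
  -i ipfs://QmY8BAftd48wWRYDf5XnZGkhwqgjpzjyUG3hN1se6SYaFt:/example_wp_log_R.csv \
  ghcr.io/bacalhau-project/examples/r-prophet:0.0.2 \
  -- Rscript Saturating-Forecasts.R "/example_wp_log_R.csv" "/outputs/output0.pdf" "/outputs/output1.pdf"
```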
When a job is submitted, Bacalhau prints out the related job_id
. We store that in an environment variable so that we can reuse it later on:
Job status: You can check the status of the job using `bacalhau list`.
When it says `Published` or `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau describe`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory (`results`) and downloaded our job output to be stored in that directory.
To view the file, run the following command:
You can't natively display PDFs in notebooks, so here are some static images of the PDFs:
output0.pdf
output1.pdf
Bacalhau operates by executing jobs within containers. This example shows you how to build and use a custom docker container.
To get started, you need to install the Bacalhau client, see more information
This example requires Docker. If you don't have Docker installed, you can install it from . Docker commands will not work on hosted notebooks like Google Colab, but the Bacalhau commands will.
You're likely familiar with executing Docker commands to start a container:
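For example:

```bash
docker run docker/whalesay cowsay sup old fashioned container run
```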
This command runs a container from the `docker/whalesay` image. The container executes the `cowsay sup old fashioned container run` command:
This command also runs a container from the `docker/whalesay` image, using Bacalhau. We use the `bacalhau docker run` command to start a job in a Docker container. It contains additional flags such as `--wait` to wait for job completion and `--id-only` to return only the job identifier. Inside the container, the `bash -c 'cowsay hello web3 uber-run'` command is executed.
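The equivalent Bacalhau submission looks roughly like this:

```bash
bacalhau docker run \
  --wait \
  --id-only \
  docker/whalesay \
  -- bash -c 'cowsay hello web3 uber-run'
```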
When a job is submitted, Bacalhau prints out the related job_id
(7e41b9b9-a9e2-4866-9fce-17020d8ec9e0
):
We store that in an environment variable so that we can reuse it later on.
You can download your job results directly by using bacalhau get
. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory (results
) and downloaded our job output to be stored in that directory.
Viewing your job output
Both commands execute cowsay in the docker/whalesay
container, but Bacalhau provides additional features for working with jobs at scale.
Bacalhau uses a syntax that is similar to Docker, and you can use the same containers. The main difference is that input and output data is passed to the container via IPFS, to enable planetary scale. In the example above, it doesn't make too much difference except that we need to download the stdout.
The `--wait` flag tells Bacalhau to wait for the job to finish before returning. This is useful in interactive sessions like this, but you would normally allow jobs to complete in the background and use the `bacalhau list` command to check on their status.
Another difference is that by default Bacalhau overwrites the default entry point for the container, so you have to pass all shell commands as arguments to the `run` command after the `--` flag.
To use your own custom container, you must publish the container to a container registry that is accessible from the Bacalhau network. At this time, only public container registries are supported.
To demonstrate this, you will develop and build a simple custom container that comes from an old Docker example. I remember seeing cowsay at a Docker conference about a decade ago, and I think it's about time we brought it back to life and distributed it across the Bacalhau network.
Next, the Dockerfile adds the script and sets the entry point.
Now let's build and test the container locally.
Once your container is working as expected, you should push it to a public container registry. In this example, I'm pushing to GitHub's container registry, but we'll skip the step below because you probably don't have permission. Remember that the Bacalhau nodes expect your container to have a `linux/amd64` architecture.
Now we're ready to submit a Bacalhau job using your custom container. This code runs a job, downloads the results, and prints the stdout.
:::tip The bacalhau docker run
command strips the default entry point, so don't forget to run your entry point in the command line arguments. :::
When a job is submitted, Bacalhau prints out the related job_id
. We store that in an environment variable so that we can reuse it later on.
Download your job results directly by using bacalhau get
command.
View your job output
Bacalhau, a powerful and versatile data processing platform, has recently integrated Amazon Web Services (AWS) S3, allowing users to seamlessly access and process data stored in S3 buckets within their Bacalhau jobs. This integration not only simplifies data input, output, and processing operations but also streamlines the overall workflow by enabling users to store and manage their data effectively in S3 buckets. With Bacalhau, you can process several large S3 buckets in parallel. In this example, we will walk you through the process of reading data from multiple S3 buckets and converting TIFF images to JPEG format.
There are several advantages to converting images from TIFF to JPEG format:
Reduced File Size: JPEG images use lossy compression, which significantly reduces file size compared to lossless formats like TIFF. Smaller file sizes lead to faster upload and download times, as well as reduced storage requirements.
Efficient Processing: With smaller file sizes, image processing tasks tend to be more efficient and require less computational resources when working with JPEG images compared to TIFF images.
Training Machine Learning Models: Smaller file sizes and reduced computational requirements make JPEG images more suitable for training machine learning models, particularly when dealing with large datasets, as they can help speed up the training process and reduce the need for extensive computational resources.
We will use the S3 mount feature to mount bucket objects from s3 buckets. Let’s have a look at the example below:
`-i src=s3://sentinel-s1-rtc-indigo/tiles/RTC/1/IW/10/S/DH/2017/S1A_20170125_10SDH_ASC/Gamma0_VH.tif,dst=/sentinel-s1-rtc-indigo/,opt=region=us-west-2` defines an S3 object as input to the job:
`sentinel-s1-rtc-indigo`: the bucket's name
`tiles/RTC/1/IW/10/S/DH/2017/S1A_20170125_10SDH_ASC/Gamma0_VH.tif`: the key of the object in that bucket. The object to be processed is called `Gamma0_VH.tif` and is located in the subdirectory with the specified path. If you want to specify all the objects located under that path, you can simply add `*` to the end of the path (`tiles/RTC/1/IW/10/S/DH/2017/S1A_20170125_10SDH_ASC/*`)
`dst=/sentinel-s1-rtc-indigo`: the destination to which to mount the S3 bucket object
`opt=region=us-west-2`: specifies the region in which the bucket is located
In the example below, we will mount several bucket objects from public s3 buckets located in a specific region:
The job has been submitted and Bacalhau has printed out the related job_id
. We store that in an environment variable so that we can reuse it later on:
Job status: You can check the status of the job using `bacalhau list`.
When it says `Published` or `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau describe`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory (`results`) and downloaded our job output to be stored in that directory.
To view the images, we will use glob to return all file paths that match a specific pattern.
The code processes and displays all images in the specified directory by applying cropping and resizing with a specified reduction factor.
Bacalhau supports running jobs as a WebAssembly (WASM) program. This example demonstrates how to compile a project into WebAssembly and run the program on Bacalhau.
To get started, you need to install the Bacalhau client, see more information .
A working Rust installation with the `wasm32-wasi` target. For example, you can use `rustup` to install Rust and configure it to build WASM targets. For those using the notebook, these are installed in hidden cells below.
We can use `cargo` (which will have been installed by `rustup`) to start a new project (`my-program`) and compile it:
We can then write a Rust program. Rust programs that run on Bacalhau can read and write files, access a simple clock, and make use of pseudo-random numbers. They cannot memory-map files or run code on multiple threads.
The program below will use the Rust imageproc
crate to resize an image through seam carving, based on .
In the main function main()
an image is loaded, the original is saved, and then a loop is performed to reduce the width of the image by removing "seams." The results of the process are saved, including the original image with drawn seams and a gradient image with highlighted seams.
We also need to install the imageproc
and image
libraries and switch off the default features to make sure that multi-threading is disabled (default-features = false
). After disabling the default features, you need to explicitly specify only the features that you need:
We can now build the Rust program into a WASM blob using cargo
:
This command navigates to the my-program
directory and builds the project using Cargo with the target set to wasm32-wasi
in release mode.
This will generate a WASM file at ./my-program/target/wasm32-wasi/release/my-program.wasm
which can now be run on Bacalhau.
Now that we have a WASM binary, we can upload it to IPFS and use it as input to a Bacalhau job.
The -i
flag allows specifying a URI to be mounted as a named volume in the job, which can be an IPFS CID, HTTP URL, or S3 object.
For this example, we are using an image of the Statue of Liberty that has been pinned to a storage facility.
`bacalhau wasm run`: call to Bacalhau
`./my-program/target/wasm32-wasi/release/my-program.wasm`: the path to the WASM file that will be executed
`_start`: the entry point of the WASM program, where its execution begins
`--id-only`: this flag indicates that only the identifier of the executed job should be returned
`-i ipfs://bafybeifdpl6dw7atz6uealwjdklolvxrocavceorhb3eoq6y53cbtitbeu:/inputs`: input data volume that will be accessible within the job at the specified destination path
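Putting those pieces together, the submission is roughly:

```bash
bacalhau wasm run \
  ./my-program/target/wasm32-wasi/release/my-program.wasm _start \
  --id-only \
  -i ipfs://bafybeifdpl6dw7atz6uealwjdklolvxrocavceorhb3eoq6y53cbtitbeu:/inputs
```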
When a job is submitted, Bacalhau prints out the related job_id. We store that in an environment variable so that we can reuse it later on:
You can download your job results directly by using bacalhau get
. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory (wasm_results
) and downloaded our job output to be stored in that directory.
We can now get the results.
When we view the files, we can see the original image, the resulting shrunk image, and the seams that were removed.
Reject jobs that don't specify any .
Accept jobs that require .
If you do not have Docker on your system, you can follow the official installation instructions or just use the snippet below:
Now make :
In both cases we should have a multiaddress for our IPFS server that should look something like this:
To install a single IPFS node locally on Ubuntu you can follow the , or follow the steps below. We advise running the same IPFS version as the Bacalhau main network.
If you want to run the IPFS daemon as a service, here's an example .
Don't forget we need to fetch an IPFS multiaddress pointing to our local node.
Firewall configuration is very specific to your network and we can't provide generic instructions for this step but if you need any help feel free to reach out on
to run bacalhau serve
.
If you want to run Bacalhau as a service, here's an example .
These commands join this node to the public Bacalhau network, congrats!
At this point, you probably have a number of questions for us. What incentive should you expect for running a public Bacalhau node? Please contact us on to further discuss this topic and for sharing your valuable feedback.
For observational purposes, a development version of the WebUI is available at . This instance displays jobs from the development server.
To get started, you need to install the Bacalhau client, see more information
If you have questions or need support or guidance, please reach out to the (#general channel).
If you have questions or need support or guidance, please reach out to the (#general channel).
If you have questions or need support or guidance, please reach out to the (#general channel).
You can specify which directory the data is written to with the CLI flag.
You can specify which directory the data is written to with the CLI flag.
At this step, you create (or update) a Docker image that Bacalhau will use to perform your task. You build an image from your code and dependencies, then push it to a public registry so that Bacalhau can access it. This is necessary for other Bacalhau nodes to run your container and execute the given task.
Most Bacalhau nodes are of an x86_64 architecture, therefore containers should be built for `linux/amd64`.
Bacalhau will use the image's default entrypoint if your image contains one. If you need to specify another entrypoint, use the `--entrypoint` flag to `bacalhau docker run`.
If you have questions or need support or guidance, please reach out to the (#general channel)
For example, Rust users can specify the wasm32-wasi
target to rustup
and cargo
to get programs compiled for WASI WebAssembly. See for more information on this.
See for a workload that leverages WebAssembly support.
If you have questions or need support or guidance, please reach out to the (#general channel)
To use Bacalhau, you need to package your code in an appropriate format. The developers have already pushed a container for you to use, but if you want to build your own, you can follow the steps below. You can view a in the documentation.
`hub-user` with your docker hub username. If you don't have a docker hub account, create one and use the username of the account you created
If you have questions or need support or guidance, please reach out to the (#general channel).
If you have questions or need support or guidance, please reach out to the (#general channel).
To get started, you need to install the Bacalhau client, see more information
If you have questions or need support or guidance, please reach out to the (#general channel).
If you have questions or need support or guidance, please reach out to the (#general channel).
Environment variable | Description |
---|---|
DOCKER_USERNAME | The username with which you are registered at https://hub.docker.com/ |
DOCKER_PASSWORD | A read-only access token, generated from the page at https://hub.docker.com/settings/security |
| The minimum acceptable value for a job timeout. A job will only be accepted if it is submitted with a timeout of longer than this value. |
| The maximum acceptable value for a job timeout. A job will only be accepted if it is submitted with a timeout of shorter than this value. |
| The job timeout that will be applied to jobs that are submitted without a timeout value. |
| If a job is submitted with a timeout less than this value, the default job execution timeout will be used instead. |
| The timeout to use in the job if a timeout is missing or too small. |
To upload a file from a URL we will use the bacalhau docker run
command.
The job has been submitted and Bacalhau has printed out the related job id. We store that in an environment variable so that we can reuse it later on.
Let's look closely at the command above:
`bacalhau docker run`: call to bacalhau
`ghcr.io/bacalhau-project/examples/upload:v1`: the name and the tag of the docker image we are using
`--input=https://raw.githubusercontent.com/filecoin-project/bacalhau/main/README.md`: URL path of the input data volume downloaded from a URL source.
The `bacalhau docker run` command takes advantage of the `--input` parameter. This will download a file from a public URL and place it in the `/inputs` directory of the container (by default). Then we will use a helper container to move that data to the `/outputs` directory so that it is published to your public storage via IPFS. In our case we are using Filecoin as our public storage.
You can find out more about the helper container in the examples repository.
Job status: You can check the status of the job using `bacalhau list`.
When it says `Published` or `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau describe`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
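Roughly, those three steps look like this (the `--id-filter` and `--output-dir` flags are assumptions about this CLI version):

```bash
bacalhau list --id-filter ${JOB_ID}                              # check status
bacalhau describe ${JOB_ID}                                      # inspect details
mkdir -p results && bacalhau get ${JOB_ID} --output-dir results  # fetch outputs
```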
Each job creates 3 subfolders: the combined_results, per_shard files, and the raw directory. To view the file, run the following command:
To get the output CID from a completed job, run the following command:
The job will upload the CID to your public storage via IPFS. We store that CID in an environment variable so that we can reuse it later on.
Now that we have the CID, we can use it in a new job. This time we will use the `--input` parameter to tell Bacalhau to use the CID we just uploaded.
In this case, our "job" is just to list the contents of the `/inputs` directory. You can see that the "input" data is located under `/inputs/outputs/README.md`.
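A sketch of such a follow-up job (the `ubuntu` image is an arbitrary choice for running `ls`):

```bash
# Mount the previously published CID and list what was mounted
bacalhau docker run --input ipfs://${CID} ubuntu:latest -- ls -l /inputs/outputs/
```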
The job has been submitted and Bacalhau has printed out the related job id. We store that in an environment variable so that we can reuse it later on.
For questions and feedback, please reach out in our forum
Here is a quick tutorial on how to copy data from S3 to public storage. In this tutorial, we will scrape all the links from a public AWS S3 bucket and then copy the data to IPFS using Bacalhau.
To get started, you need to install the Bacalhau client, see more information here
Let's look closely at the command above:
- `bacalhau docker run`: call to Bacalhau
- `-i "s3://noaa-goes16/ABI-L1b-RadC/2000/001/12/OR_ABI-L1b-RadC-M3C01*:/inputs,opt=region=us-east-1"`: defines S3 objects as inputs to the job. In this case, it will download all objects that match the prefix `ABI-L1b-RadC/2000/001/12/OR_ABI-L1b-RadC-M3C01` from the bucket `noaa-goes16` in the `us-east-1` region, and mount the objects under the `/inputs` path inside the Docker job.
- `-- sh -c "cp -r /inputs/* /outputs/"`: copies all files under `/inputs` to `/outputs`, which is the default result output directory; all of its content will be published to the specified destination, which is IPFS by default.
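Put together, the submission might look like this (the `ubuntu` image is an assumption; any image with a POSIX shell works):

```bash
bacalhau docker run \
  -i "s3://noaa-goes16/ABI-L1b-RadC/2000/001/12/OR_ABI-L1b-RadC-M3C01*:/inputs,opt=region=us-east-1" \
  ubuntu:latest \
  -- sh -c "cp -r /inputs/* /outputs/"
```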
When a job is submitted, Bacalhau prints out the related `job_id`. We store that in an environment variable so that we can reuse it later on.
:::tip This only works with datasets that are publicly accessible and don't require an AWS account or payment to access. :::
Job status: You can check the status of the job using `bacalhau list`.
When it says `Published` or `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau describe`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
After the download has finished you should see the following contents in the results directory.
To view your file, run the following command:
Installing jq to extract CID from the result description
You can publish your results to Amazon S3 or other S3-compatible destinations like MinIO, Ceph, or SeaweedFS to conveniently store and share your outputs.
To facilitate publishing results, define publishers and their configurations using the PublisherSpec structure.
For S3-compatible destinations, the configuration is as follows:
For Amazon S3, you can specify the PublisherSpec configuration as shown below:
Let's explore some examples to illustrate how you can use this:
Publishing results to S3 using default settings
Publishing results to S3 with a custom endpoint and region:
Publishing results to S3 as a single compressed file
Utilizing naming placeholders in the object key
Tracking content identification and maintaining lineage across different jobs' inputs and outputs can be challenging. To address this, the publisher encodes the SHA-256 checksum of the published results, specifically when publishing a single compressed file.
Here's an example of a sample result:
To enable support for the S3-compatible storage provider, no additional dependencies are required. However, valid AWS credentials are necessary to sign the requests. The storage provider uses the default credentials chain, which checks the following sources for credentials:
- Environment variables, such as `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`
- Credentials file: `~/.aws/credentials`
- IAM Roles for Amazon EC2 Instances
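For example, a compute node operator could export static credentials before starting the node so they are picked up by the default chain (values are placeholders):

```bash
# Make AWS credentials available to the Bacalhau process via the default credentials chain
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...
bacalhau serve
```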
For questions and feedback, please reach out in our forum.
A synthetic dataset is generated by algorithms or simulations and has similar characteristics to real-world data. Collecting real-world data, especially data that contains sensitive user information like credit card details, is often not possible due to security and privacy concerns. If a data scientist needs to train a model to detect credit fraud, they can use synthetically generated data instead of real data without compromising the privacy of users.
The advantage of using Bacalhau is that you can generate terabytes of synthetic data without having to install any dependencies or store the data locally.
In this example, we will learn how to run Bacalhau on a synthetic dataset. We will generate synthetic credit card transaction data using the Sparkov program and store the results in IPFS.
To get started, you need to install the Bacalhau client, see more information here
To run Sparkov locally, you'll need to clone the repo and install dependencies:
Go to the `Sparkov_Data_Generation` directory:
Create a temporary directory (`outputs`) to store the outputs:
The command above executes the Python script `datagen.py`, passing the following arguments to it:
- `-n 1000`: Number of customers to generate
- `-o ../outputs`: path to store the outputs
- `"01-01-2022"`: Start date
- `"10-01-2022"`: End date

Thus, this command uses a Python script to generate synthetic credit card transaction data for the period from `01-01-2022` to `10-01-2022` and saves the results in the `../outputs` directory.
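For reference, a local run of that step would look roughly like this (assuming you are inside the `Sparkov_Data_Generation` directory with its requirements installed):

```bash
mkdir -p ../outputs
python3 datagen.py -n 1000 -o ../outputs "01-01-2022" "10-01-2022"
```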
To see the full list of options, use:
To build your own Docker container, create a `Dockerfile`, which contains instructions to build your image:
These commands specify how the image will be built and what extra requirements will be included. We use `python:3.8` as the base image, install `git`, clone the `Sparkov_Data_Generation` repository from GitHub, set the working directory inside the container to `/Sparkov_Data_Generation/`, and install the Python dependencies listed in the `requirements.txt` file.
See more information on how to containerize your script/app here
We will run the `docker build` command to build the container:
Before running the command replace:
- `hub-user` with your Docker Hub username. If you don't have a Docker Hub account, follow these instructions to create a Docker account, and use the username of the account you created
- `repo-name` with the name of the container; you can name it anything you want
- `tag` this is not required but you can use the `latest` tag
In our case:
Next, upload the image to the registry. This can be done by using the Docker hub username, repo name or tag.
In our case:
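As a rough sketch, building and pushing uses the standard Docker commands with the placeholders described above:

```bash
# Build the image and push it to Docker Hub
docker build -t <hub-user>/<repo-name>:<tag> .
docker push <hub-user>/<repo-name>:<tag>
```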
After the repo image has been pushed to Docker Hub, we can now use the container for running on Bacalhau
Now we're ready to run a Bacalhau job:
- `bacalhau docker run`: call to Bacalhau
- `jsacex/sparkov-data-generation`: the name of the Docker image we are using
- `-- python3 datagen.py -n 1000 -o ../outputs "01-01-2022" "10-01-2022"`: the arguments passed into the container, specifying the execution of the Python script `datagen.py` with specific parameters, such as the amount of data, output path, and time range.
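Assembled into a single command, that is roughly:

```bash
bacalhau docker run \
  jsacex/sparkov-data-generation \
  -- python3 datagen.py -n 1000 -o ../outputs "01-01-2022" "10-01-2022"
```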
When a job is submitted, Bacalhau prints out the related `job_id`. We store that in an environment variable so that we can reuse it later on:
Job status: You can check the status of the job using `bacalhau list`.
When it says `Published` or `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau describe`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory (`results`) and downloaded our job output to be stored in that directory.
To view the contents of the current directory, run the following command:
If you have questions or need support or guidance, please reach out to the Bacalhau team via Slack (#general channel).
Templating Support in Bacalhau Job Run
This documentation introduces templating support for `bacalhau job run`, providing users with the ability to dynamically inject variables into their job specifications. This feature is particularly useful when running multiple jobs with varying parameters such as the DuckDB query, S3 buckets, prefixes, and time ranges, without the need to edit each job specification file manually.
The motivation behind this feature arises from the need to streamline the process of preparing and running multiple jobs with different configurations. Rather than manually editing job specs for each run, users can leverage placeholders and pass actual values at runtime.
The templating functionality in Bacalhau is built upon the Go `text/template` package. This powerful library offers a wide range of features for manipulating and formatting text based on template definitions and input variables.
For more detailed information about the Go `text/template` library and its syntax, please refer to the official documentation: Go `text/template` Package.
You can also use environment variables for templating:
To preview the final templated job spec without actually submitting the job, you can use the `--dry-run` flag:
This will output the processed job specification, showing you how the placeholders have been replaced with the provided values.
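As a hedged sketch (the `-V` variable flag and the spec filename below are assumptions; only `--dry-run` is confirmed above):

```bash
# Render the templated spec with a variable substituted, without submitting it
bacalhau job run my-job.yaml -V greeting=hello --dry-run
```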
This is an `ops` job that runs on all nodes that match the job selection criteria. It accepts a DuckDB `query` variable, and two optional `start-time` and `end-time` variables to define the time range for the query.
To run this job, you can use the following command:
This is a `batch` job that runs on a single node. It accepts a DuckDB `query` variable, and four other variables to define the S3 bucket, prefix, pattern for the logs, and the AWS region.
To run this job, you can use the following command:
A `Job` represents a discrete unit of work that can be scheduled and executed. It carries all the necessary information to define the nature of the work, how it should be executed, and the resources it requires.
`job` Parameters:
- `Name (string: <optional>)`: A logical name to refer to the job. Defaults to the job ID.
- `Namespace (string: "default")`: The namespace in which the job is running. `ClientID` is used as a namespace in the public demo network.
- `Type (string: <required>)`: The type of the job, such as `batch`, `ops`, `daemon` or `service`. You can learn more about the supported job types in the Job Types guide.
- `Priority (int: 0)`: Determines the scheduling priority.
- `Count (int: <required>)`: Number of replicas to be scheduled. This is only applicable for jobs of type `batch` and `service`.
- `Meta (Meta: nil)`: Arbitrary metadata associated with the job.
- `Labels (Label[]: nil)`: Arbitrary labels associated with the job for filtering purposes.
- `Constraints (Constraint[]: nil)`: These are selectors which must be true for a compute node to run this job.
- `Tasks (Task[]: <required>)`: Tasks associated with the job, each defining a unit of work within the job. Today we only support a single task per job, with plans to extend this in the future.
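To make this concrete, here is a minimal sketch of a job spec written and submitted from the shell. The YAML field casing and the Docker engine parameter names (`Image`, `Entrypoint`) are assumptions based on the parameter descriptions above, not a verbatim schema:

```bash
cat > hello-job.yaml <<'EOF'
Name: hello-world          # optional; defaults to the job ID
Type: batch                # batch | ops | daemon | service
Count: 1                   # replicas; only batch and service use this
Tasks:
  - Name: main
    Engine:
      Type: docker
      Params:
        Image: ubuntu:latest
        Entrypoint: ["echo", "hello from a job spec"]
EOF
bacalhau job run hello-job.yaml
```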
The following parameters are generated by the server and should not be set directly.
- `ID (string)`: A unique identifier assigned to this job. It's auto-generated by the server and should not be set directly. Used for distinguishing between jobs with similar names.
- `State (State)`: Represents the current state of the job.
- `Version (int)`: A monotonically increasing version number incremented on job specification update.
- `Revision (int)`: A monotonically increasing revision number incremented on each update to the job's state or specification.
- `CreateTime (int)`: Timestamp of job creation.
- `ModifyTime (int)`: Timestamp of last job modification.
The different job types available in Bacalhau
Bacalhau has recently introduced different job types in v1.1, providing more control and flexibility over the orchestration and scheduling of those jobs - depending on their type.
Despite the differences in job types, all jobs benefit from core functionalities provided by Bacalhau, including:
Node selection - the appropriate nodes are selected based on several criteria, including resource availability, priority and feedback from the nodes.
Job monitoring - jobs are monitored to ensure they complete, and that they stay in a healthy state.
Retries - within limits, Bacalhau will retry certain jobs a set number of times should they fail to complete successfully when requested.
Batch jobs are executed on demand, running on a specified number of Bacalhau nodes. These jobs either run until completion or until they reach a timeout. They are designed to carry out a single, discrete task before finishing.
Ideal for intermittent yet intensive data dives, for instance performing computation over large datasets before publishing the response. This approach eliminates the continuous processing overhead, focusing on specific, in-depth investigations and computation.
Ops jobs are similar to batch jobs, but have a broader reach: they are executed on all nodes that align with the job specification, and otherwise behave like batch jobs.
Ops jobs are perfect for urgent investigations, granting direct access to logs on host machines, where previously you may have had to wait for the logs to arrive at a central location before being able to query them. They can also be used for delivering configuration files for other systems should you wish to deploy an update to many machines at once.
Daemon jobs run continuously on all nodes that meet the criteria given in the job specification. Should any new compute nodes join the cluster after the job was started, and should they meet the criteria, the job will be scheduled to run on that node too.
A good application of daemon jobs is to handle continuously generated data on every compute node. This might be from edge devices like sensors or cameras, or from logs where they are generated. The data can then be aggregated and compressed before being sent onwards. For logs, the aggregated data can be relayed at regular intervals to platforms like Kafka or Kinesis, or directly to other logging services, with edge devices potentially delivering results via MQTT.
Service jobs run continuously on a specified number of nodes that meet the criteria given in the job specification. Bacalhau's orchestrator selects the optimal nodes to run the job and continuously monitors their health and performance. If required, it will reschedule the job on other nodes.
This job type is good for long-running consumers such as streaming or queueing services, or real-time event listeners.
A `Constraint` represents a condition that must be met for a compute node to be eligible to run a given job. Operators have the flexibility to manually define node labels when initiating a node using the `bacalhau serve` command. Additionally, Bacalhau boasts features like automatic resource detection and dynamic labeling, further enhancing its capability.
By defining constraints, you can ensure that jobs are scheduled on nodes that have the necessary requirements or conditions.
`Constraint` Parameters:
- Key: The name of the attribute or property to check on the compute node. This could be anything from a specific hardware feature, operating system version, or any other node property.
- Operator: Determines the kind of comparison to be made against the Key's value, which can be:
  - `in`: Checks if the Key's value exists within the provided list of values.
  - `notin`: Ensures the Key's value doesn't match any in the provided list of values.
  - `exists`: Verifies that a value for the specified Key is present, regardless of its actual value.
  - `!`: Confirms the absence of the specified Key, i.e. DoesNotExist.
  - `gt`: Assesses if the Key's value is greater than the provided value.
  - `lt`: Assesses if the Key's value is less than the provided value.
  - `=` & `==`: Both are used to compare the Key's value for an exact match with the provided value.
  - `!=`: Ensures the Key's value is not the same as the provided value.
- Values (optional): A list of values that the node attribute, specified by the Key, is compared against using the Operator. This is not needed for operators like `exists` or `!`.
Consider a scenario where a job should only run on nodes with a GPU and an operating system version greater than `2.0`. The constraints for such a requirement might look like:
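A sketch of such a constraints block (the label keys and YAML casing are assumptions; the real keys depend on how your nodes are labeled):

```bash
cat >> hello-job.yaml <<'EOF'
Constraints:
  - Key: gpu
    Operator: exists
  - Key: os
    Operator: "="
    Values: ["linux"]
  - Key: region
    Operator: in
    Values: ["eu-west-1", "eu-west-2"]
EOF
```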
In this example, the first constraint checks that the node has a GPU, the second ensures the operating system is Linux, and the third requires the node to be deployed in `eu-west-1` or `eu-west-2`.
Constraints are evaluated as a logical AND, meaning all constraints must be satisfied for a node to be eligible.
Using too many specific constraints can lead to a job not being scheduled if no nodes satisfy all the conditions.
It's essential to balance the specificity of constraints with the broader needs and resources available in the cluster.
An `InputSource` defines where and how to retrieve specific artifacts needed for a `Task`, such as files or data, and where to mount them within the task's context. This ensures the necessary data is present before the task's execution begins.
Bacalhau's `InputSource` natively supports fetching data from remote sources like S3 and IPFS and can also mount local directories. It is intended to be flexible for future expansion.
`InputSource` Parameters:
- `Source (SpecConfig: <required>)`: Specifies the origin of the artifact, which could be a URL, an S3 bucket, or other locations.
- `Alias (string: <optional>)`: An optional identifier for this input source. It's particularly useful for dynamic operations within a task, such as dynamically importing data in WebAssembly using an alias.
- `Target (string: <required>)`: Defines the path inside the task's environment where the retrieved artifact should be mounted or stored. This ensures that the task can access the data during its execution.
In this example, the first input source fetches data from an S3 bucket and mounts it at `/my_s3_data` within the task. The second input source mounts a local directory at `/my_local_data` and allows the task to read and write data to it.
How to pin data to public storage
If you have data that you want to make available to your Bacalhau jobs (or other people), you can pin it using a pinning service like Pinata, NFT.Storage, Thirdweb, etc. Pinning services store data on behalf of users. The pinning provider is essentially guaranteeing that your data will be available if someone knows the CID. Most pinning services offer you a free tier, so you can try them out without spending any money.
To use a pinning service, you will almost always need to create an account. After registration, you get an API token, which is necessary to control and access the files. Then you need to upload files - usually services provide a web interface, CLI and code samples for integration into your application. Once you upload the files you will get their CID, which looks like this: `QmUyUg8en7G6RVL5uhyoLBxSWFgRMdMraCRWFcDdXKWEL9`. Now you can access pinned data from the jobs via this CID.
:::info The data source can be specified via the `--input` flag, see the CLI Guide for more details. :::
`SpecConfig` provides a unified structure to specify configurations for various components in Bacalhau, including engines, publishers, and input sources. Its flexible design allows seamless integration with multiple systems like Docker, WebAssembly (Wasm), AWS S3, and local directories, among others.
`SpecConfig` Parameters:
- `Type (string: <required>)`: Specifies the type of the configuration. Examples include `docker` and `wasm` for execution engines, `S3` for input sources and publishers, etc.
- `Params (map[string]any: <optional>)`: A set of key-value pairs that provide the specific configurations for the chosen type. The keys and values are flexible and depend on the `Type`. For instance, parameters for a Docker engine might include image name and version, while an S3 publisher would require configurations like the bucket name and AWS region. If not provided, it defaults to `nil`.
Here are a few hypothetical examples to demonstrate how you might define `SpecConfig` for different components:
Full Docker spec can be found here.
Full S3 Publisher can be found here.
Full local source can be found here.
Remember, the exact keys and values in the `Params` map will vary depending on the specific requirements of the component being configured. Always refer to the individual component's documentation to understand the available parameters.
The `Resources` block provides a structured way to detail the computational resources a `Task` requires. By specifying these requirements, you ensure that the task is scheduled on a node with adequate resources, optimizing performance and avoiding potential issues linked to resource constraints.
`Resources` Parameters:
- `CPU (string: <optional>)`: Defines the CPU resources required for the task. Units can be specified in cores (e.g., `2` for 2 CPU cores) or in milliCPU units (e.g., `250m` or `0.25` for 250 milliCPU units). For instance, if you have half a CPU core, you can represent it as `500m` or `0.5`.
- `Memory (string: <optional>)`: Highlights the amount of RAM needed for the task. You can specify the memory in various units such as `Kb` for Kilobytes, `Mb` for Megabytes, `Gb` for Gigabytes, or `Tb` for Terabytes.
- `Disk (string: <optional>)`: States the disk storage space needed for the task. Similarly, the disk space can be expressed in units like `Gb` for Gigabytes, `Mb` for Megabytes, and so on. As an example, `10Gb` indicates 10 Gigabytes of storage space.
- `GPU (string: <optional>)`: Denotes the number of GPU units required. For example, `2` signifies the requirement of 2 GPU units. This is crucial for tasks involving heavy computational processes, machine learning models, or tasks that leverage GPU acceleration.
The `Network` object offers a method to specify the networking requirements of a `Task`. It defines the scope and constraints of the network connectivity based on the demands of the task.
`Network` Parameters:
- `Type (string: "None")`: Indicates the network configuration's nature. There are several network modes available:
  - `None`: This mode implies that the task does not necessitate any networking capabilities.
  - `Full`: Specifies that the task mandates unrestricted, raw IP networking without any imposed filters.
  - `HTTP`: This mode constrains the task to only require HTTP networking with specific domains. In this model:
    - The job specifier puts forward a job, stipulating the domain(s) it intends to communicate with.
    - The compute provider assesses the inherent risk of the job based on these domains and bids accordingly.
    - At runtime, the network traffic remains strictly confined to the designated domain(s).

:::info A typical command for this might resemble: `bacalhau docker run --network=http --domain=crates.io --domain=github.com -i ipfs://Qmy1234myd4t4,dst=/code rust/compile` :::
- `Domains (string[]: <optional>)`: A list of domain strings, relevant primarily when the `Type` is set to `HTTP`. It dictates the specific domains the task can communicate with over HTTP.
Understanding and utilizing these configurations aptly can ensure that tasks are executed in an environment that aligns with their networking requirements, bolstering efficiency and security.
A `Task` signifies a distinct unit of work within the broader context of a `Job`. It defines the specifics of how the task should be executed, where the results should be published, and what environment variables are needed, among other configurations.
`Task` Parameters:
- `Name (string: <required>)`: A unique identifier representing the name of the task.
- `Engine (SpecConfig: required)`: Configures the execution engine for the task, such as Docker or WebAssembly.
- `Publisher (SpecConfig: optional)`: Specifies where the results of the task should be published, such as S3 and IPFS publishers. Only applicable for tasks of type `batch` and `ops`.
- `Env (map[string]string: optional)`: A set of environment variables for the driver.
- `Meta (Meta: optional)`: Allows association of arbitrary metadata with this task.
- `InputSources (InputSource[]: optional)`: Lists remote artifacts that should be downloaded before task execution and mounted within the task, such as from S3 or HTTP/HTTPS.
- `ResultPaths (ResultPath[]: optional)`: Indicates volumes within the task that should be included in the published result. Only applicable for tasks of type `batch` and `ops`.
- `Resources (Resources: optional)`: Details the resources that this task requires.
- `Network (Network: optional)`: Configurations related to the networking aspects of the task.
- `Timeouts (Timeouts: optional)`: Configurations concerning any timeouts associated with the task.
A `ResultPath` denotes a specific location within a `Task` that contains meaningful output or results. By specifying a `ResultPath`, you can pinpoint which files or directories are essential and should be retained or published after the task's execution.
`ResultPath` Parameters:
- Name: A descriptive label or identifier for the result, allowing for easier referencing and understanding of the output's nature or significance.
- Path: Specifies the exact location, either a file or a directory, within the task's environment where the result or output is stored. This ensures that after the task completes, the critical data at this path can be accessed, retained, or published as necessary.
In both the `Job` and `Task` specifications within Bacalhau, the `Meta` block is a versatile element used to attach arbitrary metadata. This metadata isn't utilized for filtering or categorizing jobs; there's a separate `Labels` block specifically designated for that purpose. Instead, the `Meta` block is instrumental for embedding additional information for operators or external systems, enhancing clarity and context.
`Meta` Parameters in Job and Task Specs
The `Meta` block is comprised of key-value pairs, with both keys and values being strings. These pairs aren't constrained by a predefined structure, offering flexibility for users to annotate jobs and tasks with diverse metadata.
Users can incorporate any arbitrary key-value pairs to convey descriptive information or context about the job or task. For example:
- project: Identifies the associated project.
- version: Specifies the version of the application or service.
- owner: Names the responsible team or individual.
- environment: Indicates the stage in the development lifecycle.
Beyond user-defined metadata, Bacalhau automatically injects specific metadata keys for identification and security purposes.
- bacalhau.org/requester.id: A unique identifier for the orchestrator that handled the job.
- bacalhau.org/requester.publicKey: The public key of the requester, aiding in security and validation.
- bacalhau.org/client.id: The ID for the client submitting the job, enhancing traceability.
Identification: The metadata aids in uniquely identifying jobs and tasks, connecting them to their originators and executors.
Context Enhancement: Metadata can supplement jobs and tasks with additional data, offering insights and context that aren't captured by standard parameters.
Security Enhancement: Auto-generated keys like the requester's public key contribute to the secure handling and execution of jobs and tasks.
While the `Meta` block is distinct from the `Labels` block used for filtering, its contribution to providing context, security, and traceability is integral in managing and understanding the diverse jobs and tasks within the Bacalhau ecosystem effectively.
The `Timeouts` object provides a mechanism to impose timing constraints on specific task operations, particularly execution. By setting these timeouts, users can ensure tasks don't run indefinitely and align them with intended durations.
`Timeouts` Parameters:
- `ExecutionTimeout (int: <optional>)`: Defines the maximum duration (in seconds) that a task is permitted to run. A value of zero indicates that there's no set timeout. This could be particularly useful for tasks that function as daemons and are designed to run indefinitely.

Utilizing the `Timeouts` judiciously helps in managing resource utilization and ensures tasks adhere to expected timelines, thereby enhancing the efficiency and predictability of job executions.
`State` Structure Specification
Within Bacalhau, the `State` structure is designed to represent the status or state of an object (like a `Job`), coupled with a human-readable message for added context. Below is a breakdown of the structure:
`State` Parameters:
- `StateType (T: <required>)`: Represents the current state of the object. This is a generic parameter that will take on a specific value from a set of defined state types for the object in question. For jobs, this will be one of the `JobStateType` values.
- `Message (string: <optional>)`: A human-readable message giving more context about the current state. Particularly useful for states like `Failed` to provide insight into the nature of any error.

When `State` is used for a job, the `StateType` can be one of the following:
- `Pending`: This indicates that the job is submitted but is not yet scheduled for execution.
- `Running`: The job is scheduled and is currently undergoing execution.
- `Completed`: This state signifies that a job has successfully executed its task. Only applicable for batch jobs.
- `Failed`: A state indicating that the job encountered errors and couldn't successfully complete.
- `Stopped`: The job has been intentionally halted by the user before its natural completion.

The inclusion of the `Message` field can offer detailed insights, especially in states like `Failed`, aiding in error comprehension and debugging.
The `Labels` block within a `Job` specification plays a crucial role in Bacalhau, serving as a mechanism for filtering jobs. By attaching specific labels to jobs, users can quickly and effectively filter and manage jobs via both the Command Line Interface (CLI) and Application Programming Interface (API) based on various criteria.
`Labels` Parameters
Labels are essentially key-value pairs attached to jobs, allowing for detailed categorizations and filtrations. Each label consists of a `Key` and a `Value`. These labels can be filtered using operators to pinpoint specific jobs fitting certain criteria.
Jobs can be filtered using the following operators:
- `in`: Checks if the key's value matches any within a specified list of values.
- `notin`: Validates that the key's value isn't within a provided list of values.
- `exists`: Checks for the presence of a specified key, regardless of its value.
- `!`: Validates the absence of a specified key (i.e., DoesNotExist).
- `gt`: Checks if the key's value is greater than a specified value.
- `lt`: Checks if the key's value is less than a specified value.
- `=` & `==`: Used for exact match comparisons between the key's value and a specified value.
- `!=`: Validates that the key's value doesn't match a specified value.
Filter jobs with a label whose key is "environment" and value is "development":
Filter jobs with a label whose key is "version" and value is greater than "2.0":
Filter jobs with a label "project" existing:
Filter jobs without a "project" label:
Job Management: Enables efficient management of jobs by categorizing them based on distinct attributes or criteria.
Automation: Facilitates the automation of job deployment and management processes by allowing scripts and tools to target specific categories of jobs.
Monitoring & Analytics: Enhances monitoring and analytics by grouping jobs into meaningful categories, allowing for detailed insights and analysis.
The `Labels` block is instrumental in the enhanced management, filtering, and operation of jobs within Bacalhau. By understanding and utilizing the available operators and label parameters effectively, users can optimize their workflow, automate processes, and achieve detailed insights into their jobs.
This directory contains instructions on how to set up networking in Bacalhau.
By default, Bacalhau jobs do not have any access to the internet. This is to keep both compute providers and users safe from malicious activities.
However, by using data volumes you can read and access your data from within jobs and write back results.
When you submit a Bacalhau job, you'll need to specify the internet locations to download data from and write results to. Both Docker and WebAssembly jobs support these features.
When submitting a Bacalhau job, you can specify the CID (Content IDentifier) or HTTP(S) URL to download data from. The data will be retrieved before the job starts and made available to the job as a directory on the filesystem. When running Bacalhau jobs, you can specify as many CIDs or URLs as needed using `--input`, which is accepted by both `bacalhau docker run` and `bacalhau wasm run`. See command line flags for more information.
You can write back results from your Bacalhau jobs to your public storage location. By default, jobs will write results to the storage provider using the `--publisher` command line flag. See command line flags on how to configure this.
To use these features, the data to be downloaded has to be known before the job starts. For some workloads, for example jobs whose purpose is to process web results, the required data is only computed as part of the job itself. In these cases, networking may be required during job execution.
To run Docker jobs on Bacalhau with access to the internet, you'll need to specify one of the following:
- full: unfiltered networking for any protocol `--network=full`
- http: HTTP(S)-only networking to a specified list of domains `--network=http`
- none: no networking at all, the default `--network=none`

:::tip Specifying `none` will still allow Bacalhau to download and upload data before and after the job. :::
Jobs using `http` must specify the domains they want to access when the job is submitted. When the job runs, only HTTP requests to those domains will be possible, and data transfer will be rate limited to 10Mbit/sec in either direction to prevent DDoS.
Jobs will be provided with `http_proxy` and `https_proxy` environment variables which contain a TCP address of an HTTP proxy to connect through. Most tools and libraries will use these environment variables by default. If not, they must be used by user code to configure HTTP proxy usage.
The required networking can be specified using the `--network` flag. For `http` networking, the required domains can be specified using the `--domain` flag, multiple times for as many domains as required. Specifying a domain starting with a `.` means that all sub-domains will be included. For example, specifying `.example.com` will cover `some.thing.example.com` as well as `example.com`.
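For example, a hedged sketch of an HTTP-networked job (the image and command are placeholders):

```bash
# Allow HTTP(S) access to example.com and all of its subdomains only
bacalhau docker run --network=http --domain=.example.com \
  ubuntu:latest -- sh -c 'echo "proxy is at $http_proxy"'
```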
:::caution Bacalhau jobs are explicitly prevented from starting other Bacalhau jobs, even if a Bacalhau requester node is specified on the HTTP allowlist. :::
Bacalhau has support for describing jobs that can access the internet during job execution. The ability for compute nodes to run jobs that require internet access depends on what compute nodes are currently part of the network.
Compute nodes that join the Bacalhau network do not accept networked jobs by default (i.e. they only accept jobs that specify `--network=none`, which is also the default).
The public compute nodes provided by the Bacalhau network will accept jobs that require HTTP networking as long as the domains are from this allowlist.
If you need to access a domain that isn't on the allowlist, you can make a request to the Bacalhau Project team to include your required domains. You can also set up your own compute node that implements the allowlist you need.
DuckDB is an embedded SQL database designed to analyze data without external dependencies or state; it can be embedded locally on any machine.
Because DuckDB allows you to process and store data such as Parquet files and text logs, DuckDB can be an invaluable tool in analyzing system created data such as logs while still allowing you to use SQL as a first-class way to interact with it.
However, many organizations only want to present DuckDB on local interfaces for security and compliance purposes, so having a central system that can interact with embedded DuckDBs would not be acceptable. Bacalhau + DuckDB provides a distributed way to execute many queries against local logs, without having to move the files at all.
With data being generated everywhere, it can be a challenge to centralize and process the information before getting insights. Moving data to a lake can be time consuming, costly, and insecure; often just moving the data risks enormous data protection fines.
Further, the sheer number of log files alone being generated from servers, IoT devices, embedded machines, and more present a huge surface area for managing generated data. As files are written to a local data store, organizations are faced with either building remote connectivity tooling to access the files in place, or pushing these files into a data lake costing both time and money.
Ideally, users would be able to gain insights from the remote files WITHOUT having to centralize them first. This is where Bacalhau and DuckDB can step in.
In order to speed up results and deliver more cost-effective processing of the log files generated, we can use Bacalhau and DuckDB to run queries directly on the nodes.
The flow looks like the following:
Execute a command against the network to execute “local to machine” queries against the set of nodes with log files on them
Return the results of the queries that require immediate action (e.g. emergency alerts)
Archive the existing logs into cold storage.
This is laid out in the architecture below.
DuckDB
Docker
Python
Terraform
gcloud CLI
Follow the steps below to set up log processing and storage for 3 VMs in different regions or zones; these VMs produce the logs:
Step 1: Set up a "fake log creating" job
Output something that looks like real logs. Each fake log entry should look something like this:
For the service name, use one of "Auth", "AppStack", or "Database" - each one should produce one entry every 5 seconds.
For the category, select one from [INFO], [WARN], [CRITICAL], [SECURITY].
This needs to be running reliably - so have the script run under systemd.
Step 2 Configure logrotate on the machine
Create a new logrotate configuration file at /etc/logrotate.d/my_logs
with the content:
Each time the log rotates, put it into a dedicated directory such as /var/logs/raw_logs (this is a logrotate setting controlling where rotated logs are written).
Step 3: The Bacalhau Job
On a second machine, once per hour, trigger a job to run across all nodes identified across regions
Pass the log path to the job spec. (Use the local mount feature (can’t use it currently))
This job should do the following:
If the file is not present in raw_logs, write information to stdout: "{ warning: raw_logs_not_found, date: <-ISO 8601 Timestamp->}" - and quit.
If file is present:
Step 3a: Use DuckDB to process the logs:
We should NOT use David Gasquez's current container - we should use the generic one.
Inside the container, use a command that loads the file - e.g. `duckdb -s "select count(*) from '0_yellow_taxi_trips.parquet'"`
Except, we want to select only a subset of the files - e.g. `duckdb -s "select count(*) from '0_yellow_taxi_trips.parquet' where contains('abc','a')"`
Output the match to a file on the disk - /tmp/Region-Zone-NodeName-Security-yyyymmddhhmm.json
Step 3b - Compress the file:
/tmp/Region-Zone-NodeName-SECURITY-yyyymmddhhmm.json.gz
Step 3c - Push the file to an S3 bucket:
Push the processed logs to s3 (s3 push functionality isn’t implemented yet - just use a standard aws CLI command - figure out with Walid how to do credentials)
Step 3d - Compress the raw log file
/tmp/Region-Zone-NodeName-RAW-yyyymmddhhmm.json.gz
Step 3e - Push the compressed raw log to Iceberg
Just use standard Iceberg API - talk with Walid about
Step 3f:
Delete the raw log file
When running a node, you can choose which jobs you want to run by using configuration options, environment variables or flags to specify a job selection policy.
If you want more control over making the decision to take on jobs, you can use the `--job-selection-probe-exec` and `--job-selection-probe-http` flags.
These are external programs that are passed the following data structure so that they can make a decision about whether or not to take on a job:
The `exec` probe is a script to run that will be given the job data on `stdin`, and must exit with status code 0 if the job should be run.
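A minimal sketch of such a probe and how a node might be started with it (the acceptance logic is a placeholder; no assumptions are made here about the JSON fields available on stdin):

```bash
cat > probe.sh <<'EOF'
#!/bin/sh
# Read the job data from stdin; accept everything for now.
# Replace this with real checks (e.g. grep the payload) and exit non-zero to reject.
cat > /dev/null
exit 0
EOF
chmod +x probe.sh
bacalhau serve --job-selection-probe-exec ./probe.sh
```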
The `http` probe is a URL to POST the job data to. The job will be rejected if the HTTP request returns an error status code (e.g. >= 400).
If the HTTP response is a JSON blob, it should match the following schema and will be used to respond to the bid directly:
For example, the following response will reject the job:
If the HTTP response is not a JSON blob, the content is ignored and any non-error status code will accept the job.
Prolog is intended primarily as a declarative programming language: the program logic is expressed in terms of relations, represented as facts and rules. A computation is initiated by running a query over these relations. Prolog is well-suited for specific tasks that benefit from rule-based logical queries such as searching databases, voice control systems, and filling templates.
This tutorial is a quick guide on how to run a hello world script on Bacalhau.
To get started, you need to install the Bacalhau client, see more information here
To get started, install `swipl`.
Create a file called `helloworld.pl`. The following script prints 'Hello World' to stdout:
Running the script to print out the output:
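A sketch of both steps from the shell (the predicate body is the usual SWI-Prolog hello-world; `-t halt` is added so the local run exits cleanly):

```bash
cat > helloworld.pl <<'EOF'
hello_world :- write('Hello World'), nl.
EOF
# Run it locally to check the output
swipl -q -s helloworld.pl -g hello_world -t halt
```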
After the script has run successfully locally we can now run it on Bacalhau.
Before running it on Bacalhau we need to upload it to IPFS.
Using the IPFS cli
Run the command below to check if our script has been uploaded.
This command outputs the CID. Copy the CID of the file, which in our case is `QmYq9ipYf3vsj7iLv5C67BXZcpLHxZbvFAJbtj7aKN5qii`.
Since the data uploaded to IPFS isn't pinned, we will need to do that manually. Check this information on how to pin your data. We recommend using NFT.Storage.
We will mount the script to the container using the `-i` flag: `-i ipfs://<CID>:/<name-of-the-script>`.
To submit a job, run the following Bacalhau command:
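Roughly, the submission looks like this (the `swipl` image name is an assumption; any image with SWI-Prolog installed will do):

```bash
bacalhau docker run \
  -i ipfs://QmYq9ipYf3vsj7iLv5C67BXZcpLHxZbvFAJbtj7aKN5qii:/helloworld.pl \
  swipl \
  -- swipl -q -s helloworld.pl -g hello_world
```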
- `-i ipfs://QmYq9ipYf3vsj7iLv5C67BXZcpLHxZbvFAJbtj7aKN5qii:/helloworld.pl`: Sets the input data for the container. `QmYq9ipYf3vsj7iLv5C67BXZcpLHxZbvFAJbtj7aKN5qii` is our CID which points to the `helloworld.pl` file on the IPFS network. This file will be accessible within the container.
- `-- swipl -q -s helloworld.pl -g hello_world`: instructs SWI-Prolog to load the program from the `helloworld.pl` file and execute the `hello_world` function in quiet mode:
  - `-q`: run in quiet mode
  - `-s`: load the file as a script. In this case we want to run the `helloworld.pl` script
  - `-g`: the name of the function you want to execute. In this case it's `hello_world`
When a job is submitted, Bacalhau prints out the related `job_id`. We store that in an environment variable so that we can reuse it later on:
Job status: You can check the status of the job using `bacalhau list`.
When it says `Published` or `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau describe`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory (`results`) and downloaded our job output to be stored in that directory.
To view the file, run the following command:
If you have questions or need support or guidance, please reach out to the Bacalhau team via Slack (#general channel).
Bacalhau is a peer-to-peer network of computing providers that will run jobs submitted by users. A Compute Provider (CP) is anyone who is running a Bacalhau compute node participating in the Bacalhau compute network, regardless of whether they are hosting any Filecoin data.
This section will show you how to configure and run a Bacalhau node and start accepting and running jobs.
To bootstrap your node and join the network as a CP you can leap right into the below, or find more setup details in these guides:
(with limitations)
:::info
If you run on a different system than Ubuntu, drop us a message on Slack! We'll add instructions for your favorite OS.
:::
Estimated time for completion: 10 min.
Tested on: Ubuntu 22.04 LTS (x86/64) running on a GCP e2-standard-4 (4 vCPU, 16 GB memory) instance with 40 GB disk size.
Docker Engine - to take on Docker workloads
Connection to storage provider - for storing job's results
Firewall - to ensure your node can communicate with the rest of the network
Physical hardware, Virtual Machine, or cloud-based host. A Bacalhau compute node is not intended to be run from within a Docker container.
To run docker-based workloads, you should have docker installed and running.
If you already have it installed, you can configure the connection to Docker with the following environment variables:
- `DOCKER_HOST` to set the URL to the docker server.
- `DOCKER_API_VERSION` to set the version of the API to reach, leave empty for "latest".
- `DOCKER_CERT_PATH` to load the TLS certificates from.
- `DOCKER_TLS_VERIFY` to enable or disable TLS verification, off by default.
We will need to connect our Bacalhau node to a storage server. For this we will be using an IPFS server, so we can run jobs that consume CIDs as inputs.
You can either install and run it locally, or you can connect to a remote IPFS server.
:::caution
The multiaddress above is just an example - you need to get the multiaddress of the server you want to connect to.
:::
Using the command below:
Install:
Configure:
Now launch the IPFS daemon in a separate terminal (make sure to export the `IPFS_PATH` environment variable there as well):
Use the following command to print out a number of addresses.
Pick the record that combines `127.0.0.1` and `tcp`, but replace port `4001` with `5001`:
To ensure that our node can communicate with other nodes on the network - we need to make sure the 1235 port is open.
(Optional) To ensure the CLI can communicate with our node directly (i.e. `bacalhau --api-host <MY_NODE_PUBLIC_IP> version`) - we need to make sure the 1234 port is open.
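On Ubuntu with `ufw`, opening those ports might look like this (a sketch; adjust to your own firewall tooling):

```bash
sudo ufw allow 1235/tcp   # node-to-node traffic
sudo ufw allow 1234/tcp   # optional: API access for the CLI
```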
Now we can run our bacalhau node:
Alternatively, you can run the following Docker command:
Even though jobs are (by default) submitted via the CLI, each node listens for events on the global network and may bid to take on a job: your logs should therefore show the activity of your node bidding for incoming jobs.
To quickly check your node runs properly, let's submit the following dummy job:
If you see logs of your compute node bidding for the job above it means you've successfully joined Bacalhau as a Compute Provider!
Bacalhau has an update checking service to automatically detect whether a newer version of the software is available.
Users who are both running CLI commands and operating nodes will be regularly informed that a new release can be downloaded and installed.
Bacalhau will run an update check regularly when client commands are executed. If an update is available, explanatory text will be printed at the end of the command.
To force a manual update check, run the `bacalhau version` command, which will explicitly list the latest software release alongside the server and client versions.
Bacalhau will run an update check regularly as part of the normal operation of the node.
If an update is available, an INFO level message will be printed to the log.
Bacalhau has some configuration options for controlling how often checks are performed. By default, an update check will run no more than once every 24 hours. Users can opt out of automatic update checks using the configuration described below.
:::info It's important to note that disabling the automatic update checks may lead to potential issues, arising from mismatched versions of different actors within Bacalhau. :::
To output the update check config, run `bacalhau config list`:
Bacalhau supports GPU workloads. In this tutorial, learn how to run a job using GPU workloads with the Bacalhau client.
The Bacalhau network must have an executor node with a GPU exposed
Your container must include the CUDA runtime (cudart) and must be compatible with the CUDA version running on the node
To submit a job request, use the `--gpu` flag under the `docker run` command to select the number of GPUs your job requires. For example:
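A hedged sketch of such a request (the CUDA image tag and the `nvidia-smi` check are illustrative, not prescriptive):

```bash
# Request one GPU and print the devices visible inside the container
bacalhau docker run --gpu 1 nvidia/cuda:11.6.2-base-ubuntu20.04 -- nvidia-smi
```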
The following limitations currently exist within Bacalhau. Bacalhau supports:
NVIDIA, Intel or AMD GPUs only
GPUs for the Docker executor only
DuckDB is a relational table-oriented database management system that supports SQL queries for producing analytical results. It also comes with various features that are useful for data analytics.
DuckDB is suited for the following use cases:
Processing and storing tabular datasets, e.g. from CSV or Parquet files
Interactive data analysis, e.g. Joining & aggregate multiple large tables
Concurrent large changes to multiple large tables, e.g. appending rows, adding/removing/updating columns
Large result set transfer to client
In this example tutorial, we will show how to use DuckDB with Bacalhau. The advantage of using DuckDB with Bacalhau is that you don't need to install anything locally, and there is no need to download the datasets since they are already available on IPFS or on the web.
How to run a relational database (like DUCKDB) on Bacalhau
To get started, you need to install the Bacalhau client, see more information here.
You can skip this entirely and directly go to running on Bacalhau.
If you want any additional dependencies to be installed along with DuckDB, you need to build your own container.
To build your own Docker container, create a `Dockerfile`, which contains instructions to build your DuckDB Docker container.
We will run the `docker build` command to build the container:
Before running the command replace:
- `repo-name` with the name of the container; you can name it anything you want
- `tag` this is not required but you can use the `latest` tag

In our case:
Next, upload the image to the registry. This can be done by using the Docker Hub username, repo name or tag.
In our case:
After the repo image has been pushed to Docker Hub, we can now use the container for running on Bacalhau. To submit a job, run the following Bacalhau command:
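A sketch of that submission, assembled from the breakdown that follows (using `--workdir` to set the working directory to the input path is an assumption):

```bash
bacalhau docker run \
  --workdir /inputs/ \
  davidgasquez/datadex:v0.2.0 \
  -- duckdb -s "select 1"
```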
Let's look closely at the command above:
- `bacalhau docker run`: call to Bacalhau
- `davidgasquez/datadex:v0.2.0`: the name and the tag of the Docker image we are using
- `/inputs/`: path to the input dataset
- `'duckdb -s "select 1"'`: execute DuckDB
When a job is submitted, Bacalhau prints out the related `job_id`. We store that in an environment variable so that we can reuse it later on.
Job status: You can check the status of the job using `bacalhau list`.
When it says `Published` or `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau describe`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
Each job creates 3 subfolders: the combined_results, per_shard files, and the raw directory. To view the file, run the following command:
Below is the `bacalhau docker run` command to run arbitrary SQL commands over the yellow taxi trips dataset:
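For instance, a count over the mounted Parquet data might look like this (the Parquet filename under `/inputs` is an assumption):

```bash
bacalhau docker run \
  -i ipfs://bafybeiejgmdpwlfgo3dzfxfv3cn55qgnxmghyv7vcarqe3onmtzczohwaq \
  davidgasquez/duckdb:latest \
  -- duckdb -s "select count(*) from '/inputs/yellow_tripdata_2021-01.parquet'"
```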
Let's look closely at the command above:
- `bacalhau docker run`: call to Bacalhau
- `-i ipfs://bafybeiejgmdpwlfgo3dzfxfv3cn55qgnxmghyv7vcarqe3onmtzczohwaq`: CID to use for the job; it is mounted at `/inputs` in the execution.
- `davidgasquez/duckdb:latest`: the name and the tag of the Docker image we are using
- `/inputs`: path to the input dataset
- `duckdb -s`: execute DuckDB
When a job is submitted, Bacalhau prints out the related `job_id`. We store that in an environment variable so that we can reuse it later on.
Job status: You can check the status of the job using `bacalhau list`.
Job information: You can find out more information about your job by using `bacalhau describe`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
Each job creates 3 subfolders: the combined_results, per_shard files, and the raw directory. To view the file, run the following command:
Ethereum Blockchain Analysis with Ethereum-ETL and Bacalhau
Mature blockchains are difficult to analyze because of their size. Ethereum-ETL is a tool that makes it easy to extract information from an Ethereum node, but it's not easy to get working in a batch manner. It takes approximately 1 week for an Ethereum node to download the entire chain (even more in my experience) and importing and exporting data from the Ethereum node is slow.
For this example, we ran an Ethereum node for a week and allowed it to synchronize. We then ran ethereum-etl to extract the information and pinned it on Filecoin. This means that we can now access the data without having to run another Ethereum node.
But there's still a lot of data and these types of analyses typically need repeating or refining. So it makes absolute sense to use a decentralized network like Bacalhau to process the data in a scalable way.
Running the Ethereum-ETL tool on Bacalhau to extract data from an Ethereum node.
To get started, you need to install the Bacalhau client, see more information here.
First let's download one of the IPFS files and inspect it locally. You can see the full list of IPFS CIDs in the appendix.
The following code inspects the daily trading volume of Ethereum for a single chunk (100,000 blocks) of data.
This is all good, but we can do better. We can use the Bacalhau client to download the data from IPFS and then run the analysis on the data in the cloud. This means that we can analyze the entire Ethereum blockchain without having to download it locally.
To run jobs on the Bacalhau network you need to package your code. In this example, I will package the code as a Docker image.
But before we do that, we need to develop the code that will perform the analysis. The code below is a simple script to parse the incoming data and produce a CSV file with the daily trading volume of Ethereum.
Next, let's make sure the file works as expected...
And finally, package the code inside a Docker image to make the process reproducible. Here I'm passing the Bacalhau default `/inputs` and `/outputs` directories. The `/inputs` directory is where the data will be read from and the `/outputs` directory is where the results will be saved to.
We've already pushed the container, but for posterity, the following command pushes this container to GHCR.
To run our analysis on the Ethereum blockchain, we will use the `bacalhau docker run` command.
The job has been submitted and Bacalhau has printed out the related job id. We store that in an environment variable so that we can reuse it later on.
Bacalhau also mounts a data volume to store output data. The `bacalhau docker run` command creates an output data volume mounted at `/outputs`. This is a convenient location to store the results of your job.
Job status: You can check the status of the job using `bacalhau list`.
When it says `Published` or `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau describe`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
After the download has finished you should see the following contents in the results directory.
To view the file, run the following command:
To view the images, we will use glob to return all file paths that match a specific pattern.
Ok, so that works. Let's scale this up! We can run the same analysis on the entire Ethereum blockchain (up to the point where I have uploaded the Ethereum data). To do this, we need to run the analysis on each of the chunks of data that we have stored on IPFS. We can do this by running the same job on each of the chunks.
See the appendix for the `hashes.txt` file.
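A hedged sketch of that scale-up loop, assuming `hashes.txt` contains one CID per line and reusing the placeholder image from before:

```bash
rm -f job_ids.txt
while read -r CID; do
  bacalhau docker run \
    --id-only \
    -i ipfs://${CID}:/inputs \
    <your-registry>/<ethereum-analysis-image>:<tag> \
    -- python main.py >> job_ids.txt
done < hashes.txt
cat job_ids.txt
```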
Now take a look at the job IDs. You can use these to check the status of the jobs and download the results. You might want to double-check that the jobs ran OK by doing a `bacalhau list`.
Wait until all of these jobs have been completed:
And then download all the results and merge them into a single directory. This might take a while, so this is a good time to treat yourself to a nice Dark Mild. There have also been some issues in the past communicating with IPFS, so if you get an error, try again.
To view the images, we will use glob to return all file paths that match a specific pattern.
That's it! There are several years of Ethereum transaction volume data.
The following is a list of IPFS CIDs for the Ethereum data that we used in this tutorial. You can use these CIDs to download the rest of the chain if you so desire. The CIDs are ordered by block number and they increase 50,000 blocks at a time. Here's the list of ordered CIDs:
In the course of writing this example, I had to set up an Ethereum node. It was a slow and painful process so I thought I would share the steps I took to make it easier for others.
Geth supports Ubuntu by default, so use that when creating a VM. Use Ubuntu 22.04 LTS.
Mount the disk:
Run as a new user:
Check they are running:
Watch the logs:
Prysm will need to finish synchronizing before geth will start synchronizing.
In Prysm you will see lots of log messages saying `Synced new block`, and in Geth you will see `Syncing beacon headers downloaded=11,920,384 left=4,054,753 eta=2m25.903s`. This tells you how long it will take to sync the beacons. Once that's done, geth will start synchronizing the blocks.
Bring up the Ethereum javascript console with:
Once the block sync has started, `eth.syncing` will return values. Before it starts, this value will be `false`.
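For example (the output below is illustrative, not captured from a real node; you may need to pass the IPC endpoint if you use a custom data directory):

```bash
geth attach
# inside the console:
# > eth.syncing
# false                                    <- before the block sync has started
# { currentBlock: ..., highestBlock: ... }  <- once block syncing is underway
```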
Note that by default, geth will perform a fast sync, without downloading the full blocks. The `syncmode=full` flag forces geth to do a full sync. If we didn't do this, then we wouldn't be able to back up the data properly.
Tar and compress the directories to make them easier to upload:
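A sketch, assuming the chain data lives under `/mnt/disks/ethereum` (adjust the paths to wherever you mounted the disk):

```bash
sudo tar -czvf geth.tar.gz /mnt/disks/ethereum/geth
sudo tar -czvf prysm.tar.gz /mnt/disks/ethereum/prysm
```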
Export your Web3.storage JWT API key as an environment variable called `TOKEN`:
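For example:

```bash
export TOKEN=<your-web3.storage-JWT-token>
```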
Bacalhau uses libp2p under the hood to communicate with other nodes on the network. Because Bacalhau is built using libp2p, the concept of peer identity is used to identify nodes on the network.
When you start a Bacalhau node using `bacalhau serve`, it will look for an RSA private key in the `~/.bacalhau` directory. If it doesn't find one, it will generate a new one and save it there.
You can override the directory where the private key is stored using the BACALHAU_PATH
environment variable.
Private keys are named after the port used for the libp2p connection, which defaults to `1235`. By default, when first starting a node, the private key will be stored in `~/.bacalhau/private_key.1235`.
The peer identity is derived from the private key and is used to identify the node on the network. You can get the peer identity of a node by running `bacalhau id`:
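For example:

```bash
bacalhau id
# prints the node's ID and peer identity
```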
By default, running `bacalhau serve` will connect to the following nodes (which are the default bootstrap nodes run by Protocol Labs):
Bacalhau uses libp2p to identify nodes on the network.
If you want to connect to other nodes and you know their peer IDs, you can use the `--peer` flag to specify additional peers to connect to (as a comma-separated list).
If you want to connect to a requester node and you know its IP but not its peer ID, you can use the following command, which will contact the requester API directly and ask for the current peer ID instead.
The default port the libp2p swarm listens on is 1235.
You can configure the swarm port using the `--port` flag:
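For example, to listen on port 1236 instead of the default:

```bash
bacalhau serve --port 1236
```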
To ensure that the node can communicate with other nodes on the network, make sure the swarm port is open and accessible by other nodes.
The Bacalhau node exposes a REST API that can be used to query the node for information.
The default port the REST API listens on is 1234.
The default network interface the REST API listens on is 0.0.0.0.
You can configure the REST API port using the `--api-port` flag:
You can also configure which network interface the REST API will bind to using the `--host` flag:
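For example, combining both flags to serve the API on a different port and bind it to localhost only:

```bash
bacalhau serve --api-port 4321 --host 127.0.0.1
```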
:::tip You can use the `--host` flag to restrict network access to the REST API. :::
Once you run the command above, you'll get a CID output:
Oceanography data conversion with Bacalhau
The Surface Ocean CO₂ Atlas (SOCAT) contains measurements of the fugacity of CO₂ in seawater around the globe. But to calculate how much carbon the ocean is taking up from the atmosphere, these measurements need to be converted to the partial pressure of CO₂. We will convert the units by combining measurements of the surface temperature and fugacity. Python libraries (xarray, pandas, numpy) and the pyseaflux package facilitate this process.
In this example tutorial, we will investigate the data and convert the workload so that it can be executed on the Bacalhau network, to take advantage of the distributed storage and compute resources.
Running oceanography dataset with Bacalhau
To get started, you need to install the Bacalhau client, see more information
The raw data is available on the . We will use the dataset in the "Gridded" format to perform this calculation. First, let's take a quick look at some data:
Next, let's write the `requirements.txt` and install the dependencies. This file will also be used by the Dockerfile to install the dependencies.
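A plausible sketch of that file, based on the libraries named above (pin versions to whatever you have tested with):

```text
xarray
pandas
numpy
pyseaflux
```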
Installing dependencies
We can see that the dataset contains lat-long coordinates, the date, and a series of seawater measurements. Above you can see a plot of the average surface sea temperature (sst) between 2010-2020, where recording buoys and boats have traveled.
To execute this workload on the Bacalhau network we need to perform three steps:
Upload the data to IPFS
Create a docker image with the code and dependencies
Run a Bacalhau job with the docker image using the IPFS data
For the purposes of this example:
Pinned the data to IPFS
This resulted in the IPFS CID of `bafybeidunikexxu5qtuwc7eosjpuw6a75lxo7j5ezf3zurv52vbrmqwf6y`.
We will create a `Dockerfile` and add the desired configuration to the file. These commands specify how the image will be built, and what extra requirements will be included.
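A minimal sketch of such a Dockerfile, assuming the analysis script is called `main.py` (the base image and file names are illustrative, not prescribed by this tutorial):

```dockerfile
FROM python:3.9-slim
WORKDIR /project
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY main.py .
CMD ["python", "main.py"]
```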
We will run the `docker build` command to build the container. Before running the command, replace:
repo-name with the name of the container; you can name it anything you want
tag — this is not required, but you can use the latest tag
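For example:

```bash
docker build -t <hub-user>/<repo-name>:<tag> .
```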
Now you can push this repository to the registry designated by its name or tag.
Now that we have the data in IPFS and the Docker image pushed, the next step is to run a job using the `bacalhau docker run` command.
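A sketch of the submission, assuming the image you pushed above and an entry script called `main.py`:

```bash
bacalhau docker run \
  -i ipfs://bafybeidunikexxu5qtuwc7eosjpuw6a75lxo7j5ezf3zurv52vbrmqwf6y \
  <hub-user>/<repo-name>:<tag> \
  -- python main.py
```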
When a job is submitted, Bacalhau prints out the related `job_id`. We store that in an environment variable so that we can reuse it later on.
Job status: You can check the status of the job using `bacalhau list`.
When it says `Published` or `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau describe`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
To view the file, run the following command:
Converting from CSV to Parquet or Avro reduces the size of the file and allows for faster read and write speeds. With Bacalhau, you can convert your CSV files stored on IPFS or on the web without the need to download files and install dependencies locally.
In this example tutorial, we will convert a CSV file from a URL to Parquet format and save the converted Parquet file to IPFS.
Converting CSV stored in public storage with Bacalhau
To get started, you need to install the Bacalhau client, see more information
Installing dependencies
Run the following commands:
Viewing the parquet file
:::info You can skip this section entirely and directly go to running on Bacalhau :::
To build your own docker container, create a Dockerfile
, which contains instructions to build your image.
We will run the `docker build` command to build the container. Before running the command, replace:
repo-name with the name of the container; you can name it anything you want
tag — this is not required, but you can use the latest tag
In our case:
Next, upload the image to the registry. This can be done by using the Docker hub username, repo name or tag.
In our case:
To submit a job, we are going to mount the script either from IPFS or from a URL.
With the command below, we are mounting the CSV file for transactions from IPFS.
Let's look closely at the command above:
`bacalhau docker run`: call to Bacalhau
`-i ipfs://QmTAQMGiSv9xocaB4PUCT5nSBHrf9HZrYj21BAZ5nMTY2W`: CIDs to use on the job. Mounts them at '/inputs' in the execution.
`jsacex/csv-to-arrow-or-parque`: the name and the tag of the Docker image we are using
`../inputs/movies.csv`: path to the input dataset
`../outputs/movies.parquet parquet`: path to the output
`python3 src/converter.py`: execute the script
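Putting those pieces together, the full command looks roughly like the sketch below (reconstructed from the breakdown above):

```bash
bacalhau docker run \
  -i ipfs://QmTAQMGiSv9xocaB4PUCT5nSBHrf9HZrYj21BAZ5nMTY2W \
  jsacex/csv-to-arrow-or-parque \
  -- python3 src/converter.py ../inputs/movies.csv ../outputs/movies.parquet parquet
```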
To mount the CSV file from a URL
Let's look closely at the command above:
`bacalhau docker run`: call to Bacalhau
`-i https://raw.githubusercontent.com/bacalhau-project/csv_to_avro_or_parquet/master/movies.csv`: path of the input data volume downloaded from a URL source
`jsacex/csv-to-arrow-or-parque`: the name and the tag of the Docker image we are using
`../inputs/movies.csv`: path to the input dataset
`../outputs/movies.parquet parquet`: path to the output
`python3 src/converter.py`: execute the script
When a job is submitted, Bacalhau prints out the related `job_id`. We store that in an environment variable so that we can reuse it later on.
Job status: You can check the status of the job using `bacalhau list`.
:::note Replace the `{JOB_ID}` with your generated ID. :::
When it says `Published` or `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau describe`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
To view the file, run the following command:
Alternatively, you can do this.
It is possible to run Bacalhau completely disconnected from the main Bacalhau network so that you can run private workloads without risking running on public nodes or inadvertently sharing your data outside of your organization. The isolated network will not connect to the public Bacalhau network nor connect to a public network. To do this, we will run our network in-process rather than externally.
:::info A private network and storage is easier to set up, but a separate public server is better for production. The private network and storage will use a temporary directory for its repository and so the contents will be lost on shutdown. :::
The first step is to start up the initial node, which we will use as the requester node. This node will connect to nothing but will listen for connections.
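A sketch of that first command, matching the flags referenced in the tip below:

```bash
bacalhau serve --node-type requester
```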
This will produce output similar to this:
To connect another node to this private one, run the following command in your shell:
:::tip The exact command will be different on each computer and is outputted by the `bacalhau serve --node-type requester ...` command. :::
The command `bacalhau serve --private-internal-ipfs --peer ...` starts up a compute node and adds it to the cluster.
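A hedged sketch of what that join command looks like; use the exact command printed by your own requester node, because the peer multiaddress is unique to it:

```bash
bacalhau serve \
  --node-type compute \
  --private-internal-ipfs \
  --peer /ip4/<requester-ip>/tcp/1235/p2p/<requester-peer-id>
```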
To use this cluster from the client, run the following commands in your shell:
:::tip The exact command will be different on each computer and is outputted by the `bacalhau serve --node-type requester ...` command. :::
The `export BACALHAU_IPFS_SWARM_ADDRESSES=...` command configures the client so that jobs are sent into the cluster from the command line.
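A hedged sketch of the client-side environment; copy the exact values printed by your requester node:

```bash
export BACALHAU_IPFS_SWARM_ADDRESSES=/ip4/<requester-ip>/tcp/<port>/p2p/<ipfs-peer-id>
export BACALHAU_API_HOST=<requester-ip>
export BACALHAU_API_PORT=1234
```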
Instructions for connecting to the public IPFS network via the private Bacalhau cluster:
On all nodes, start ipfs:
Then run the following command in your shell:
On the first node execute the following:
Monitor the output log for: 11:16:03.827 | DBG pkg/transport/bprotocol/compute_handler.go:39 > ComputeHandler started on host QmWXAaSHbbP7mU4GrqDhkgUkX9EscfAHPMCHbrBSUi4A35
On all other nodes execute the following:
Replace the values in the command above with your own value
Here is our example:
Then from any client set the following before invoking your Bacalhau job:
A private cluster is a network of Bacalhau nodes completely isolated from any public node. That means you can safely process private jobs and data on your cloud or on-premise hosts!
Install Bacalhau with `curl -sL https://get.bacalhau.org/install.sh | bash` on every host
Run `bacalhau serve` only on one host; this will be our "bootstrap" machine
Copy and paste the command it outputs under the "To connect another node to this private one, run the following command in your shell..." line to the other hosts
Copy and paste the env vars it outputs under the "To use this requester node from the client, run the following commands in your shell..." line to a client machine
Run `bacalhau docker run ubuntu echo hello` on the client machine
How to process images stored in IPFS with Bacalhau
In this example tutorial, we will show you how to use Bacalhau to process images on a .
Bacalhau has the unique capability of operating at a massive scale in a distributed environment. This is made possible because data is naturally sharded across the IPFS network amongst many providers. We can take advantage of this to process images in parallel.
Processing of images from a dataset using Bacalhau
To get started, you need to install the Bacalhau client, see more information
To submit a workload to Bacalhau, we will use the `bacalhau docker run` command.
The job has been submitted and Bacalhau has printed out the related job id. We store that in an environment variable so that we can reuse it later on.
Bacalhau also mounts a data volume to store output data. The `bacalhau docker run` command creates an output data volume mounted at `/outputs`. This is a convenient location to store the results of your job.
Job status: You can check the status of the job using `bacalhau list`.
When it says `Published` or `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau describe`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
After the download has finished you should see the following contents in the results directory.
To view the file, run the following command:
To view the images, we will use glob to return all file paths that match a specific pattern.
This example tutorial demonstrates how to use Stable Diffusion on a GPU and run it on the Bacalhau network. Stable Diffusion is a state of the art text-to-image model that generates images from text and was developed as an open-source alternative to DALL·E 2. It is based on a Diffusion Probabilistic Model and uses a Transformer to generate images from text.
To get started, you need to install the Bacalhau client, see more information
Here is an example of an image generated by this model.
:::info When you run this code for the first time, it will download the pre-trained weights, which may add a short delay. :::
When running this code, if you check the GPU RAM usage, you'll see that it's sucked up many GBs, and depending on what GPU you're running, it may OOM (Out of memory) if you run this again.
You can try and reduce RAM usage by playing with batch sizes (although it is only set to 1 above!) or more carefully controlling the TensorFlow session.
To clear the GPU memory we will use numba. This won't be required when running in a single-shot manner.
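A minimal sketch of that cleanup with numba:

```python
from numba import cuda

# Reset the current CUDA device to release the GPU memory held by this process.
device = cuda.get_current_device()
device.reset()
```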
After writing the code the next step is to run the script.
The following presents additional parameters you can try:
`python main.py --p "cat with three eyes"`: to set the prompt
`python main.py --p "cat with three eyes" --n 100`: to set the number of iterations to 100
`python stable-diffusion.py --p "cat with three eyes" --b 2`: to set the batch size to 2 (number of images to generate)
We will run the `docker build` command to build the container. Before running the command, replace:
repo-name with the name of the container; you can name it anything you want
tag — this is not required, but you can use the latest tag
In our case
Next, upload the image to the registry. This can be done by using the Docker hub username, repo name or tag.
In our case
To submit a job, run the following Bacalhau command:
Let's look closely at the command above:
`bacalhau docker run`: call to Bacalhau
`--gpu 1`: number of GPUs
`ghcr.io/bacalhau-project/examples/stable-diffusion-gpu:0.0.1`: the name and the tag of the Docker image we are using
`../outputs`: path to the output
`python main.py`: execute the script
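Assembled from the breakdown above, the command looks roughly like this; the `--o`/`--p` script arguments are assumptions about the script's CLI, and the prompt is just an example:

```bash
bacalhau docker run \
  --gpu 1 \
  ghcr.io/bacalhau-project/examples/stable-diffusion-gpu:0.0.1 \
  -- python main.py --o ./outputs --p "an astronaut riding a horse"
```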
:::tip This will take about 5 minutes to complete and is mainly due to the cold-start GPU setup time. This is faster than the CPU version, but you might still want to grab some fruit or plan your lunchtime run. Furthermore, the container itself is about 10GB, so it might take a while to download on the node if it isn't cached. :::
When a job is submitted, Bacalhau prints out the related `job_id`. We store that in an environment variable so that we can reuse it later on.
Job status: You can check the status of the job using `bacalhau list`.
When it says `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau describe`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
After the download has finished you should see the following contents in the results directory.
To view the file, run the following command:
To display and view your image run the code below:
Dolly 2.0 is a groundbreaking, open-source, instruction-following Large Language Model (LLM) that has been fine-tuned on a human-generated instruction dataset and licensed for both research and commercial purposes. Developed using the EleutherAI Pythia model family, this 12-billion-parameter language model is built exclusively on a high-quality, human-generated instruction-following dataset contributed by Databricks employees.
The Dolly 2.0 package is open source, including the training code, dataset, and model weights, all available for commercial use. This unprecedented move empowers organizations to create, own, and customize robust LLMs capable of engaging in human-like interactions, without the need for API access fees or sharing data with third parties.
An NVIDIA GPU
Python
Creating the inference script
Install Docker on your local machine.
Sign up for a DockerHub account if you don't already have one. Steps
Step 1: Create a Dockerfile Create a new file named Dockerfile in your project directory with the following content:
This Dockerfile sets up a container with the necessary dependencies and installs the Segment Anything Model from its GitHub repository.
Step 2: Build the Docker Image In your terminal, navigate to the directory containing the Dockerfile and run the following command to build the Docker image:
Replace your-dockerhub-username with your actual DockerHub username. This command will build the Docker image and tag it with your DockerHub username and the name "sam".
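For example:

```bash
docker build -t your-dockerhub-username/sam .
```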
Step 3: Push the Docker Image to DockerHub Once the build process is complete, push the Docker image to DockerHub using the following command:
Again, replace your-dockerhub-username with your actual DockerHub username. This command will push the Docker image to your DockerHub repository.
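For example:

```bash
docker push your-dockerhub-username/sam
```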
`docker run`: Docker command to run a container from a specified image.
`--gpu 1`: Flag to specify the number of GPUs to use for the execution. In this case, 1 GPU will be used.
`-w /inputs`: Flag to set the working directory inside the container to `/inputs`.
`-i gitlfs://huggingface.co/databricks/dolly-v2-3b.git`: Flag to clone the Dolly V2-3B model from Hugging Face's repository using Git LFS. The files will be mounted to `/inputs/databricks/dolly-v2-3b`.
`-i https://gist.githubusercontent.com/js-ts/d35e2caa98b1c9a8f176b0b877e0c892/raw/3f020a6e789ceef0274c28fc522ebf91059a09a9/inference.py`: Flag to download the `inference.py` script from the provided URL. The file will be mounted to `/inputs/inference.py`.
`jsacex/dolly_inference:latest`: The name and the tag of the Docker image.
The command to run inference on the model: `python inference.py --prompt "Where is Earth located ?" --model_version "./databricks/dolly-v2-3b"`.
`inference.py`: The Python script that runs the inference process using the Dolly V2-3B model.
`--prompt "Where is Earth located ?"`: Specifies the text prompt to be used for the inference.
`--model_version "./databricks/dolly-v2-3b"`: Specifies the path to the Dolly V2-3B model. In this case, the model files are mounted to `/inputs/databricks/dolly-v2-3b`.
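Reassembled from the breakdown above, the full submission (presumably via `bacalhau docker run`) looks roughly like this:

```bash
bacalhau docker run \
  --gpu 1 \
  -w /inputs \
  -i gitlfs://huggingface.co/databricks/dolly-v2-3b.git \
  -i https://gist.githubusercontent.com/js-ts/d35e2caa98b1c9a8f176b0b877e0c892/raw/3f020a6e789ceef0274c28fc522ebf91059a09a9/inference.py \
  jsacex/dolly_inference:latest \
  -- python inference.py --prompt "Where is Earth located ?" --model_version "./databricks/dolly-v2-3b"
```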
If you do not have Docker on your system, you can follow the official installation instructions or just use the snippet below:
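One common option is Docker's convenience script (inspect it before piping anything to a shell):

```bash
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```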
Now make :
In both cases, we should have a multiaddress for our IPFS server that should look something like this:
To install a single IPFS node locally on Ubuntu you can follow the official instructions, or follow the steps below. We advise running the same IPFS version as the Bacalhau main network.
If you want to run the IPFS daemon as a service, here's an example systemd service file.
Don't forget we need to fetch an IPFS multiaddress pointing to our local node.
Firewall configuration is very specific to your network and we can't provide generic instructions for this step but if you need any help feel free to reach out on
to run `bacalhau serve`.
If you want to run Bacalhau as a service, here's an example systemd service file.
These commands join this node to the public Bacalhau network, congrats!
At this point, you probably have a number of questions for us. What incentive should you expect for running a public Bacalhau node? Please contact us on to further discuss this topic and for sharing your valuable feedback.
| Config property | Environment variable | Default value | Meaning |
|---|---|---|---|
See more information on how to containerize your script/app
hub-user with your Docker Hub username. If you don't have a Docker Hub account, follow the instructions to create one, and use the username of the account you created
For questions, and feedback, please reach out in our
The `bacalhau docker run` command allows one to pass an input data volume with a `-i ipfs://CID:path` argument just like Docker, except the left-hand side of the argument is a content identifier (CID). This results in Bacalhau mounting a data volume inside the container. By default, Bacalhau mounts the input volume at the path `/inputs` inside the container.
You can call with the POST body as a JSON serialized spec
To convert the data from fugacity of CO2 (fCO2) to partial pressure of CO2 (pCO2), we will combine the measurements of the surface temperature and fugacity. The conversion is performed by the pyseaflux package.
The first step is to upload the data to IPFS. The simplest way to do this is to use a third-party service to "pin" data to the IPFS network, to ensure that the data exists and is available. To do this you need an account with a pinning service like web3.storage or Pinata. Once registered you can use their UI or API or SDKs to upload files.
Downloaded the latest monthly data from the
Downloaded the latest long-term global sea surface temperature data from - information about that dataset can be found .
hub-user with your Docker Hub username. If you don't have a Docker Hub account, follow the instructions to create one, and use the username of the account you created
:::tip For more information about working with custom containers, see the . :::
For questions, and feedback, please reach out in our
:::info See more information on how to containerize your script/app :::
hub-user with your Docker Hub username. If you don't have a Docker Hub account, follow the instructions to create one, and use the username of the account you created
For questions, and feedback, please reach out in our
Good news. Spinning up a private cluster is really a piece of cake:
That's all folks!
Optionally, set up systemd units to make the Bacalhau daemons permanent; here's an example service file.
Please contact us on the #bacalhau channel for questions and feedback!
The `bacalhau docker run` command allows one to pass an input data volume with a `-i ipfs://CID:path` argument just like Docker, except the left-hand side of the argument is a content identifier (CID). This results in Bacalhau mounting a data volume inside the container. By default, Bacalhau mounts the input volume at the path `/inputs` inside the container.
For questions, feedback, please reach out in our
This stable diffusion example is based on the . You might also be interested in the Pytorch oriented .
Based on the requirements, we will install the following:
We have sample code from the repo which we will use to check that the code is working as expected. Our output for this code will be a DSLR photograph of an astronaut riding a horse.
You need a script to execute when we submit jobs. The code below is a slightly modified version of the code we ran above; it adds features such as argument parsing so that you can customize the generator.
:::info For a full list of arguments that you can pass to the script, see more information :::
Docker is the easiest way to run TensorFlow on a GPU since the host machine only requires the NVIDIA driver. To containerize the inference code, we will create a `Dockerfile`. The Dockerfile is a text document that contains the commands that specify how the image will be built.
The Dockerfile leverages the latest official TensorFlow GPU image and then installs other dependencies like `git`, CUDA packages, and other image-related necessities. See the documentation for the expected requirements.
:::info See more information on how to containerize your script/app :::
hub-user with your Docker Hub username. If you don't have a Docker Hub account, follow the instructions to create one, and use the username of the account you created
The Bacalhau command passes a prompt to the model and generates an image in the outputs directory. The main difference in the example below, compared to all the other examples, is the addition of the `--gpu X` flag, which tells Bacalhau to only schedule the job on nodes that have `X` GPUs free. You can find more information about GPU support in the documentation.
To get started, you need to install the Bacalhau client, see more information
| Config property | `serve` flag | Default value | Meaning |
|---|---|---|---|
| Node.Compute.JobSelection.Locality | `--job-selection-data-locality` | Anywhere | Only accept jobs that reference data we have locally ("local") or anywhere ("anywhere"). |
| Node.Compute.JobSelection.ProbeExec | `--job-selection-probe-exec` | unused | Use the result of an external program to decide if we should take on the job. |
| Node.Compute.JobSelection.ProbeHttp | `--job-selection-probe-http` | unused | Use the result of a HTTP POST to decide if we should take on the job. |
| Node.Compute.JobSelection.RejectStatelessJobs | `--job-selection-reject-stateless` | False | Reject jobs that don't specify any input data. |
| Node.Compute.JobSelection.AcceptNetworkedJobs | `--job-selection-accept-networked` | False | Accept jobs that require network connections. |
| Config property | Environment variable | Default value | Meaning |
|---|---|---|---|
| Update.SkipChecks | | False | If true, no update checks will be performed. |
| Update.CheckFrequency | | 24 hours | The minimum amount of time between automated update checks. Set as any duration of hours, minutes or seconds. |
| Update.CheckStatePath | | $BACALHAU_DIR/update.json | An absolute path where Bacalhau should store the date and time of the last check. |
This directory contains examples relating to molecular dynamics workloads. The goal is to provide a range of examples that show you how to work with Bacalhau in a variety of use cases.
Stable Diffusion is a state of the art text-to-image model that generates images from text and was developed as an open-source alternative to DALL·E 2. It is based on a Diffusion Probabilistic Model and uses a Transformer to generate images from text.
This example demonstrates how to use stable diffusion on a CPU and run it on the Bacalhau network. The first section describes the development of the code and the container. The second section demonstrates how to run the job using Bacalhau.
The following image is an example generated by this model.
The original text-to-image stable diffusion model was trained on a fleet of GPU machines, at great cost. To use this trained model for inference, you also need to run it on a GPU.
However, this isn't always desired or possible. One alternative is to use a project called OpenVINO from Intel that allows you to convert and optimize models from a variety of frameworks (and ONNX if your framework isn't directly supported) to run on a supported Intel CPU. This is what we will do in this example.
:::tip Heads up! This example takes about 10 minutes to generate an image on an average CPU. Whilst this demonstrates it is possible, it might not be practical. :::
In order to run this example you need:
A Debian-flavoured Linux (although you might be able to get it working on M1 macs)
The first step is to convert the trained stable diffusion models so that they work efficiently on a CPU using OpenVINO. The example is quite complex, so we have created a separate repository (which is a fork from Github user Sergei Belousov) to host the code. In summary, the code:
Downloads a pre-optimized OpenVINO version of ...
the original pre-trained stable diffusion model ...
which also leverages OpenAI's CLIP transformer ...
and is then wrapped inside an OpenVINO runtime, which reads in and executes the model.
The core code representing these tasks can be found in the `stable_diffusion_engine.py` file. This is a mashup that creates a pipeline necessary to tokenize the text and run the stable diffusion model. This boilerplate could be simplified by leveraging the more recent version of the diffusers library. But let's crack on.
Note that these dependencies are only known to work on Ubuntu-based x64 machines.
The following commands clone the example repository, and other required repositories, and install the Python dependencies.
Now that we have all the dependencies installed, we can call the `demo.py` wrapper, which is a simple CLI, to generate an image from a prompt.
Now we have a working example, we can convert it into a format that allows us to perform inference in a distributed environment.
First we will create a `Dockerfile` to containerize the inference code. The Dockerfile can be found in the repository, but is presented here to aid understanding.
This container is using the `python:3.9.9-bullseye` image and the working directory is set. Next, the Dockerfile installs the same dependencies from earlier in this notebook. Then we add our custom code and pull the dependent repositories.
We've already pushed this image to GHCR, but for posterity, you'd use a command like this to update it:
To run this example you will need:
Bacalhau installed and running
Bacalhau is a distributed computing platform that allows you to run jobs on a network of computers. It is designed to be easy to use and to run on a variety of hardware. In this example, we will use it to run the stable diffusion model on a CPU.
To submit a job, you can use the Bacalhau CLI. The following command passes a prompt to the model and generates an image in the outputs directory.
:::tip This will take about 10 minutes to complete. Go grab a coffee. Or a beer. Or both. If you want to block and wait for the job to complete, add the `--wait` flag. Furthermore, the container itself is about 15GB, so it might take a while to download on the node if it isn't cached. :::
Running the commands will output a UUID that represents the job that was created. You can check the status of the job with the following command:
Wait until it says `Completed` and then get the results.
To find out more information about your job, run the following command:
If you see that the job has completed and there are no errors, then you can download the results with the following command:
After the download has finished you should see the following contents in the results directory:
In this example tutorial, we will show you how to train a Pytorch RNN MNIST neural network model with Bacalhau. PyTorch is a framework developed by Facebook AI Research for deep learning, featuring both beginner-friendly debugging tools and a high level of customization for advanced users, with researchers and practitioners using it across companies like Facebook and Tesla. Applications include computer vision, natural language processing, cryptography, and more.
Running any type of Pytorch model with Bacalhau
To get started, you need to install the Bacalhau client, see more information here
To train our model locally, we will start by cloning the Pytorch examples repo
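Assuming the standard PyTorch examples repository is the one meant here:

```bash
git clone https://github.com/pytorch/examples
cd examples
```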
Install the following
Next, we run the command below to begin the training of the mnist_rnn model. We added the `--save-model` flag to save the model.
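A sketch of that training run from the root of the examples repo (the script path mirrors the repo layout):

```bash
python mnist_rnn/main.py --save-model
```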
Next, the downloaded MNIST dataset is saved in the `data` folder.
Now that we have downloaded our dataset, the next step is to upload it to IPFS. The simplest way to upload the data to IPFS is to use a third-party service to "pin" data to the IPFS network, to ensure that the data exists and is available. To do this you need an account with a pinning service like web3.storage or Pinata or NFT.Storage. Once registered you can use their UI or API or SDKs to upload files.
Once you have uploaded your data, copy the CID. Here is the dataset we have uploaded: https://gateway.pinata.cloud/ipfs/QmdeQjz1HQQdT9wT2NHX86Le9X6X6ySGxp8dfRUKPtgziw/?filename=data
After the repo image has been pushed to Docker Hub, we can now use the container for running on Bacalhau. To submit a job, run the following Bacalhau command:
`bacalhau docker run`: call to Bacalhau
`--gpu 1`: request 1 GPU to train the model
`pytorch/pytorch`: use the official PyTorch Docker image
`-i ipfs://QmdeQjz1HQQd.....`: mount the uploaded dataset at the input path
`-i https://raw.githubusercontent.com/py..........`: mount our training script; we will use the URL to this PyTorch example
`-w /outputs`: our working directory is /outputs. This is the folder where we will save the model, as it will automatically get uploaded to IPFS as outputs
`python ../inputs/main.py --save-model`: the URL script gets mounted to the /inputs folder in the container
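Assembled from the breakdown above; the training-script URL is left as a placeholder since it is truncated in the text:

```bash
bacalhau docker run \
  --gpu 1 \
  -i ipfs://QmdeQjz1HQQdT9wT2NHX86Le9X6X6ySGxp8dfRUKPtgziw \
  -i <training-script-URL> \
  -w /outputs \
  pytorch/pytorch \
  -- python ../inputs/main.py --save-model
```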
When a job is submitted, Bacalhau prints out the related `job_id`. We store that in an environment variable so that we can reuse it later on.
Job status: You can check the status of the job using `bacalhau list`.
When it says `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau describe`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
After the download has finished you should see the following contents in the results directory.
To view the file, run the following command:
In this example, we will demonstrate how to run inference on a model stored on Amazon S3. We will use a PyTorch model trained on the MNIST dataset.
Python
PyTorch
This script is designed to load a pretrained PyTorch model for MNIST digit classification from a tar.gz file, extract it, and use the model to perform inference on a given input image.
To use this script, you need to provide the paths to the tar.gz file containing the pre-trained model, the output directory where the model will be extracted, and the input image file for which you want to perform inference. The script will output the predicted digit (class) for the given input image.
To get started, you need to install the Bacalhau client, see more information here
`-w /inputs`: set the current working directory at /inputs in the container
`-i src=s3://sagemaker-sample-files/datasets/image/MNIST/model/pytorch-training-2020-11-21-22-02-56-203/model.tar.gz,dst=/model/,opt=region=us-east-1`: mount the S3 bucket at the destination path provided (`/model/`) and specify the region where the bucket is located (`opt=region=us-east-1`)
`-i git://github.com/js-ts/mnist-test.git`: mount the source code repo from GitHub. It would mount the repo at `/inputs/js-ts/mnist-test`; in this case it also contains the test image.
`pytorch/pytorch`: the name of the Docker image.
`-- python /inputs/js-ts/mnist-test/inference.py --tar_gz_file_path /model/model.tar.gz --output_directory /model-pth --image_path /inputs/js-ts/mnist-test/image.png`: the command to run inference on the model.
`/model/model.tar.gz` is the path to the model file.
`/model-pth` is the output directory for the model.
`/inputs/js-ts/mnist-test/image.png` is the path to the input image.
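Reassembled from the breakdown above (presumably submitted with `bacalhau docker run`):

```bash
bacalhau docker run \
  -w /inputs \
  -i src=s3://sagemaker-sample-files/datasets/image/MNIST/model/pytorch-training-2020-11-21-22-02-56-203/model.tar.gz,dst=/model/,opt=region=us-east-1 \
  -i git://github.com/js-ts/mnist-test.git \
  pytorch/pytorch \
  -- python /inputs/js-ts/mnist-test/inference.py --tar_gz_file_path /model/model.tar.gz --output_directory /model-pth --image_path /inputs/js-ts/mnist-test/image.png
```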
The job has been submitted and Bacalhau has printed out the related job id. We store that in an environment variable so that we can reuse it later on.
Whisper is an automatic speech recognition (ASR) system trained on 680,000 hours of multilingual and multitask supervised data collected from the web. We show that the use of such a large and diverse dataset leads to improved robustness to accents, background noise, and technical language. Moreover, it enables transcription in multiple languages, as well as translation from those languages into English. We are open-sourcing models and inference code to serve as a foundation for building useful applications and for further research on robust speech processing. In this example, we will transcribe an audio clip locally, containerize the script and then run the container on Bacalhau.
The advantage of using Bacalhau over managed Automatic Speech Recognition services is that you can run your own containers, which can scale to batch process petabytes of video or audio for automatic speech recognition.
Using OpenAI whisper with Bacalhau to process audio files
To get started, you need to install:
Bacalhau client, see more information here
Whisper,
pytorch
pandas
Before we create and run the script, we need a sample audio file to test the code. For that, we download a sample audio clip.
We will create a script that accepts parameters (input file path, output file path, temperature, etc.) and set the default parameters. Also:
If the input file is in mp4 format, then the script converts it to wav format.
Save the transcript in various formats.
We load the large model, then pass it the required parameters. This model is not limited to English and transcription; it supports other languages and also does translation, into the following languages:
Next, let's create an openai-whisper script:
Let's run the script with the default parameters:
Viewing the outputs
To build your own Docker container, create a `Dockerfile`, which contains instructions on how the image will be built, and what extra requirements will be included.
We choose `pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime` as our base image and then install all the dependencies. After that we add the test audio file and our openai-whisper script to the container; we also run a test command to check whether our script works inside the container and whether the container builds successfully.
:::info See more information on how to containerize your script/app here :::
We will run the `docker build` command to build the container. Before running the command, replace:
hub-user with your Docker Hub username. If you don't have a Docker Hub account, follow these instructions to create one, and use the username of the account you created
repo-name with the name of the container; you can name it anything you want
tag — this is not required, but you can use the latest tag
In our case
Next, upload the image to the registry. This can be done by using the Docker hub username, repo name or tag.
In our case
We will transcribe the moon landing video, which can be found here: https://www.nasa.gov/multimedia/hd/apollo11_hdpage.html
Since the downloaded video is in MOV format, we convert it to MP4 format and then upload it to our public storage, in this case IPFS. We will be using NFT.Storage (recommended option). To upload your dataset using NFTup, just drag and drop your directory and it will upload it to IPFS.
After the dataset has been uploaded, copy the CID: `bafybeielf6z4cd2nuey5arckect5bjmelhouvn5rhbjlvpvhp7erkrc4nu`
To submit a job, run the following Bacalhau command:
Let's look closely at the command above:
`-i ipfs://bafybeielf6z4cd2nuey5arckect5bjmelhouvn5r`: flag to mount the CID, which contains our file, to the container at the path /inputs
`-p inputs/Apollo_11_moonwalk_montage_720p.mp4`: the input path of our file
`-o outputs`: the path where to store the outputs
`--gpu`: here we request 1 GPU
`jsacex/whisper`: the name and the tag of the Docker image we are using
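Reassembled from the breakdown above; the script name `openai-whisper.py` is an assumption about how the script inside the container is named:

```bash
bacalhau docker run \
  --gpu 1 \
  -i ipfs://bafybeielf6z4cd2nuey5arckect5bjmelhouvn5rhbjlvpvhp7erkrc4nu \
  jsacex/whisper \
  -- python openai-whisper.py -p inputs/Apollo_11_moonwalk_montage_720p.mp4 -o outputs
```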
Job status: You can check the status of the job using `bacalhau list`.
When it says `Published` or `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau describe`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
To view the file, run the following command:
Parallel Video Resizing via File Sharding
Many data engineering workloads consist of embarrassingly parallel workloads where you want to run a simple execution on a large number of files. In this example tutorial, we will run a simple video filter on a large number of video files.
Running video files with Bacalhau
To get started, you need to install the Bacalhau client, see more information here
To submit a workload to Bacalhau, we will use the `bacalhau docker run` command.
The job has been submitted and Bacalhau has printed out the related job id. We store that in an environment variable so that we can reuse it later on.
The `bacalhau docker run` command allows one to pass input data volume with a `-i ipfs://CID:path` argument just like Docker, except the left-hand side of the argument is a content identifier (CID). This results in Bacalhau mounting a data volume inside the container. By default, Bacalhau mounts the input volume at the path `/inputs` inside the container.
We created 72px-wide video thumbnails for all the videos in the `inputs` directory. The `outputs` directory will contain the thumbnails for each video. We will shard by 1 video per job, and use the `linuxserver/ffmpeg` container to resize the videos.
:::tip Bacalhau overwrites the default entrypoint, so we must run the full command after the `--` argument. In this line you will list all of the mp4 files in the `/inputs` directory and execute `ffmpeg` against each instance. :::
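A hedged sketch of that job; the dataset CID is a placeholder, and the ffmpeg filter shown is one reasonable way to produce 72px-wide thumbnails:

```bash
bacalhau docker run \
  -i ipfs://<videos-CID> \
  linuxserver/ffmpeg \
  -- bash -c 'for f in /inputs/*.mp4; do ffmpeg -nostdin -i "$f" -vf scale=72:-2 "/outputs/scaled_$(basename "$f")"; done'
```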
Job status: You can check the status of the job using `bacalhau list`.
When it says `Published` or `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau describe`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
After the download has finished you should see the following contents in the results directory.
To view the file, run the following command:
To view the videos, we will use glob to return all file paths that match a specific pattern.
The downloaded results include the resized videos, for example scaled_Bird_flying_over_the_lake.mp4, scaled_Calm_waves_on_a_rocky_sea_gulf.mp4, and scaled_Prominent_Late_Gothic_styled_architecture.mp4.
For questions and feedback, please reach out in our forum.
In this example tutorial, we use Bacalhau and EasyOCR to digitize paper records, recognize characters, and extract text data from images stored on IPFS/Filecoin or on the web. EasyOCR is a ready-to-use OCR with 80+ supported languages and all popular writing scripts, including Latin, Chinese, Arabic, Devanagari, and Cyrillic. With EasyOCR you can use the pre-trained models or your own fine-tuned model.
Using Bacalhau and Easy OCR to extract text data from images stored on the web.
To get started, you need to install the Bacalhau client, see more information here
Install the required dependencies
Load the different example images
List all the images
To display an image from the list
Next, we create a reader to do OCR to get coordinates which represent a rectangle containing text and the text itself
:::tip You can skip this step and go straight to running a Bacalhau job :::
We will use the `Dockerfile` that is already created in the EasyOCR repo. Use the command below to clone the repo.
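Assuming the upstream EasyOCR repository is the one meant:

```bash
git clone https://github.com/JaidedAI/EasyOCR
cd EasyOCR
```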
The `docker build` command builds Docker images from a Dockerfile. Before running the command, replace:
hub-user with your Docker Hub username. If you don't have a Docker Hub account, follow these instructions to create one, and use the username of the account you created
repo-name with the name of the container; you can name it anything you want
tag — this is not required, but you can use the latest tag
Next, upload the image to the registry. This can be done by using the Docker hub username, repo name, or tag.
After the repo image has been pushed to Docker Hub, we can now use the container for running on Bacalhau. To submit a job, run the following Bacalhau command:
Since the model and the image aren't present in the container, we will mount the image from a URL and the model from IPFS. You can find models to download from here. You can choose the model you want to use; in this case, we will be using the zh_sim_g2 model.
`-i ipfs://bafybeibvc......`: mount the model from IPFS
`-i https://raw.githubusercontent.com.........`: mount the input image from a URL
`--gpu 1`: specify the number of GPUs
`jsacex/easyocr`: name of the Docker image
Breaking up the easyocr command:
`-- easyocr -l ch_sim en -f ./inputs/chinese.jpg --detail=1 --gpu=True`
`-l`: the name of the model, which is ch_sim
`-f`: path to the input image or directory
`--detail=1`: level of detail
`--gpu=True`: we set this flag to true since we are running inference on a GPU; if you run this on a CPU, set it to False
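Reassembled from the breakdown above; the model CID and image URL are placeholders because they are truncated in the text:

```bash
bacalhau docker run \
  --gpu 1 \
  -i ipfs://<model-CID> \
  -i <input-image-URL> \
  jsacex/easyocr \
  -- easyocr -l ch_sim en -f ./inputs/chinese.jpg --detail=1 --gpu=True
```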
When a job is submitted, Bacalhau prints out the related `job_id`. We store that in an environment variable so that we can reuse it later on.
Job status: You can check the status of the job using `bacalhau list`.
When it says `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau describe`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
After the download has finished you should see the following contents in the results directory
To view the file, run the following command:
In this example tutorial, we will show you how to generate realistic images with StyleGAN3 and Bacalhau. StyleGAN is based on Generative Adversarial Networks (GANs), which include a generator and discriminator network that has been trained to differentiate images generated by the generator from real images. However, during the training, the generator tries to fool the discriminator, which results in the generation of realistic-looking images. With StyleGAN3 we can generate realistic-looking images or videos. It can generate not only human faces but also animals, cars, and landscapes.
Generative images with Bacalhau
To get started, you need to install the Bacalhau client, see more information here
To run StyleGAN3 locally, you'll need to clone the repo, install dependencies and download the model weights.
Generate an image using a pre-trained `AFHQv2` model
Viewing the output image
To build your own Docker container, create a `Dockerfile`, which contains instructions to build your image.
:::info See more information on how to containerize your script/app here :::
We will run the `docker build` command to build the container. Before running the command, replace:
hub-user with your Docker Hub username. If you don't have a Docker Hub account, follow these instructions to create one, and use the username of the account you created
repo-name with the name of the container; you can name it anything you want
tag — this is not required, but you can use the latest tag
In our case
Next, upload the image to the registry. This can be done by using the Docker hub username, repo name or tag.
In our case
To submit a job, run the following Bacalhau command:
Let's look closely at the command above:
`bacalhau docker run`: call to Bacalhau
`--gpu 1`: number of GPUs
`jsacex/stylegan3`: the name and the tag of the Docker image we are using
`../outputs`: path to the output
`python gen_images.py`: execute the script
`--trunc=1 --seeds=2 --network=stylegan3-r-afhqv2-512x512.pkl`: the animation length is either determined based on the --seeds value or explicitly specified using the --num-keyframes option. When num keyframes are specified with --num-keyframes, the output video length will be 'num_keyframes*w_frames' frames.
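Reassembled from the breakdown above; passing the output directory as `--outdir` is an assumption about the StyleGAN3 script's CLI:

```bash
bacalhau docker run \
  --gpu 1 \
  jsacex/stylegan3 \
  -- python gen_images.py --outdir=../outputs --trunc=1 --seeds=2 --network=stylegan3-r-afhqv2-512x512.pkl
```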
You can also run variations of this command to generate videos and other things. In the following command below, we will render a latent vector interpolation video. This will render a 4x2 grid of interpolations for seeds 0 through 31.
Let's look closely at the command above:
`bacalhau docker run`: call to Bacalhau
`--gpu 1`: number of GPUs
`jsacex/stylegan3`: the name and the tag of the Docker image we are using
`../outputs`: path to the output
`python gen_images.py`: execute the script
`--trunc=1 --seeds=2 --network=stylegan3-r-afhqv2-512x512.pkl`: the animation length is either determined based on the --seeds value or explicitly specified using the --num-keyframes option. When num keyframes are specified with --num-keyframes, the output video length will be 'num_keyframes_w_frames' frames. If --num-keyframes is not specified, the number of seeds given with --seeds must be divisible by grid size W_H (--grid). In this case, the output video length will be '# seeds/(w*h)*w_frames' frames.
When a job is submitted, Bacalhau prints out the related `job_id`. We store that in an environment variable so that we can reuse it later on.
Job status: You can check the status of the job using `bacalhau list`.
When it says `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau describe`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
After the download has finished you should see the following contents in the results directory
To view the file, run the following command:
The identification and localization of objects in images and videos is a computer vision task called object detection. Several algorithms have emerged in the past few years to tackle the problem. One of the most popular algorithms to date for real-time object detection is YOLO (You Only Look Once), initially proposed by Redmon et al.[1]
Traditionally, models like YOLO required enormous amounts of training data to yield reasonable results. People might not have access to such high-quality labeled data. Thankfully, open-source communities and researchers have made it possible to utilize pre-trained models to perform inference. In other words, you can use models that have already been trained on large datasets to perform object detection on your own data.
In this tutorial you will perform an end-to-end object detection inference, using the YOLOv5 Docker Image developed by Ultralytics.
Performing object detection inference using Yolov5 and Bacalhau
To get started, you need to install the Bacalhau client, see more information here
Bacalhau is a highly scalable decentralized computing platform and is well suited to running massive object detection jobs. In this example, you can take advantage of the GPUs available on the Bacalhau network.
To get started, let's run a test job with a small sample dataset that is included in the YOLOv5 Docker Image. This will give you a chance to familiarise yourself with the process of running a job on Bacalhau.
In addition to the usual Bacalhau flags, you will also see:
`--gpu 1` to specify the use of a GPU
:::tip Remember that Bacalhau does not provide any network connectivity when running a job. All assets must be provided at submission time. :::
The model requires pre-trained weights to run and by default downloads them from within the container. Bacalhau jobs don't have network access so we will pass in the weights at submission time, saving them to `/usr/src/app/yolov5s.pt`. You may also provide your own weights here.
The container has its own options that we must specify:
`--input` to select which pre-trained weights you desire, with details on the yolov5 release page
`--project` specifies the output volume that the model will save its results to. Bacalhau defaults to using `/outputs` as the output directory, so we save it there.
For more container flags, refer to the `yolov5/detect.py` file in the YOLO repository.
One final additional hack that we have to do is move the weights file to a location with the standard name. As of writing this, Bacalhau downloads the file to a UUID-named file, which the model is not expecting. This is because GitHub 302 redirects the request to a random file in its backend.
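A rough sketch of that idea; the weights URL, image tag, and copy step are illustrative assumptions rather than the exact command used here:

```bash
bacalhau docker run \
  --gpu 1 \
  -i <yolov5s-weights-URL> \
  ultralytics/yolov5:latest \
  -- /bin/bash -c 'find /inputs -type f -exec cp {} /usr/src/app/yolov5s.pt \; && python detect.py --project /outputs'
```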
This should output a UUID (like `59c59bfb-4ef8-45ac-9f4b-f0e9afd26e70`). This is the ID of the job that was created. You can check the status of the job with the following command:
Job status: You can check the status of the job using `bacalhau list`.
When it says `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau describe`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
After the download has finished we can see the results:
Now let's use some custom images. First, you will need to ingest your images onto IPFS/Filecoin. For more information about how to do that see the data ingestion section.
This example will use the Cyclist Dataset for Object Detection | Kaggle dataset.
We have already uploaded this dataset to Filecoin under the CID: `bafybeicyuddgg4iliqzkx57twgshjluo2jtmlovovlx5lmgp5uoh3zrvpm`. You can browse to this dataset via a HTTP IPFS proxy.
Let's run the same job again, but this time use the images above.
When a job is submitted, Bacalhau prints out the related `job_id`. We store that in an environment variable so that we can reuse it later on.
Job status: You can check the status of the job using `bacalhau list`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
TensorFlow is an open-source machine learning software library used, among other things, to train neural networks. Computation is expressed in the form of stateful dataflow graphs, where each node in the graph represents an operation performed by a neural network on multi-dimensional arrays. These multi-dimensional arrays are commonly known as "tensors", hence the name TensorFlow. In this example, we will be training an MNIST model.
Running any type of TensorFlow model with Bacalhau
This section is adapted from the TensorFlow 2 quickstart for beginners.
This short introduction uses Keras to:
Load a prebuilt dataset.
Build a neural network machine learning model that classifies images.
Train this neural network.
Evaluate the accuracy of the model.
Import TensorFlow into your program to check whether it is installed:
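A sketch of this first step, following the TensorFlow 2 quickstart that this section is based on; the MNIST dataset is loaded and scaled to the [0, 1] range:

```python
import tensorflow as tf

print("TensorFlow version:", tf.__version__)

# Load the MNIST dataset and scale pixel values from integers in [0, 255] to floats in [0, 1].
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
```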
Build a `tf.keras.Sequential` model by stacking layers.
For each example, the model returns a vector of logits or log-odds scores, one for each class.
The `tf.nn.softmax` function converts these logits to probabilities for each class:
Note: It is possible to bake the `tf.nn.softmax` function into the activation function for the last layer of the network. While this can make the model output more directly interpretable, this approach is discouraged as it's impossible to provide an exact and numerically stable loss calculation for all models when using a softmax output.
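Continuing the quickstart sketch, the model below stacks Flatten, Dense, and Dropout layers; calling it on one training example returns raw logits, which `tf.nn.softmax` turns into class probabilities:

```python
# Stack layers into a simple feed-forward classifier for 28x28 MNIST images.
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)  # one logit per digit class
])

# Logits for the first training example, and the corresponding class probabilities.
predictions = model(x_train[:1]).numpy()
tf.nn.softmax(predictions).numpy()
```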
Define a loss function for training using `losses.SparseCategoricalCrossentropy`, which takes a vector of logits and a `True` index and returns a scalar loss for each example.
This loss is equal to the negative log probability of the true class: The loss is zero if the model is sure of the correct class.
This untrained model gives probabilities close to random (1/10 for each class), so the initial loss should be close to `-tf.math.log(1/10) ~= 2.3`.
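A sketch of the loss definition; evaluated on the untrained model's logits it should come out near 2.3, as noted above:

```python
# Cross-entropy loss that expects raw logits rather than probabilities.
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

# On the untrained model this should be close to -log(1/10) ~= 2.3.
loss_fn(y_train[:1], predictions).numpy()
```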
Before you start training, configure and compile the model using Keras `Model.compile`. Set the `optimizer` class to `adam`, set the `loss` to the `loss_fn` function you defined earlier, and specify a metric to be evaluated for the model by setting the `metrics` parameter to `accuracy`.
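As a sketch, that compile step looks like:

```python
# Wire together the optimizer, the loss defined above, and the metric to report.
model.compile(optimizer='adam',
              loss=loss_fn,
              metrics=['accuracy'])
```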
Use the `Model.fit` method to adjust your model parameters and minimize the loss:
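```python
# Train for a few epochs; 5 matches the quickstart and is enough for ~98% accuracy.
model.fit(x_train, y_train, epochs=5)
```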
The `Model.evaluate` method checks the model's performance, usually on a "validation set" or "test set".
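```python
# Measure loss and accuracy on the held-out test set.
model.evaluate(x_test, y_test, verbose=2)
```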
The image classifier is now trained to ~98% accuracy on this dataset. To learn more, read the TensorFlow tutorials.
If you want your model to return a probability, you can wrap the trained model and attach the softmax to it:
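```python
# Wrap the trained model so it returns probabilities instead of raw logits.
probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])
probability_model(x_test[:5])
```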
The following method can be used to save the model as a checkpoint:
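The exact snippet used in the original training script is not shown here; one common Keras option (the checkpoint path is illustrative) is:

```python
# Save only the learned weights; they can be restored later with model.load_weights().
model.save_weights('mnist_checkpoint/checkpoint')
```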
You can use a tool like `nbconvert` to convert your Python notebook into a script.
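For example, assuming the notebook is named train.ipynb (the filename is an assumption), a command like this produces train.py:

```bash
# Convert the notebook into a plain Python script.
jupyter nbconvert --to script train.ipynb
```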
After that, you can create a gist of the training script at gist.github.com and copy the raw link of the gist.
Testing whether the script works
The dataset and the script are mounted to the TensorFlow container using a URL; we then run the script inside the container.
Structure of the command:

- `-i https://gist.githubusercontent.com/js-ts/e7d32c7d19ffde7811c683d4fcb1a219/raw/ff44ac5b157d231f464f4d43ce0e05bccb4c1d7b/train.py`: mount the training script
- `-i https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz`: mount the dataset
- `tensorflow/tensorflow`: specify the Docker image
- `python train.py`: execute the script

By default, whatever URL you mount using the `-i` flag gets mounted at the path `/inputs`, so we choose that as our working directory with `-w /inputs`. The assembled command is sketched below.
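Putting those pieces together, the submission looks roughly like this (a sketch assembled from the flags listed above):

```bash
# Mount the training script and the MNIST dataset by URL, work from /inputs,
# and run the script inside the stock TensorFlow image.
bacalhau docker run \
  -w /inputs \
  -i https://gist.githubusercontent.com/js-ts/e7d32c7d19ffde7811c683d4fcb1a219/raw/ff44ac5b157d231f464f4d43ce0e05bccb4c1d7b/train.py \
  -i https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz \
  tensorflow/tensorflow \
  -- python train.py
```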
Where it says `Completed`, that means the job is done, and we can get the results.
To find out more information about your job, run the following command:
Stable Diffusion has revolutionized text-to-image models by producing high-quality images based on a prompt. Dreambooth is an approach for the personalization of text-to-image diffusion models. With a few images of a subject as input, we can fine-tune a pretrained text-to-image model.
Although the Dreambooth paper used Imagen to fine-tune the pretrained model, both the Imagen model and the original Dreambooth code are closed source, so several open-source projects have emerged that use Stable Diffusion instead.
Dreambooth makes Stable Diffusion even more powerful, adding the ability to generate realistic-looking pictures of humans, animals, or any other subject by training on just 20-30 images.
In this example tutorial, we will be fine-tuning a pretrained Stable Diffusion model using images of a human and generating images of him drinking coffee.
The following command generates the following:
Subject: SBF
Prompt: a photo of SBF without hair
Output:
To get started, you need to install the Bacalhau client; see more information here
:::info You can skip this section entirely and directly go to running a job on Bacalhau :::
Building this container requires a supported GPU with 16GB+ of memory, since the process can be resource-intensive.
We will create a `Dockerfile` and add the desired configuration to the file. These commands specify how the image will be built, and what extra requirements will be included.
This container uses the `pytorch/pytorch:1.12.1-cuda11.3-cudnn8-devel` image, and the working directory is set. Next, we add our custom code and pull the dependent repositories.
The shell script is there to make things simpler, since the command to train the model takes many parameters and later converts the model weights to a checkpoint. You can edit this script and add in your own parameters.
To download the models and run a test job in the Dockerfile, copy the following:
finetune.sh
We will run the `docker build` command to build the container.
Before running the command, replace:

- `hub-user` with your Docker Hub username. If you don't have a Docker Hub account, follow these instructions to create one, and use the username of the account you created.
- `repo-name` with the name of the container; you can name it anything you want.
- `tag`: this is not required, but you can use the `latest` tag.
Now you can push this repository to the registry designated by its name or tag.
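As a sketch, with the placeholders described above filled in:

```bash
# Build the image locally, then push it to Docker Hub so Bacalhau nodes can pull it.
docker build -t <hub-user>/<repo-name>:<tag> .
docker push <hub-user>/<repo-name>:<tag>
```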
The optimal dataset size is between 20-30 images. You can choose the images of the subject in different positions, full body images, half body, pictures of the face etc.
Only the subject should appear in the image, so crop the image to just fit the subject. Make sure that the images are 512x512 in size and named in a consistent pattern; since the subject's name is David Aronchick, we name the images in the following pattern:
You can view the Subject Image dataset of David Aronchick for reference
After the subject dataset is created, we upload it to IPFS.
In this case, we will be using NFT.Storage (recommended option) to upload files and directories with NFTUp.
To upload your dataset using NFTUp, just drag and drop your directory and it will be uploaded to IPFS.
After the dataset has been uploaded, copy its CID, which will look like `bafybeidqbuphwkqwgrobv2vakwsh3l6b4q2mx7xspgh4l7lhulhc3dfa7a`.
Since there are a lot of combinations that you can try, processing a fine-tuned model can take 1hr+ to complete. Here are a few approaches that you can try based on your requirements.
Structure of the command:

- Number of GPUs: `--gpu 1`
- CID of the subject images: `-i ipfs://bafybeidqbuphwkqwgrobv2vakwsh3l6b4q2mx7xspgh4l7lhulhc3dfa7a`
- Name of our image: `jsacex/dreambooth:latest`
- `-- bash finetune.sh /inputs /outputs "a photo of aronchick man" "a photo of man" 3000 "/man"`, where:
  - `/inputs` is the path to the subject images
  - `/outputs` is the path to save the generated outputs
  - "a photo of aronchick man" is the subject name along with its class ("a photo of < name of the subject > < class >")
  - "a photo of man" is the name of the class ("a photo of < class >")
  - 3000 is the number of iterations. This should be the number of subject images x 100, so if there are 30 images, it would be 3000. 3000 iterations take around 32 minutes on a V100, but you can increase or decrease the number based on your requirements.
  - `/man` is the path to our class images
Here is our command with our parameters replaced:
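A sketch assembled from the pieces listed above (the image tag is taken from the list; the original command may include additional flags such as timeouts):

```bash
# Fine-tune on the subject images mounted from IPFS, writing the checkpoint to /outputs.
bacalhau docker run \
  --gpu 1 \
  -i ipfs://bafybeidqbuphwkqwgrobv2vakwsh3l6b4q2mx7xspgh4l7lhulhc3dfa7a \
  jsacex/dreambooth:latest \
  -- bash finetune.sh /inputs /outputs "a photo of aronchick man" "a photo of man" 3000 "/man"
```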
If your subject fits the above class but has a different name, you just need to replace the input CID and the subject name (which in this case is SBF).
Depending on your subject's class, you can also:

- Use the `/woman` class images
- Provide your own regularization images or use the mix class
- Use the `/mix` class images if the class of the subject is mix
You can upload the model to IPFS, then create a gist and mount the model and script to the lightweight container.
When a job is submitted, Bacalhau prints out the related `job_id`. We store that in an environment variable so that we can reuse it later on.
Job status: You can check the status of the job using `bacalhau list`.
When it says `Published` or `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau describe`.
In the next steps, we will be doing inference on the fine-tuned model.
:::info Refer to https://docs.bacalhau.org/examples/model-inference/Stable-Diffusion-CKPT-Inference for details on how to build a Stable Diffusion (SD) inference container. :::
Bacalhau currently doesn't support mounting subpaths of a CID, so instead of mounting just the model.ckpt file we would need to mount the whole output CID, which is 6.4GB and might result in errors like FAILED TO COPY /inputs. So you have to manually copy the CID of model.ckpt, which is about 2GB.
To get the CID of the model.ckpt file, go to https://gateway.ipfs.io/ipfs/< YOUR-OUTPUT-CID >/outputs/, for example: https://gateway.ipfs.io/ipfs/QmcmD7M7pYLP8QgwjqpbP4dojRLiLuEBdhevuCD9kFmbdV/outputs/
If you use the Brave browser
Using IPFS CLI
Copy the link of model.ckpt highlighted in the box, for example: https://gateway.ipfs.io/ipfs/QmdpsqZn9BZx9XxzCsyPcJyS7yfYacmQXZxHzcuYwzmtGg?filename=model.ckpt
Extract the CID portion of the link and copy it.
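If you take the IPFS CLI route mentioned above, a sketch like the following (assuming a local IPFS daemon is running) lists the files under the outputs directory together with their individual CIDs, from which you can copy the model.ckpt CID:

```bash
# List the job's outputs directory; each row shows the per-file CID.
ipfs ls QmcmD7M7pYLP8QgwjqpbP4dojRLiLuEBdhevuCD9kFmbdV/outputs
```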
To run a Bacalhau job on the fine-tuned model, we will use the `bacalhau docker run` command.
If you are facing difficulties using the above method, you can mount the whole output CID instead.
When a job is submitted, Bacalhau prints out the related `job_id`. We store that in an environment variable so that we can reuse it later on.
Job status: You can check the status of the job using `bacalhau list`.
When it says `Published` or `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau describe`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
After the download has finished, you should see the following contents in the results directory.
To view the file, run the following command:
`ls results/outputs`
In this example tutorial, we will look at how to run BIDS App on Bacalhau. BIDS (Brain Imaging Data Structure) is an emerging standard for organizing and describing neuroimaging datasets. BIDS App is a container image capturing a neuroimaging pipeline that takes a BIDS formatted dataset as input. Each BIDS App has the same core set of command line arguments, making them easy to run and integrate into automated platforms. BIDS Apps are constructed in a way that does not depend on any software outside of the image other than the container engine.
Running a BIDS App on brain imaging data with Bacalhau
To get started, you need to install the Bacalhau client; see more information here
For this tutorial, download the file `ds005.tar` from this BIDS dataset folder and untar it in a directory. `ds005` will be our input directory in the following example.
The simplest way to upload the data to IPFS is to use a third-party service to "pin" data to the IPFS network, to ensure that the data exists and is available. To do this you need an account with a pinning service like web3.storage or Pinata or nft.storage. Once registered you can use their UI or API or SDKs to upload files.
When you pin your data, you'll get a CID which is in a format like this: `QmaNyzSpJCt1gMCQLd3QugihY6HzdYmA8QMEa45LDBbVPz`. Copy the CID as it will be used to access your data.
:::info Alternatively, you can upload your dataset to IPFS using IPFS CLI, but the recommended approach is to use a pinning service as we have mentioned above. :::
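Putting the pieces together, the submission can look roughly like the following sketch. The `mriqc` entrypoint argument is an assumption based on the `nipreps/mriqc` image used here, so adjust it to match the BIDS App you are running:

```bash
# Mount the BIDS dataset from IPFS at /data and run the participant-level analysis.
bacalhau docker run \
  -i ipfs://QmaNyzSpJCt1gMCQLd3QugihY6HzdYmA8QMEa45LDBbVPz:/data \
  nipreps/mriqc:latest \
  -- mriqc ../data/ds005 ../outputs participant --participant_label 01 02 03
```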
Let's look closely at the command above:
- `bacalhau docker run`: call to Bacalhau
- `-i ipfs://QmaNyzSpJCt1gMCQLd3QugihY6HzdYmA8QMEa45LDBbVPz:/data`: mount the CID of the dataset that is uploaded to IPFS to a folder called `/data` in the container
- `nipreps/mriqc:latest`: the name and the tag of the Docker image we are using
- `../data/ds005`: path to the input dataset
- `../outputs`: path to the output
- `participant --participant_label 01 02 03`: run the participant-level analysis on subjects 001, 002 and 003
When a job is submitted, Bacalhau prints out the related job_id. We store that in an environment variable so that we can reuse it later on.
Job status: You can check the status of the job using `bacalhau list`.
When it says `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau describe`.
Job download: You can download your job results directly by using `bacalhau get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
After the download has finished, you should see the following contents in the results directory.
To view the file, run the following command: