The different job types available in Bacalhau
Bacalhau v1.1 introduced different job types, providing more control and flexibility over how jobs are orchestrated and scheduled, depending on their type.
Despite the differences in job types, all jobs benefit from core functionalities provided by Bacalhau, including:
- Node selection - the appropriate nodes are selected based on several criteria, including resource availability, priority, and feedback from the nodes.
- Job monitoring - jobs are monitored to ensure they complete and stay in a healthy state.
- Retries - within limits, Bacalhau will retry a job a set number of times should it fail to complete successfully.
Batch jobs are executed on demand, running on a specified number of Bacalhau nodes. These jobs either run until completion or until they reach a timeout. They are designed to carry out a single, discrete task before finishing. This is the only queueable job type.
Ideal for intermittent yet intensive data dives, for instance performing computation over large datasets before publishing the response. This approach eliminates the continuous processing overhead, focusing on specific, in-depth investigations and computation.
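As a sketch, a minimal batch job spec might look like the following (image, command, and count are illustrative):

```yaml
Type: batch
Count: 3            # run on 3 nodes
Tasks:
  - Name: main
    Engine:
      Type: docker
      Params:
        Image: ubuntu:20.04
        Entrypoint:
          - /bin/bash
        Parameters:
          - -c
          - echo "crunching the dataset"
```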
Ops jobs are similar to batch jobs, but have a broader reach: they are executed on all nodes that match the job specification, rather than on a fixed number of nodes. Otherwise they behave like batch jobs.
Ops jobs are perfect for urgent investigations, granting direct access to logs on host machines, where previously you may have had to wait for the logs to arrive at a central location before being able to query them. They can also be used for delivering configuration files for other systems should you wish to deploy an update to many machines at once.
Daemon jobs run continuously on all nodes that meet the criteria given in the job specification. Should any new compute nodes join the cluster after the job was started, and should they meet the criteria, the job will be scheduled to run on that node too.
A good application of daemon jobs is handling continuously generated data on every compute node. This might come from edge devices such as sensors or cameras, or from logs where they are generated. The data can then be aggregated and compressed before being sent onwards. For logs, the aggregated data can be relayed at regular intervals to platforms like Kafka or Kinesis, or directly to other logging services, with edge devices potentially delivering results via MQTT.
Service jobs run continuously on a specified number of nodes that meet the criteria given in the job specification. Bacalhau's orchestrator selects the optimal nodes to run the job and continuously monitors its health and performance, rescheduling it on other nodes if required.
This job type is good for long-running consumers such as streaming or queuing services, or real-time event listeners.
A `Constraint` represents a condition that must be met for a compute node to be eligible to run a given job. Node operators can manually define node labels when starting a node with the `bacalhau serve` command, and Bacalhau also provides automatic resource detection and dynamic labeling. By defining constraints, you can ensure that jobs are scheduled only on nodes that have the necessary requirements or conditions.
`Constraint` Parameters:

- Key: The name of the attribute or property to check on the compute node. This could be anything from a specific hardware feature, operating system version, or any other node property.
- Operator: Determines the kind of comparison to be made against the Key's value, which can be:
  - `in`: Checks if the Key's value exists within the provided list of values.
  - `notin`: Ensures the Key's value doesn't match any in the provided list of values.
  - `exists`: Verifies that a value for the specified Key is present, regardless of its actual value.
  - `!`: Confirms the absence of the specified Key (i.e., DoesNotExist).
  - `gt`: Assesses if the Key's value is greater than the provided value.
  - `lt`: Assesses if the Key's value is less than the provided value.
  - `=` and `==`: Both compare the Key's value for an exact match with the provided value.
  - `!=`: Ensures the Key's value is not the same as the provided value.
- Values (optional): A list of values that the node attribute, specified by the Key, is compared against using the Operator. Not needed for operators like `exists` or `!`.
Consider a scenario where a job should only run on Linux nodes that have a GPU and are deployed in eu-west-1 or eu-west-2. The constraints for such a requirement might be sketched as follows (the label keys are illustrative and depend on how your nodes are labeled):
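```yaml
Constraints:
  - Key: hardware/gpu          # hypothetical label key
    Operator: exists
  - Key: Operating-System      # hypothetical label key
    Operator: "="
    Values:
      - linux
  - Key: region                # hypothetical label key
    Operator: in
    Values:
      - eu-west-1
      - eu-west-2
```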
In this example, the first constraint checks that the node has a GPU, the second ensures the operating system is linux, and the third restricts scheduling to nodes deployed in eu-west-1 or eu-west-2.
Constraints are evaluated as a logical AND, meaning all constraints must be satisfied for a node to be eligible.
Using too many specific constraints can lead to a job not being scheduled if no nodes satisfy all the conditions.
It's essential to balance the specificity of constraints with the broader needs and resources available in the cluster.
A `Job` represents a discrete unit of work that can be scheduled and executed. It carries all the necessary information to define the nature of the work, how it should be executed, and the resources it requires.
`Job` Parameters:

- `Name (string : <optional>)`: A logical name to refer to the job. Defaults to the job ID.
- `Namespace (string : "default")`: The namespace in which the job is running. `ClientID` is used as a namespace in the public demo network.
- `Type (string : <required>)`: The type of the job, such as `batch`, `ops`, `daemon` or `service`. You can learn more about the supported job types in the Job Types guide.
- `Priority (int : 0)`: Determines the scheduling priority.
- `Count (int : <required>)`: Number of replicas to be scheduled. This is only applicable for jobs of type `batch` and `service`.
- `Meta (Meta : nil)`: Arbitrary metadata associated with the job.
- `Labels (Label[] : nil)`: Arbitrary labels associated with the job for filtering purposes.
- `Constraints (Constraint[] : nil)`: Selectors which must be true for a compute node to run this job.
- `Tasks (Task[] : <required>)`: The task associated with the job, which defines a unit of work within the job. Today only a single task per job is supported, with future plans to extend this.
The following parameters are generated by the server and should not be set directly.
- `ID (string)`: A unique identifier assigned to this job, auto-generated by the server. Used for distinguishing between jobs with similar names.
- `State (State)`: Represents the current state of the job.
- `Version (int)`: A monotonically increasing version number incremented on job specification update.
- `Revision (int)`: A monotonically increasing revision number incremented on each update to the job's state or specification.
- `CreateTime (int)`: Timestamp of job creation.
- `ModifyTime (int)`: Timestamp of last job modification.
An `InputSource` defines where and how to retrieve specific artifacts needed for a `Task`, such as files or data, and where to mount them within the task's context. This ensures the necessary data is present before the task's execution begins.

Bacalhau's `InputSource` natively supports fetching data from remote sources like S3 and IPFS and can also mount local directories. It is intended to be flexible for future expansion.
`InputSource` Parameters:

- `Source (SpecConfig : <required>)`: Specifies the origin of the artifact, which could be a URL, an S3 bucket, or other locations.
- `Alias (string : <optional>)`: An optional identifier for this input source. It's particularly useful for dynamic operations within a task, such as dynamically importing data in WebAssembly using an alias.
- `Target (string : <required>)`: Defines the path inside the task's environment where the retrieved artifact should be mounted or stored. This ensures the task can access the data during its execution.
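For example, a task might declare the following input sources (bucket names and paths are placeholders; the exact `Params` depend on the source type):

```yaml
InputSources:
  - Source:
      Type: s3
      Params:
        Bucket: my-bucket
        Key: data/
        Region: us-east-1
    Target: /my_s3_data
  - Source:
      Type: localDirectory
      Params:
        SourcePath: /path/to/local/dir
        ReadWrite: true
    Target: /my_local_data
```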
In this example, the first input source fetches data from an S3 bucket and mounts it at `/my_s3_data` within the task. The second input source mounts a local directory at `/my_local_data` and allows the task to read and write data to it.
Templating Support in Bacalhau Job Run
This documentation introduces templating support for `bacalhau job run`, providing users with the ability to dynamically inject variables into their job specifications. This feature is particularly useful when running multiple jobs with varying parameters such as a DuckDB query, S3 buckets, prefixes, and time ranges, without the need to edit each job specification file manually.
The motivation behind this feature arises from the need to streamline the process of preparing and running multiple jobs with different configurations. Rather than manually editing job specs for each run, users can leverage placeholders and pass actual values at runtime.
The templating functionality in Bacalhau is built upon the Go `text/template` package. This powerful library offers a wide range of features for manipulating and formatting text based on template definitions and input variables.

For more detailed information about the Go `text/template` library and its syntax, please refer to the official documentation for the Go `text/template` package.
You can also use environment variables for templating:
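A sketch of passing environment variables through to the template (the `--template-envs` flag name is from the Bacalhau CLI but may vary by version; check `bacalhau job run --help`):

```bash
# Export the variable, then allow the template to read it.
export QUERY="SELECT * FROM logs"
bacalhau job run job.yaml --template-envs "QUERY"
```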
To preview the final templated job spec without actually submitting the job, you can use the `--dry-run` flag:
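For example (the `--template-vars` syntax is illustrative; consult the CLI help for your version):

```bash
bacalhau job run job.yaml --template-vars "greeting=Hello" --dry-run
```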
This will output the processed job specification, showing you how the placeholders have been replaced with the provided values.
This is an `ops` job that runs on all nodes that match the job selection criteria. It accepts a DuckDB `query` variable, and two optional `start-time` and `end-time` variables to define the time range for the query.
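A sketch of what such a job spec might look like (the image and the way the variables are consumed are illustrative):

```yaml
Name: log-query
Type: ops
Tasks:
  - Name: main
    Engine:
      Type: docker
      Params:
        Image: my-duckdb-image:latest   # hypothetical image
        Parameters:
          - duckdb
          - -c
          - '{{.query}}'
        EnvironmentVariables:
          - 'START_TIME={{or (index . "start-time") "-1h"}}'
          - 'END_TIME={{or (index . "end-time") "now"}}'
```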
To run this job, you can use the following command:
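(The `--template-vars` flag name may vary by CLI version; check `bacalhau job run --help`.)

```bash
bacalhau job run log-query.yaml \
  --template-vars "query=SELECT * FROM logs WHERE level='ERROR'" \
  --template-vars "start-time=-6h"
```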
This is a `batch` job that runs on a single node. It accepts a DuckDB `query` variable, and four other variables defining the S3 bucket, prefix, and pattern for the logs, and the AWS region.
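A sketch of the corresponding spec (bucket, prefix, and image are placeholders):

```yaml
Name: log-batch-query
Type: batch
Count: 1
Tasks:
  - Name: main
    Engine:
      Type: docker
      Params:
        Image: my-duckdb-image:latest   # hypothetical image
        Parameters:
          - duckdb
          - -c
          - '{{.query}}'
    InputSources:
      - Source:
          Type: s3
          Params:
            Bucket: '{{.bucket}}'
            Key: '{{.prefix}}{{.pattern}}'
            Region: '{{.region}}'
        Target: /logs
```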
To run this job, you can use the following command:
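```bash
bacalhau job run log-batch-query.yaml \
  --template-vars "query=SELECT COUNT(*) FROM read_csv_auto('/logs/*.csv')" \
  --template-vars "bucket=my-log-bucket" \
  --template-vars "prefix=logs/2024/" \
  --template-vars "pattern=*.csv" \
  --template-vars "region=us-east-1"
```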
A `ResultPath` denotes a specific location within a `Task` that contains meaningful output or results. By specifying a `ResultPath`, you can pinpoint which files or directories are essential and should be retained or published after the task's execution.
`ResultPath` Parameters:

- Name: A descriptive label or identifier for the result, allowing for easier referencing and understanding of the output's nature or significance.
- Path: Specifies the exact location, either a file or a directory, within the task's environment where the result or output is stored. This ensures that after the task completes, the critical data at this path can be accessed, retained, or published as necessary.
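For example, a task that writes its output to `/outputs` could publish it with (values illustrative):

```yaml
ResultPaths:
  - Name: outputs
    Path: /outputs
```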
The `Resources` block provides a structured way to detail the computational resources a `Task` requires. By specifying these requirements, you ensure that the task is scheduled on a node with adequate resources, optimizing performance and avoiding potential issues linked to resource constraints.
`Resources` Parameters:

- `CPU (string : <optional>)`: Defines the CPU resources required for the task. Units can be specified in cores (e.g., `2` for 2 CPU cores) or in milliCPU units (e.g., `250m` or `0.25` for 250 milliCPU units). For instance, half a CPU core can be represented as `500m` or `0.5`.
- `Memory (string : <optional>)`: The amount of RAM needed for the task. You can specify the memory in various units such as `Kb` for Kilobytes, `Mb` for Megabytes, `Gb` for Gigabytes, and `Tb` for Terabytes.
- `Disk (string : <optional>)`: The disk storage space needed for the task. Disk space can likewise be expressed in units like `Gb` for Gigabytes, `Mb` for Megabytes, and so on. For example, `10Gb` indicates 10 Gigabytes of storage space.
- `GPU (string : <optional>)`: The number of GPU units required. For example, `2` signifies a requirement of 2 GPU units. This is crucial for tasks involving heavy computational processes, machine learning models, or tasks that leverage GPU acceleration.
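A sketch of a `Resources` block combining these parameters (values are illustrative):

```yaml
Resources:
  CPU: 500m      # half a CPU core
  Memory: 2Gb
  Disk: 10Gb
  GPU: "1"
```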
The `Network` object offers a method to specify the networking requirements of a `Task`. It defines the scope and constraints of the network connectivity based on the demands of the task.
`Network` Parameters:

- `Type (string : "None")`: Indicates the nature of the network configuration. There are several network modes available:
  - `None`: The task does not require any networking capabilities.
  - `Full`: The task requires unrestricted, raw IP networking without any imposed filters.
  - `HTTP`: The task is constrained to HTTP networking with specific domains. In this model:
    - The job specifier puts forward a job, stipulating the domain(s) it intends to communicate with.
    - The compute provider assesses the inherent risk of the job based on these domains and bids accordingly.
    - At runtime, the network traffic remains strictly confined to the designated domain(s).

A typical command for this might resemble: `bacalhau docker run --network=http --domain=crates.io --domain=github.com -i ipfs://Qmy1234myd4t4,dst=/code rust/compile`
The primary risks for the compute provider center around possible violations of its terms, its hosting provider's terms, or even prevailing laws in its jurisdiction. This encompasses issues such as unauthorized access or distribution of illicit content and potential cyber-attacks.
Conversely, the job specifier's primary risk involves operating in a paid environment. External entities might seek to exploit this environment, for instance, through a compromised package download that initiates a crypto mining operation, depleting the allocated, prepaid job time. By limiting traffic strictly to the pre-specified domains, the potential for such cyber threats diminishes considerably.
While a compute provider might impose its limits through other means, having domains declared upfront allows it to selectively bid on jobs that it can execute without issues, improving the user experience for job specifiers.
- `Domains (string[] : <optional>)`: A list of domain strings, relevant primarily when `Type` is set to `HTTP`. It dictates the specific domains the task can communicate with over HTTP.
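A sketch of a `Network` block matching the HTTP example above:

```yaml
Network:
  Type: HTTP
  Domains:
    - crates.io
    - github.com
```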
Understanding and utilizing these configurations aptly can ensure that tasks are executed in an environment that aligns with their networking requirements, bolstering efficiency and security.
The `Labels` block within a `Job` specification plays a crucial role in Bacalhau, serving as a mechanism for filtering jobs. By attaching specific labels to jobs, users can quickly and effectively filter and manage jobs via both the Command Line Interface (CLI) and Application Programming Interface (API) based on various criteria.
`Labels` Parameters

Labels are essentially key-value pairs attached to jobs, allowing for detailed categorization and filtering. Each label consists of a `Key` and a `Value`. These labels can be filtered using operators to pinpoint specific jobs fitting certain criteria.
Jobs can be filtered using the following operators:

- `in`: Checks if the key's value matches any within a specified list of values.
- `notin`: Validates that the key's value isn't within a provided list of values.
- `exists`: Checks for the presence of a specified key, regardless of its value.
- `!`: Validates the absence of a specified key (i.e., DoesNotExist).
- `gt`: Checks if the key's value is greater than a specified value.
- `lt`: Checks if the key's value is less than a specified value.
- `=` and `==`: Used for exact match comparisons between the key's value and a specified value.
- `!=`: Validates that the key's value doesn't match a specified value.
Some example filters, shown as CLI commands below:

- Filter jobs with a label whose key is "environment" and value is "development".
- Filter jobs with a label whose key is "version" and value is greater than "2.0".
- Filter jobs that have a "project" label, regardless of its value.
- Filter jobs without a "project" label.
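A sketch of these filters as CLI commands (the exact selector syntax may differ between versions; see `bacalhau job list --help`):

```bash
# environment equals development
bacalhau job list --labels "environment=development"

# version greater than 2.0
bacalhau job list --labels "version gt 2.0"

# a project label exists
bacalhau job list --labels "project"

# no project label
bacalhau job list --labels "!project"
```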
- Job Management: Enables efficient management of jobs by categorizing them based on distinct attributes or criteria.
- Automation: Facilitates the automation of job deployment and management processes by allowing scripts and tools to target specific categories of jobs.
- Monitoring & Analytics: Enhances monitoring and analytics by grouping jobs into meaningful categories, allowing for detailed insights and analysis.
The `Labels` block is instrumental in the enhanced management, filtering, and operation of jobs within Bacalhau. By understanding and utilizing the available operators and label parameters effectively, users can optimize their workflow, automate processes, and achieve detailed insights into their jobs.
When running a node, you can choose which jobs you want to run by using configuration options, environment variables or flags to specify a job selection policy.
If you want more control over the decision to take on jobs, you can use the `--job-selection-probe-exec` and `--job-selection-probe-http` flags.
These are external programs that are passed the following data structure so that they can make a decision about whether or not to take on a job:
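The payload has roughly the following shape (field names are illustrative and may differ between Bacalhau versions):

```json
{
  "node_id": "QmXaXu9N5GNeta...",
  "job_id": "9304c616-291f-41ad-b862-54e133c0149e",
  "spec": {
    "engine": "docker",
    "docker": {
      "image": "ubuntu:20.04"
    },
    "inputs": []
  }
}
```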
The `exec` probe is a script to run that will be given the job data on `stdin`, and must exit with status code 0 if the job should be run.

The `http` probe is a URL to POST the job data to. The job will be rejected if the HTTP request returns an error status code (e.g., >= 400).
If the HTTP response is a JSON blob, it should match the following schema and will be used to respond to the bid directly:
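A sketch of that schema (the `shouldBid`, `shouldWait`, and `reason` field names follow Bacalhau's bid strategy response; treat the exact schema as version-dependent):

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "shouldBid": {
      "description": "Whether the job should be accepted",
      "type": "boolean"
    },
    "shouldWait": {
      "description": "Whether the decision will be made later",
      "type": "boolean"
    },
    "reason": {
      "description": "Human-readable explanation of the decision",
      "type": "string"
    }
  },
  "required": ["shouldBid", "shouldWait", "reason"]
}
```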
For example, the following response will reject the job:
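```json
{
  "shouldBid": false,
  "shouldWait": false,
  "reason": "Node does not accept jobs from this client"
}
```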
If the HTTP response is not a JSON blob, the content is ignored and any non-error status code will accept the job.
`SpecConfig` provides a unified structure to specify configurations for various components in Bacalhau, including engines, publishers, and input sources. Its flexible design allows seamless integration with multiple systems like Docker, WebAssembly (Wasm), AWS S3, and local directories, among others.
`SpecConfig` Parameters:

- `Type (string : <required>)`: Specifies the type of the configuration. Examples include `docker` and `wasm` for execution engines, `S3` for input sources and publishers, etc.
- `Params (map[string]any : <optional>)`: A set of key-value pairs that provide the specific configurations for the chosen type. The keys and values are flexible and depend on the `Type`. For instance, parameters for a Docker engine might include image name and version, while an S3 publisher would require configurations like the bucket name and AWS region. If not provided, it defaults to `nil`.
Here are a few hypothetical examples to demonstrate how you might define `SpecConfig` for different components:
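A Docker engine (image and arguments are placeholders):

```yaml
Engine:
  Type: docker
  Params:
    Image: ubuntu:20.04
    Entrypoint:
      - /bin/bash
    Parameters:
      - -c
      - echo hello
```

An S3 publisher (bucket and region are placeholders):

```yaml
Publisher:
  Type: s3
  Params:
    Bucket: my-results-bucket
    Region: us-east-1
```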
The full Docker engine spec can be found in the engine reference documentation.
Remember, the exact keys and values in the `Params` map will vary depending on the specific requirements of the component being configured. Always refer to the individual component's documentation to understand the available parameters.
In both the `Job` and `Task` specifications within Bacalhau, the `Meta` block is a versatile element used to attach arbitrary metadata. This metadata isn't utilized for filtering or categorizing jobs; the `Labels` block is specifically designated for that purpose. Instead, the `Meta` block is instrumental for embedding additional information for operators or external systems, enhancing clarity and context.
`Meta` Parameters in Job and Task Specs

The `Meta` block is comprised of key-value pairs, with both keys and values being strings. These pairs aren't constrained by a predefined structure, offering flexibility for users to annotate jobs and tasks with diverse metadata.
Users can incorporate any arbitrary key-value pairs to convey descriptive information or context about the job or task.
- `project`: Identifies the associated project.
- `version`: Specifies the version of the application or service.
- `owner`: Names the responsible team or individual.
- `environment`: Indicates the stage in the development lifecycle.
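In a job spec, such metadata might look like (values illustrative):

```yaml
Meta:
  project: my-project
  version: "2.1.0"
  owner: data-team
  environment: staging
```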
Beyond user-defined metadata, Bacalhau automatically injects specific metadata keys for identification and security purposes.
- `bacalhau.org/requester.id`: A unique identifier for the orchestrator that handled the job.
- `bacalhau.org/requester.publicKey`: The public key of the requester, aiding in security and validation.
- `bacalhau.org/client.id`: The ID of the client submitting the job, enhancing traceability.
- Identification: The metadata aids in uniquely identifying jobs and tasks, connecting them to their originators and executors.
- Context Enhancement: Metadata can supplement jobs and tasks with additional data, offering insights and context that aren't captured by standard parameters.
- Security Enhancement: Auto-generated keys like the requester's public key contribute to the secure handling and execution of jobs and tasks.
While the `Meta` block is distinct from the `Labels` block used for filtering, its contribution to providing context, security, and traceability is integral to managing and understanding the diverse jobs and tasks within the Bacalhau ecosystem.
The following configuration options control the job selection policy:

| Config property | `serve` flag | Default value | Meaning |
|---|---|---|---|
| `Node.Compute.JobSelection.Locality` | `--job-selection-data-locality` | `Anywhere` | Only accept jobs that reference data we have locally ("local") or anywhere ("anywhere"). |
| `Node.Compute.JobSelection.ProbeExec` | `--job-selection-probe-exec` | unused | Use the result of an external program to decide if we should take on the job. |
| `Node.Compute.JobSelection.ProbeHttp` | `--job-selection-probe-http` | unused | Use the result of an HTTP POST to decide if we should take on the job. |
| `Node.Compute.JobSelection.RejectStatelessJobs` | `--job-selection-reject-stateless` | `False` | Reject jobs that don't specify any input data. |
| `Node.Compute.JobSelection.AcceptNetworkedJobs` | `--job-selection-accept-networked` | `False` | Accept jobs that require network connections. |
The `Timeouts` object provides a mechanism to impose timing constraints on specific task operations, particularly execution. By setting these timeouts, users can ensure tasks don't run indefinitely and align them with intended durations.
`Timeouts` Parameters:

- `ExecutionTimeout (int : <optional>)`: Defines the maximum duration (in seconds) that a task is permitted to run. A value of zero indicates no timeout, which can be useful for tasks that function as daemons and are designed to run indefinitely.
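For example, to fail a task that runs longer than five minutes:

```yaml
Timeouts:
  ExecutionTimeout: 300   # seconds
```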
Utilizing `Timeouts` judiciously helps manage resource utilization and ensures tasks adhere to expected timelines, thereby enhancing the efficiency and predictability of job executions.
`State` Structure Specification

Within Bacalhau, the `State` structure is designed to represent the status of an object (such as a `Job`), coupled with a human-readable message for added context. Below is a breakdown of the structure:
`State` Parameters:

- `StateType (T : <required>)`: Represents the current state of the object. This is a generic parameter that takes a specific value from a set of defined state types for the object in question. For jobs, this will be one of the job state types listed below.
- `Message (string : <optional>)`: A human-readable message giving more context about the current state. Particularly useful for states like `Failed` to provide insight into the nature of any error.
When `State` is used for a job, the `StateType` can be one of the following:

- `Pending`: The job is submitted but not yet scheduled for execution.
- `Running`: The job is scheduled and currently executing.
- `Completed`: The job has successfully executed its task. Only applicable to batch jobs.
- `Failed`: The job encountered errors and couldn't complete successfully.
- `Stopped`: The job has been intentionally halted by the user before its natural completion.
The inclusion of the `Message` field can offer detailed insights, especially in states like `Failed`, aiding in error comprehension and debugging.
A `Task` signifies a distinct unit of work within the broader context of a `Job`. It defines the specifics of how the task should be executed, where the results should be published, and what environment variables are needed, among other configurations.
`Task` Parameters:

- `Name (string : <required>)`: A unique identifier representing the name of the task.
- `Engine (SpecConfig : <required>)`: Configures the execution engine for the task, such as Docker or WebAssembly.
- `Publisher (SpecConfig : <optional>)`: Specifies where the results of the task should be published, such as to S3 or IPFS publishers. Only applicable for tasks of type `batch` and `ops`.
- `Env (map[string]string : <optional>)`: A set of environment variables for the driver.
- `Meta (Meta : <optional>)`: Allows association of arbitrary metadata with this task.
- `InputSources (InputSource[] : <optional>)`: Lists remote artifacts that should be downloaded before task execution and mounted within the task, such as from S3 or IPFS.
- `ResultPaths (ResultPath[] : <optional>)`: Indicates volumes within the task that should be included in the published result. Only applicable for tasks of type `batch` and `ops`.
- `Resources (Resources : <optional>)`: Details the resources this task requires.
- `Network (Network : <optional>)`: Configurations related to the networking aspects of the task.
- `Timeouts (Timeouts : <optional>)`: Configurations concerning any timeouts associated with the task.
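Bringing several of these parameters together, a task might be sketched as follows (all values are illustrative):

```yaml
Tasks:
  - Name: main
    Engine:
      Type: docker
      Params:
        Image: ubuntu:20.04
        Entrypoint:
          - /bin/bash
        Parameters:
          - -c
          - cp /my_s3_data/input.csv /outputs/copy.csv
    Env:
      LOG_LEVEL: info
    InputSources:
      - Source:
          Type: s3
          Params:
            Bucket: my-bucket
            Key: data/
            Region: us-east-1
        Target: /my_s3_data
    ResultPaths:
      - Name: outputs
        Path: /outputs
    Resources:
      CPU: 500m
      Memory: 1Gb
    Network:
      Type: None
    Timeouts:
      ExecutionTimeout: 600
```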