S3 Publisher Specification

Bacalhau's S3 Publisher provides users with a secure and efficient method to publish task results to any S3-compatible storage service. It supports not only AWS S3 but also S3-compatible offerings from other cloud providers, such as Google Cloud Storage and Azure Blob Storage, as well as open-source options like MinIO. The integration is designed to be highly flexible, so users can choose the storage option that best aligns with their needs, privacy preferences, and operational requirements.

Publisher Parameters

  1. Bucket (string: <required>): The name of the S3 bucket where the task results will be stored.

  2. Key (string: <required>): The object key within the specified bucket where the task results will be stored.

  3. Endpoint (string: <optional>): The endpoint URL of the S3 service (useful for S3-compatible services).

  4. Region (string: <optional>): The region where the S3 bucket is located.
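
Taken together, a minimal publisher block that targets AWS S3 directly (no custom Endpoint) might look like the sketch below; the bucket and key names are illustrative:

Publisher:
  Type: "s3"
  Params:
    Bucket: "my-task-results"
    Key: "task123/result.tar.gz"
    Region: "us-west-2"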

Published Result Spec

Results published to S3 are stored as objects that can also be used as inputs to other Bacalhau jobs via the S3 Input Source (see the sketch after this list). The published result specification includes the following parameters:

  1. Bucket: Confirms the name of the bucket containing the stored results.

  2. Key: Identifies the unique object key within the specified bucket.

  3. Region: Notes the AWS region of the bucket.

  4. Endpoint: Records the endpoint URL for S3-compatible storage services.

  5. VersionID: The version ID of the stored object, enabling versioning support for retrieving specific versions of stored data.

  6. ChecksumSHA256: The SHA-256 checksum of the stored object, providing a method to verify data integrity.
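
Because the published object lives in S3, a follow-up job can consume it through the S3 Input Source. The snippet below is a sketch that assumes the input-source schema described in the S3 Source Specification; the bucket, key, and mount path are placeholders:

InputSources:
  - Source:
      Type: "s3"
      Params:
        Bucket: "my-task-results"       # illustrative bucket
        Key: "task123/result.tar.gz"    # key produced by the earlier job
        Endpoint: "https://s3.us-west-2.amazonaws.com"
    Target: "/inputs"                   # where the data is mounted in the task

Mounting the published archive this way lets a downstream task read the previous results from /inputs.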

Dynamic Naming

With the S3 Publisher in Bacalhau, you have the flexibility to use dynamic naming for the objects you publish to S3. This allows you to incorporate specific job and execution details into the object key, making it easier to trace, manage, and organize your published artifacts.

Bacalhau supports the following dynamic placeholders that will be replaced with their actual values during the publishing process:

  1. {executionID}: Replaced with the specific execution ID.

  2. {jobID}: Replaced with the ID of the job.

  3. {nodeID}: Replaced with the ID of the node where the execution took place.

  4. {date}: Replaced with the current date in the format YYYYMMDD.

  5. {time}: Replaced with the current time in the format HHMMSS.

Additionally, if you are publishing an archive and the object key does not end with .tar.gz, that suffix will be appended automatically. Conversely, if you're not archiving and the key doesn't end with a /, a trailing slash will be added.

Example

Imagine you've specified the following object key pattern for publishing:

results/{jobID}/{date}/{time}/

Given a job with ID abc123, executed on 2023-09-26 at 14:05:30, the published object key would be:

results/abc123/20230926/140530/

This dynamic naming feature offers a powerful way to create organized, intuitive naming conventions for your Bacalhau published objects in S3.
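
In a declarative job specification, the same pattern is set directly on the publisher's Key parameter, for example (the bucket name here is illustrative):

Publisher:
  Type: "s3"
  Params:
    Bucket: "my-task-results"
    Key: "results/{jobID}/{date}/{time}/"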

Examples

Declarative Examples

Here’s an example YAML configuration that outlines the process of using the S3 Publisher with Bacalhau:

Publisher:
  Type: "s3"
  Params:
    Bucket: "my-task-results"
    Key: "task123/result.tar.gz"
    Endpoint: "https://s3.us-west-2.amazonaws.com"

In this configuration, task results will be published to the specified S3 bucket and object key. If you’re using an S3-compatible service, simply update the Endpoint parameter with the appropriate URL.

The results will be compressed into a single object, and the published result specification will look like:

PublishedResult:
  Type: "s3"
  Params:
    Bucket: "my-task-results"
    Key: "task123/result.tar.gz"
    Endpoint: "https://s3.us-west-2.amazonaws.com"
    Region: "us-west-2"
    ChecksumSHA256: "0x9a3a..."
    VersionID: "3/L4kqtJlcpXroDTDmJ+rmDbwQaHWyOb..."
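
For context, the publisher block above normally sits inside a full job specification alongside an engine and a result path. The following is only a sketch, assuming the general schema described in the Job Specification guide; the image, command, and bucket name are illustrative:

Name: s3-publish-demo
Type: batch
Count: 1
Tasks:
  - Name: main
    Engine:
      Type: docker
      Params:
        Image: "ubuntu:latest"
        Entrypoint:
          - /bin/bash
        Parameters:
          - "-c"
          - "echo hello > /outputs/result.txt"   # writes a result into the collected path
    Publisher:
      Type: "s3"
      Params:
        Bucket: "my-task-results"                # illustrative bucket
        Key: "task123/result.tar.gz"
        Endpoint: "https://s3.us-west-2.amazonaws.com"
    ResultPaths:
      - Name: outputs
        Path: /outputs                           # contents are archived and published to S3

A spec like this can be saved to a file and submitted with bacalhau job run.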

Imperative Examples

The Bacalhau command-line interface (CLI) provides an imperative approach to specify the S3 Publisher. Below are a few examples showcasing how to define an S3 publisher using CLI commands:

  1. Basic Docker job writing to S3 with default configurations:

    bacalhau docker run -p s3://bucket/key ubuntu ...

    This command writes to the S3 bucket using default endpoint and region settings.

  2. Docker job writing to S3 with a specific endpoint and region:

    bacalhau docker run -p s3://bucket/key,opt=endpoint=http://s3.example.com,opt=region=us-east-1 ubuntu ...

    This command specifies a unique endpoint and region for the S3 bucket.

  3. Using naming placeholders:

    bacalhau docker run -p s3://bucket/result-{date}-{jobID} ubuntu ...

    Dynamic naming placeholders like {date} and {jobID} allow for organized naming structures, automatically replacing these placeholders with appropriate values upon execution.

Remember to replace the placeholders like bucket, key, and other parameters with your specific values. These CLI commands offer a quick and customizable way to submit jobs and specify how the results should be published to S3.

Credential Requirements

To support this storage provider, no extra dependencies are necessary. However, valid AWS credentials are essential to sign the requests. The storage provider employs the default credentials chain to retrieve credentials, primarily sourcing them from:

  1. Environment variables: AWS credentials can be specified using AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.

  2. Credentials file: The credentials file typically located at ~/.aws/credentials can also be used to fetch the necessary AWS credentials.

  3. IAM Roles for Amazon EC2 Instances: If you're running your tasks within an Amazon EC2 instance, IAM roles can be utilized to provide the necessary permissions and credentials.

Required IAM Policies

Compute Nodes

Compute nodes must run with the following policies to publish to S3:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::BUCKET_NAME/*"
        }
    ]
}

  • PutObject Permissions: The s3:PutObject permission is necessary to publish objects to the specified S3 bucket.

  • Resource: The Resource field in the policy specifies the Amazon Resource Name (ARN) of the S3 bucket. The /* suffix is necessary to allow publishing with any prefix within the bucket or can be replaced with a prefix to limit the scope of the policy. You can also specify multiple resources in the policy to allow publishing to multiple buckets, or * to allow publishing to all buckets in the account.

Requester Node

To enable downloading published results using the bacalhau job get <job_id> command, the requester node must run with the following policies:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::BUCKET_NAME/*"
        }
    ]
}

  • GetObject Permissions: The s3:GetObject permission is necessary for the requester node to generate a pre-signed URL that the client uses to download the published results.

For a more detailed overview of AWS credential management and other ways to provide these credentials, please refer to the AWS official documentation on standardized credentials.

For more information on IAM policies specific to Amazon S3 buckets and users, please refer to the AWS documentation on Using IAM Policies with Amazon S3.
