Setting Up a Cluster on Amazon Web Services (AWS) with Terraform 🚀
Welcome to the guide for setting up your own Bacalhau cluster across multiple AWS regions! This guide will walk you through creating a robust, distributed compute cluster that's perfect for running your Bacalhau workloads.
What You'll Build
Think of this as building your own distributed supercomputer! Your cluster will provision compute nodes spread across different AWS regions for global coverage.
Before You Start
You'll need a few things ready:
Terraform (version 1.0.0 or newer)
AWS CLI installed and configured
An active AWS account with appropriate permissions
Your AWS credentials configured
An SSH key pair for securely accessing your nodes
A Bacalhau network
Quick Setup Guide
First, set up an orchestrator node. We recommend using Expanso Cloud for this! But you can always set up your own (a minimal sketch follows below).
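If you do run your own orchestrator, recent Bacalhau versions can start one with a single command (flags vary by version, so check bacalhau serve --help first):

```bash
# Start a standalone orchestrator node; your compute nodes will connect to it on port 4222.
bacalhau serve --orchestrator
```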
Create your environment configuration file and fill in your AWS details in env.tfvars.json:
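Here is a minimal sketch using the settings described under "Essential Settings" below; every value is a placeholder, and your repository version may include additional AWS-specific fields:

```json
{
  "app_name": "bacalhau-cluster",
  "app_tag": "demo",
  "bacalhau_installation_id": "my-unique-cluster-id",
  "username": "bacalhau-runner",
  "public_key_path": "~/.ssh/id_rsa.pub",
  "private_key_path": "~/.ssh/id_rsa",
  "bacalhau_config_file_path": "./config/config.yaml"
}
```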
Configure your desired regions in locations.yaml. Here's an example (we have a full list of these in all_locations.yaml):
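A sketch of a single region entry using the fields listed under "Location Configuration" below; the exact YAML shape may differ slightly between repository versions, so compare against all_locations.yaml:

```yaml
us-west-2:
  region: us-west-2
  zone: us-west-2a
  instance_type: t3.medium
  instance_ami: ami-0123456789abcdef0   # placeholder; use an AMI that exists in this region
  node_count: 2
```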
Make sure the AMI exists in the region you need it in! You can confirm this by executing the following command:
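For example, to confirm an AMI ID is available in us-west-2 (substitute your own AMI ID and region):

```bash
aws ec2 describe-images --image-ids ami-0123456789abcdef0 --region us-west-2
```

If the AMI is not available in that region, the command returns an error instead of an image description.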
Update your Bacalhau config/config.yaml (the defaults are mostly fine; just update the Orchestrator and Token lines):
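The exact keys depend on your Bacalhau version, but the relevant part of the compute-node config looks roughly like this (the endpoint and token are placeholders):

```yaml
Compute:
  Enabled: true
  Orchestrators:
    - nats://<your-orchestrator-ip>:4222   # point at your orchestrator (Expanso Cloud or self-hosted)
  Auth:
    Token: "<your-network-token>"          # the token your orchestrator expects
```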
Deploy your cluster using the Python deployment script:
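The script lives at the root of the repository. The exact subcommand may differ between versions, so check python3 deploy.py --help; a typical invocation looks like:

```bash
python3 deploy.py create
```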
Understanding the Configuration
Why use a deployment script? Why not use Terraform directly?
Terraform on AWS requires switching to a different workspace when deploying to different availability zones. As a result, we set up a separate deploy.py script that switches to each workspace for you under the hood, to make things easier.
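Under the hood it does roughly the equivalent of the following for each entry in locations.yaml (simplified; the real script also wires up per-region variables and collects outputs):

```bash
# Select (or create) the workspace for this region, then apply with your settings.
terraform workspace select us-west-2 || terraform workspace new us-west-2
terraform apply -var-file=env.tfvars.json -auto-approve
```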
Core Configuration Files
env.tfvars.json: Your main configuration file containing AWS-specific settings
locations.yaml: Defines which regions to deploy to and instance configurations
config/config.yaml: Bacalhau node configuration
Essential Settings in env.tfvars.json
app_name: Name for your cluster resources
app_tag: Tag for resource management
bacalhau_installation_id: Unique identifier for your cluster
username: SSH username for instances
public_key_path: Path to your SSH public key
private_key_path: Path to your SSH private key
bacalhau_config_file_path: Path to the config file for this compute node (should point at the orchestrator and have the right token)
Location Configuration (locations.yaml)
Each region entry requires:
region: AWS region (e.g., us-west-2)
zone: Availability zone (e.g., us-west-2a)
instance_type: EC2 instance type (e.g., t3.medium)
instance_ami: AMI ID for the region
node_count: Number of instances to deploy
Taking Your Cluster for a Test Drive
Once everything's up and running, let's make sure it works!
First, make sure you have the Bacalhau CLI installed. You can read more about installing the CLI here.
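If you don't have it yet, the usual one-line installer is shown below (check the Bacalhau docs for the current command for your platform):

```bash
curl -sL https://get.bacalhau.org/install.sh | bash
```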
Configure your Bacalhau client:
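Point the CLI at your orchestrator's API endpoint. Key names and syntax vary across Bacalhau versions (check bacalhau config set --help); one common form:

```bash
bacalhau config set API.Host <orchestrator-public-ip>
```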
List your compute nodes:
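You should see one entry per instance you deployed, reporting as connected:

```bash
bacalhau node list
```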
Run a test job:
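A simple hello-world container is enough to confirm that jobs get scheduled onto your nodes:

```bash
bacalhau docker run ubuntu echo "Hello from my AWS cluster"
```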
Check job status:
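Use the job ID printed by the previous command (older CLI versions use bacalhau describe instead):

```bash
bacalhau job describe <job-id>
```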
Troubleshooting Tips
Deployment Issues
Verify AWS credentials are properly configured:
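For example, the following prints the account and IAM identity your CLI is currently using:

```bash
aws sts get-caller-identity
```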
Check IAM permissions
Ensure you have quota available in target regions
Node Health Issues
SSH into a node:
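Use the username and key you configured in env.tfvars.json; the public IP comes from the AWS console or the Terraform output:

```bash
ssh -i ~/.ssh/id_rsa <username>@<node-public-ip>
```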
Check Bacalhau service logs:
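Assuming Bacalhau runs as a systemd service on the node (the unit name may differ in your setup):

```bash
sudo journalctl -u bacalhau -f
```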
Check Docker container status:
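```bash
sudo docker ps -a
```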
Network Issues
Verify security group rules (ports 22, 80, and 4222 should be open)
Check VPC and subnet configurations
Ensure internet gateway is properly attached
Common Solutions
If nodes aren't joining the network:
Check NATS connection string in config.yaml
Verify security group allows port 4222
Ensure nodes can reach the orchestrator
If jobs aren't running:
Check compute is enabled in node config
Verify Docker is running properly
Check available disk space
If deployment fails:
Look for errors in Terraform output
Check AWS service quotas
Verify AMI availability in chosen regions
Cleanup
Remove all resources:
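The deployment script can also tear everything down across all workspaces; the exact subcommand may vary, so check its help output:

```bash
python3 deploy.py destroy
```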
Monitoring
Check node health:
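The same node listing used earlier works for ongoing monitoring; JSON output is handy if you want to feed it into other tooling (flag availability may vary by version):

```bash
bacalhau node list --output json
```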
Understanding the Directory Structure
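The exact layout depends on the repository version; a plausible structure based on the files referenced in this guide:

```
.
├── deploy.py            # Python wrapper that drives Terraform across workspaces
├── env.tfvars.json      # Your AWS-specific settings
├── locations.yaml       # Regions, zones, instance types, and node counts to deploy
├── all_locations.yaml   # Full list of available region entries
├── config/
│   └── config.yaml      # Bacalhau compute-node configuration
└── *.tf                 # Terraform definitions for the AWS resources
```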
Need Help?
If you get stuck or have questions:
Open an issue in our GitHub repository
Join our Slack
We're here to help you get your cluster running smoothly! 🌟