In this example tutorial, we use Bacalhau and EasyOCR to digitize paper records by recognizing characters and extracting text from images stored on IPFS, S3, or the web. EasyOCR is a ready-to-use OCR tool with 80+ supported languages and all popular writing scripts, including Latin, Chinese, Arabic, Devanagari, and Cyrillic. With EasyOCR, you can use the pre-trained models or your own fine-tuned model.
Install the required dependencies
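If you are following along locally, the dependency install is a one-liner (a minimal sketch; the original notebook may pin specific versions):

```bash
pip install easyocr
```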
Load the different example images
List all the images. You'll see an output like this:
Next, we create a reader to perform OCR, which returns the coordinates of a rectangle containing each detected piece of text, along with the text itself:
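A minimal sketch of that step using EasyOCR's standard Python API (the image filename is assumed from the job command used later in this guide):

```python
import easyocr

# Create a reader for simplified Chinese and English.
# The first run downloads the detection and recognition models.
reader = easyocr.Reader(['ch_sim', 'en'])

# Each result is a tuple: (bounding-box corner coordinates, text, confidence)
results = reader.readtext('chinese.jpg')
for box, text, confidence in results:
    print(box, text, confidence)
```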
You can skip this step and go straight to running a Bacalhau job
We will use the `Dockerfile` that is already created in the EasyOCR repo. Use the command below to clone the repo:
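Assuming the upstream JaidedAI/EasyOCR repository on GitHub (the original link is not preserved in this excerpt):

```bash
git clone https://github.com/JaidedAI/EasyOCR
cd EasyOCR
```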
The `docker build` command builds Docker images from a Dockerfile (an example invocation follows the list below).
Before running the command, replace:
`hub-user` with your Docker Hub username. If you don't have a Docker Hub account, follow these instructions to create one, and use the username of the account you created.
`repo-name` with the name of the container; you can name it anything you want.
`tag` this is not required, but you can use the `latest` tag.
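With those placeholders, the build step takes the usual shape:

```bash
docker build -t hub-user/repo-name:tag .
```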
Next, upload the image to the registry. This can be done by using the Docker Hub username, repo name, or tag.
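For example, with the same placeholders:

```bash
docker push hub-user/repo-name:tag
```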
To get started, you need to install the Bacalhau client, see more information here.
Now that we have an image on Docker Hub (your own or an example image from the manual), we can use the container for running on Bacalhau.
Let's look closely at the command below:
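Reassembled from the flags explained below (the IPFS CID and the input-image URL are truncated in the original, so placeholders are used here):

```bash
# Replace the truncated CID and URL with the full values before running
export JOB_ID=$(bacalhau docker run \
    --gpu 1 \
    --id-only \
    -i ipfs://bafybeibvc... \
    -i https://raw.githubusercontent.com/... \
    jsacex/easyocr \
    -- easyocr -l ch_sim en -f ./inputs/chinese.jpg --detail=1 --gpu=True)
```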
`export JOB_ID=$( ... )` exports the job ID as environment variable
`bacalhau docker run`: call to bacalhau
The `--gpu 1` flag is set to specify hardware requirements: a GPU is needed to run such a job
The `--id-only` flag is set to print only the job id
`-i ipfs://bafybeibvc......`: mounts the model from IPFS
`-i https://raw.githubusercontent.com...`: mounts the input image from a URL
`jsacex/easyocr`: the name and the tag of the docker image we are using
`-- easyocr -l ch_sim en -f ./inputs/chinese.jpg --detail=1 --gpu=True`: execute the script with the following parameters:
`-l ch_sim en`: the language codes for recognition
`-f ./inputs/chinese.jpg`: path to the input image or directory
`--detail=1`: level of detail
`--gpu=True`: we set this flag to true since we are running inference on a GPU; if you run this on a CPU, set this flag to false
Since the model and the image aren't present in the container, we will mount the image from a URL and the model from IPFS. You can find models to download from here. You can choose the model you want to use; in this case, we will be using the `zh_sim_g2` model.
When a job is submitted, Bacalhau prints out the related `job_id`. We store that in an environment variable so that we can reuse it later on.
The same job can be presented in the declarative format. In this case, the description will look like this:
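A sketch of such a description, under the assumption that the field names follow the Bacalhau v1 declarative job spec (check the docs for your Bacalhau version; the model CID is again truncated in the original):

```yaml
Name: EasyOCR
Type: batch
Count: 1
Tasks:
  - Name: main
    Engine:
      Type: docker
      Params:
        Image: jsacex/easyocr
        Parameters:
          - easyocr
          - -l
          - ch_sim
          - en
          - -f
          - ./inputs/chinese.jpg
          - --detail=1
          - --gpu=True
    InputSources:
      - Target: /inputs
        Source:
          Type: ipfs
          Params:
            CID: bafybeibvc... # truncated in the original; use the full model CID
    Resources:
      GPU: "1"
```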
The job description should be saved in `.yaml` format, e.g. `easyocr.yaml`, and then run with the command:
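Assuming the file name above:

```bash
bacalhau job run easyocr.yaml
```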
You can check the status of the job using `bacalhau job list`.
When it says `Completed`, that means the job is done, and we can get the results.
You can find out more information about your job by using `bacalhau job describe`.
You can download your job results directly by using `bacalhau job get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
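A typical sequence, using a `results` directory:

```bash
mkdir -p results
bacalhau job get ${JOB_ID} --output-dir results
```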
After the download has finished you should see the following contents in the results directory.
Now you can find the file in the `results/outputs` folder. You can view the results by running the following commands:
Stable Diffusion is a state-of-the-art text-to-image model that generates images from text and was developed as an open-source alternative to DALL·E 2. It is based on a Diffusion Probabilistic Model and uses a Transformer to generate images from text.
This example demonstrates how to use Stable Diffusion with a fine-tuned model and run inference on it. The first section describes the development of the code and the container; it is optional, as users don't need to build their own containers to use their own custom model. The second section demonstrates how to convert your model weights to `ckpt`. The third section demonstrates how to run the job using Bacalhau.
The following guide uses a model which was fine-tuned on Bacalhau. To learn how to fine-tune your own Stable Diffusion model, refer to this guide.
Convert your existing model weights to the `ckpt` format and upload to the IPFS storage.
Create a job using `bacalhau docker run`, the relevant docker image, model weights and any prompt.
Download results using `bacalhau job get` and the job id.
To get started, you need to install:
Bacalhau client, see more information here
NVIDIA GPU
CUDA drivers
NVIDIA docker
This part of the guide is optional - you can skip it and proceed to the "Running a Bacalhau job" section if you are not going to use your own custom image.
To build your own docker container, create a `Dockerfile`, which contains instructions to containerize the code for inference.
This container uses the `pytorch/pytorch:1.13.0-cuda11.6-cudnn8-runtime` image and the working directory is set. Next, the Dockerfile installs the required dependencies. Then we add our custom code and pull the dependent repositories.
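A skeletal version of such a Dockerfile (the dependency list and copied paths are illustrative assumptions, not the exact contents of the original file):

```Dockerfile
FROM pytorch/pytorch:1.13.0-cuda11.6-cudnn8-runtime
WORKDIR /workspace
# System packages needed to pull dependent repositories (illustrative)
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Python dependencies for inference (illustrative)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Add the custom inference code
COPY . .
```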
See more information on how to containerize your script/app here
We will run the `docker build` command to build the container.
Before running the command, replace:
`hub-user` with your Docker Hub username. If you don't have a Docker Hub account, follow these instructions to create one, and use the username of the account you created.
`repo-name` with the name of the container; you can name it anything you want.
`tag` this is not required, but you can use the `latest` tag.
So in our case, the command will look like this:
Next, upload the image to the registry. This can be done by using the Docker Hub username, repo name or tag.
Thus, in this case, the command would look this way:
After the repo image has been pushed to Docker Hub, you can now use the container for running on Bacalhau. But before that, you need to check whether your model is a `ckpt` file or not. If your model is a `ckpt` file, you can skip to running on Bacalhau; if not, the next section describes how to convert your model into the `ckpt` format.
To download the convert script:
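Assuming the standard conversion script from the Hugging Face diffusers repository (an assumption; the original may have linked a different copy):

```bash
wget https://raw.githubusercontent.com/huggingface/diffusers/main/scripts/convert_diffusers_to_original_stable_diffusion.py
```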
To convert the model weights into `ckpt` format, use the command below; the `--half` flag cuts the size of the output model from 4GB to 2GB:
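Using that script's documented flags (the `--model_path` directory is an assumption about your local layout):

```bash
python convert_diffusers_to_original_stable_diffusion.py \
    --model_path ./my-finetuned-model \
    --checkpoint_path ./model.ckpt \
    --half
```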
To do inference on your own checkpoint on Bacalhau, you need to first upload it to your public storage, which can be mounted anywhere on your machine. In this case, we will be using NFT.Storage (recommended option). To upload your dataset using NFTup, drag and drop your directory and it will upload it to IPFS.
After the checkpoint file has been uploaded, copy its CID.
Some of the jobs presented in the Examples section may require more resources than are currently available on the demo network. Consider starting your own network or running less resource-intensive jobs on the demo network
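Reconstructed from the flags explained next (`--id-only` is added so only the job ID is captured; the docker image is the placeholder name built earlier in this guide):

```bash
export JOB_ID=$(bacalhau docker run \
    --gpu 1 \
    --id-only \
    -i ipfs://QmUCJuFZ2v7KvjBGHRP2K1TMPFce3reTkKVGF2BJY5bXdZ:/model.ckpt \
    hub-user/repo-name:tag \
    -- conda run --no-capture-output -n ldm python scripts/txt2img.py \
       --prompt "a photo of a person drinking coffee" \
       --plms --ckpt ../model.ckpt --n_samples 1 --skip_grid \
       --outdir ../outputs --seed $RANDOM)
```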
Let's look closely at the command above:
`export JOB_ID=$( ... )`: Export results of a command execution as environment variable
The `--gpu 1` flag is set to specify hardware requirements: a GPU is needed to run such a job
`-i ipfs://QmUCJuFZ2v7KvjBGHRP2K1TMPFce3reTkKVGF2BJY5bXdZ:/model.ckpt`: path to mount the checkpoint
`-- conda run --no-capture-output -n ldm`: since we are using conda, we need to specify the name of the environment we are going to use; in this case it is `ldm`
`scripts/txt2img.py`: running the python script
`--prompt "a photo of a person drinking coffee"`: the text prompt used for generation
`--plms`: the sampler you want to use; in this case we use the `plms` sampler
`--ckpt ../model.ckpt`: here we specify the path to our checkpoint
`--n_samples 1`: number of samples we want to produce
`--skip_grid`: skip creating a grid of images
`--outdir ../outputs`: path to store the outputs
`--seed $RANDOM`: the output generated for a given prompt and seed is always the same; to get different outputs for the same prompt, set the seed parameter to a random value
When a job is submitted, Bacalhau prints out the related `job_id`. We store that in an environment variable so that we can reuse it later on.
Job status: You can check the status of the job using `bacalhau job list`:
When it says `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau job describe`:
Job download: You can download your job results directly by using `bacalhau job get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
After the download has finished, we can see the results in the `results/outputs` folder. We received the following image for our prompt:
Dolly 2.0 is a groundbreaking, open-source, instruction-following Large Language Model (LLM) that has been fine-tuned on a human-generated instruction dataset and is licensed for both research and commercial purposes. Developed using the EleutherAI Pythia model family, this 12-billion-parameter language model is built exclusively on a high-quality, human-generated instruction-following dataset contributed by Databricks employees.
The Dolly 2.0 package is open source, including the training code, dataset, and model weights, all available for commercial use. This unprecedented move empowers organizations to create, own, and customize robust LLMs capable of engaging in human-like interactions, without the need for API access fees or sharing data with third parties.
An NVIDIA GPU
Python
Create an `inference.py` file with the following code:
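A minimal sketch of such a script, assuming the Hugging Face `transformers` pipeline API and the argument names used by the run command later in this guide (`--prompt`, `--model_version`); the original file may differ:

```python
import argparse

import torch
from transformers import pipeline


def main():
    parser = argparse.ArgumentParser(description="Dolly V2 inference")
    parser.add_argument("--prompt", type=str, required=True,
                        help="Text prompt to send to the model")
    parser.add_argument("--model_version", type=str, required=True,
                        help="Hugging Face model name or local path")
    args = parser.parse_args()

    # Dolly ships its own InstructionTextGenerationPipeline, hence
    # trust_remote_code=True; device_map="auto" places the model on the GPU.
    generate_text = pipeline(
        model=args.model_version,
        torch_dtype=torch.bfloat16,
        trust_remote_code=True,
        device_map="auto",
    )
    print(generate_text(args.prompt))


if __name__ == "__main__":
    main()
```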
You may want to create your own container for this kind of task. In that case, use the instructions for creating and publishing your own image on Docker Hub. Use `huggingface/transformers-pytorch-deepspeed-nightly-gpu` as the base image, install the dependencies listed above, and copy `inference.py` into it. So your Dockerfile will look like this:
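A sketch following those instructions (the pip package list is an assumption; `accelerate` is needed for `device_map="auto"`):

```Dockerfile
FROM huggingface/transformers-pytorch-deepspeed-nightly-gpu
WORKDIR /app
# Illustrative dependency set for Dolly inference
RUN pip install --no-cache-dir accelerate
COPY inference.py .
```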
To get started, you need to install the Bacalhau client, see more information here
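Putting the pieces described below together, the submission looks roughly like this (`--id-only` is added so that only the job ID is captured into the variable):

```bash
export JOB_ID=$(bacalhau docker run \
    --gpu 1 \
    --id-only \
    -w /inputs \
    -i gitlfs://huggingface.co/databricks/dolly-v2-3b.git \
    -i https://gist.githubusercontent.com/js-ts/d35e2caa98b1c9a8f176b0b877e0c892/raw/3f020a6e789ceef0274c28fc522ebf91059a09a9/inference.py \
    jsacex/dolly_inference:latest \
    -- python inference.py --prompt "Where is Earth located ?" --model_version "./databricks/dolly-v2-3b")
```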
`export JOB_ID=$( ... )`: Export results of a command execution as environment variable
`bacalhau docker run`: Run a job using docker executor.
`--gpu 1`: Flag to specify the number of GPUs to use for the execution. In this case, 1 GPU will be used.
`-w /inputs`: Flag to set the working directory inside the container to `/inputs`.
`-i gitlfs://huggingface.co/databricks/dolly-v2-3b.git`: Flag to clone the Dolly V2-3B model from Hugging Face's repository using Git LFS. The files will be mounted to `/inputs/databricks/dolly-v2-3b`.
`-i https://gist.githubusercontent.com/js-ts/d35e2caa98b1c9a8f176b0b877e0c892/raw/3f020a6e789ceef0274c28fc522ebf91059a09a9/inference.py`: Flag to download the `inference.py` script from the provided URL. The file will be mounted to `/inputs/inference.py`.
`jsacex/dolly_inference:latest`: The name and the tag of the Docker image.
The command to run inference on the model: `python inference.py --prompt "Where is Earth located ?" --model_version "./databricks/dolly-v2-3b"`. It consists of:
`inference.py`: The Python script that runs the inference process using the Dolly V2-3B model.
`--prompt "Where is Earth located ?"`: Specifies the text prompt to be used for the inference.
`--model_version "./databricks/dolly-v2-3b"`: Specifies the path to the Dolly V2-3B model. In this case, the model files are mounted to `/inputs/databricks/dolly-v2-3b`.
When a job is submitted, Bacalhau prints out the related `job_id`. We store that in an environment variable so that we can reuse it later on.
Job status: You can check the status of the job using `bacalhau job list`:
When it says `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau job describe`:
Job download: You can download your job results directly by using `bacalhau job get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
After the download has finished, we can see the results in the `results/outputs` folder.
This example tutorial demonstrates how to use Stable Diffusion on a GPU and run it on the demo network. Stable Diffusion is a state-of-the-art text-to-image model that generates images from text and was developed as an open-source alternative to DALL·E 2. It is based on a Diffusion Probabilistic Model and uses a Transformer to generate images from text.
To get started, you need to install the Bacalhau client, see more information here.
Here is an example of an image generated by this model.
When you run this code for the first time, it will download the pre-trained weights, which may add a short delay.
When running this code, if you check the GPU RAM usage, you'll see that it has consumed many GBs, and depending on which GPU you're running, it may OOM (run out of memory) if you run this again.
You can try and reduce RAM usage by playing with batch sizes (although it is only set to 1 above!) or more carefully controlling the TensorFlow session.
To clear the GPU memory we will use numba. This won't be required when running in a single-shot manner.
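A common pattern for this with numba (a sketch; it resets the current CUDA context, releasing the memory TensorFlow allocated):

```python
from numba import cuda

device = cuda.get_current_device()
device.reset()  # tears down the CUDA context and frees GPU memory
```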
After writing the code the next step is to run the script.
As a result, you will get something like this:
The following presents additional parameters you can try:
`python main.py --p "cat with three eyes"` - to set the prompt
`python main.py --p "cat with three eyes" --n 100` - to set the number of iterations to 100
`python stable-diffusion.py --p "cat with three eyes" --b 2` - to set the batch size to 2 (the number of images to generate)
We will run the `docker build` command to build the container.
Before running the command, replace the following:
`repo-name` with the name of the container; you can name it anything you want.
`tag` this is not required, but you can use the `latest` tag.
In our case:
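Using the image name that appears in the job below:

```bash
docker build -t ghcr.io/bacalhau-project/examples/stable-diffusion-gpu:0.0.1 .
```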
Next, upload the image to the registry. This can be done by using the Docker hub username, repo name or tag.
In our case:
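```bash
docker push ghcr.io/bacalhau-project/examples/stable-diffusion-gpu:0.0.1
```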
To submit a job, run the Bacalhau command with the following structure:
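Assembled from the flags explained below:

```bash
export JOB_ID=$(bacalhau docker run \
    --gpu 1 \
    --id-only \
    ghcr.io/bacalhau-project/examples/stable-diffusion-gpu:0.0.1 \
    -- python main.py --o ./outputs --p "meme about tensorflow")
```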
`export JOB_ID=$( ... )` exports the job ID as environment variable
The `--gpu 1` flag is set to specify hardware requirements: a GPU is needed to run such a job
The `--id-only` flag is set to print only the job id
`ghcr.io/bacalhau-project/examples/stable-diffusion-gpu:0.0.1`: the name and the tag of the docker image we are using
`-- python main.py --o ./outputs --p "meme about tensorflow"`: The command to run inference on the model. It consists of:
`main.py`: path to the script
`--o ./outputs`: specifies the output directory
`--p "meme about tensorflow"`: specifies the prompt
This will take about 5 minutes to complete and is mainly due to the cold-start GPU setup time. This is faster than the CPU version, but you might still want to grab some fruit or plan your lunchtime run.
Furthermore, the container itself is about 10GB, so it might take a while to download on the node if it isn't cached.
When a job is submitted, Bacalhau prints out the related `job_id`. We store that in an environment variable so that we can reuse it later on.
Job status: You can check the status of the job using `bacalhau job list`.
When it says `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau job describe`.
Job download: You can download your job results directly by using `bacalhau job get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
After the download has finished you should see the following contents in the results directory.
Now you can find the file in the `results/outputs` folder:
In this example, we will demonstrate how to run inference on a model stored on Amazon S3. We will use a PyTorch model trained on the MNIST dataset.
Consider using the latest versions or use the docker method listed below in the article.
Python
PyTorch
Use the following commands to download the model and test image:
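A sketch using the AWS CLI; the bucket path is the one mounted in the job later in this guide, and `--no-sign-request` works because the SageMaker samples bucket is public. The test image lives in the `js-ts/mnist-test` repo used below:

```bash
aws s3 cp \
    s3://sagemaker-sample-files/datasets/image/MNIST/model/pytorch-training-2020-11-21-22-02-56-203/model.tar.gz \
    ./model.tar.gz --no-sign-request
```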
This script is designed to load a pretrained PyTorch model for MNIST digit classification from a `tar.gz` file, extract it, and use the model to perform inference on a given input image. Ensure you have all required dependencies installed:
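A plausible minimal dependency set for such a script (the original's exact list may differ):

```bash
pip install torch torchvision pillow
```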
To use this script, you need to provide the paths to the `tar.gz` file containing the pre-trained model, the output directory where the model will be extracted, and the input image file for which you want to perform inference. The script will output the predicted digit (class) for the given input image.
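Assembled from the flags described below (`--id-only` is added so only the job ID is captured):

```bash
export JOB_ID=$(bacalhau docker run \
    --id-only \
    -w /inputs \
    -i src=s3://sagemaker-sample-files/datasets/image/MNIST/model/pytorch-training-2020-11-21-22-02-56-203/model.tar.gz,dst=/model/,opt=region=us-east-1 \
    -i git://github.com/js-ts/mnist-test.git \
    pytorch/pytorch \
    -- python3 /inputs/js-ts/mnist-test/inference.py --tar_gz_file_path /model/model.tar.gz --output_directory /model-pth --image_path /inputs/js-ts/mnist-test/image.png)
```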
`export JOB_ID=$( ... )`: Export results of a command execution as environment variable
`-w /inputs`: Set the current working directory to `/inputs` in the container
`-i src=s3://sagemaker-sample-files/datasets/image/MNIST/model/pytorch-training-2020-11-21-22-02-56-203/model.tar.gz,dst=/model/,opt=region=us-east-1`: Mount the S3 bucket at the destination path provided (`/model/`), specifying the region where the bucket is located (`opt=region=us-east-1`)
`-i git://github.com/js-ts/mnist-test.git`: Flag to mount the source code repo from GitHub. It would mount the repo at `/inputs/js-ts/mnist-test`; in this case it also contains the test image
`pytorch/pytorch`: The name of the Docker image
`-- python3 /inputs/js-ts/mnist-test/inference.py --tar_gz_file_path /model/model.tar.gz --output_directory /model-pth --image_path /inputs/js-ts/mnist-test/image.png`: The command to run inference on the model. It consists of:
`/model/model.tar.gz` is the path to the model file
`/model-pth` is the output directory for the model
`/inputs/js-ts/mnist-test/image.png` is the path to the input image
When the job is submitted, Bacalhau prints out the related job id. We store that in an environment variable `JOB_ID` so that we can reuse it later on.
Use the `bacalhau job logs` command to view the job output, since the script prints the result of execution to stdout:
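For example:

```bash
bacalhau job logs ${JOB_ID}
```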
You can also use `bacalhau job get` to download job results:
Whisper is an automatic speech recognition (ASR) system trained on 680,000 hours of multilingual and multitask supervised data collected from the web. It shows that the use of such a large and diverse dataset leads to improved robustness to accents, background noise, and technical language. Moreover, it enables transcription in multiple languages, as well as translation from those languages into English. Creators are open-sourcing models and inference code to serve as a foundation for building useful applications and for further research on robust speech processing. In this example, we will transcribe an audio clip locally, containerize the script and then run the container on Bacalhau.
The advantage of using Bacalhau over managed Automatic Speech Recognition services is that you can run your own containers, which can scale to batch process petabytes of video or audio for automatic speech recognition.
To get started, you need to install:
Bacalhau client, see more information here.
Whisper.
PyTorch.
Python Pandas.
Before we create and run the script we need a sample audio file to test the code. For that we download a sample audio clip:
We will create a script that accepts parameters (input file path, output file path, temperature, etc.) and sets the default parameters. Also, if the input file is in `mp4` format, the script converts it to `wav` format. The transcript can be saved in various formats. Then the large model is loaded, and we pass it the required parameters.
This model is not limited to English or to transcription only; it supports many other languages.
Next, let's create an openai-whisper script:
Let's run the script with the default parameters:
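Assuming the script file name used later in this guide:

```bash
python openai-whisper.py
```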
To view the outputs, execute the following:
To build your own docker container, create a `Dockerfile`, which contains instructions on how the image will be built and what extra requirements will be included.
We choose `pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime` as our base image.
We then install all the dependencies; after that, we add the test audio file and our openai-whisper script to the container. We also run a test command to check whether our script works inside the container and whether the container builds successfully.
We will run the `docker build` command to build the container.
Before running the command, replace:
`repo-name` with the name of the container; you can name it anything you want.
`tag` this is not required, but you can use the `latest` tag.
In our case:
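Using the image name that appears in the job below:

```bash
docker build -t jsacex/whisper .
```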
Next, upload the image to the registry. This can be done by using the Docker Hub username, repo name or tag.
In our case:
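```bash
docker push jsacex/whisper
```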
After the dataset has been uploaded, copy the CID:
Let's look closely at the command below:
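Reassembled from the flags explained below (`--id-only` is added so only the job ID is captured; the CID shown appears truncated in the original, so substitute the CID you copied after uploading):

```bash
export JOB_ID=$(bacalhau docker run \
    --gpu 1 \
    --id-only \
    -i ipfs://bafybeielf6z4cd2nuey5arckect5bjmelhouvn5r \
    jsacex/whisper \
    -- python openai-whisper.py -p inputs/Apollo_11_moonwalk_montage_720p.mp4 -o outputs)
```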
`export JOB_ID=$( ... )` exports the job ID as environment variable
`bacalhau docker run`: call to bacalhau
The `-i ipfs://bafybeielf6z4cd2nuey5arckect5bjmelhouvn5r` flag mounts the CID which contains our file to the container at the path `/inputs`
The `--gpu 1` flag is set to specify hardware requirements: a GPU is needed to run such a job
`jsacex/whisper`: the name and the tag of the docker image we are using
`python openai-whisper.py`: execute the script with the following parameters:
`-p inputs/Apollo_11_moonwalk_montage_720p.mp4`: the input path of our file
`-o outputs`: the path where to store the outputs
When a job is submitted, Bacalhau prints out the related `job_id`. We store that in an environment variable so that we can reuse it later on.
You can check the status of the job using `bacalhau job list`.
When it says `Completed`, that means the job is done, and we can get the results.
You can find out more information about your job by using `bacalhau job describe`.
You can download your job results directly by using `bacalhau job get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
After the download has finished you should see the following contents in the results directory.
Now you can find the file in the `results/outputs` folder. To view it, run the following command:
In this example tutorial, we will show you how to generate realistic images with StyleGAN3 and Bacalhau. StyleGAN is based on Generative Adversarial Networks (GANs), which include a generator and a discriminator network; the discriminator is trained to differentiate images produced by the generator from real images. During training, however, the generator tries to fool the discriminator, which results in the generation of realistic-looking images. With StyleGAN3 we can generate realistic-looking images or videos. It can generate not only human faces but also animals, cars, and landscapes.
To get started, you need to install the Bacalhau client, see more information here.
To run StyleGAN3 locally, you'll need to clone the repo, install dependencies and download the model weights.
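A sketch of those steps, assuming the upstream NVlabs repository and its published AFHQv2 weights (the URLs are taken from the upstream README and are assumptions here):

```bash
git clone https://github.com/NVlabs/stylegan3
cd stylegan3
conda env create -f environment.yml
conda activate stylegan3
wget https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-r-afhqv2-512x512.pkl
```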
Now you can generate an image using a pre-trained `AFHQv2` model. Here is an example of the image we generated:
To build your own docker container, create a `Dockerfile`, which contains instructions to build your image.
We will run the `docker build` command to build the container:
Before running the command, replace:
`repo-name` with the name of the container; you can name it anything you want.
`tag` this is not required, but you can use the `latest` tag.
In our case:
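Using the image name that appears in the job below:

```bash
docker build -t jsacex/stylegan3 .
```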
Next, upload the image to the registry. This can be done by using the Docker Hub username, repo name or tag.
In our case:
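```bash
docker push jsacex/stylegan3
```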
To submit a job, run the Bacalhau command with the following structure:
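A sketch matching the breakdown below (`--outdir=../outputs` is an assumption reconstructing how the output path is passed):

```bash
export JOB_ID=$(bacalhau docker run \
    --gpu 1 \
    --id-only \
    jsacex/stylegan3 \
    -- python gen_images.py --outdir=../outputs --trunc=1 --seeds=2 \
       --network=stylegan3-r-afhqv2-512x512.pkl)
```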
`export JOB_ID=$( ... )` exports the job ID as environment variable
`bacalhau docker run`: call to Bacalhau
The `--gpu 1` flag is set to specify hardware requirements: a GPU is needed to run such a job
The `--id-only` flag is set to print only the job id
`jsacex/stylegan3`: the name and the tag of the docker image we are using
`python gen_images.py`: execute the script with the following parameters:
`--trunc=1 --seeds=2 --network=stylegan3-r-afhqv2-512x512.pkl`: The animation length is either determined based on the `--seeds` value or explicitly specified using the `--num-keyframes` option. When the number of keyframes is specified with `--num-keyframes`, the output video length will be `num_keyframes * w_frames` frames.
`../outputs`: path to the output
The job description should be saved in `.yaml` format, e.g. `stylegan3.yaml`, and then run with the command:
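Assuming the file name above:

```bash
bacalhau job run stylegan3.yaml
```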
You can also run variations of this command to generate videos and other things. In the command below, we will render a latent vector interpolation video. This will render a 4x2 grid of interpolations for seeds 0 through 31.
Let's look closely at the command below:
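The upstream StyleGAN3 repo renders such interpolation videos with its `gen_video.py` script; a sketch of the submission under that assumption (note that the flag breakdown below, as in the original text, still refers to `gen_images.py`):

```bash
export JOB_ID=$(bacalhau docker run \
    --gpu 1 \
    --id-only \
    jsacex/stylegan3 \
    -- python gen_video.py --output=../outputs/lerp.mp4 --trunc=1 --seeds=0-31 \
       --grid=4x2 --network=stylegan3-r-afhqv2-512x512.pkl)
```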
`export JOB_ID=$( ... )` exports the job ID as environment variable
`bacalhau docker run`: call to bacalhau
The `--gpu 1` flag is set to specify hardware requirements: a GPU is needed to run such a job
The `--id-only` flag is set to print only the job id
`jsacex/stylegan3`: the name and the tag of the docker image we are using
`python gen_images.py`: execute the script with the following parameters:
`--trunc=1 --seeds=2 --network=stylegan3-r-afhqv2-512x512.pkl`: The animation length is either determined based on the `--seeds` value or explicitly specified using the `--num-keyframes` option. When the number of keyframes is specified with `--num-keyframes`, the output video length will be `num_keyframes * w_frames` frames. If `--num-keyframes` is not specified, the number of seeds given with `--seeds` must be divisible by the grid size W*H (`--grid`). In this case, the output video length will be `# seeds/(w*h)*w_frames` frames.
`../outputs`: path to the output
When a job is submitted, Bacalhau prints out the related `job_id`. We store that in an environment variable so that we can reuse it later on.
You can check the status of the job using `bacalhau job list`.
When it says `Completed`, that means the job is done, and we can get the results.
You can find out more information about your job by using `bacalhau job describe`.
You can download your job results directly by using `bacalhau job get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
After the download has finished you should see the following contents in the results directory.
Now you can find the file in the `results/outputs` folder.
The identification and localization of objects in images and videos is a computer vision task called object detection. Several algorithms have emerged in the past few years to tackle the problem. One of the most popular algorithms to date for real-time object detection is YOLO (You Only Look Once), initially proposed by Redmon et al.
Traditionally, models like YOLO required enormous amounts of training data to yield reasonable results. People might not have access to such high-quality labeled data. Thankfully, open-source communities and researchers have made it possible to utilize pre-trained models to perform inference. In other words, you can use models that have already been trained on large datasets to perform object detection on your own data.
Bacalhau is a highly scalable decentralized computing platform and is well suited to running massive object detection jobs. In this example, you can take advantage of the GPUs available on the Bacalhau network and perform an end-to-end object detection inference, using the YOLOv5 model.
Load your dataset into S3/IPFS, specify it and the pre-trained weights via the `--input` flags, choose a suitable container, specify the command and the path to save the results - done!
To get started, you need to install the Bacalhau client, see more information here.
To get started, let's run a test job with a small sample dataset that is included in the YOLOv5 Docker Image. This will give you a chance to familiarise yourself with the process of running a job on Bacalhau.
In addition to the usual Bacalhau flags, you will also see an example of using the `--gpu 1` flag to specify the use of a GPU.
Remember that by default Bacalhau does not provide any network connectivity when running a job, so you need to either provide all assets at job submission time, or use the `--network=full` or `--network=http` flags to access the data at task time. See the networking documentation for more details.
The model requires pre-trained weights to run and by default downloads them from within the container. Bacalhau jobs don't have network access, so we will pass in the weights at submission time, saving them to `/usr/src/app/yolov5s.pt`. You may also provide your own weights here.
The container has its own options that we must specify:
`--project` specifies the output volume that the model will save its results to. Bacalhau defaults to using `/outputs` as the output directory, so we save it there.
One final additional hack that we have to do is move the weights file to a location with the standard name. As of writing this, Bacalhau downloads the file to a UUID-named file, which the model is not expecting. This is because GitHub 302 redirects the request to a random file in its backend.
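Putting it all together, the submission looks roughly like this (the weights URL and the container tag are assumptions based on the upstream YOLOv5 project; the timeout values are illustrative):

```bash
export JOB_ID=$(bacalhau docker run \
    --gpu 1 \
    --timeout 1800 \
    --wait \
    --wait-timeout-secs 1800 \
    --id-only \
    --input https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5s.pt \
    ultralytics/yolov5:latest \
    -- /bin/bash -c 'find /inputs -type f -exec cp {} /outputs/yolov5s.pt \; ; python detect.py --weights /outputs/yolov5s.pt --source $(pwd)/data/images --project /outputs')
```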
`export JOB_ID=$( ... )` exports the job ID as environment variable
The `--gpu 1` flag is set to specify hardware requirements: a GPU is needed to run such a job
The `--timeout` flag is set to make sure that if the job is not completed in the specified time, it will be terminated
The `--wait` flag is set to wait for the job to complete before returning
The `--wait-timeout-secs` flag is set together with `--wait` to define how long the app should wait for the job to complete
The `--id-only` flag is set to print only the job id
The `--input` flags are used to specify the sources of input data
`-- /bin/bash -c 'find /inputs -type f -exec cp {} /outputs/yolov5s.pt \; ; python detect.py --weights /outputs/yolov5s.pt --source $(pwd)/data/images --project /outputs'` tells the model where to find input data and where to write output
This should output a UUID (like `59c59bfb-4ef8-45ac-9f4b-f0e9afd26e70`), which will be stored in the environment variable `JOB_ID`. This is the ID of the job that was created. You can check the status of the job using the commands below.
Job status: You can check the status of the job using `bacalhau job list`:
When it says `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau job describe`:
Job download: You can download your job results directly by using `bacalhau job get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
After the download has finished we can see the results in the `results/outputs/exp` folder.
Let's run the same job again, but this time using the images above.
Just as in the example above, this should output a UUID, which will be stored in the environment variable `JOB_ID`. You can check the status of the job using the commands below.
This stable diffusion example is based on the Keras/TensorFlow implementation. You might also be interested in the PyTorch-oriented version of this example.
Based on the requirements listed, we will install the following:
We have sample code from the repo which we will use to check that the code is working as expected. The output for this code will be a DSLR photograph of an astronaut riding a horse.
You need a script to execute when we submit jobs. The code below is a slightly modified version of the code we ran above; however, it includes more features, such as argument parsing, so you can customize the generator. For a full list of arguments that you can pass to the script, see more information here.
Docker is the easiest way to run TensorFlow on a GPU since the host machine only requires the NVIDIA driver. To containerize the inference code, we will create a `Dockerfile`. The Dockerfile is a text document that contains the commands that specify how the image will be built.
The Dockerfile leverages the latest official TensorFlow GPU image and then installs other dependencies like `git`, CUDA packages, and other image-related necessities. See the documentation for the expected requirements.
See more information on how to containerize your script/app here.
`hub-user` with your Docker Hub username. If you don't have a Docker Hub account, follow these instructions to create a Docker account, and use the username of the account you created.
Some of the jobs presented in the Examples section may require more resources than are currently available on the demo network. Consider starting your own network or running less resource-intensive jobs on the demo network.
The Bacalhau command passes a prompt to the model and generates an image in the outputs directory. The main difference in the example below compared to all the other examples is the addition of the `--gpu X` flag, which tells Bacalhau to only schedule the job on nodes that have `X` GPUs free. You can read more about GPU workloads in the documentation.
To get started, you need to install the Bacalhau client, see more information here.
If you have questions or need support or guidance, please reach out to the Bacalhau team via Slack (#general channel).
See more information on how to containerize your script/app here.
`hub-user` with your Docker Hub username. If you don't have a Docker Hub account, follow these instructions to create one, and use the username of the account you created.
We will transcribe the moon landing video, which can be found here:
Since the downloaded video is in `mov` format, we convert it to `mp4` format and then upload it to our public storage, in this case IPFS. To do this, a public network can be used, or you can create your own and use it to pin files.
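The conversion itself is a standard ffmpeg call (the file names here are placeholders matching the input path used in the job above):

```bash
ffmpeg -i Apollo_11_moonwalk_montage_720p.mov Apollo_11_moonwalk_montage_720p.mp4
```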
The same job can be presented in the declarative format. In this case, the description will look like this:
See more information on how to containerize your script/app here.
`hub-user` with your Docker Hub username. If you don't have a Docker Hub account, follow these instructions to create one, and use the username of the account you created.
Some of the jobs presented in the Examples section may require more resources than are currently available on the demo network. Consider starting your own network or running less resource-intensive jobs on the demo network.
The same job can be presented in the declarative format. In this case, the description will look like this:
If you have questions or need support or guidance, please reach out to the Bacalhau team via Slack (#general channel).
`--input` to select which pre-trained weights you desire, with details on the YOLOv5 releases page.
For more container flags refer to the `yolov5/detect.py` file in the YOLOv5 repository.
Some of the jobs presented in the Examples section may require more resources than are currently available on the demo network. Consider starting your own network or running less resource-intensive jobs on the demo network.
The same job can be presented in the declarative format. In this case, the description will look like this:
Now let's use some custom images. First, you will need to ingest your images onto IPFS or S3 storage. For more information about how to do that, see the data ingestion section.
This example will use the dataset.
We have already uploaded this dataset to the IPFS storage under the CID `bafybeicyuddgg4iliqzkx57twgshjluo2jtmlovovlx5lmgp5uoh3zrvpm`. You can browse to this dataset via an IPFS gateway.
To check the state of the job and view the job output, refer to the sections above.
If you have questions or need support or guidance, please reach out to the Bacalhau team via Slack (#general channel).
Stable Diffusion is an open-source text-to-image model which generates images from text. It's a cutting-edge alternative to DALL·E 2 and uses the Diffusion Probabilistic Model for image generation. At its core, the model generates images from text using a Transformer.
This example demonstrates how to use stable diffusion online on a CPU and run it on the Bacalhau demo network. The first section describes the development of the code and the container. The second section demonstrates how to run the job using Bacalhau.
This model generated the images presented on this page.
The original text-to-image stable diffusion model was trained on a fleet of GPU machines, at great cost. To use this trained model for inference, you also need to run it on a GPU.
However, this isn't always desired or possible. One alternative is to use a project called OpenVINO from Intel that allows you to convert and optimize models from a variety of frameworks (and ONNX if your framework isn't directly supported) to run on a supported Intel CPU. This is what we will do in this example.
Heads up! This example takes about 10 minutes to generate an image on an average CPU. Whilst this demonstrates it is possible, it might not be practical.
In order to run this example you need:
A Debian-flavoured Linux (although you might be able to get it working on the newest machines)
First, we convert the trained stable diffusion models so that they work efficiently on a CPU with OpenVINO. Choose the fine-tuned version of Stable Diffusion you want to use. The example is quite complex, so we have created a separate repository to host the code. This is a fork of this GitHub repository.
In summary, the code downloads a pre-optimized OpenVINO version of the original pre-trained stable diffusion model. This model leverages OpenAI's CLIP transformer and is wrapped inside an OpenVINO runtime, which executes the model.
The core code representing these tasks can be found in the stable_diffusion_engine.py file. This is a mashup that creates a pipeline necessary to tokenize the text and run the stable diffusion model. This boilerplate could be simplified by leveraging the more recent version of the diffusers library. But let's continue.
Note that these dependencies are only known to work on Ubuntu-based x64 machines.
The following commands clone the example repository, and other required repositories, and install the Python dependencies.
Now that we have all the dependencies installed, we can call the `demo.py` wrapper, which is a simple CLI, to generate an image from a prompt.
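Based on the flags used in the Bacalhau job later in this guide (`--prompt`, `--output`), a local invocation presumably looks like:

```bash
python demo.py --prompt "hello world" --output hello.png
```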
When the generation is complete, you can open the generated `hello.png` and see something like this:
Let's try another prompt and see what we get:
Now that we have a working example, we can convert it into a format that allows us to perform inference in a distributed environment.
First we will create a `Dockerfile` to containerize the inference code. The Dockerfile can be found in the repository, but is presented here to aid understanding.
This container uses the `python:3.9.9-bullseye` image and the working directory is set. Next, the Dockerfile installs the same dependencies from earlier in this notebook. Then we add our custom code and pull the dependent repositories.
We've already pushed this image to GHCR, but for posterity, you'd use a command like this to update it:
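With the image name used in the job below:

```bash
docker build -t ghcr.io/bacalhau-project/examples/stable-diffusion-cpu:0.0.1 .
docker push ghcr.io/bacalhau-project/examples/stable-diffusion-cpu:0.0.1
```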
To run this example you will need Bacalhau installed and running.
Bacalhau is a distributed computing platform that allows you to run jobs on a network of computers. It is designed to be easy to use and to run on a variety of hardware. In this example, we will use it to run the stable diffusion model on a CPU.
To submit a job, you can use the Bacalhau CLI. The following command passes a prompt to the model and generates an image in the outputs directory.
This will take about 10 minutes to complete. Go grab a coffee. Or a beer. Or both. If you want to block and wait for the job to complete, add the `--wait` flag.
Furthermore, the container itself is about 15GB, so it might take a while to download on the node if it isn't cached.
Some of the jobs presented in the Examples section may require more resources than are currently available on the demo network. Consider starting your own network or running less resource-intensive jobs on the demo network
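Reconstructed from the flags explained below:

```bash
export JOB_ID=$(bacalhau docker run \
    --id-only \
    ghcr.io/bacalhau-project/examples/stable-diffusion-cpu:0.0.1 \
    -- python demo.py --prompt "First Humans On Mars" --output ../outputs/mars.png)
```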
`export JOB_ID=$( ... )`: Export results of a command execution as environment variable
`bacalhau docker run`: Run a job using docker executor.
`--id-only`: Flag to print out only the job id
`ghcr.io/bacalhau-project/examples/stable-diffusion-cpu:0.0.1`: The name and the tag of the Docker image.
The command to run inference on the model: `python demo.py --prompt "First Humans On Mars" --output ../outputs/mars.png`. It consists of:
`demo.py`: The Python script that runs the inference process.
`--prompt "First Humans On Mars"`: Specifies the text prompt to be used for the inference.
`--output ../outputs/mars.png`: Specifies the path to the output image.
When a job is submitted, Bacalhau prints out the related `job_id`. We store that in an environment variable so that we can reuse it later on.
Job status: You can check the status of the job using `bacalhau job list`:
When it says `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau job describe`:
Job download: You can download your job results directly by using `bacalhau job get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
After the download has finished we can see the results in the `results/outputs` folder.