This directory contains examples relating to molecular dynamics workloads. The goal is to provide a range of examples that show you how to work with Bacalhau in a variety of use cases.
Coreset is a data subsetting method. Datasets can become very large once uncompressed, and training time grows with dataset size, which makes training on the full data expensive. To reduce training time and save costs we use the coreset method; the coreset method can also be applied to other datasets. In this case, we use the coreset method to solve the k-means problem on big data quickly while maintaining high accuracy.
We construct a small coreset for arbitrary shapes of numerical data at a reasonable time cost. The implementation is mainly based on the coreset construction algorithm proposed by Braverman et al. (SODA 2021).
Running compressed dataset with Bacalhau
Clone the repo which contains the code
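For example (the repository URL and directory name are placeholders; substitute the repo that hosts the coreset code):

```bash
# Placeholders: replace <repo-url> and <repo-dir> with the actual repository
git clone <repo-url>
cd <repo-dir>
```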
To download the dataset, you should use OpenStreetMap, a public repository that aims to generate and distribute accessible geographic data for the whole world. It supplies detailed position information, including the longitude and latitude of places around the world.
The dataset is an osm.pbf file (a compressed format for .osm files); it can be downloaded from the Geofabrik Download Server.
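A sketch, assuming the Liechtenstein extract used later in this example (the exact path on the Geofabrik server may differ):

```bash
# Download the Liechtenstein OpenStreetMap extract in .osm.pbf format
wget https://download.geofabrik.de/europe/liechtenstein-latest.osm.pbf
```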
The following command installs the Linux dependencies:
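The exact package list depends on the repository's requirements; osmium-tool is assumed here because it is used below to convert the .osm.pbf file to GeoJSON:

```bash
# osmium-tool provides the osmium command used for the pbf -> GeoJSON conversion
sudo apt-get update
sudo apt-get install -y osmium-tool
```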
The following command installs the Python dependencies:
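A sketch, assuming the repository ships a requirements.txt:

```bash
# Install the Python packages the coreset script depends on
pip3 install -r requirements.txt
```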
To run the coreset script locally, you first need to convert the dataset from the compressed .pbf format to GeoJSON format:
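A sketch using osmium export (the tool and file names are assumptions based on the dataset used in this example):

```bash
# Convert the OpenStreetMap extract to GeoJSON
osmium export liechtenstein-latest.osm.pbf -o liechtenstein-latest.geojson
```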
The following command runs the Python script that generates the coreset:
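An illustrative invocation; the script name coreset.py is an assumption:

```bash
# -f points the script at the GeoJSON file produced in the previous step
python coreset.py -f liechtenstein-latest.geojson
```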
To build your own docker container, create a Dockerfile, which contains instructions on how the image will be built and what extra requirements will be included.
We will use the python:3.8 image and choose the src directory in the container as our working directory. We run the same commands for installing dependencies that we used locally, add the files and directories that are present on our local machine, and finally run a test command to check whether the script works.
:::info See more information on how to containerize your script/app here :::
We will run the docker build command to build the container:
Before running the command, replace:
hub-user with your Docker Hub username. If you don't have a Docker Hub account, follow these instructions to create one, and use the username of the account you created.
repo-name with the name of the container; you can name it anything you want.
tag: this is not required, but you can use the latest tag.
In our case
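A sketch, assuming the image is tagged jsace/coreset (the image referenced later in this tutorial):

```bash
# Build the coreset image from the Dockerfile in the current directory
docker build -t jsace/coreset .
```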
Next, upload the image to the registry. This can be done by using the Docker Hub username, repo name, and tag.
In our case
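Continuing the same assumption about the image name:

```bash
# Push the image to Docker Hub so Bacalhau nodes can pull it
docker push jsace/coreset
```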
After the repo image has been pushed to Docker Hub, we can now use the container for running on Bacalhau. To submit a job, run the following Bacalhau command:
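A sketch assembled from the pieces described below; the exact program invocation inside the container (script path and arguments) is an assumption:

```bash
# Mount the dataset CID at /input, convert it to GeoJSON, then run the coreset script
bacalhau docker run \
  -i ipfs://QmXuatKaWL24CwrBPC9PzmLW8NGjgvBVJfk6ZGCWUGZgCu:/input \
  jsace/coreset \
  -- /bin/bash -c 'osmium export input/liechtenstein-latest.osm.pbf -o liechtenstein-latest.geojson && python coreset.py -f liechtenstein-latest.geojson'
```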
Backend: the Docker backend is used here to run the job
input/liechtenstein-latest.osm.pbf: the path of the .osm.pbf input file
-i ipfs://QmXuatKaWL24CwrBPC9PzmLW8NGjgvBVJfk6ZGCWUGZgCu:/input: mount the dataset CID to the input folder inside the container so it can be used by the script
jsace/coreset: the name and the tag of the docker image we are using
The following command converts the osm.pbf dataset to geojson (the dataset is stored in the input volume folder):
Let's run the script. We use the -f flag to specify the path of the GeoJSON file produced in the step above.
We get the output in stdout
Additional parameters:
-k: the number of initialized centers (default = 5)
-n: the size of the coreset (default = 50)
-o: the output folder
When a job is submitted, Bacalhau prints out the related job_id. We store that in an environment variable so that we can reuse it later on.
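A sketch of that pattern, assuming the --id-only flag is used to print just the job ID:

```bash
# Capture the job ID so the follow-up commands can reference it
export JOB_ID=$(bacalhau docker run --id-only \
  -i ipfs://QmXuatKaWL24CwrBPC9PzmLW8NGjgvBVJfk6ZGCWUGZgCu:/input \
  jsace/coreset)
```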
Job status: You can check the status of the job using bacalhau list.
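For example, assuming the job ID was stored in the JOB_ID environment variable above:

```bash
# Show only the job we just submitted
bacalhau list --id-filter ${JOB_ID}
```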
When it says Published or Completed, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using bacalhau describe.
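For example:

```bash
# Print the full job specification and execution details
bacalhau describe ${JOB_ID}
```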
Job download: You can download your job results directly by using bacalhau get. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
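A sketch of that pattern (the directory name results is illustrative):

```bash
# Create a local folder and download the job output into it
mkdir -p results
bacalhau get ${JOB_ID} --output-dir results
```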
To view the file, run the following command:
To view the output as a CSV file, run:
[1] http://proceedings.mlr.press/v97/braverman19a/braverman19a.pdf
[2] https://aaltodoc.aalto.fi/bitstream/handle/123456789/108293/master_Wu_Xiaobo_2021.pdf?sequence=2
In this tutorial example, we will showcase how to containerize an OpenMM workload so that it can be executed on the Bacalhau network and take advantage of the distributed storage & compute resources. OpenMM is a toolkit for molecular simulation. It is a physics-based library that is useful for refining the structure and exploring functional interactions with other molecules. It provides a combination of extreme flexibility (through custom forces and integrators), openness, and high performance (especially on recent GPUs) that makes it truly unique among simulation codes.
Running OpenMM molecular simulation with Bacalhau
To get started, you need to install the Bacalhau client, see more information here
We use a processed 2DRI dataset that represents the ribose binding protein in bacterial transport and chemotaxis. The source organism is the Escherichia coli bacteria. You can find more details on this protein at the related RCSB Protein Data Bank page.
Protein data can be stored in a .pdb file; this is a human-readable format. It provides for the description and annotation of protein and nucleic acid structures, including atomic coordinates, secondary structure assignments, and atomic connectivity. See more information about the PDB format here.
To run the script above, all we need is a Python environment with the OpenMM library installed.
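For example, OpenMM can be installed with conda (channel per the OpenMM documentation):

```bash
# Install OpenMM into the active conda environment
conda install -c conda-forge openmm
```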
We are printing the first 10 lines of the file. The output contains a number of ATOM records. These describe the coordinates of the atoms that are part of the protein.
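For example (the file name 2dri-processed.pdb is an assumption based on the processed 2DRI dataset mentioned above):

```bash
# Show the first 10 lines of the protein structure file
head -n 10 2dri-processed.pdb
```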
The simplest way to upload the data to IPFS is to use a third-party service to "pin" data to the IPFS network, to ensure that the data exists and is available. To do this you need an account with a pinning service like web3.storage, Pinata, or nft.storage. Once registered, you can use their UI, API, or SDKs to upload files.
To build your own docker container, create a Dockerfile, which contains instructions to build your image.
:::tip For more information about working with custom containers, see the custom containers example. :::
We will run the docker build command to build the container:
Before running the command, replace:
hub-user with your Docker Hub username. If you don't have a Docker Hub account, follow these instructions to create one, and use the username of the account you created.
repo-name with the name of the container; you can name it anything you want.
tag: this is not required, but you can use the latest tag.
In our case, this will be:
Next, upload the image to the registry. This can be done by using the Docker hub username, repo name, or tag.
Now that we have the data in IPFS and the docker image pushed, we can run a job on the Bacalhau network.
When a job is submitted, Bacalhau prints out the related job_id. We store that in an environment variable so that we can reuse it later on.
Job status: You can check the status of the job using bacalhau list.
When it says Completed, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using bacalhau describe.
Job download: You can download your job results directly by using bacalhau get. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
After the download has finished you should see the following contents in the results directory
To view the file, run the following command:
Kipoi (pronounce: kípi; from the Greek κήποι: gardens) is an API and a repository of ready-to-use trained models for genomics. It currently contains 2201 different models, covering canonical predictive tasks in transcriptional and post-transcriptional gene regulation. Kipoi's API is implemented as a python package and it is also accessible from the command line.
Running genomics model on Bacalhau
To get started, you need to install the Bacalhau client, see more information here
To run Genomics on Bacalhau we need to set up a Docker container. To do this, you'll need to create a Dockerfile and add your desired configuration. The Dockerfile is a text document that contains the commands that specify how the image will be built.
The docker build command builds Docker images from a Dockerfile.
Before running the command, replace:
hub-user with your Docker Hub username. If you don't have a Docker Hub account, follow these instructions to create one, and use the username of the account you created.
repo-name with the name of the container; you can name it anything you want.
tag: this is not required, but you can use the latest tag.
In our case
Next, upload the image to the registry. This can be done by using the Docker hub username, repo name or tag.
After the repo image has been pushed to Docker Hub, we can now use the container for running on Bacalhau. To submit a job, run the following Bacalhau command:
When a job is submitted, Bacalhau prints out the related job_id. We store that in an environment variable so that we can reuse it later on.
Job status: You can check the status of the job using bacalhau list.
When it says Completed, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using bacalhau describe.
Job download: You can download your job results directly by using bacalhau get. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
After the download has finished you should see the following contents in the results directory
To view the file, run the following command:
GROMACS is a package for high-performance molecular dynamics and output analysis. Molecular dynamics is a computer simulation method for analyzing the physical movements of atoms and molecules.
In this example, we will make use of the gmx pdb2gmx program to add hydrogens to the molecules and generate coordinates in Gromacs (Gromos) format and a topology in Gromacs format.
Running Gromacs package with Bacalhau
Datasets can be found at https://www.rcsb.org. In this example we use the RCSB PDB - 1AKI dataset. After downloading, place it in a folder called "input".
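A sketch; individual structures can be downloaded from the RCSB file server (the URL pattern is an assumption):

```bash
# Download the 1AKI structure into the input folder
mkdir -p input
wget https://files.rcsb.org/download/1AKI.pdb -O input/1AKI.pdb
```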
Upload the directory to IPFS using IPFS CLI (Installation Instructions) [Not recommended]
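For example, with a local IPFS node installed and running:

```bash
# Recursively add the input directory; the last line printed is the directory CID
ipfs add -r input
```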
Copy the CID printed at the end, which is QmeeEB1YMrG6K8z43VdsdoYmQV46gAPQCHotZs9pwusCm9
Upload the directory to IPFS using Pinata (Recommended)
This command converts coordinate files to topology and FF-compliant coordinate files:
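A sketch assembled from the breakdown below; the exact output path inside the container is an assumption:

```bash
# Run gmx pdb2gmx on the mounted dataset; pdb2gmx may additionally prompt for
# (or require) a force field selection, e.g. via the -ff flag
bacalhau docker run \
  -i ipfs://QmeeEB1YMrG6K8z43VdsdoYmQV46gAPQCHotZs9pwusCm9:/input \
  gromacs/gromacs \
  -- gmx pdb2gmx -f input/1AKI.pdb -o output/1AKI_processed.gro -water spc
```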
Let's look at the command above more closely:
bacalhau docker run: use the docker backend
-i ipfs://QmeeEB1YMrG6K8z43VdsdoYmQV46gAPQCHotZs9pwusCm9:/input: here we mount the CID of the dataset we uploaded to IPFS to a folder called input inside the container
gromacs/gromacs: we use the official gromacs Docker image
-f input/1AKI.pdb: the input file
-o output/1AKI_processed.gro: the output file
-water: the water model to use; in this case we use spc
Additional parameters can be found here: gmx pdb2gmx — GROMACS 2022.2 documentation
(A similar tutorial you can try yourself: KALP-15 in DPPC - GROMACS Tutorial)
Installing Bacalhau
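A sketch, using the install script from the Bacalhau documentation:

```bash
# Install the Bacalhau CLI
curl -sL https://get.bacalhau.org/install.sh | bash
```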
Running the commands will output a UUID. This is the ID of the job that was created. You can check the status of the job with the following command:
Where it says Completed, that means the job is done, and we can get the results.
To find out more information about your job, run the following command:
To download the results of your job, run the following command:
After the download has finished you should see the following contents in the results directory
The results directory contains self-explanatory results.
In this example tutorial, we will look at how to run a BIDS App on Bacalhau. BIDS (Brain Imaging Data Structure) is an emerging standard for organizing and describing neuroimaging datasets. A BIDS App is a container image capturing a neuroimaging pipeline that takes a BIDS-formatted dataset as input. Each BIDS App has the same core set of command line arguments, making them easy to run and integrate into automated platforms. BIDS Apps are constructed in a way that does not depend on any software outside of the image other than the container engine.
Running imaging data structure with Bacalhau
To get started, you need to install the Bacalhau client, see more information here
For this tutorial, download the file ds005.tar from this BIDS dataset and untar it in a directory. ds005 will be our input directory in the following example.
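For example, assuming ds005.tar has already been downloaded to the current directory:

```bash
# Extract the dataset; this creates the ds005 input directory
tar -xf ds005.tar
```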
The simplest way to upload the data to IPFS is to use a third-party service to "pin" data to the IPFS network, to ensure that the data exists and is available. To do this you need an account with a pinning service like web3.storage, Pinata, or nft.storage. Once registered, you can use their UI, API, or SDKs to upload files.
When you pin your data, you'll get a CID which is in a format like this: QmaNyzSpJCt1gMCQLd3QugihY6HzdYmA8QMEa45LDBbVPz. Copy the CID, as it will be used to access your data.
:::info Alternatively, you can upload your dataset to IPFS using the IPFS CLI, but the recommended approach is to use a pinning service as we have mentioned above. :::
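A sketch assembled from the breakdown below; the mriqc entrypoint and the exact argument order are assumptions:

```bash
# Mount the pinned dataset at /data and run the MRIQC participant-level analysis
bacalhau docker run \
  -i ipfs://QmaNyzSpJCt1gMCQLd3QugihY6HzdYmA8QMEa45LDBbVPz:/data \
  nipreps/mriqc:latest \
  -- mriqc ../data/ds005 ../outputs participant --participant_label 01 02 03
```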
Let's look closely at the command above:
bacalhau docker run: call to bacalhau
-i ipfs://QmaNyzSpJCt1gMCQLd3QugihY6HzdYmA8QMEa45LDBbVPz:/data: mount the CID of the dataset that is uploaded to IPFS to a folder called data inside the container
nipreps/mriqc:latest: the name and the tag of the docker image we are using
../data/ds005: path to the input dataset
../outputs: path to the output
participant --participant_label 01 02 03: run the participant-level analysis for subjects 01, 02 and 03
When a job is submitted, Bacalhau prints out the related job_id. We store that in an environment variable so that we can reuse it later on.
Job status: You can check the status of the job using bacalhau list.
When it says Completed, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using bacalhau describe.
Job download: You can download your job results directly by using bacalhau get. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
After the download has finished you should see the following contents in the results directory
To view the file, run the following command: