Coreset is a data subsetting method. Since datasets can become very large once uncompressed, training on them gets much harder as training time grows with dataset size. To reduce training time and cut costs, we employ the coreset method; the coreset method can also be applied to other datasets. In this case, we use coresets to solve the k-means problem on big data quickly while maintaining high accuracy.
We construct a small coreset for arbitrary shapes of numerical data at a reasonable time cost. The implementation is mainly based on the coreset construction algorithm proposed by Braverman et al. (SODA 2021).
For a deeper understanding of the core concepts, it's recommended to explore:
1. Coresets for Ordered Weighted Clustering
2. Efficient Implementation of Coreset-based K-Means Methods
In this tutorial example, we will run a compressed dataset with Bacalhau.
To get started, you need to install the Bacalhau client, see more information here
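A minimal sketch of the installation, assuming the install script documented by the Bacalhau project:

```bash
# Install the Bacalhau CLI (check the linked docs if the script location has changed)
curl -sL https://get.bacalhau.org/install.sh | bash

# Verify the installation
bacalhau version
```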
Clone the repo which contains the code
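Based on the repository URL that appears later in this example, the clone step would look like:

```bash
git clone https://github.com/js-ts/Coreset.git
```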
To download the dataset, you should open OpenStreetMap, which is a public repository that aims to generate and distribute accessible geographic data for the whole world. Basically, it supplies detailed position information, including the longitude and latitude of places around the world.
The dataset is an osm.pbf file (a compressed format for .osm files), which can be downloaded from the Geofabrik Download Server.
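A sketch of the download, assuming Geofabrik's standard URL layout for the Monaco extract:

```bash
# Download the Monaco extract from the Geofabrik download server (URL assumed)
wget https://download.geofabrik.de/europe/monaco-latest.osm.pbf
```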
The following command installs the Linux dependencies:
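The exact package list lives in the repository; as an assumption, osmium-tool (used below for the pbf-to-geojson conversion) can be installed like this:

```bash
# Assumed dependency: osmium-tool can convert .osm.pbf extracts to GeoJSON
sudo apt-get update
sudo apt-get install -y osmium-tool
```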
Ensure that the requirements.txt file contains the following dependencies:
The following command installs the Python dependencies:
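Assuming the dependencies are listed in requirements.txt as described above, they can be installed with pip:

```bash
pip install -r requirements.txt
```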
To run coreset locally, you need to convert from the compressed pbf format to the geojson format:
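A minimal sketch of the conversion, assuming osmium-tool is installed:

```bash
# Export the compressed .osm.pbf extract to GeoJSON
osmium export monaco-latest.osm.pbf -o monaco-latest.geojson
```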
The following command runs the Python script that generates the coreset:
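This mirrors the command that is later run inside the container:

```bash
python Coreset/python/coreset.py -f monaco-latest.geojson -o outputs
```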
The coreset.py script itself can be found here.
To build your own Docker container, create a Dockerfile, which contains instructions on how the image will be built and what extra requirements will be included.
We will use the python:3.8 image, and we will run the same commands for installing dependencies that we used locally.
See more information on how to containerize your script/app here
We will run the docker build command to build the container:
Before running the command, replace:
hub-user with your Docker Hub username. If you don't have a Docker Hub account, follow these instructions to create a Docker account, and use the username of the account you created
repo-name with the name of the container; you can name it anything you want
tag this is not required, but you can use the latest tag
In our case:
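Assuming the jsace/coreset image name used later in this example:

```bash
docker build -t jsace/coreset .
```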
Next, upload the image to the registry. This can be done by using the Docker Hub username, repo name, or tag.
In our case:
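A sketch of the push, again assuming the jsace/coreset image name:

```bash
docker push jsace/coreset
```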
After the repo image has been pushed to Docker Hub, we can now use the container for running on Bacalhau. We've already converted the monaco-latest.osm.pbf file from the compressed pbf format to the geojson format here. To submit a job, run the following Bacalhau command:
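A sketch of the submission command, reconstructed from the breakdown below; flags may differ slightly between Bacalhau CLI versions:

```bash
bacalhau docker run \
  --input https://github.com/js-ts/Coreset/blob/master/monaco-latest.geojson \
  jsace/coreset \
  -- python Coreset/python/coreset.py -f monaco-latest.geojson -o outputs
```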
Let's look closely at the command above:
bacalhau docker run: call to Bacalhau
--input https://github.com/js-ts/Coreset/blob/master/monaco-latest.geojson: mount the monaco-latest.geojson file inside the container so it can be used by the script
jsace/coreset: the name of the Docker image we are using
python Coreset/python/coreset.py -f monaco-latest.geojson -o outputs: the script initializes cluster centers, creates a coreset using these centers, and saves the results to the specified folder.
-k: amount of initialized centers (default=5)
-n: size of coreset (default=50)
-o: the output folder
When a job is submitted, Bacalhau prints out the related job_id. We store that in an environment variable so that we can reuse it later on.
The same job can be presented in the declarative format. In this case, the description will look like this:
The job description should be saved in .yaml format, e.g. coreset.yaml, and then run with the command:
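Assuming a recent Bacalhau CLI, submitting the declarative spec might look like this:

```bash
bacalhau job run coreset.yaml
```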
Job status: You can check the status of the job using bacalhau job list.
When it says Published or Completed, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using bacalhau job describe.
Job download: You can download your job results directly by using bacalhau job get. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory (results) and downloaded our job output to be stored in that directory.
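Putting those steps together, a sketch of the follow-up commands (the --output-dir flag is an assumption; check your CLI version):

```bash
# Check the job status
bacalhau job list

# Get more details about the job
bacalhau job describe $JOB_ID

# Download the results into a directory called "results"
mkdir -p results
bacalhau job get $JOB_ID --output-dir results
```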
To view the file, run the following command:
To view the output as a CSV file, run:
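The exact file names produced by the coreset script are not shown here; as a hypothetical example:

```bash
# List whatever the job wrote to the outputs folder
ls results/outputs/

# View one of the generated CSV files (hypothetical file name)
cat results/outputs/coreset.csv
```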
If you have questions or need support or guidance, please reach out to the Bacalhau team via Slack (#general channel).
Kipoi (pronounced: kípi; from the Greek κήποι: gardens) is an API and a repository of ready-to-use trained models for genomics. It currently contains 2201 different models, covering canonical predictive tasks in transcriptional and post-transcriptional gene regulation. Kipoi's API is implemented as a Python package, and it is also accessible from the command line.
In this tutorial example, we will run a genomics model on Bacalhau.
To get started, you need to install the Bacalhau client, see more information here
To run locally, you need to install kipoi-veff2. You can find information about installation and usage here.
In our case this will be the following command:
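A sketch of a local run, based on the arguments used inside the container later in this example (the local output path ./output.tsv is an assumption):

```bash
kipoi_veff2_predict ./examples/input/test.vcf ./examples/input/test.fa ./output.tsv \
  -m "DeepSEA/predict" -s "diff" -s "logit"
```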
To run Genomics on Bacalhau we need to set up a Docker container. To do this, you'll need to create a Dockerfile and add your desired configuration. The Dockerfile is a text document that contains the commands that specify how the image will be built.
We will use the kipoi/kipoi-veff2:py37 image and perform variant-centered effect prediction using the kipoi_veff2_predict tool.
See more information on how to containerize your script/app here
The docker build command builds Docker images from a Dockerfile.
Before running the command, replace:
hub-user with your Docker Hub username. If you don't have a Docker Hub account, follow these instructions to create a Docker account, and use the username of the account you created
repo-name with the name of the container; you can name it anything you want
tag this is not required, but you can use the latest tag
In our case:
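Assuming the jsacex/kipoi-veff2:py37 image name used later in this example:

```bash
docker build -t jsacex/kipoi-veff2:py37 .
```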
Next, upload the image to the registry. This can be done by using the Docker Hub username, repo name, or tag.
After the repo image has been pushed to Docker Hub, we can now use the container for running on Bacalhau. To submit a job for generating genomics data, run the following Bacalhau command:
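A sketch of the submission command, reconstructed from the breakdown and network notes below; flags may differ between Bacalhau CLI versions:

```bash
bacalhau docker run \
  --network full \
  jsacex/kipoi-veff2:py37 \
  -- kipoi_veff2_predict ./examples/input/test.vcf ./examples/input/test.fa ../outputs/output.tsv \
     -m "DeepSEA/predict" -s "diff" -s "logit"
```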
In this example, a model from github.com is downloaded during job execution. In order to do this, use the --network full flag when describing the job, and --job-selection-accept-networked when starting the compute node on which the job will be executed.
Note that in the demo network, nodes do not accept jobs that require full network access. Consider creating your own private network.
Let's look closely at the command above:
bacalhau docker run: call to Bacalhau
jsacex/kipoi-veff2:py37: the name of the image we are using
kipoi_veff2_predict ./examples/input/test.vcf ./examples/input/test.fa ../outputs/output.tsv -m "DeepSEA/predict" -s "diff" -s "logit": the command that will be executed inside the container. It performs variant-centered effect prediction using the kipoi_veff2_predict tool
./examples/input/test.vcf: the path to a Variant Call Format (VCF) file containing information about genetic variants
./examples/input/test.fa: the path to a FASTA file containing DNA sequences. FASTA files contain nucleotide sequences used for variant effect prediction
../outputs/output.tsv: the path to the output file where the prediction results will be stored. The output file format is Tab-Separated Values (TSV), and it will contain information about the predicted variant effects
-m "DeepSEA/predict": specifies the model to be used for prediction
-s "diff" -s "logit": indicates using two scoring functions for comparing prediction results. In this case, the "diff" and "logit" scoring functions are used. These scoring functions can be employed to analyze differences between predictions for the reference and alternative alleles.
When a job is submitted, Bacalhau prints out the related job_id. We store that in an environment variable so that we can reuse it later on.
The same job can be presented in the declarative format. In this case, the description will look like this:
The job description should be saved in .yaml format, e.g. genomics.yaml, and then run with the command:
Job status: You can check the status of the job using bacalhau job list.
When it says Published or Completed, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using bacalhau job describe.
Job download: You can download your job results directly by using bacalhau job get. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory (results) and downloaded our job output to be stored in that directory.
To view the file, run the following command:
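Assuming Bacalhau's usual results layout, where job outputs land under results/outputs:

```bash
# View the TSV prediction results
cat results/outputs/output.tsv
```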
If you have questions or need support or guidance, please reach out to the Bacalhau team via Slack (#general channel).
GROMACS is a package for high-performance molecular dynamics and output analysis. Molecular dynamics is a computer simulation method for analyzing the physical movements of atoms and molecules.
In this example, we will make use of the gmx pdb2gmx program to add hydrogens to the molecules and generate coordinates in GROMACS (Gromos) format and a topology in GROMACS format.
In this example tutorial, our focus will be on running the GROMACS package with Bacalhau.
To get started, you need to install the Bacalhau client; see more information here.
Datasets can be found here; in this example, we use the 1AKI dataset. After downloading, place it in a folder called "input".
The simplest way to upload the data to IPFS is to use a third-party service to "pin" data to the IPFS network, to ensure that the data exists and is available. To do this, you need an account with a pinning service. Once registered, you can use their UI, API, or SDKs to upload files.
Alternatively, you can upload your dataset to IPFS using the IPFS CLI:
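With the IPFS CLI installed, adding the input folder might look like this:

```bash
# Recursively add the "input" folder to IPFS
ipfs add -r input
```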
Copy the CID at the end, which is QmeeEB1YMrG6K8z43VdsdoYmQV46gAPQCHotZs9pwusCm9
Let's run a Bacalhau job that converts coordinate files to topology and FF-compliant coordinate files:
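A sketch of the command, reconstructed from the breakdown below; flags may differ between Bacalhau CLI versions:

```bash
bacalhau docker run \
  -i ipfs://QmeeEB1YMrG6K8z43VdsdoYmQV46gAPQCHotZs9pwusCm9:/input \
  gromacs/gromacs \
  -- gmx pdb2gmx -f input/1AKI.pdb -o outputs/1AKI_processed.gro -water spc
```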
Let's look closely at the command above:
bacalhau docker run: call to Bacalhau
-i ipfs://QmeeEB1YMrG6K8z43VdsdoYmQV46gAPQCHotZs9pwusCm9:/input: here we mount the CID of the dataset we uploaded to IPFS to use on the job
-f input/1AKI.pdb: input file
-o outputs/1AKI_processed.gro: output file
-water: the water model to use; in this case we use spc
When a job is submitted, Bacalhau prints out the related job_id. We store that in an environment variable so that we can reuse it later on.
The job description should be saved in .yaml format, e.g. gromacs.yaml, and then run with the command:
Job status: You can check the status of the job using bacalhau job list.
When it says Published or Completed, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using bacalhau job describe.
Job download: You can download your job results directly by using bacalhau job get. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory (results) and downloaded our job output to be stored in that directory.
To view the file, run the following command:
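Assuming the results were downloaded into a results directory as above:

```bash
# View the generated GROMACS coordinate file
cat results/outputs/1AKI_processed.gro
```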
In this example tutorial, we will look at how to run a BIDS App on Bacalhau. BIDS (Brain Imaging Data Structure) is an emerging standard for organizing and describing neuroimaging datasets. A BIDS App is a container image capturing a neuroimaging pipeline that takes a BIDS-formatted dataset as input. Each BIDS App has the same core set of command line arguments, making them easy to run and integrate into automated platforms. BIDS Apps are constructed in a way that does not depend on any software outside of the image other than the container engine.
To get started, you need to install the Bacalhau client; see more information here.
For this tutorial, download the ds005.tar file from this BIDS dataset and untar it in a directory:
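A minimal sketch, assuming the archive is extracted into a directory called data (used in the next step):

```bash
# Extract the archive into a directory called "data"
mkdir data
tar -xf ds005.tar -C data
```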
Let's take a look at the structure of the data directory:
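One way to inspect the layout (tree is an assumption; any recursive listing works):

```bash
tree data
# or, if tree is not installed:
ls -R data
```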
When you pin your data, you'll get a CID, which is in a format like this: QmaNyzSpJCt1gMCQLd3QugihY6HzdYmA8QMEa45LDBbVPz. Copy the CID, as it will be used to access your data.
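A sketch of the submission command, reconstructed from the breakdown below; the mriqc entrypoint is an assumption based on the image name:

```bash
bacalhau docker run \
  -i ipfs://QmaNyzSpJCt1gMCQLd3QugihY6HzdYmA8QMEa45LDBbVPz:/data \
  nipreps/mriqc:latest \
  -- mriqc ../data/ds005 ../outputs participant --participant_label 01 02 03
```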
Let's look closely at the command above:
bacalhau docker run: call to Bacalhau
-i ipfs://QmaNyzSpJCt1gMCQLd3QugihY6HzdYmA8QMEa45LDBbVPz:/data: mount the CID of the dataset uploaded to IPFS to a folder called data in the container
nipreps/mriqc:latest: the name and the tag of the Docker image we are using
../data/ds005: path to the input dataset
../outputs: path to the output
participant --participant_label 01 02 03: run mriqc on subjects with participant labels 01, 02, and 03
When a job is submitted, Bacalhau prints out the related job_id. We store that in an environment variable so that we can reuse it later on.
The job description should be saved in .yaml format, e.g. bids.yaml, and then run with the command:
Job status: You can check the status of the job using bacalhau job list.
When it says Published or Completed, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using bacalhau job describe.
Job download: You can download your job results directly by using bacalhau job get. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory (results) and downloaded our job output to be stored in that directory.
To view the file, run the following command:
In this tutorial example, we will showcase how to containerize an OpenMM workload so that it can be executed on the Bacalhau network and take advantage of the distributed storage & compute resources. OpenMM is a toolkit for molecular simulation. It is a physics-based library that is useful for refining the structure and exploring functional interactions with other molecules. It provides a combination of extreme flexibility (through custom forces and integrators), openness, and high performance (especially on recent GPUs) that make it truly unique among simulation codes.
In this example tutorial, our focus will be on running OpenMM molecular simulation with Bacalhau.
To get started, you need to install the Bacalhau client; see more information here.
We use a processed 2DRI dataset that represents the ribose binding protein in bacterial transport and chemotaxis. The source organism is a bacterium.
This is only done to check whether your Python script is running. If no errors occur, proceed further.
When you pin your data, you'll get a CID. Copy the CID as it will be used to access your data
To build your own Docker container, create a Dockerfile, which contains instructions to build your image.
We will run the docker build command to build the container:
Before running the command, replace:
repo-name with the name of the container; you can name it anything you want
tag this is not required, but you can use the latest tag
In our case, this will be:
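Assuming the image reference used in the job command later in this example:

```bash
docker build -t ghcr.io/bacalhau-project/examples/openmm:0.3 .
```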
Next, upload the image to the registry. This can be done by using the Docker Hub username, repo name, or tag.
Now that we have the data in IPFS and the docker image pushed, we can run a job on the Bacalhau network.
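A sketch of the submission command, reconstructed from the breakdown below; the /inputs mount path is an assumption:

```bash
bacalhau docker run \
  -i ipfs://bafybeig63whfqyuvwqqrp5456fl4anceju24ttyycexef3k5eurg5uvrq4:/inputs \
  ghcr.io/bacalhau-project/examples/openmm:0.3 \
  -- python run_openmm_simulation.py
```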
Let's look closely at the command above:
bacalhau docker run: call to Bacalhau
bafybeig63whfqyuvwqqrp5456fl4anceju24ttyycexef3k5eurg5uvrq4: here we mount the CID of the dataset we uploaded to IPFS to use on the job
ghcr.io/bacalhau-project/examples/openmm:0.3: the name and the tag of the image we are using
python run_openmm_simulation.py: the script that will be executed inside the container
When a job is submitted, Bacalhau prints out the related job_id. We store that in an environment variable so that we can reuse it later on.
Job status: You can check the status of the job using bacalhau job list.
When it says Published or Completed, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using bacalhau job describe.
Job download: You can download your job results directly by using bacalhau job get. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory (results) and downloaded our job output to be stored in that directory.
To view the file, run the following command:
gromacs/gromacs: we use the official GROMACS Docker image
gmx pdb2gmx: the command in GROMACS that performs the conversion of molecular structural data from the Protein Data Bank (PDB) format to the GROMACS format, which is used for conducting Molecular Dynamics (MD) simulations and analyzing the results. Additional parameters can be found here
For a similar tutorial that you can try yourself, check out
The same job can be presented in the declarative format. In this case, the description will look like this:
If you have questions or need support or guidance, please reach out to the Bacalhau team via Slack (#general channel).
The simplest way to upload the data to IPFS is to use a third-party service to "pin" data to the IPFS network, to ensure that the data exists and is available. To do this, you need an account with a pinning service. Once registered, you can use their UI, API, or SDKs to upload files.
Alternatively, you can upload your dataset to IPFS using the IPFS CLI, but the recommended approach is to use a pinning service as we have mentioned above.
The same job can be presented in the declarative format. In this case, the description will look like this:
If you have questions or need support or guidance, please reach out to the Bacalhau team via Slack (#general channel).
Protein data can be stored in a .pdb file, which is a human-readable format. It provides for the description and annotation of protein and nucleic acid structures, including atomic coordinates, secondary structure assignments, and atomic connectivity. See more information about the PDB format here. The original, unprocessed 2DRI dataset can be downloaded from the RCSB Protein Data Bank.
The relevant code for the processed 2DRI dataset can be found here. Let's print the first 10 lines of the 2dri-processed.pdb file. The output contains a number of ATOM records, which describe the coordinates of the atoms that are part of the protein.
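A quick way to do that from the shell (adjust the path to wherever you saved the file):

```bash
# Print the first 10 lines of the processed PDB file
head -n 10 2dri-processed.pdb
```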
To run the processing script, all we need is a Python environment with the required library installed. The script makes sure that there are no empty cells and filters out potential error sources from the file.
The simplest way to upload the data to IPFS is to use a third-party service to "pin" data to the IPFS network, to ensure that the data exists and is available. To do this, you need an account with a pinning service. Once registered, you can use their UI, API, or SDKs to upload files.
See more information on how to containerize your script/app here.
hub-user with your Docker Hub username. If you don't have a Docker Hub account, follow these instructions to create a Docker account, and use the username of the account you created
If you have questions or need support or guidance, please reach out to the Bacalhau team via Slack (#general channel).