Gromacs for Analysis

Introduction

GROMACS is a package for high-performance molecular dynamics simulations and output analysis. Molecular dynamics is a computer simulation method for analyzing the physical movements of atoms and molecules.

In this example, we will use the gmx pdb2gmx program to add hydrogens to the molecule and generate coordinates in GROMACS (GROMOS) format together with a topology in GROMACS format.

In this tutorial, our focus will be on running the GROMACS package with Bacalhau.

Prerequisites

To get started, you need to install the Bacalhau client; see more information here.
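
At the time of writing, installation comes down to a single command; check the linked installation guide for the current, recommended method:

curl -sL https://get.bacalhau.org/install.sh | bash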

Downloading datasets

Datasets can be found at https://www.rcsb.org. In this example, we use the RCSB PDB - 1AKI dataset. After downloading, place it in a folder called “input”:

input
└── 1AKI.pdb
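
If you prefer to fetch the file from the command line, one way to do it (assuming the standard RCSB file download URL for entry 1AKI) is:

mkdir -p input
curl -o input/1AKI.pdb https://files.rcsb.org/download/1AKI.pdb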

Uploading the datasets to IPFS

The simplest way to upload the data to IPFS is to use a third-party service to "pin" the data to the IPFS network, ensuring that the data exists and is available. To do this, you need an account with a pinning service like NFT.storage or Pinata. Once registered, you can use their UI, API, or SDKs to upload files.

Alternatively, you can upload your dataset to IPFS using IPFS CLI:

ipfs add -r input/

added QmTCCqPzX3qSJHuMeSma9uCqUnriZ5eJX7MnxebxydL89f input/1AKI.pdb
added QmeeEB1YMrG6K8z43VdsdoYmQV46gAPQCHotZs9pwusCm9 input
 113.59 KiB / 113.59 KiB [=================================] 100.00%

Copy the CID of the input directory, which in this case is QmeeEB1YMrG6K8z43VdsdoYmQV46gAPQCHotZs9pwusCm9.
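
If you don't want to copy the long hash around, you can keep it in a shell variable and reference it as ipfs://${CID}:/input in the run command below (the variable name CID is just illustrative):

export CID=QmeeEB1YMrG6K8z43VdsdoYmQV46gAPQCHotZs9pwusCm9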

Running a Bacalhau Job

Let's run a Bacalhau job that converts the PDB coordinate file into a GROMACS topology and force-field-compliant coordinate file:

export JOB_ID=$(bacalhau docker run \
    --id-only \
    --wait \
    --timeout 3600 \
    --wait-timeout-secs 3600 \
    -i ipfs://QmeeEB1YMrG6K8z43VdsdoYmQV46gAPQCHotZs9pwusCm9:/input \
    gromacs/gromacs \
    -- /bin/bash -c 'echo 15 | gmx pdb2gmx -f input/1AKI.pdb -o outputs/1AKI_processed.gro -water spc')

Structure of the command

Let's look closely at the command above:

  1. bacalhau docker run: call to Bacalhau

  2. -i ipfs://QmeeEB1YMrG6K8z43VdsdoYmQV46gAPQCHotZs9pwusCm9:/input: here we mount the CID of the dataset we uploaded to IPFS so it can be used by the job

  3. gromacs/gromacs: we use the official gromacs Docker image

  4. gmx pdb2gmx: the GROMACS command that converts molecular structure data from the Protein Data Bank (PDB) format to the GROMACS format used for Molecular Dynamics (MD) simulations and analysis. pdb2gmx normally prompts interactively for a force field, so the job pipes echo 15 into it to select option 15 from that menu. Additional parameters can be found in gmx pdb2gmx — GROMACS 2022.2 documentation

  5. -f input/1AKI.pdb: input file

  6. -o outputs/1AKI_processed.gro: output file

  7. -water: the water model to use. In this case we use spc

For a similar tutorial that you can try yourself, check out the KALP-15 in DPPC - GROMACS Tutorial.
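
If you want to sanity-check the GROMACS step locally before submitting it to Bacalhau, roughly the same command can be run with plain Docker. This is only a sketch: the --entrypoint override and the mount paths are assumptions, not part of the Bacalhau job above.

# Run the same pdb2gmx step locally against the downloaded input folder
mkdir -p outputs
docker run --rm \
    --entrypoint /bin/bash \
    -v "$(pwd)/input:/input" \
    -v "$(pwd)/outputs:/outputs" \
    gromacs/gromacs \
    -c 'echo 15 | gmx pdb2gmx -f /input/1AKI.pdb -o /outputs/1AKI_processed.gro -water spc'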

When a job is submitted, Bacalhau prints out the related job_id. We store that in an environment variable so that we can reuse it later on.
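
You can confirm that the variable was set by printing it:

echo ${JOB_ID}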

Declarative job description

The same job can be presented in the declarative format. In this case, the description will look like this:

name: Gromacs
type: batch
count: 1
tasks:
  - name: My main task
    Engine:
      type: docker
      params:
        Image: gromacs/gromacs
        Entrypoint:
          - /bin/bash
        Parameters:
          - -c
          - echo 15 | gmx pdb2gmx -f input/1AKI.pdb -o outputs/1AKI_processed.gro -water spc
    Publisher:
      Type: ipfs
    ResultPaths:
      - Name: outputs
        Path: /outputs      
    InputSources:
      - Target: "/input"
        Source:
          Type: "s3"
          Params:
            Bucket: "bacalhau-gromacs"
            Key: "*"
            Region: "us-east-1"

The job description should be saved in .yaml format, e.g. gromacs.yaml, and then run with the command:

bacalhau job run gromacs.yaml
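
If you want to reuse the job ID in the status commands below, you can capture it the same way as in the imperative example, assuming your Bacalhau version supports the --id-only flag on bacalhau job run:

export JOB_ID=$(bacalhau job run gromacs.yaml --id-only)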

Checking the State of your Jobs

Job status: You can check the status of the job using bacalhau job list.

bacalhau job list --id-filter ${JOB_ID} --wide

When it says Published or Completed, that means the job is done, and we can get the results.

Job information: You can find out more information about your job by using bacalhau job describe.

bacalhau job describe ${JOB_ID}

Job download: You can download your job results directly using bacalhau job get. Alternatively, you can choose to create a directory to store your results. In the command below, we create a directory (results) and download our job output to that directory.

rm -rf results && mkdir -p results
bacalhau job get $JOB_ID --output-dir results

Viewing your Job Output

To view the file, run the following command:

cat results/outputs/1AKI_processed.gro  
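
The .gro file is plain text: the first line is a title, the second line is the number of atoms, and the remaining lines are atom records. To inspect just the header, you can run:

head -n 3 results/outputs/1AKI_processed.gro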

Support

If you have questions or need support or guidance, please reach out to the Bacalhau team via Slack (#general channel).
