In this tutorial example, we will showcase how to containerize an OpenMM workload so that it can be executed on the Bacalhau network and take advantage of its distributed storage & compute resources. OpenMM is a toolkit for molecular simulation. It is a physics-based library that is useful for refining protein structures and exploring functional interactions with other molecules. It provides a combination of extreme flexibility (through custom forces and integrators), openness, and high performance (especially on recent GPUs) that makes it unique among simulation codes.
In this example tutorial, our focus will be on running an OpenMM molecular simulation with Bacalhau.
Prerequisite
To get started, you need to install the Bacalhau client; see more information here.
Running Locally
Downloading Datasets
We use a processed 2DRI dataset that represents the ribose binding protein in bacterial transport and chemotaxis. The source organism is the bacterium Escherichia coli.
Protein data can be stored in a .pdb file, a human-readable format. It provides for the description and annotation of protein and nucleic acid structures, including atomic coordinates, secondary structure assignments, and atomic connectivity. See more information about the PDB format here. The original, unprocessed 2DRI dataset can be downloaded from the RCSB Protein Data Bank here.
The relevant code of the processed 2DRI dataset can be found here. Let's print the first 10 lines of the 2dri-processed.pdb file. The output contains a number of ATOM records. These describe the coordinates of the atoms that are part of the protein.
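ATOM records use a fixed-width column layout, so coordinates are recovered by slicing character ranges rather than splitting on whitespace. As an illustration, here is a minimal sketch of parsing one such record; the column slices (31–38, 39–46, 47–54 for x, y, z) are the same ones the filtering script below relies on, and the sample record values are hypothetical:

```python
def parse_coords(line):
    """Extract x, y, z from a fixed-width PDB ATOM/HETATM record."""
    return (
        float(line[30:38]),  # x: columns 31-38
        float(line[38:46]),  # y: columns 39-46
        float(line[46:54]),  # z: columns 47-54
    )

# A hand-built ATOM record (hypothetical values) padded to the PDB column layout
record = "ATOM      1  N   LYS A   1      12.345  67.890  -1.234"
print(parse_coords(record))
```

If a record is shorter than 54 characters or a slice fails to parse as a float, the line cannot be a valid coordinate record, which is exactly the check the filtering script performs.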
All we need to run the script below is a Python environment with the OpenMM library installed. The script filters out malformed records and other potential error sources from the file before running the simulation.
```python
# Import the packages
import os
from openmm import *
from openmm.app import *
from openmm.unit import *

# Specify the input files
input_path = 'inputs/2dri-processed.pdb'
if not os.path.exists(input_path):
    raise FileNotFoundError(f"Input file not found: {input_path}")

# Function to check and filter PDB file lines
def filter_valid_pdb_lines(input_path, output_path):
    with open(input_path, 'r') as infile, open(output_path, 'w') as outfile:
        lines = infile.readlines()
        for i, line in enumerate(lines):
            if line.startswith("ATOM") or line.startswith("HETATM"):
                if len(line) >= 54:
                    try:
                        float(line[30:38].strip())
                        float(line[38:46].strip())
                        float(line[46:54].strip())
                        outfile.write(line)
                    except ValueError:
                        print(f"Skipping line {i + 1} because it has invalid coordinates: {line.strip()}")
                else:
                    print(f"Skipping line {i + 1} because it is too short: {line.strip()}")
            else:
                outfile.write(line)

# Filter PDB file
filtered_pdb_path = 'inputs/filtered_2dri-processed.pdb'
filter_valid_pdb_lines(input_path, filtered_pdb_path)

# Load the filtered PDB file
try:
    pdb = PDBFile(filtered_pdb_path)
except ValueError as e:
    print(f"ValueError while reading filtered PDB file: {e}")
    raise

forcefield = ForceField('amber14-all.xml', 'amber14/tip3pfb.xml')

# Output
output_path = 'outputs/final_state.pdbx'
if not os.path.exists(os.path.dirname(output_path)):
    os.makedirs(os.path.dirname(output_path))

# System Configuration
nonbondedMethod = PME
nonbondedCutoff = 1.0 * nanometers
ewaldErrorTolerance = 0.0005
constraints = HBonds
rigidWater = True
constraintTolerance = 0.000001
hydrogenMass = 1.5 * amu

# Integration Options
dt = 0.002 * picoseconds
temperature = 310 * kelvin
friction = 1.0 / picosecond
pressure = 1.0 * atmospheres
barostatInterval = 25

# Simulation Options
steps = 10
equilibrationSteps = 0
# platform = Platform.getPlatformByName('CUDA')
platform = Platform.getPlatformByName('CPU')
# platformProperties = {'Precision': 'single'}
platformProperties = {}
dcdReporter = DCDReporter('trajectory.dcd', 1000)
dataReporter = StateDataReporter('log.txt', 1000, totalSteps=steps,
    step=True, time=True, speed=True, progress=True, elapsedTime=True,
    remainingTime=True, potentialEnergy=True, kineticEnergy=True,
    totalEnergy=True, temperature=True, volume=True, density=True,
    separator='\t')
checkpointReporter = CheckpointReporter('checkpoint.chk', 1000)

# Prepare the Simulation
print('Building system...')
topology = pdb.topology
positions = pdb.positions
system = forcefield.createSystem(topology, nonbondedMethod=nonbondedMethod,
    nonbondedCutoff=nonbondedCutoff, constraints=constraints,
    rigidWater=rigidWater, ewaldErrorTolerance=ewaldErrorTolerance,
    hydrogenMass=hydrogenMass)
system.addForce(MonteCarloBarostat(pressure, temperature, barostatInterval))
integrator = LangevinMiddleIntegrator(temperature, friction, dt)
integrator.setConstraintTolerance(constraintTolerance)
simulation = Simulation(topology, system, integrator, platform, platformProperties)
simulation.context.setPositions(positions)

# Minimize and Equilibrate
print('Performing energy minimization...')
simulation.minimizeEnergy()
print('Equilibrating...')
simulation.context.setVelocitiesToTemperature(temperature)
simulation.step(equilibrationSteps)

# Simulate
print('Simulating...')
simulation.reporters.append(dcdReporter)
simulation.reporters.append(dataReporter)
simulation.reporters.append(checkpointReporter)
simulation.currentStep = 0
simulation.step(steps)

# Write a file with the final simulation state
state = simulation.context.getState(getPositions=True,
    enforcePeriodicBox=system.usesPeriodicBoundaryConditions())
with open(output_path, mode="w+") as file:
    PDBxFile.writeFile(simulation.topology, state.getPositions(), file)
print('Simulation complete, file written to disk at: {}'.format(output_path))
```
Running the Script
```shell
python run_openmm_simulation.py
```
This run is only a local check that your Python script works. If no errors occur, proceed to the next step.
Uploading the Data to IPFS
The simplest way to upload the data to IPFS is to use a third-party service to "pin" data to the IPFS network, to ensure that the data exists and is available. To do this, you need an account with a pinning service like Pinata or nft.storage. Once registered, you can use their UI or API or SDKs to upload files.
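As one illustration, assuming a Pinata account and an API token stored in the `PINATA_JWT` environment variable (both assumptions here, not part of this example's setup), pinning the input file from the command line might look like this sketch:

```shell
# Hypothetical sketch: pin the input file via Pinata's pinning HTTP API.
# PINATA_JWT is assumed to hold the API token from your Pinata account.
curl -X POST "https://api.pinata.cloud/pinning/pinFileToIPFS" \
  -H "Authorization: Bearer ${PINATA_JWT}" \
  -F "file=@inputs/2dri-processed.pdb"
```

Check your pinning service's own documentation for the current endpoint and the field of the JSON response that carries the resulting CID.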
When you pin your data, you'll get a CID. Copy the CID, as it will be used to access your data.
Containerize Script using Docker
To build your own docker container, create a Dockerfile, which contains instructions to build your image.
See more information on how to containerize your script/app here
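As a sketch, a Dockerfile for this workload might look like the following; the base image, the pip-installed OpenMM build, and the file layout are assumptions for illustration, not the exact recipe behind ghcr.io/bacalhau-project/examples/openmm:0.3:

```dockerfile
# Minimal sketch of a container image for the OpenMM workload (assumed layout)
FROM python:3.10-slim

# OpenMM publishes CPU wheels on PyPI; a GPU build would need a CUDA base image
RUN pip install --no-cache-dir openmm

WORKDIR /app
COPY run_openmm_simulation.py .

CMD ["python", "run_openmm_simulation.py"]
```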
bafybeig63whfqyuvwqqrp5456fl4anceju24ttyycexef3k5eurg5uvrq4: here we mount the CID of the dataset we uploaded to IPFS to use on the job
ghcr.io/bacalhau-project/examples/openmm:0.3: the name and the tag of the image we are using
python run_openmm_simulation.py: the script that will be executed inside the container
When a job is submitted, Bacalhau prints out the related job_id. We store that in an environment variable so that we can reuse it later on.
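Putting those pieces together, the submission might look like the following sketch; flag spellings can vary between Bacalhau versions, so treat this as an assumption to verify against `bacalhau docker run --help`:

```shell
# Sketch of a job submission (verify flags against your Bacalhau version):
# mount the dataset CID at /inputs, run the script in the published image,
# and capture the job ID in an environment variable for later commands.
export JOB_ID=$(bacalhau docker run \
    --id-only \
    --wait \
    --input ipfs://bafybeig63whfqyuvwqqrp5456fl4anceju24ttyycexef3k5eurg5uvrq4:/inputs \
    ghcr.io/bacalhau-project/examples/openmm:0.3 \
    -- python run_openmm_simulation.py)
echo "Submitted job: ${JOB_ID}"
```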
Checking the State of your Jobs
Job status: You can check the status of the job using bacalhau job list.
```shell
bacalhau job list --id-filter=${JOB_ID} --no-style
```
When it says Published or Completed, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using bacalhau job describe.
```shell
bacalhau job describe ${JOB_ID}
```
Job download: You can download your job results directly by using bacalhau job get. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory (results) and downloaded our job output to be stored in that directory.
```shell
rm -rf results && mkdir -p results # Create a clean results directory
bacalhau job get ${JOB_ID} --output-dir results # Download the results
```
Viewing your Job Output
To view the file, run the following command:
```shell
cat results/outputs/final_state.pdbx
```
Support
If you have questions or need support or guidance, please reach out to the Bacalhau team via Slack (#general channel).