In this tutorial example, we will showcase how to containerize an OpenMM workload so that it can be executed on the Bacalhau network and take advantage of the distributed storage & compute resources. OpenMM is a toolkit for molecular simulation. It is a physics-based library that is useful for refining protein structures and exploring functional interactions with other molecules. It provides a combination of extreme flexibility (through custom forces and integrators), openness, and high performance (especially on recent GPUs) that make it truly unique among simulation codes.
TL;DR
Running an OpenMM molecular simulation with Bacalhau
Prerequisite
To get started, you need to install the Bacalhau client; see more information here
Protein data
We use a processed 2DRI dataset that represents the ribose binding protein in bacterial transport and chemotaxis. The source organism is the Escherichia coli bacteria. You can find more details on this protein at the related RCSB Protein Data Bank page.
Protein data can be stored in a .pdb file, this is a human-readable format. It provides for the description and annotation of protein and nucleic acid structures including atomic coordinates, secondary structure assignments, as well as atomic connectivity. See more information about PDB format here.
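To make the ATOM record layout concrete, here is a minimal sketch of pulling atom coordinates out of a single PDB record with plain Python. The sample record is one of the lines from the 2dri-processed.pdb excerpt shown below; real parsers rely on the format's fixed column positions, but whitespace splitting works for well-formed records like this one.

```python
# One ATOM record from the 2dri-processed.pdb excerpt shown below
record = "ATOM      1  N   LYS A   1      64.731   9.461  59.430  1.00  0.00           N"

fields = record.split()
atom_serial = int(fields[1])        # atom serial number
atom_name = fields[2]               # atom name, e.g. N, CA, CB
residue = fields[3]                 # residue name, e.g. LYS
x, y, z = map(float, fields[6:9])   # orthogonal coordinates in angstroms

print(atom_serial, atom_name, residue, (x, y, z))
```

In the actual workload we never parse these records by hand; OpenMM's `PDBFile` class does this for us in the script below.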
Write the script
To run the script below, all we need is a Python environment with the OpenMM library installed.
```python
%%writefile run_openmm_simulation.py
import os
from openmm import *
from openmm.app import *
from openmm.unit import *

# Input Files
input_path = '/inputs/2dri-processed.pdb'
os.path.exists(input_path)  # check if input file exists
pdb = PDBFile(input_path)
forcefield = ForceField('amber14-all.xml', 'amber14/tip3pfb.xml')

# Output
output_path = '/outputs/final_state.pdbx'
if not os.path.exists(os.path.dirname(output_path)):  # check if output dir exists
    os.makedirs(os.path.dirname(output_path))

# System Configuration
nonbondedMethod = PME
nonbondedCutoff = 1.0*nanometers
ewaldErrorTolerance = 0.0005
constraints = HBonds
rigidWater = True
constraintTolerance = 0.000001
hydrogenMass = 1.5*amu

# Integration Options
dt = 0.002*picoseconds
temperature = 310*kelvin
friction = 1.0/picosecond
pressure = 1.0*atmospheres
barostatInterval = 25

# Simulation Options
steps = 10
equilibrationSteps = 0
#platform = Platform.getPlatformByName('CUDA')
platform = Platform.getPlatformByName('CPU')
#platformProperties = {'Precision': 'single'}
platformProperties = {}
dcdReporter = DCDReporter('trajectory.dcd', 1000)
dataReporter = StateDataReporter('log.txt', 1000, totalSteps=steps,
    step=True, time=True, speed=True, progress=True, elapsedTime=True,
    remainingTime=True, potentialEnergy=True, kineticEnergy=True,
    totalEnergy=True, temperature=True, volume=True, density=True,
    separator='\t')
checkpointReporter = CheckpointReporter('checkpoint.chk', 1000)

# Prepare the Simulation
print('Building system...')
topology = pdb.topology
positions = pdb.positions
system = forcefield.createSystem(topology, nonbondedMethod=nonbondedMethod,
    nonbondedCutoff=nonbondedCutoff, constraints=constraints, rigidWater=rigidWater,
    ewaldErrorTolerance=ewaldErrorTolerance, hydrogenMass=hydrogenMass)
system.addForce(MonteCarloBarostat(pressure, temperature, barostatInterval))
integrator = LangevinMiddleIntegrator(temperature, friction, dt)
integrator.setConstraintTolerance(constraintTolerance)
simulation = Simulation(topology, system, integrator, platform, platformProperties)
simulation.context.setPositions(positions)

# Minimize and Equilibrate
print('Performing energy minimization...')
simulation.minimizeEnergy()
print('Equilibrating...')
simulation.context.setVelocitiesToTemperature(temperature)
simulation.step(equilibrationSteps)

# Simulate
print('Simulating...')
simulation.reporters.append(dcdReporter)
simulation.reporters.append(dataReporter)
simulation.reporters.append(checkpointReporter)
simulation.currentStep = 0
simulation.step(steps)

# Write a file with the final simulation state
state = simulation.context.getState(getPositions=True,
    enforcePeriodicBox=system.usesPeriodicBoundaryConditions())
with open(output_path, mode="w+") as file:
    PDBxFile.writeFile(simulation.topology, state.getPositions(), file)
print('Simulation complete, file written to disk at: {}'.format(output_path))
```
We are printing the first 10 lines of the file. The output contains a number of ATOM records. These describe the coordinates of the atoms that are part of the protein.
```bash
%%bash
head ./dataset/2dri-processed.pdb
```
```
REMARK   1 CREATED WITH OPENMM 7.6, 2022-07-12
CRYST1   81.309   81.309   81.309  90.00  90.00  90.00 P 1           1
ATOM      1  N   LYS A   1      64.731   9.461  59.430  1.00  0.00           N
ATOM      2  CA  LYS A   1      63.588  10.286  58.927  1.00  0.00           C
ATOM      3  HA  LYS A   1      62.707   9.486  59.038  1.00  0.00           H
ATOM      4  C   LYS A   1      63.790  10.671  57.468  1.00  0.00           C
ATOM      5  O   LYS A   1      64.887  11.089  57.078  1.00  0.00           O
ATOM      6  CB  LYS A   1      63.458  11.567  59.749  1.00  0.00           C
ATOM      7  HB2 LYS A   1      63.333  12.366  58.879  1.00  0.00           H
ATOM      8  HB3 LYS A   1      64.435  11.867  60.372  1.00  0.00           H
```
Upload the Data to IPFS
The simplest way to upload the data to IPFS is to use a third-party service to "pin" data to the IPFS network, ensuring that the data exists and is available. To do this you need an account with a pinning service like web3.storage, Pinata, or nft.storage. Once registered, you can use their UI, API, or SDKs to upload files.
Containerize Script using Docker
To build your own docker container, create a Dockerfile, which contains instructions to build your image.
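As an illustration, the Dockerfile might look something like the sketch below. The base image and package choices here are assumptions (OpenMM is distributed on conda-forge, so a conda-enabled base image is a convenient starting point); adapt them to your own registry and OpenMM version.

```Dockerfile
# Sketch only: base image and install method are assumptions, not the
# canonical image used by this example.
FROM condaforge/mambaforge:latest

# OpenMM is available from the conda-forge channel
RUN mamba install -y -c conda-forge openmm

WORKDIR /project
COPY run_openmm_simulation.py /project/

ENTRYPOINT ["python", "run_openmm_simulation.py"]
```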
:::tip For more information about working with custom containers, see the custom containers example. :::
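Submitting the job might look like the sketch below. The image name and the dataset CID are placeholders you must replace with your own, and the flag names follow the Bacalhau CLI at the time of writing (check `bacalhau docker run --help` for your version): `-i` mounts the pinned IPFS data under `/inputs` inside the container, `--wait` blocks until the job finishes, and `--id-only` prints just the job_id.

```bash
# Placeholders: <CID-of-pinned-data> and <your-registry>/openmm:latest
job_id=$(bacalhau docker run \
    --wait \
    --id-only \
    -i ipfs://<CID-of-pinned-data>:/inputs \
    <your-registry>/openmm:latest \
    -- python run_openmm_simulation.py)
echo "${job_id}"
```

Because this command needs a running Bacalhau client and a live network, it is shown here only as a template for the submission step.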
When a job is submitted, Bacalhau prints out the related job_id. We store that in an environment variable so that we can reuse it later on.
%env JOB_ID={job_id}
Checking the State of your Jobs
Job status: You can check the status of the job using bacalhau list.
```bash
%%bash
bacalhau list --id-filter=${JOB_ID} --no-style
```
When it says Completed, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using bacalhau describe.
```bash
%%bash
bacalhau describe ${JOB_ID}
```
Job download: You can download your job results directly by using bacalhau get. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory and downloaded our job output to be stored in that directory.
```bash
%%bash
rm -rf results && mkdir -p results
bacalhau get ${JOB_ID} # Download the results
```
After the download has finished, you should see the following contents in the results directory.