Ethereum Blockchain Analysis with Ethereum-ETL and Bacalhau
Introduction
Mature blockchains are difficult to analyze because of their size. Ethereum-ETL is a tool that makes it easy to extract information from an Ethereum node, but it's not easy to get working in a batch manner. It takes approximately a week for an Ethereum node to download the entire chain (even longer in my experience), and importing and exporting data from the node is slow.
For this example, we ran an Ethereum node for a week and allowed it to synchronize. We then ran ethereum-etl to extract the information and pinned it on Filecoin. This means that we can now access the data without having to run another Ethereum node.
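For reference, the extraction step used the ethereum-etl command line tool against the synced node. The exact command is not part of this tutorial, but a typical export of a block range looks something like the sketch below; the block numbers and geth IPC path are illustrative assumptions.
# Export one chunk of blocks and transactions from a local geth node (illustrative)
ethereumetl export_blocks_and_transactions \
    --start-block 0 --end-block 99999 \
    --provider-uri file://$HOME/.ethereum/geth.ipc \
    --blocks-output blocks.csv \
    --transactions-output transactions.csv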
But there's still a lot of data and these types of analyses typically need repeating or refining. So it makes absolute sense to use a decentralized network like Bacalhau to process the data in a scalable way.
In this tutorial example, we will run the Ethereum-ETL tool on Bacalhau to extract data from an Ethereum node.
Prerequisite
To get started, you need to install the Bacalhau client; see more information here.
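If the client isn't installed yet, the Bacalhau documentation provides a one-line install script; at the time of writing it looks like the following (check the linked page for the current command):
curl -sL https://get.bacalhau.org/install.sh | bash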
Analysing Ethereum Data Locally
First let's download one of the IPFS files and inspect it locally:
You can see the full list of IPFS CIDs in the appendix at the bottom of the page.
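A quick way to grab and unpack a single chunk is through a public IPFS gateway; the sketch below uses <CID> as a placeholder for any CID from that list.
# Download one chunk through a public IPFS gateway (substitute a real CID from the appendix)
wget -O data.tar.gz "https://ipfs.io/ipfs/<CID>"
# Unpack into the output_*/transactions/... directory structure used below
tar -xzf data.tar.gz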
If you don't already have the Pandas library, let's install it:
pip install pandas
The following code inspects the daily trading volume of Ethereum for a single chunk (100,000 blocks) of data.
# Use pandas to read in transaction data and clean up the columns
import pandas as pd
import glob

file = glob.glob('output_*/transactions/start_block=*/end_block=*/transactions*.csv')[0]
print("Loading file %s" % file)
df = pd.read_csv(file)
df['value'] = df['value'].astype('float')
df['from_address'] = df['from_address'].astype('string')
df['to_address'] = df['to_address'].astype('string')
df['hash'] = df['hash'].astype('string')
df['block_hash'] = df['block_hash'].astype('string')
df['block_datetime'] = pd.to_datetime(df['block_timestamp'], unit='s')
df.info()

# Total volume per day
df[['block_datetime', 'value']].groupby(pd.Grouper(key='block_datetime', freq='1D')).sum().plot()
This is all good, but we can do better. We can use the Bacalhau client to download the data from IPFS and then run the analysis on the data in the cloud. This means that we can analyze the entire Ethereum blockchain without having to download it locally.
Analysing Ethereum Data With Bacalhau
To run jobs on the Bacalhau network you need to package your code. In this example, I will package the code as a Docker image.
But before we do that, we need to develop the code that will perform the analysis. The code below is a simple script to parse the incoming data and produce a CSV file with the daily trading volume of Ethereum.
# main.py
import glob, os, sys, shutil, tempfile
import pandas as pd


def main(input_dir, output_dir):
    search_path = os.path.join(input_dir, "output*", "transactions", "start_block*", "end_block*", "transactions_*.csv")
    csv_files = glob.glob(search_path)
    if len(csv_files) == 0:
        print("No CSV files found in %s" % search_path)
        sys.exit(1)
    for transactions_file in csv_files:
        print("Loading %s" % transactions_file)
        df = pd.read_csv(transactions_file)
        df['value'] = df['value'].astype('float')
        df['block_datetime'] = pd.to_datetime(df['block_timestamp'], unit='s')
        print("Processing %d blocks" % (df.shape[0]))
        results = df[['block_datetime', 'value']].groupby(pd.Grouper(key='block_datetime', freq='1D')).sum()
        print("Finished processing %d days worth of records" % (results.shape[0]))
        save_path = os.path.join(output_dir, os.path.basename(transactions_file))
        os.makedirs(os.path.dirname(save_path), exist_ok=True)
        print("Saving to %s" % (save_path))
        results.to_csv(save_path)


def extractData(input_dir, output_dir):
    search_path = os.path.join(input_dir, "*.tar.gz")
    gz_files = glob.glob(search_path)
    if len(gz_files) == 0:
        print("No tar.gz files found in %s" % search_path)
        sys.exit(1)
    for f in gz_files:
        shutil.unpack_archive(filename=f, extract_dir=output_dir)


if __name__ == "__main__":
    if len(sys.argv) != 3:
        print('Must pass arguments. Format: [command] input_dir output_dir')
        sys.exit()
    with tempfile.TemporaryDirectory() as tmp_dir:
        extractData(sys.argv[1], tmp_dir)
        main(tmp_dir, sys.argv[2])
Next, let's make sure the file works as expected:
python main.py . outputs/
And finally, package the code inside a Docker image to make the process reproducible. Here I'm passing the Bacalhau default /inputs and /outputs directories. The /inputs directory is where the data will be read from and the /outputs directory is where the results will be saved to.
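A minimal Dockerfile for this might look like the sketch below; the base image and pinned versions here are illustrative assumptions, not the exact recipe used to build the published image.
# Illustrative Dockerfile (base image and versions are assumptions)
FROM python:3.11-slim
# Install the only runtime dependency of main.py
RUN pip install --no-cache-dir pandas
WORKDIR /app
COPY main.py .
# Read archives from the Bacalhau /inputs volume and write CSVs to /outputs
ENTRYPOINT ["python", "main.py", "/inputs", "/outputs"]
Once the image is built and pushed to a registry, the job can be submitted with bacalhau docker run. Below is a sketch of the submission using a placeholder <CID> for one of the chunk CIDs from the appendix and the pre-built image referenced later in this tutorial; the returned job ID is captured in an environment variable.
# Submit the job and store the returned job ID (replace <CID> with a chunk CID from the appendix)
export JOB_ID=$(bacalhau docker run \
    --id-only \
    --wait \
    --input=ipfs://<CID>:/inputs/data.tar.gz \
    ghcr.io/bacalhau-project/examples/blockchain-etl:0.0.6)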
The job has been submitted and Bacalhau has printed out the related job id. We store that in an environment variable so that we can reuse it later on.
The bacalhau docker run command allows passing an input data volume with the --input or -i ipfs://CID:path argument, much like a Docker volume mount, except that the left-hand side of the argument is a content identifier (CID) rather than a host path. This results in Bacalhau mounting a data volume inside the container. By default, Bacalhau mounts the input volume at the path /inputs inside the container.
Bacalhau also mounts a data volume to store output data. The bacalhau docker run command creates an output data volume mounted at /outputs. This is a convenient location to store the results of your job.
Declarative job description
The same job can be presented in the declarative format. In this case, the description will look like this:
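Treat the following as a sketch rather than the definitive spec, since the exact field names depend on your Bacalhau version: it targets the same Docker image and mounts one chunk as an IPFS input source, with <CID> as a placeholder for a chunk CID from the appendix.
# Sketch of a declarative job description (adjust field names to your Bacalhau version)
name: blockchain-etl
type: batch
count: 1
tasks:
  - name: main
    engine:
      type: docker
      params:
        Image: ghcr.io/bacalhau-project/examples/blockchain-etl:0.0.6
    inputSources:
      - target: /inputs/data.tar.gz
        source:
          type: ipfs
          params:
            cid: "<CID>"   # placeholder: any chunk CID from the appendix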
The job description should be saved in .yaml format, e.g. blockchain.yaml, and then run with the command:
bacalhau job run blockchain.yaml
Checking the State of your Jobs
Job status: You can check the status of the job using bacalhau job list.
bacalhau job list --id-filter ${JOB_ID}
When it says Published or Completed, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using bacalhau job describe.
bacalhau job describe ${JOB_ID}
Job download: You can download your job results directly by using bacalhau job get. Alternatively, you can choose to create a directory to store your results. In the command below, we create a directory (results) and download our job output into it.
rm -rf results && mkdir -p results # Temporary directory to store the results
bacalhau job get ${JOB_ID} --output-dir results # Download the results
Viewing your Job Output
To view the file, run the following command:
ls -lah results/outputs
Display the image
To view the images, we will use glob to return all file paths that match a specific pattern.
import glob
import pandas as pd

# Get CSV files list from a folder
csv_files = glob.glob("results/outputs/*.csv")
df = pd.read_csv(csv_files[0], index_col='block_datetime')
df.plot()
Massive Scale Ethereum Analysis
Ok, so that works. Let's scale this up! We can run the same analysis on the entire Ethereum blockchain (up to the point where I have uploaded the Ethereum data). To do this, we need to run the analysis on each of the chunks of data that we have stored on IPFS. We can do this by running the same job on each of the chunks.
See the appendix for the hashes.txt file.
printf"">job_ids.txtfor h in $(cathashes.txt); do \bacalhaudockerrun \--id-only \--wait=false \--input=ipfs://$h:/inputs/data.tar.gz \ghcr.io/bacalhau-project/examples/blockchain-etl:0.0.6>>job_ids.txtdone
Now take a look at the job IDs. You can use these to check the status of the jobs and download the results:
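For example, print the file of collected job IDs:
cat job_ids.txt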
You might want to double-check that the jobs ran ok by doing a bacalhau job list.
bacalhau job list -n 50
Wait until all of these jobs have completed, then download all the results and merge them into a single directory. This might take a while, so this is a good time to treat yourself to a nice Dark Mild. There have also been some issues in the past communicating with IPFS, so if you get an error, try again.
for id in $(cat job_ids.txt); do \
    rm -rf results_$id && mkdir results_$id
    bacalhau job get --output-dir results_$id $id &
done
wait
Display the image
To view the images, we will use glob to return all file paths that match a specific pattern.
import os, glob
import pandas as pd

# Get CSV files list from a folder
path = os.path.join("results_*", "outputs", "*.csv")
csv_files = glob.glob(path)

# Read each CSV file into a list of DataFrames
df_list = (pd.read_csv(file, index_col='block_datetime') for file in csv_files)

# Concatenate all DataFrames
df_unsorted = pd.concat(df_list, ignore_index=False)

# Some files will cross days, so group by day and sum the values
df = df_unsorted.groupby(level=0).sum()

# Plot
df.plot(figsize=(16, 9))
That's it! There are several years of Ethereum transaction volume data.
The following is a list of the IPFS CIDs for the Ethereum data that we used in this tutorial. You can use these CIDs to download the rest of the chain if you so desire. The CIDs are ordered by block number and increase 50,000 blocks at a time. Here's the list of ordered CIDs: