Ethereum Blockchain Analysis with Ethereum-ETL and Bacalhau
Mature blockchains are difficult to analyze because of their size. Ethereum-ETL is a tool that makes it easy to extract information from an Ethereum node, but it's not easy to run in a batch manner. It takes approximately one week (even longer, in my experience) for an Ethereum node to download the entire chain, and importing and exporting data from the node is slow.
For this example, we ran an Ethereum node for a week and allowed it to synchronize. We then ran ethereum-etl to extract the information and pinned it on Filecoin. This means that we can now access the data without having to run another Ethereum node.
But there's still a lot of data, and these types of analyses typically need repeating or refining. So it makes sense to use a decentralized network like Bacalhau to process the data in a scalable way.
TL;DR
Running the Ethereum-ETL tool on Bacalhau to extract data from an Ethereum node.
Prerequisite
To get started, you need to install the Bacalhau client; see more information here.
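If you haven't installed it yet, the client can typically be installed with the one-line script below (a sketch based on the standard Bacalhau installer; check the docs for the current command):

```bash
# Assumed install method: the official Bacalhau install script
curl -sL https://get.bacalhau.org/install.sh | bash
```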
```python
# Use pandas to read in transaction data and clean up the columns
import pandas as pd
import glob

file = glob.glob('output_*/transactions/start_block=*/end_block=*/transactions*.csv')[0]
print("Loading file %s" % file)
df = pd.read_csv(file)
df['value'] = df['value'].astype('float')
df['from_address'] = df['from_address'].astype('string')
df['to_address'] = df['to_address'].astype('string')
df['hash'] = df['hash'].astype('string')
df['block_hash'] = df['block_hash'].astype('string')
df['block_datetime'] = pd.to_datetime(df['block_timestamp'], unit='s')
df.info()
```
The following code inspects the daily trading volume of Ethereum for a single chunk (50,000 blocks) of data.
This is all good, but we can do better. We can use the Bacalhau client to download the data from IPFS and then run the analysis on the data in the cloud. This means that we can analyze the entire Ethereum blockchain without having to download it locally.
```python
# Total volume per day
df[['block_datetime', 'value']].groupby(pd.Grouper(key='block_datetime', freq='1D')).sum().plot()
```
Analysing Ethereum Data With Bacalhau
To run jobs on the Bacalhau network you need to package your code. In this example, I will package the code as a Docker image.
But before we do that, we need to develop the code that will perform the analysis. The code below is a simple script to parse the incoming data and produce a CSV file with the daily trading volume of Ethereum.
```python
%%writefile main.py
import glob, os, sys, shutil, tempfile
import pandas as pd

def main(input_dir, output_dir):
    search_path = os.path.join(input_dir, "output*", "transactions", "start_block*", "end_block*", "transactions_*.csv")
    csv_files = glob.glob(search_path)
    if len(csv_files) == 0:
        print("No CSV files found in %s" % search_path)
        sys.exit(1)
    for transactions_file in csv_files:
        print("Loading %s" % transactions_file)
        df = pd.read_csv(transactions_file)
        df['value'] = df['value'].astype('float')
        df['block_datetime'] = pd.to_datetime(df['block_timestamp'], unit='s')
        print("Processing %d blocks" % (df.shape[0]))
        results = df[['block_datetime', 'value']].groupby(pd.Grouper(key='block_datetime', freq='1D')).sum()
        print("Finished processing %d days worth of records" % (results.shape[0]))
        save_path = os.path.join(output_dir, os.path.basename(transactions_file))
        os.makedirs(os.path.dirname(save_path), exist_ok=True)
        print("Saving to %s" % (save_path))
        results.to_csv(save_path)

def extractData(input_dir, output_dir):
    search_path = os.path.join(input_dir, "*.tar.gz")
    gz_files = glob.glob(search_path)
    if len(gz_files) == 0:
        print("No tar.gz files found in %s" % search_path)
        sys.exit(1)
    for f in gz_files:
        shutil.unpack_archive(filename=f, extract_dir=output_dir)

if __name__ == "__main__":
    if len(sys.argv) != 3:
        print('Must pass arguments. Format: [command] input_dir output_dir')
        sys.exit()
    with tempfile.TemporaryDirectory() as tmp_dir:
        extractData(sys.argv[1], tmp_dir)
        main(tmp_dir, sys.argv[2])
```
Next, let's make sure the file works as expected...
```bash
%%bash
python main.py . outputs/
```
And finally, package the code inside a Docker image to make the process reproducible. Here I'm passing the Bacalhau default /inputs and /outputs directories. The /inputs directory is where the data will be read from and the /outputs directory is where the results will be saved to.
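The Dockerfile and push step are not reproduced here; a minimal sketch of what such an image could look like (the base image, dependency list, and file layout are assumptions):

```dockerfile
# A minimal sketch of the analysis image; base image and layout are assumptions
FROM python:3.10-slim
RUN pip install --no-cache-dir pandas
WORKDIR /app
COPY main.py .
# Read from the Bacalhau default /inputs and write to /outputs
ENTRYPOINT ["python", "main.py", "/inputs", "/outputs"]
```

With the image published, the job is submitted with bacalhau docker run. The exact cell is not shown here, but based on the batch run later in this example, it would look something like the following, with <CID> replaced by one of the chunk CIDs from hashes.txt (see the appendix):

```bash
%%bash --out job_id
bacalhau docker run \
    --id-only \
    --input=ipfs://<CID>:/inputs/data.tar.gz \
    ghcr.io/bacalhau-project/examples/blockchain-etl:0.0.6
```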
The job has been submitted, and Bacalhau has printed out the related job ID. We store that in an environment variable so that we can reuse it later on.
%env JOB_ID={job_id}
The bacalhau docker run command allows you to pass an input data volume with a -i ipfs://CID:path argument, just like Docker, except the left-hand side of the argument is a content identifier (CID). This results in Bacalhau mounting a data volume inside the container. By default, Bacalhau mounts the input volume at the path /inputs inside the container.
Bacalhau also mounts a data volume to store output data. The bacalhau docker run command creates an output data volume mounted at /outputs. This is a convenient location to store the results of your job.
Checking the State of your Jobs
Job status: You can check the status of the job using bacalhau list.
```bash
%%bash
bacalhau list --id-filter ${JOB_ID}
```
When it says Published or Completed, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using bacalhau describe.
```bash
%%bash
bacalhau describe ${JOB_ID}
```
Job download: You can download your job results directly by using bacalhau get. Alternatively, you can choose to create a directory to store your results. In the command below, we create a directory and download our job output into it.
```bash
%%bash
rm -rf ./results && mkdir -p ./results # Temporary directory to store the results
bacalhau get --output-dir ./results ${JOB_ID} # Download the results
```
After the download has finished you should see the following contents in the results directory.
Viewing your Job Output
To view the file, run the following command:
```bash
%%bash
ls -lah results/outputs
```
Display the image
To view the images, we will use glob to return all file paths that match a specific pattern.
```python
import glob
import pandas as pd

# Get CSV files list from a folder
csv_files = glob.glob("results/outputs/*.csv")
df = pd.read_csv(csv_files[0], index_col='block_datetime')
df.plot()
```
Massive Scale Ethereum Analysis
Ok, so that works. Let's scale this up! We can run the same analysis on the entire Ethereum blockchain (up to the point where I have uploaded the Ethereum data). To do this, we need to run the analysis on each of the chunks of data that we have stored on IPFS. We can do this by running the same job on each of the chunks.
See the appendix for the hashes.txt file.
```bash
%%bash
printf "" > job_ids.txt
for h in $(cat hashes.txt); do
    bacalhau docker run \
        --id-only \
        --wait=false \
        --input=ipfs://$h:/inputs/data.tar.gz \
        ghcr.io/bacalhau-project/examples/blockchain-etl:0.0.6 >> job_ids.txt
done
```
Now take a look at the job IDs. You can use these to check the status of the jobs and download the results. You might want to double-check that the jobs ran OK by doing a bacalhau list.
```bash
%%bash
cat job_ids.txt
```
Wait until all of these jobs have been completed:
```bash
%%bash
bacalhau list -n 50
```
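If you would rather block until everything is finished, a rough polling loop along these lines can help (a sketch; it just greps the bacalhau list output shown above for a Completed state):

```bash
%%bash
# Sketch: poll each job until its state shows as Completed
for id in $(cat job_ids.txt); do
    until bacalhau list --id-filter $id | grep -q Completed; do
        sleep 10
    done
done
```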
And then download all the results and merge them into a single directory. This might take a while, so this is a good time to treat yourself to a nice Dark Mild. There have also been some issues in the past communicating with IPFS, so if you get an error, try again.
```bash
%%bash
for id in $(cat job_ids.txt); do
    rm -rf results_$id && mkdir results_$id
    bacalhau get --output-dir results_$id $id &
done
wait
```
Display the image
To view the images, we will use glob to return all file paths that match a specific pattern.
```python
import os, glob
import pandas as pd

# Get CSV files list from a folder
path = os.path.join("results_*", "outputs", "*.csv")
csv_files = glob.glob(path)

# Read each CSV file into a list of DataFrames
df_list = (pd.read_csv(file, index_col='block_datetime') for file in csv_files)

# Concatenate all DataFrames
df_unsorted = pd.concat(df_list, ignore_index=False)

# Some files will cross days, so group by day and sum the values
df = df_unsorted.groupby(level=0).sum()

# Plot
df.plot(figsize=(16, 9))
```
That's it! There are several years of Ethereum transaction volume data.
Appendix 1: List of CIDs
The following is a list of IPFS CIDs for the Ethereum data that we used in this tutorial. You can use these CIDs to download the rest of the chain if you so desire. The CIDs are ordered by block number and increase 50,000 blocks at a time. Here's a list of ordered CIDs:
Appendix 2: Setting up an Ethereum Node
In the course of writing this example, I had to set up an Ethereum node. It was a slow and painful process, so I thought I would share the steps I took to make it easier for others.
Geth setup and sync
Geth supports Ubuntu by default, so use that when creating a VM. Use Ubuntu 22.04 LTS.
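The install commands are not reproduced here; on Ubuntu, geth is normally installed from the official Ethereum PPA (a sketch; check the geth documentation for the current instructions):

```bash
# Assumed install method: the official Ethereum PPA for Ubuntu
sudo add-apt-repository -y ppa:ethereum/ethereum
sudo apt-get update
sudo apt-get install -y ethereum
```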
Prysm will need to finish synchronizing before geth will start synchronizing.
In Prysm you will see lots of log messages saying: Synced new block, and in Geth you will see: Syncing beacon headers downloaded=11,920,384 left=4,054,753 eta=2m25.903s. This tells you how long it will take to sync the beacon headers. Once that's done, geth will start synchronizing the blocks.
Bring up the Ethereum JavaScript console with:
```bash
sudo geth --datadir /mnt/disks/ethereum/ attach
```
Once the block sync has started, eth.syncing will return values. Before it starts, this value will be false.
Note that by default, geth will perform a fast sync, without downloading the full blocks. The --syncmode=full flag forces geth to do a full sync. If we didn't do this, we wouldn't be able to back up the data properly.
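A sketch of the corresponding launch command, combining the data directory used above with the full-sync flag (any other flags your setup needs, such as the beacon-client authentication options, are omitted):

```bash
# Sketch: run geth with a full sync against the data directory used above
sudo geth --datadir /mnt/disks/ethereum/ --syncmode=full
```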
Extracting the Data
```bash
# Install pip and ethereum-etl
sudo apt-get install -y python3-pip
sudo pip3 install ethereum-etl
cd
mkdir ethereum-etl
cd ethereum-etl

# Export data with one 50000-item batch in a directory. Up to this point we've processed about 3m.
# The full chain is about 16m blocks
for i in $(seq 0 50000 16000000); do
    sudo ethereumetl export_all --partition-batch-size 50000 --start $i --end $(expr $i + 50000 - 1) --provider-uri file:///mnt/disks/ethereum/geth.ipc -o output_$i
done
```
Upload the data
Tar and compress the directories to make them easier to upload:
```bash
sudo apt-get install -y jq # Install jq to parse the cid
cd
cd ethereum-etl
for i in $(seq 0 50000 16000000); do
    tar cfz output_$i.tar.gz output_$i
done
```
Export your Web3.storage JWT API key as an environment variable called TOKEN:
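That is just the usual shell export (substitute your own token):

```bash
# Assumed: your web3.storage JWT, created in the web3.storage console
export TOKEN=<your-web3.storage-JWT>
```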
printf"">hashes.txtfor i in $(seq 0 50000 16000000); do curl -X POST https://api.web3.storage/upload -H "Authorization: Bearer ${TOKEN}" -H 'accept: application/json' -H 'Content-Type: text/plain' -H "X-NAME: ethereum-etl-block-$i" --data-binary "@output_$i.tar.gz" >> raw.json; done