
Generate Synthetic Data using Sparkov Data Generation technique



A synthetic dataset is generated by algorithms or simulations and has characteristics similar to real-world data. Collecting real-world data, especially data containing sensitive information like credit card details, is often impossible due to security and privacy concerns. If a data scientist needs to train a model to detect credit card fraud, they can use synthetically generated data instead of real data without compromising the privacy of users.
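As an illustrative sketch only (this is not Sparkov's actual method, and the field names and value ranges below are hypothetical), a single synthetic transaction record could be produced like this:

```python
import random
from datetime import datetime, timedelta

def synthetic_transaction(seed=None):
    """Generate one fake credit card transaction (illustrative only)."""
    rng = random.Random(seed)
    start = datetime(2022, 1, 1)
    return {
        # 16 random digits, not a real (or even Luhn-valid) card number
        "cc_num": "".join(str(rng.randint(0, 9)) for _ in range(16)),
        "amount": round(rng.uniform(1.0, 500.0), 2),
        "merchant": rng.choice(["grocery", "travel", "online_retail"]),
        # random timestamp within a 273-day window starting Jan 1, 2022
        "timestamp": (start + timedelta(minutes=rng.randint(0, 60 * 24 * 273))).isoformat(),
    }

print(synthetic_transaction(seed=42))
```

Because no real accounts are involved, records like this can be shared and processed freely; tools like Sparkov apply far more realistic distributions and fraud patterns than this toy generator.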

The advantage of using Bacalhau is that you can generate terabytes of synthetic data without having to install any dependencies or store the data locally.

In this example, we will generate synthetic credit card transaction data using the Sparkov program and store the results in IPFS.


Run Bacalhau on a synthetic dataset.


To get started, you need to install the Bacalhau client; see more information here

Running Sparkov Locally

To run Sparkov locally, you'll need to clone the repo and install dependencies.

git clone
pip3 install -r Sparkov_Data_Generation/requirements.txt

Go to the Sparkov_Data_Generation directory

%cd Sparkov_Data_Generation

Create a temporary directory to store the outputs:

mkdir ../outputs

Running the script

To run the data generation script locally, use the following command:

python3 datagen.py -n 1000 -o ../outputs "01-01-2022" "10-01-2022"

Below are the parameters used in the command above:

  • -n: Number of customers to generate

  • -o: Path to store the outputs

  • Start date: "01-01-2022"

  • End date: "10-01-2022"
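Judging by the example values above, the dates appear to use a MM-DD-YYYY format (this format string is an assumption, not confirmed by Sparkov's docs). A quick sanity check of a date range before kicking off a long generation run might look like:

```python
from datetime import datetime

DATE_FORMAT = "%m-%d-%Y"  # assumed from the "01-01-2022" style used above

def span_in_days(start: str, end: str) -> int:
    """Parse two MM-DD-YYYY dates and return the number of days between them."""
    start_dt = datetime.strptime(start, DATE_FORMAT)
    end_dt = datetime.strptime(end, DATE_FORMAT)
    if end_dt < start_dt:
        raise ValueError("end date precedes start date")
    return (end_dt - start_dt).days

print(span_in_days("01-01-2022", "10-01-2022"))  # days covered by the example run
```

A larger day span or customer count (-n) proportionally increases the volume of transactions generated, which matters when estimating job runtime.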

To see the full list of options, use:

python3 datagen.py -h

Containerize Script using Docker


You can skip this entirely and directly go to running on Bacalhau.

If you want any additional dependencies to be installed along with Sparkov, you need to build your own container.

To build your own Docker container, create a Dockerfile, which contains instructions to build your Sparkov Docker container.

FROM python:3.8

RUN apt update && apt install -y git

RUN git clone

WORKDIR /Sparkov_Data_Generation/

RUN pip3 install -r requirements.txt

See more information on how to containerize your script/app here

Build the container

We will run the docker build command to build the container:

docker build -t <hub-user>/<repo-name>:<tag> .

Before running the command, replace:

  • hub-user with your Docker Hub username. If you don’t have a Docker Hub account, follow these instructions to create one, and use the username of the account you created

  • repo-name with the name of the container; you can name it anything you want

  • tag: this is optional, but you can use the latest tag

In our case:

docker build -t jsacex/sparkov-data-generation .

Push the container

Next, upload the image to the registry, using your Docker Hub username, repo name, and tag:

docker push <hub-user>/<repo-name>:<tag>

In our case:

docker push jsacex/sparkov-data-generation

After the repo image has been pushed to Docker Hub, we can use the container to run jobs on Bacalhau.

Running a Bacalhau Job

Now we're ready to run a Bacalhau job. This code runs a job, downloads the results, and prints the stdout.

Copy and paste the following code into your terminal:

%%bash --out job_id
bacalhau docker run \
--id-only \
--wait \
jsacex/sparkov-data-generation \
-- python3 datagen.py -n 1000 -o ../outputs "01-01-2022" "10-01-2022"

Structure of the command

Let's look closely at the command above:

  • bacalhau docker run: call to Bacalhau

  • jsacex/sparkov-data-generation: the name (and optionally the tag) of the Docker image we are using

  • python3 datagen.py -n 1000: execute the Sparkov script, generating 1000 customers

  • -o ../outputs "01-01-2022" "10-01-2022": path to store the outputs, the start date, and the end date

When a job is submitted, Bacalhau prints out the related job_id. We store that in an environment variable so that we can reuse it later on.
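If you prefer scripting the submission from Python rather than a notebook cell, a minimal sketch could look like the following. It assumes the bacalhau CLI shown above is on your PATH, and that --id-only makes the CLI print just the job ID on stdout, as used in the cell above:

```python
import subprocess

def build_submit_command(image, script_args):
    """Assemble the `bacalhau docker run` invocation used above as an argv list."""
    return ["bacalhau", "docker", "run", "--id-only", "--wait", image, "--"] + script_args

def submit(image, script_args):
    """Submit the job and return the job ID printed on stdout."""
    cmd = build_submit_command(image, script_args)
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()

# Show the command that would be run (calling submit() requires a working Bacalhau setup)
cmd = build_submit_command(
    "jsacex/sparkov-data-generation",
    ["python3", "datagen.py", "-n", "1000", "-o", "../outputs", "01-01-2022", "10-01-2022"],
)
print(" ".join(cmd))
```

Building the argv as a list (rather than a shell string) avoids quoting issues with the date arguments.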

%env JOB_ID={job_id}

Checking the State of your Jobs

  • Job status: You can check the status of the job using bacalhau list.
bacalhau list --id-filter ${JOB_ID}

When it says Completed, that means the job is done, and we can get the results.

  • Job information: You can find out more information about your job by using bacalhau describe.
bacalhau describe ${JOB_ID}
  • Job download: You can download your job results directly by using bacalhau get. Alternatively, you can choose to create a directory to store your results. In the commands below, we create a results directory and download our job output into it.
rm -rf results && mkdir -p results
bacalhau get $JOB_ID --output-dir results

After the download has finished, you should see the following contents in the results directory.

Viewing your Job Output

To list the output files, run the following command:

ls results/outputs # list the contents of the outputs directory
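Once downloaded, the generated files can be inspected with a few lines of Python. Note that Sparkov's exact output filenames and delimiter are assumptions here (pipe-delimited CSV is common for Sparkov output, but verify against your own results under results/outputs); the snippet writes a tiny hypothetical sample file so it runs standalone:

```python
import csv
from pathlib import Path

# Hypothetical sample standing in for one of the generated files;
# real outputs live under results/outputs after `bacalhau get`.
sample = Path("sample_transactions.csv")
sample.write_text(
    "cc_num|amt|merchant\n"
    "4537812209876543|42.50|grocery\n"
    "4716345512340987|199.99|travel\n"
)

with sample.open(newline="") as f:
    # delimiter="|" is an assumption; switch to "," if your files are comma-separated
    rows = list(csv.DictReader(f, delimiter="|"))

print(f"{len(rows)} transactions, first amount: {rows[0]['amt']}")
```

From here the records can be fed into pandas, a fraud-detection training pipeline, or any downstream analysis.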