Coresets On Bacalhau
Introduction
A coreset is a data subsetting method: it replaces a large dataset with a small, weighted subset that approximates it. Datasets can be very large even in compressed form, and training on them gets harder because training time grows with dataset size. To reduce training time and cut costs, we employ the coreset method; the same approach can be applied to other datasets. In this case, we use the coreset method to solve the k-means problem on big data quickly while maintaining high accuracy.

We construct a small coreset for numerical data of arbitrary shape at a reasonable time cost. The implementation is mainly based on the coreset construction algorithm proposed by Braverman et al. (SODA 2021).
For a deeper understanding of the core concepts, it's recommended to explore:

1. Coresets for Ordered Weighted Clustering
2. Efficient Implementation of Coreset-based K-Means Methods
In this tutorial example, we will run the coreset method on a compressed dataset with Bacalhau.
Prerequisite
To get started, you need to install the Bacalhau client; see more information here.
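At the time of writing, the documented one-line installer looks like this (check the Bacalhau docs for the current URL):

```bash
curl -sL https://get.bacalhau.org/install.sh | bash
```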
Running Locally
Clone the repo that contains the code:
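The job later in this tutorial references https://github.com/js-ts/Coreset, so the clone step presumably targets that repository:

```bash
git clone https://github.com/js-ts/Coreset.git
cd Coreset
```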
Downloading the dataset
To download the dataset, open OpenStreetMap, a public repository that aims to generate and distribute accessible geographic data for the whole world. It supplies detailed position information, including the longitude and latitude of places around the world.
The dataset is an `osm.pbf` file (a compressed format for `.osm` files); it can be downloaded from the Geofabrik Download Server.
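For example, the Monaco extract used later in this tutorial can be fetched as follows; the exact URL is an assumption based on Geofabrik's usual layout:

```bash
wget https://download.geofabrik.de/europe/monaco-latest.osm.pbf
```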
Installing Dependencies
The following command installs the Linux dependencies:
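On Debian/Ubuntu, the osmium-tool package (assumed here, because the conversion step below uses osmium) provides the pbf-to-geojson converter:

```bash
sudo apt-get update
sudo apt-get -y install osmium-tool
```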
Ensure that the `requirements.txt` file contains the following dependencies:
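The authoritative list lives in the repository's `requirements.txt`; a plausible minimal set for a k-means coreset script (an assumption, not necessarily the repo's exact file) is:

```text
numpy
pandas
scikit-learn
```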
The following command installs the Python dependencies:
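```bash
pip install -r requirements.txt
```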
Running the Script
To run the coreset locally, you need to convert the dataset from the compressed `pbf` format to the `geojson` format:
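With osmium-tool installed as above, one way to do the conversion is:

```bash
osmium export monaco-latest.osm.pbf -o monaco-latest.geojson
```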
The following command runs the Python script that generates the coreset:
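This invocation mirrors the one used in the Bacalhau job later in the tutorial:

```bash
python Coreset/python/coreset.py -f monaco-latest.geojson -o outputs
```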
`coreset.py` contains the full script, which you can view here.
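Since the script itself is linked rather than inlined, here is a minimal, hypothetical sketch of the technique the tutorial describes (initialize cluster centers, then build a coreset from them) using standard sensitivity sampling; it is not the repository's actual code:

```python
# Hypothetical sketch of sensitivity-based coreset sampling for k-means.
# NOT the actual Coreset/python/coreset.py linked above.
import numpy as np
from sklearn.cluster import KMeans

def build_coreset(points, k=5, coreset_size=50):
    # Initialize k cluster centers on the full data.
    centers = KMeans(n_clusters=k, n_init=10).fit(points).cluster_centers_

    # Cost of each point = squared distance to its nearest center.
    dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    cost = dists.min(axis=1) + 1e-12  # epsilon avoids an all-zero distribution

    # Sample points with probability proportional to their cost, and weight
    # each sample by 1 / (m * p) so the clustering cost stays unbiased.
    prob = cost / cost.sum()
    idx = np.random.choice(len(points), size=coreset_size, replace=True, p=prob)
    weights = 1.0 / (coreset_size * prob[idx])
    return points[idx], weights
```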
Containerize Script using Docker
To build your own Docker container, create a `Dockerfile`, which contains instructions on how the image will be built and what extra requirements will be included.
We will use the `python:3.8` image, and we will run the same commands for installing dependencies that we used locally.
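A Dockerfile along these lines matches the steps above; treat it as a sketch rather than the repository's exact file:

```dockerfile
FROM python:3.8

# System dependency for the pbf-to-geojson conversion (assumed, as above)
RUN apt-get update && apt-get -y install osmium-tool

# Python dependencies, installed exactly as in the local setup
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the coreset code into the image
COPY . /Coreset
```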
See more information on how to containerize your script/app here
Build the container
We will run the `docker build` command to build the container:
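```bash
docker build -t <hub-user>/<repo-name>:<tag> .
```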
Before running the command, replace:

- `hub-user` with your Docker Hub username. If you don't have a Docker Hub account, follow these instructions to create one, and use the username of the account you created.
- `repo-name` with the name of the container; you can name it anything you want.
- `tag`: this is not required, but you can use the `latest` tag.
In our case:
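Using the image name that appears in the Bacalhau job below:

```bash
docker build -t jsace/coreset .
```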
Push the container
Next, upload the image to the registry. This can be done by using the Docker Hub username, repo name, and tag.
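```bash
docker push <hub-user>/<repo-name>:<tag>
```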
In our case:
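```bash
docker push jsace/coreset
```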
Running a Bacalhau Job
After the repo image has been pushed to Docker Hub, we can use the container to run a job on Bacalhau. We've already converted the `monaco-latest.osm.pbf` file from the compressed `pbf` format to the `geojson` format here. To submit a job, run the following Bacalhau command:
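A submit command consistent with the breakdown below would look like this (the `/bin/bash -c` wrapper around the Python invocation is an assumption):

```bash
bacalhau docker run \
  --input https://github.com/js-ts/Coreset/blob/master/monaco-latest.geojson \
  jsace/coreset \
  -- /bin/bash -c 'python Coreset/python/coreset.py -f monaco-latest.geojson -o outputs'
```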
Structure of the command
Let's look closely at the command above:
- `bacalhau docker run`: call to Bacalhau
- `--input https://github.com/js-ts/Coreset/blob/master/monaco-latest.geojson`: mount the `monaco-latest.geojson` file inside the container so it can be used by the script
- `jsace/coreset`: the name of the Docker image we are using
- `python Coreset/python/coreset.py -f monaco-latest.geojson -o outputs`: the script initializes cluster centers, creates a coreset using these centers, and saves the results to the specified folder
Additional parameters:
- `-k`: number of initialized centers (default=5)
- `-n`: size of the coreset (default=50)
- `-o`: the output folder
When a job is submitted, Bacalhau prints out the related `job_id`. We store that in an environment variable so that we can reuse it later on.
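One way to capture it is to submit with the `--id-only` flag, which prints only the job id:

```bash
export JOB_ID=$(bacalhau docker run --id-only \
  --input https://github.com/js-ts/Coreset/blob/master/monaco-latest.geojson \
  jsace/coreset \
  -- /bin/bash -c 'python Coreset/python/coreset.py -f monaco-latest.geojson -o outputs')
```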
Declarative job description
The same job can be presented in the declarative format. In this case, the description will look like this:
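A spec along these lines should be equivalent; the field names follow Bacalhau's declarative job format, but treat the exact schema as an assumption and check it against your Bacalhau version:

```yaml
name: Coreset
type: batch
count: 1
tasks:
  - name: main
    engine:
      type: docker
      params:
        Image: jsace/coreset
        Entrypoint:
          - /bin/bash
        Parameters:
          - -c
          - python Coreset/python/coreset.py -f monaco-latest.geojson -o outputs
    inputSources:
      - target: /monaco-latest.geojson
        source:
          type: urlDownload
          params:
            URL: https://github.com/js-ts/Coreset/blob/master/monaco-latest.geojson
```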
The job description should be saved in `.yaml` format, e.g. `coreset.yaml`, and then run with the command:
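```bash
bacalhau job run coreset.yaml
```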
Checking the State of your Jobs
Job status: You can check the status of the job using `bacalhau job list`.
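```bash
bacalhau job list
```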
When it says `Published` or `Completed`, that means the job is done, and we can get the results.
Job information: You can find out more information about your job by using `bacalhau job describe`.
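```bash
bacalhau job describe ${JOB_ID}
```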
Job download: You can download your job results directly by using `bacalhau job get`. Alternatively, you can choose to create a directory to store your results. In the command below, we created a directory (`results`) and downloaded our job output to be stored in that directory.
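```bash
mkdir -p results
bacalhau job get ${JOB_ID} --output-dir results
```

Here `--output-dir` directs the download into the `results` directory.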
Viewing your Job Output
To view the file, run the following command:
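Assuming the script wrote its results under `outputs/` (per the `-o outputs` flag), the downloaded files can be listed like this:

```bash
ls results/outputs
```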
To view the output as a CSV file, run:
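The exact filename depends on the script; `coreset.csv` below is a hypothetical name used for illustration:

```bash
cat results/outputs/coreset.csv
```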
Support
If you have questions or need support or guidance, please reach out to the Bacalhau team via Slack (#general channel).