EasyOCR (Optical Character Recognition) on Bacalhau
Introduction
In this example tutorial, we use Bacalhau and EasyOCR to digitize paper records, recognize characters, and extract text data from images stored on IPFS, S3, or the web. EasyOCR is a ready-to-use OCR tool with 80+ supported languages and all popular writing scripts, including Latin, Chinese, Arabic, Devanagari, Cyrillic, and more. With EasyOCR, you can use the pre-trained models or your own fine-tuned model.
TL;DR
Running EasyOCR Locally
Install the required dependencies
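A minimal sketch of the install step, assuming a Python environment with pip available (easyocr pulls in PyTorch and the other packages it needs):

```bash
pip install easyocr
```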
Load the different example images
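As an example, the snippet below downloads a couple of the sample images shipped with the EasyOCR GitHub repository into a local images folder; the exact file names are an assumption and may differ from the current contents of the repo's examples directory:

```bash
mkdir -p images
# Sample images from the EasyOCR repository (file names assumed; check the
# examples/ folder of https://github.com/JaidedAI/EasyOCR for the current set)
wget -P images https://raw.githubusercontent.com/JaidedAI/EasyOCR/master/examples/chinese.jpg
wget -P images https://raw.githubusercontent.com/JaidedAI/EasyOCR/master/examples/english.png
```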
List all the images. You'll see an output like this:
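For instance, assuming the images folder used above:

```bash
ls -l images/
```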
Next, we create a reader to run OCR and get, for each detection, the coordinates of the rectangle containing the text together with the text itself:
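A minimal sketch of this step, assuming the images were downloaded into the images folder above; easyocr.Reader loads the detection and recognition models, and readtext returns a list of (bounding box, text, confidence) tuples:

```python
import easyocr

# Create a reader for simplified Chinese + English.
# Set gpu=False if no GPU is available locally.
reader = easyocr.Reader(['ch_sim', 'en'], gpu=True)

# Each result is (bounding_box, text, confidence); the bounding box is the
# list of four corner points of the rectangle containing the text.
for bbox, text, confidence in reader.readtext('images/chinese.jpg'):
    print(bbox, text, confidence)
```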
Containerize your Script using Docker
You can skip this step and go straight to running a Bacalhau job
We will use the Dockerfile that is already created in the EasyOCR repo. Use the command below to clone the repo:
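For example, cloning the official JaidedAI/EasyOCR repository and moving into it:

```bash
git clone https://github.com/JaidedAI/EasyOCR
cd EasyOCR
```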
Build the Container
The docker build command builds Docker images from a Dockerfile.
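Run the build from the root of the cloned repo, where the Dockerfile lives:

```bash
docker build -t hub-user/repo-name:tag .
```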
Before running the command, replace:
hub-user with your Docker Hub username. If you don't have a Docker Hub account, follow these instructions to create one, and use the username of the account you created.
repo-name with the name of the container; you can name it anything you want.
tag: this is not required, but you can use the latest tag.
Push the container
Next, upload the image to the registry. This can be done using your Docker Hub username, the repo name, and the tag.
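Using the same placeholders as in the build step:

```bash
docker push hub-user/repo-name:tag
```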
Running a Bacalhau Job to Generate EasyOCR Output
Prerequisite
To get started, you need to install the Bacalhau client, see more information here.
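As a sketch, the one-line installation script from the Bacalhau documentation looks like this (check the linked installation page for the current instructions for your platform):

```bash
curl -sL https://get.bacalhau.org/install.sh | bash
```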
Now that we have an image in Docker Hub (either your own or the example image used in this guide), we can use the container to run the job on Bacalhau.
Structure of the imperative command
Let's look closely at the command below:
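Reconstructed from the breakdown below, the command looks roughly like this; the IPFS CID and the image URL are truncated here exactly as in the original and must be replaced with the full values:

```bash
export JOB_ID=$(bacalhau docker run \
    --gpu 1 \
    --id-only \
    -i ipfs://bafybeibvc...... \
    -i https://raw.githubusercontent.com... \
    jsacex/easyocr \
    -- easyocr -l ch_sim en -f ./inputs/chinese.jpg --detail=1 --gpu=True)
```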
export JOB_ID=$( ... ): exports the job ID as an environment variable
bacalhau docker run: call to Bacalhau
--gpu 1: this flag is set to specify hardware requirements; a GPU is needed to run such a job
--id-only: this flag is set to print only the job ID
-i ipfs://bafybeibvc......: mounts the model from IPFS
-i https://raw.githubusercontent.com...: mounts the input image from a URL
jsacex/easyocr: the name and the tag of the Docker image we are using
-- easyocr -l ch_sim en -f ./inputs/chinese.jpg --detail=1 --gpu=True: executes the script with the following parameters:
-l ch_sim: the name of the model
-f ./inputs/chinese.jpg: path to the input image or directory
--detail=1: level of detail
--gpu=True: we set this flag to True since we are running inference on a GPU. If you run this on a CPU, set this flag to False.
Since the model and the image aren't present in the container, we will mount the image from a URL and the model from IPFS. You can find models to download from here. You can choose the model you want to use; in this case we will be using the zh_sim_g2 model.
When a job is submitted, Bacalhau prints out the related job_id. We store that in an environment variable so that we can reuse it later on.
Declarative job description
The same job can be presented in the declarative format. In this case, the description will look like this:
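A sketch of what such a description could look like, assuming the current Bacalhau declarative job specification; the field names below should be checked against your Bacalhau version, the mount target for the model is an assumption, and the CID and URL are truncated as in the imperative example:

```yaml
Name: EasyOCR
Type: batch
Count: 1
Tasks:
  - Name: main
    Engine:
      Type: docker
      Params:
        Image: jsacex/easyocr
        Parameters:
          - easyocr
          - -l
          - ch_sim
          - en
          - -f
          - ./inputs/chinese.jpg
          - --detail=1
          - --gpu=True
    Resources:
      GPU: "1"
    InputSources:
      # Input image mounted from a URL (truncated as in the original)
      - Target: /inputs
        Source:
          Type: urlDownload
          Params:
            URL: https://raw.githubusercontent.com...
      # Model mounted from IPFS; the target directory is an assumption
      - Target: /root/.EasyOCR/model
        Source:
          Type: ipfs
          Params:
            CID: bafybeibvc......
    Publisher:
      Type: ipfs
    ResultPaths:
      - Name: outputs
        Path: /outputs
```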
The job description should be saved in .yaml format, e.g. easyocr.yaml, and then run with the command:
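Assuming a Bacalhau version that supports declarative job submission:

```bash
bacalhau job run easyocr.yaml
```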
Checking the State of your Jobs
Job status
You can check the status of the job using bacalhau job list.
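For example (the --id-filter flag used here to narrow the listing down to our job is an assumption based on recent Bacalhau releases; run bacalhau job list --help to confirm):

```bash
bacalhau job list --id-filter ${JOB_ID}
```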
When it says Completed, that means the job is done, and we can get the results.
Job information
You can find out more information about your job by using bacalhau job describe.
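For example:

```bash
bacalhau job describe ${JOB_ID}
```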
Job download
You can download your job results directly by using bacalhau job get. Alternatively, you can choose to create a directory to store your results. In the command below, we create a directory and download our job output into it.
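A sketch of that step, assuming the --output-dir flag supported by recent Bacalhau releases:

```bash
mkdir -p results
bacalhau job get ${JOB_ID} --output-dir results
```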
After the download has finished, you should see the following contents in the results directory:
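You can list them with, for example (the exact layout, typically stdout, stderr, exitCode and an outputs folder, may vary by Bacalhau version):

```bash
ls -R results/
```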
Viewing your Job Output
Now you can find the file in the results/outputs folder. You can view the results by running the following commands:
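For example (file locations are assumptions based on the layout above; easyocr also prints its recognized text to the job's standard output, captured in results/stdout):

```bash
ls results/outputs/
cat results/stdout
```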