Training a TensorFlow Model
TensorFlow is an open-source machine learning software library that is commonly used to train neural networks. Computations are expressed as stateful dataflow graphs, in which each node represents an operation performed on multi-dimensional arrays. These multi-dimensional arrays are commonly known as “tensors,” hence the name TensorFlow. In this example, we will train an MNIST model.
TL;DR
Running any type of TensorFlow model with Bacalhau
Training TensorFlow Models Locally
This section is from TensorFlow 2 quickstart for beginners
TensorFlow 2 quickstart for beginners
This short introduction uses Keras to:
Load a prebuilt dataset.
Build a neural network machine learning model that classifies images.
Train this neural network.
Evaluate the accuracy of the model.
Set up TensorFlow
Import TensorFlow into your program to check whether it is installed:
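A minimal sketch following the TensorFlow 2 quickstart: it imports TensorFlow, prints the installed version, and also loads and normalizes the MNIST dataset, since the later snippets in this section assume the variables x_train, y_train, x_test, and y_test exist.

```python
import tensorflow as tf

# Print the installed TensorFlow version to confirm the import works.
print("TensorFlow version:", tf.__version__)

# Load the MNIST dataset and scale pixel values from [0, 255] to [0, 1].
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
```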
Build a machine-learning model
Build a tf.keras.Sequential model by stacking layers.
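A sketch of the stacked-layer model from the quickstart (the layer sizes shown are the quickstart's defaults):

```python
model = tf.keras.models.Sequential([
    # Flatten each 28x28 image into a 784-element vector.
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    # One output logit per digit class (0-9).
    tf.keras.layers.Dense(10)
])
```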
For each example, the model returns a vector of logits or log-odds scores, one for each class.
The tf.nn.softmax function converts these logits to probabilities for each class:
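For example, calling the untrained model on the first training image returns raw logits, which tf.nn.softmax converts into per-class probabilities:

```python
# Raw logits for the first training example.
predictions = model(x_train[:1]).numpy()

# Convert the logits to per-class probabilities.
tf.nn.softmax(predictions).numpy()
```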
Note: It is possible to bake the tf.nn.softmax function into the activation function for the last layer of the network. While this can make the model output more directly interpretable, this approach is discouraged, as it's impossible to provide an exact and numerically stable loss calculation for all models when using a softmax output.
Define a loss function for training using losses.SparseCategoricalCrossentropy, which takes a vector of logits and a True index and returns a scalar loss for each example.
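A sketch of the loss definition, with from_logits=True because the model above outputs raw logits rather than probabilities:

```python
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
```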
This loss is equal to the negative log probability of the true class: the loss is zero if the model is sure of the correct class.
This untrained model gives probabilities close to random (1/10 for each class), so the initial loss should be close to -tf.math.log(1/10) ~= 2.3.
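You can check this on the untrained predictions computed above:

```python
# Should be close to -tf.math.log(1/10) ~= 2.3 for an untrained model.
loss_fn(y_train[:1], predictions).numpy()
```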
Before you start training, configure and compile the model using Keras Model.compile. Set the optimizer class to adam, set the loss to the loss_fn function you defined earlier, and specify a metric to be evaluated for the model by setting the metrics parameter to accuracy.
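Putting those settings together, the compile step looks like this:

```python
model.compile(optimizer='adam',
              loss=loss_fn,
              metrics=['accuracy'])
```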
Train and evaluate your model
Use the Model.fit method to adjust your model parameters and minimize the loss:
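For example, training on the normalized training set for a few epochs (5 is the quickstart's choice):

```python
model.fit(x_train, y_train, epochs=5)
```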
The Model.evaluate method checks the model's performance, usually on a "Validation-set" or "Test-set".
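Evaluating on the held-out test set:

```python
model.evaluate(x_test, y_test, verbose=2)
```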
The image classifier is now trained to ~98% accuracy on this dataset. To learn more, read the TensorFlow tutorials.
If you want your model to return a probability, you can wrap the trained model and attach the softmax to it:
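A sketch of the wrapped model, following the quickstart:

```python
probability_model = tf.keras.Sequential([
    model,
    tf.keras.layers.Softmax()
])

# Per-class probabilities for the first five test images.
probability_model(x_test[:5])
```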
The following method can be used to save the model as a checkpoint:
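A minimal sketch using Keras Model.save_weights; the checkpoint path shown here is an arbitrary name chosen for illustration:

```python
# Save the trained weights in TensorFlow checkpoint format.
model.save_weights('./checkpoints/mnist_checkpoint')
```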
Converting the notebook into a Python script
You can use a tool like nbconvert to convert your Python notebook into a script.
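For example, assuming the notebook is named train.ipynb, the following produces a train.py script alongside it:

```bash
jupyter nbconvert --to script train.ipynb
```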
After that, you can create a gist of the training script at gist.github.com and copy the raw link of the gist.
Testing whether the script works
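Before submitting the job, you can run the generated script locally to confirm it trains without errors:

```bash
python train.py
```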
Running on Bacalhau
The dataset and the script are mounted to the TensorFlow container using a URL; we then run the script inside the container.
Structure of the command:
-i https://gist.githubusercontent.com/js-ts/e7d32c7d19ffde7811c683d4fcb1a219/raw/ff44ac5b157d231f464f4d43ce0e05bccb4c1d7b/train.py: mount the training script
-i https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz: mount the dataset
tensorflow/tensorflow: specify the Docker image
python train.py: execute the script
-w /inputs: by default, whatever URL you mount using the -i flag gets mounted at the path /inputs, so we choose that as our working directory
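Putting those pieces together, the submission command looks roughly like the sketch below; the exact flag syntax can vary between Bacalhau versions, so treat this as an assembled example rather than a definitive invocation:

```bash
bacalhau docker run \
  -w /inputs \
  -i https://gist.githubusercontent.com/js-ts/e7d32c7d19ffde7811c683d4fcb1a219/raw/ff44ac5b157d231f464f4d43ce0e05bccb4c1d7b/train.py \
  -i https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz \
  tensorflow/tensorflow \
  -- python train.py
```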
Where it says Completed, that means the job is done, and we can get the results.
To find out more information about your job, run the following command:
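For example, assuming the job ID returned when you submitted the job is stored in JOB_ID (newer Bacalhau releases use the equivalent bacalhau job describe):

```bash
bacalhau describe ${JOB_ID}
```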