Predictions
Restore an inference model and make predictions on an input dataset.
Chapter Goals:
- Learn how to restore an inference model and retrieve specific tensors from the computation graph
- Implement a function that makes predictions using a saved inference model
A. Restoring the model
To restore an inference model, we use the tf.compat.v1.saved_model.loader.load function. This function restores both the inference graph and the inference model's parameters. Since the function's first argument is a tf.compat.v1.Session object, it's a good idea to restore the inference model within the scope of a particular tf.compat.v1.Session.
import tensorflow as tf

# The SERVING tag identifies the inference graph within the saved model
tags = [tf.compat.v1.saved_model.tag_constants.SERVING]
model_dir = 'inference_model'

# Restore the graph and its parameters into a session backed by a fresh graph
with tf.compat.v1.Session(graph=tf.Graph()) as sess:
    tf.compat.v1.saved_model.loader.load(sess, tags, model_dir)
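Note that the session is created with graph=tf.Graph(), which gives it a fresh, empty graph. This keeps the restored inference graph from colliding with any operations already present in the default graph.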
The second argument for tf.compat.v1.saved_model.loader.load is a list of tag constants. For inference, we use the SERVING tag. The function's third argument is the directory containing the saved inference model (model_dir in the code above).
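Putting the pieces together, here is a minimal sketch of a prediction function that restores a saved inference model, retrieves its input and output tensors, and runs them on an input dataset. The tensor names 'inputs:0' and 'predictions:0' are placeholder assumptions for illustration; substitute the names your model's graph actually uses.

import tensorflow as tf

def make_predictions(model_dir, input_data):
    # Restore the inference graph and its trained parameters
    tags = [tf.compat.v1.saved_model.tag_constants.SERVING]
    with tf.compat.v1.Session(graph=tf.Graph()) as sess:
        tf.compat.v1.saved_model.loader.load(sess, tags, model_dir)
        # Retrieve specific tensors from the restored computation graph
        # (assumed names; ':0' refers to the op's first output tensor)
        inputs = sess.graph.get_tensor_by_name('inputs:0')
        predictions = sess.graph.get_tensor_by_name('predictions:0')
        # Feed the input dataset and evaluate the prediction tensor
        return sess.run(predictions, feed_dict={inputs: input_data})

# Example usage (hypothetical model directory and input batch):
# preds = make_predictions('inference_model', input_batch)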