TF Lite Interpreter (Part 1)

Learn to apply the TF Lite Interpreter to make inferences on mobile devices.

To run an ML/DL model on a mobile device, we first define, compile, and train a TF model. We then use the TF Lite converter to convert this model to the FlatBuffers format, which is suitable for mobile devices. Optionally, we use some test data to verify that the converted model works as expected. Finally, we deploy the converted model to an Android or iOS device.
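As a rough sketch of this workflow (not the lesson's exact code), the snippet below defines and trains a tiny Keras model on synthetic data and then converts it; the model architecture, the random data, and the model.tflite file name are purely illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

# Define and compile a tiny Keras model (illustrative architecture).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Train briefly on synthetic data (a stand-in for a real dataset).
x = np.random.rand(32, 8).astype(np.float32)
y = np.random.rand(32, 1).astype(np.float32)
model.fit(x, y, epochs=1, verbose=0)

# Convert the trained model to the TF Lite FlatBuffers format and save it,
# ready to be bundled with an Android or iOS app.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```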

To make inferences on mobile devices, we need an interpreter that can execute TF Lite models on a variety of platforms and devices. Let's explore the functionality of the TF Lite Interpreter by first converting a TF model to the TF Lite format, then initializing the interpreter, checking its input/output details, and invoking it to perform inferences.
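As a preview of these steps, here is a minimal sketch, assuming a converted model has already been saved as model.tflite (an illustrative file name): it initializes the TF Lite Interpreter, inspects the input/output details, and invokes it on randomly generated input.

```python
import numpy as np
import tensorflow as tf

# Initialize the TF Lite Interpreter with a converted model
# ("model.tflite" is an illustrative file name).
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

# Check the input/output details (tensor index, shape, and dtype).
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details)
print(output_details)

# Feed a random input of the expected shape and dtype, then invoke the interpreter.
input_data = np.random.rand(*input_details[0]["shape"]).astype(
    input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()

# Read back the inference result.
output_data = interpreter.get_tensor(output_details[0]["index"])
print(output_data)
```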

Model conversion

We first convert a TF model to the TF Lite format. For instance, the following code converts a SavedModel:
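A minimal sketch, assuming the SavedModel is stored in a directory named saved_model_dir (an illustrative path):

```python
import tensorflow as tf

# Load the SavedModel and convert it to the TF Lite FlatBuffers format
# ("saved_model_dir" is an illustrative path to an exported SavedModel).
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
tflite_model = converter.convert()

# Write the converted model to disk so it can be deployed to a mobile device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```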
