ML Kit for Pretrained Models
Learn about ML Kit, Google’s software development kit for bringing ML capabilities to mobile (Android/iOS) devices.
ML Kit, Google’s mobile software development kit (SDK), simplifies the integration of ML capabilities into mobile apps and web applications. It provides a set of prebuilt ML models and APIs for tasks such as text recognition, face detection, and object tracking.
ML Kit also provides APIs for custom model integration, allowing us to train our own models and incorporate them into our apps. Under the hood, ML Kit uses TF Lite to provide powerful ML capabilities with minimal configuration. Let’s cover the ready-to-use APIs offered by ML Kit.
ML Kit APIs
ML Kit APIs can work on-device or in the cloud. The former doesn’t require an internet connection; the latter needs an active connection to use the Google Cloud Platform (GCP). We can use ML Kit in conjunction with TF Lite, NNAPI, or Google Cloud APIs.
ML Kit utilizes TF Lite as a backend for running TF Lite models and benefits from its optimizations for mobile deployment.
ML Kit can integrate with GCP’s Google Cloud APIs to leverage their advanced ML capabilities and perform tasks that require cloud-based processing.
ML Kit can leverage NNAPI to optimize and accelerate the execution of ML tasks on supported Android devices. NNAPI allows developers to run computationally intensive ML operations on Android devices using dedicated hardware acceleration.
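ML Kit gets this acceleration through its TF Lite backend. As a rough sketch of what NNAPI acceleration looks like at the TF Lite level, the runtime can be handed an NNAPI delegate when a custom model is run directly; the asset name model.tflite is illustrative:

```kotlin
import android.content.Context
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import org.tensorflow.lite.support.common.FileUtil

fun createNnApiInterpreter(context: Context): Interpreter {
    // Route supported operations to dedicated hardware (NPU/GPU/DSP) via NNAPI.
    val nnApiDelegate = NnApiDelegate()
    val options = Interpreter.Options().addDelegate(nnApiDelegate)

    // "model.tflite" is an illustrative asset name.
    val modelBuffer = FileUtil.loadMappedFile(context, "model.tflite")
    return Interpreter(modelBuffer, options)
}
```

Operations that NNAPI doesn’t support on a given device simply fall back to TF Lite’s CPU kernels, so the delegate is safe to add unconditionally.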
One of the ways to deploy our custom TF Lite models using ML Kit is by uploading the models to the Firebase console and referencing them in the app as remote models.
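Assuming a model was uploaded to Firebase under the name custom_detector (an illustrative name), the app can declare it as a remote model and download it on demand. This sketch uses ML Kit’s Firebase-linking API (the com.google.mlkit:linkfirebase dependency):

```kotlin
import com.google.mlkit.common.model.CustomRemoteModel
import com.google.mlkit.common.model.DownloadConditions
import com.google.mlkit.common.model.RemoteModelManager
import com.google.mlkit.linkfirebase.FirebaseModelSource

// "custom_detector" is an illustrative name of a model hosted in Firebase.
val remoteModel = CustomRemoteModel.Builder(
    FirebaseModelSource.Builder("custom_detector").build()
).build()

// Download the model on demand, here only over Wi-Fi.
val conditions = DownloadConditions.Builder().requireWifi().build()
RemoteModelManager.getInstance().download(remoteModel, conditions)
    .addOnSuccessListener { /* The model is ready to use. */ }
```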
Supported ML tasks
ML Kit supports numerous ML tasks:
Vision tasks: These include optical character recognition, object detection, object tracking, face detection, image labeling, and barcode scanning.
Natural language tasks: These include language identification and on-device translation.
The details of some of the APIs provided by ML Kit are given below.
Image labeling
The image labeling API of ML Kit can recognize objects in input images. This API is available on-device as well as in the cloud. A mobile app using this API can recognize and label entities, such as people, places, and activities, in an input image. The API also assigns each label a score that indicates the confidence level of the ML model.
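A minimal Kotlin sketch of the on-device variant with the default labeling model; the input bitmap is assumed to come from the app’s camera or gallery:

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.defaults.ImageLabelerOptions

fun labelImage(bitmap: Bitmap) {
    // Use the default on-device labeling model.
    val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)
    val image = InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0)

    labeler.process(image)
        .addOnSuccessListener { labels ->
            // Each label carries a confidence score between 0 and 1.
            for (label in labels) {
                Log.d("ImageLabeling", "${label.text}: ${label.confidence}")
            }
        }
        .addOnFailureListener { e -> Log.e("ImageLabeling", "Labeling failed", e) }
}
```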
Face detection
This on-device API of ML Kit detects faces and facial features in input images. The API supports the generation of avatars from the extracted faces. Because this feature works in real time, we can integrate it into games and video chats.
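A minimal Kotlin sketch of the face detection API, configured for the fast mode that real-time use cases call for; the input bitmap is assumed to come from the app:

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.face.FaceDetection
import com.google.mlkit.vision.face.FaceDetectorOptions

fun detectFaces(bitmap: Bitmap) {
    // Fast mode plus contours suits real-time use cases such as video chat.
    val options = FaceDetectorOptions.Builder()
        .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_FAST)
        .setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
        .build()

    FaceDetection.getClient(options)
        .process(InputImage.fromBitmap(bitmap, 0))
        .addOnSuccessListener { faces ->
            for (face in faces) {
                Log.d("FaceDetection", "Face at ${face.boundingBox}")
            }
        }
        .addOnFailureListener { e -> Log.e("FaceDetection", "Detection failed", e) }
}
```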
Barcode scanning
This API reads data from most standard barcode formats. The on-device barcode scanning API enables us to extract encoded data, such as contact information or payment details.
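A minimal Kotlin sketch of the on-device barcode scanning API; the two formats passed to the builder are illustrative, and restricting the formats speeds up scanning:

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.mlkit.vision.barcode.BarcodeScannerOptions
import com.google.mlkit.vision.barcode.BarcodeScanning
import com.google.mlkit.vision.barcode.common.Barcode
import com.google.mlkit.vision.common.InputImage

fun scanBarcodes(bitmap: Bitmap) {
    // Limit detection to the formats the app actually expects (illustrative).
    val options = BarcodeScannerOptions.Builder()
        .setBarcodeFormats(Barcode.FORMAT_QR_CODE, Barcode.FORMAT_EAN_13)
        .build()

    BarcodeScanning.getClient(options)
        .process(InputImage.fromBitmap(bitmap, 0))
        .addOnSuccessListener { barcodes ->
            for (barcode in barcodes) {
                Log.d("BarcodeScanning", "Raw value: ${barcode.rawValue}")
            }
        }
        .addOnFailureListener { e -> Log.e("BarcodeScanning", "Scan failed", e) }
}
```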
Text translation
This API can translate input text into multiple languages, supporting a large number of source-target language combinations. It employs the same models that the Google Translate app uses in its offline mode.
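A minimal Kotlin sketch for an English-to-Spanish translator; the language pair is illustrative, and the translation model is downloaded once before first use:

```kotlin
import android.util.Log
import com.google.mlkit.common.model.DownloadConditions
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

fun translateEnglishToSpanish(text: String) {
    val translator = Translation.getClient(
        TranslatorOptions.Builder()
            .setSourceLanguage(TranslateLanguage.ENGLISH)
            .setTargetLanguage(TranslateLanguage.SPANISH)
            .build()
    )

    // The language model is fetched once, here only over Wi-Fi.
    val conditions = DownloadConditions.Builder().requireWifi().build()
    translator.downloadModelIfNeeded(conditions)
        .addOnSuccessListener {
            translator.translate(text)
                .addOnSuccessListener { translated -> Log.d("Translation", translated) }
        }
        .addOnFailureListener { e -> Log.e("Translation", "Model download failed", e) }
}
```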
Model installation options
We can install models in ML Kit APIs in one of the following ways:
Unbundled models: Google Play services downloads and manages these models, so they don’t contribute to the app size. When a new version of a model is available, it’s updated automatically.
Bundled models: These are linked to the app at build time and contribute to the app size. After app installation, all features of the model are immediately available. A dependency sketch contrasting the first two options follows this list.
Dynamically downloaded models: These are downloaded on demand, which keeps the initial app size small, and they can be updated without releasing a new version of the app. When we uninstall the app, these models are removed from the app-specific storage.
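To make the bundled/unbundled choice concrete, here is a sketch of how the two variants of, for example, the text recognition API are declared in a build.gradle.kts file. The artifact names follow Google’s published ML Kit dependencies; the version numbers are illustrative, and a real app would pick only one of the two:

```kotlin
dependencies {
    // Bundled: the model ships inside the APK and works immediately after install.
    implementation("com.google.mlkit:text-recognition:16.0.0")

    // Unbundled: Google Play services downloads and manages the model,
    // so it doesn't count toward the app size.
    implementation("com.google.android.gms:play-services-mlkit-text-recognition:19.0.0")
}
```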
ML Kit allows us to use custom models so that we can integrate exactly the features we need. However, custom models might make the app too large because a model’s file size can be much greater than that of a typical app.
Adding local models to ML Kit
To add an on-device model to an existing object detection app, we have to follow these steps:
Create a model using TF and Keras.
Convert the model to the on-device TF Lite format.
Add the on-device model to the app’s assets directory.
Load the on-device model using the LocalModel class of ML Kit.
Update the object detector options to use the on-device model by passing the loaded LocalModel to the CustomObjectDetectorOptions.Builder and building the options.
Here’s an example of how we can update the object detector options to use an on-device model:
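The following Kotlin sketch covers the last three steps, assuming the model is bundled as assets/model.tflite (an illustrative file name) and using ML Kit’s documented CustomObjectDetectorOptions API:

```kotlin
import com.google.mlkit.common.model.LocalModel
import com.google.mlkit.vision.objects.ObjectDetection
import com.google.mlkit.vision.objects.custom.CustomObjectDetectorOptions

// Load the on-device model from the app's assets directory.
val localModel = LocalModel.Builder()
    .setAssetFilePath("model.tflite") // illustrative asset path
    .build()

// Point the detector options at the local model and configure detection.
val options = CustomObjectDetectorOptions.Builder(localModel)
    .setDetectorMode(CustomObjectDetectorOptions.SINGLE_IMAGE_MODE)
    .enableClassification()
    .setClassificationConfidenceThreshold(0.5f)
    .setMaxPerObjectLabelCount(3)
    .build()

// Create the detector; it now classifies objects with the custom model.
val objectDetector = ObjectDetection.getClient(options)
```

The thresholds and label count here are illustrative defaults; tightening the confidence threshold trades recall for precision in the labels the custom model assigns.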