Packaging the ML Library
Learn how to package the ML library for inference.
The need for packaging
There are two main parts to any ML project: training and inference. In both, we receive data from some source (historical data for training, streaming data for inference), apply our algorithm (to train the model or to run inference), and then use the results (model evaluation for training, downstream tasks for inference). The diagram below demonstrates the process.
We can see that there are many similarities between the two flows. The preprocessing and feature engineering blocks are exactly the same in both cases because any transformation applied to the data during training must also be applied during inference. This includes any encoding we perform on the input data before passing it to the model and any decoding we perform on the model's output.
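As a concrete illustration, here is a minimal sketch of how shared preprocessing logic might be packaged so that the training and inference flows import the exact same code. All names here (`Preprocessor`, `encode`, `decode`, `preprocessor.json`) are hypothetical and not part of the library described in this course; the point is only that the fitted transformation is saved once during training and reused unchanged at inference time.

```python
import json
from dataclasses import dataclass, field


@dataclass
class Preprocessor:
    """Encodes raw categorical inputs to integer IDs and decodes model outputs."""

    category_to_id: dict = field(default_factory=dict)
    id_to_label: dict = field(default_factory=dict)

    def fit(self, raw_rows, labels):
        # Learn the encoding from the training data only.
        for row in raw_rows:
            for value in row:
                self.category_to_id.setdefault(value, len(self.category_to_id))
        self.id_to_label = {i: label for i, label in enumerate(sorted(set(labels)))}
        return self

    def encode(self, raw_rows):
        # The same transformation is applied to training data and inference requests.
        unknown = len(self.category_to_id)  # reserved ID for unseen categories
        return [[self.category_to_id.get(v, unknown) for v in row] for row in raw_rows]

    def decode(self, predicted_ids):
        # Map model output IDs back to human-readable labels for downstream tasks.
        return [self.id_to_label[i] for i in predicted_ids]

    def save(self, path):
        with open(path, "w") as f:
            json.dump({"category_to_id": self.category_to_id,
                       "id_to_label": self.id_to_label}, f)

    @classmethod
    def load(cls, path):
        with open(path) as f:
            state = json.load(f)
        # JSON stringifies keys, so restore integer keys for the label map.
        return cls(category_to_id=state["category_to_id"],
                   id_to_label={int(k): v for k, v in state["id_to_label"].items()})


if __name__ == "__main__":
    # Training flow: fit the preprocessor on historical data and persist it.
    pre = Preprocessor().fit([["red", "small"], ["blue", "large"]], ["cat", "dog"])
    pre.save("preprocessor.json")

    # Inference flow: load the same artifact and apply identical transformations.
    served = Preprocessor.load("preprocessor.json")
    print(served.encode([["red", "large"]]))  # [[0, 3]]
    print(served.decode([1]))                 # ['dog']
```

Packaging the preprocessing code as a single importable, versioned artifact like this is what prevents training/serving skew: neither flow re-implements the transformation, so they cannot drift apart.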
Depending on the company, the data ...