Using Deployment Frameworks with PyTorch

Learn to convert PyTorch models to ONNX and run inference.

Deployment with ONNX

The ONNX framework provides convenient libraries that let us convert our PyTorch image classification model with minimal effort.
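As an illustration, here is a minimal sketch of exporting a trained PyTorch classifier to ONNX with torch.onnx.export. The architecture (resnet18), the number of classes, the input size, and the file names are placeholder assumptions; substitute the ones used in the course's training tutorial.

```python
# Sketch: export a trained PyTorch classifier to ONNX.
import torch
import torchvision.models as models

model = models.resnet18(num_classes=10)           # placeholder architecture
state_dict = torch.load("model.pt")               # assumes model.pt holds a state_dict
model.load_state_dict(state_dict)
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)         # one example input (N, C, H, W)

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",                                 # output file
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch_size"},     # allow variable batch size
                  "output": {0: "batch_size"}},
)
```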

Check the onnx_deploy.ipynb file to see how to convert our previously trained PyTorch model to ONNX and run faster inference. This section's coding tutorial is meant for examining the code and the results produced with our previously trained model, not for rerunning.
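For reference, a minimal sketch of running inference on an exported model with ONNX Runtime is shown below. The file name "model.onnx" and the input shape carry over from the export sketch above and are assumptions; the notebook uses the course's actual preprocessed test images.

```python
# Sketch: run inference on the exported model with ONNX Runtime.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")

# Dummy image batch; in the tutorial this would be a real, preprocessed image.
x = np.random.randn(1, 3, 224, 224).astype(np.float32)

input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: x})      # list of output arrays

predicted_class = int(np.argmax(outputs[0], axis=1)[0])
print("Predicted class index:", predicted_class)
```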

Note: We can’t include the trained model (the model.pt file) because of its size, so rerunning the code provided in the tutorial throws an error saying there is no model.pt file. Rerunning the code and hitting this error doesn’t permanently erase the tutorial’s saved outputs: the outputs from the rerun are only kept until the current Jupyter session expires, which takes 15 minutes. When a new session launches, the original, correct outputs appear again. Another solution is to leave all the tutorial pages and reopen this one to clear the cache. Unfortunately, refreshing the page or restarting the kernel doesn’t clear Jupyter’s cache in the background, so these two approaches are the only solutions.

Rerunning this part on our local machine is still an option. The only additional step is to train our previous model first so that we have the trained model file, model.pt, as sketched below.
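Here is a minimal sketch of producing model.pt locally. The architecture is again a placeholder assumption, and the training loop from the earlier lessons is elided; saving the state_dict is one common convention, and the loading code must match however the model was saved.

```python
# Sketch: save a locally trained model as model.pt for the ONNX tutorial.
import torch
import torchvision.models as models

model = models.resnet18(num_classes=10)   # placeholder; use the course's architecture
# ... run the training loop from the previous lessons here ...

torch.save(model.state_dict(), "model.pt")   # weights the ONNX tutorial expects
```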
