Hands-On: Custom ML Docker Images in AWS Lambda
Explore deploying custom machine learning models using Docker images in AWS Lambda. Understand how to build, push, and manage these images with the Serverless framework while handling cold start issues and deployment constraints. Gain practical skills to effectively deploy and test ML models seamlessly in a serverless AWS environment.
We'll cover the following...
Deployment
We deploy our service the same way as before, in both development and production. What SLS does differently this time is build our Docker image and push it to ECR, alongside the other resources it deploys to AWS.
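To make this concrete, the container-image setup in `serverless.yml` might look roughly like the sketch below. The image key (`ml-serving`) and function name (`predict`) are placeholders, not values from this project; the `path` points at the directory containing the Dockerfile.

```yaml
service: sls-aws-python-starter-ml-serving-sf

provider:
  name: aws
  # SLS builds the images listed here and pushes them to ECR on deploy.
  ecr:
    images:
      ml-serving:
        path: ./   # directory containing the Dockerfile

functions:
  predict:
    # Reference the image by the key defined under provider.ecr.images.
    image:
      name: ml-serving
```

With a configuration like this, `sls deploy` handles the Docker build, the ECR push, and the Lambda update in one step.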
Test yourself
Replace the value of the `SERVERLESS_SERVICE` environment variable with a name of your choice.
import json

import joblib
import numpy as np

# The model is baked into the image at this path.
model_file = "/opt/ml/model.joblib"
model = joblib.load(model_file)


def handler(event, context):
    # API Gateway delivers the request body as a JSON string, so parse it.
    x = json.loads(event["body"])
    prd = model.predict(x)
    body = {
        "service": "sls-aws-python-starter-ml-serving-sf",
        "model": "linear",
        "version": "1.0",
        "prd": prd.tolist(),
    }
    return {
        "statusCode": 200,
        # The response body must be a string as well.
        "body": json.dumps(body),
    }
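Before deploying, it helps to exercise the handler logic locally. The sketch below assumes API Gateway delivers the body as a JSON string, and swaps the joblib-loaded model for a hypothetical `LinearStub` class (not part of the project) so the example runs without the model file.

```python
import json

import numpy as np


# Hypothetical stand-in for the joblib-loaded model: predicts y = 2x + 1.
class LinearStub:
    def predict(self, x):
        return 2 * np.asarray(x) + 1


model = LinearStub()


def handler(event, context):
    # API Gateway passes the request body as a JSON string.
    x = json.loads(event["body"])
    prd = model.predict(x)
    body = {
        "service": "sls-aws-python-starter-ml-serving-sf",
        "model": "linear",
        "version": "1.0",
        "prd": prd.tolist(),
    }
    return {"statusCode": 200, "body": json.dumps(body)}


# Simulate an invocation with a sample API Gateway event.
event = {"body": json.dumps([[1.0], [2.0]])}
response = handler(event, None)
print(response["statusCode"])  # prints 200
```

Invoking the real function after deployment (for example with `sls invoke`) should return the same response shape, with predictions coming from the model shipped inside the image.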
As a result, ...