Wrap Up! Putting It All Together

Concluding remarks for the course.

Congratulations on completing the course! We have explored the key capabilities of Azure Machine Learning for building pipelines.

  • We can leverage various Azure resources, either individually or together, for an end-to-end solution.

    • Compute: We have learned how to use CPU and GPU compute targets with different core counts, choosing the compute that balances processing power and cost.

    • Storage: We can store data and models in Azure. Data can be stored in any form, from simple file formats such as .csv and .tsv in blob storage to databases. Models can also be stored in the cloud.

    • Serving: For online (real-time) and batch deployments, we can seamlessly serve either our own custom models or Azure-generated models.

    • Fully managed using VMs: If we want more flexibility and full control during the development phase, we can also rent a few VMs. VMs are the most flexible, but also the most expensive, option.
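As a concrete illustration of provisioning compute, a minimal Azure ML CLI v2 cluster definition might look like the following sketch (the cluster name and VM sizes are illustrative, not prescribed by the course):

```yaml
# Minimal sketch of an Azure ML CLI v2 compute cluster definition.
# Name and VM sizes are hypothetical examples.
$schema: https://azuremlschemas.azureedge.net/latest/amlCompute.schema.json
name: cpu-cluster
type: amlcompute
size: Standard_DS3_v2            # CPU VM size; choose a GPU size for GPU jobs
min_instances: 0                 # scale to zero when idle to save cost
max_instances: 4                 # cap cost by limiting parallel nodes
idle_time_before_scale_down: 120 # seconds of idle time before scaling down
```

Submitting such a file with `az ml compute create --file compute.yml` would create an autoscaling cluster that releases nodes when idle, which is one way to balance processing power against cost.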

  • We can build in versioning and tracking.

    • Azure stores every version of the dataset/model, which helps keep track of the model's incremental development. A snapshot of the code is also saved for every experiment run, an excellent feature for tracking job updates.

    • Using MLflow, we can track experiment status. MLflow also stores the model's metadata in the MLflow format and helps manage models.

  • We have a framework for building and reproducing end-to-end workflows.

    • A standout feature of Azure Machine Learning is pipeline creation: we can connect multiple steps to build an end-to-end workflow easily.

    • We can also schedule the pipeline to run at regular intervals, which makes it easy to repeat.

    • We can use an existing environment or create a new one for running the experiment. Environments make experiments easily reproducible.
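The scheduling described above can be sketched as an Azure ML CLI v2 schedule definition. The schedule name, timing, and the referenced pipeline file below are hypothetical:

```yaml
# Minimal sketch of an Azure ML CLI v2 recurrence schedule.
# The name, timing, and pipeline file are hypothetical examples.
$schema: https://azuremlschemas.azureedge.net/latest/schedule.schema.json
name: nightly_training_schedule
trigger:
  type: recurrence
  frequency: day         # run once per day
  interval: 1
  schedule:
    hours: [2]           # at 02:00
    minutes: [0]
create_job: ./pipeline.yml   # the pipeline job definition to run
```

Creating this with `az ml schedule create --file schedule.yml` would rerun the pipeline automatically each night, with no manual resubmission.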

  • Troubleshooting and monitoring.

    • Every job generates logs at different infrastructure levels.

    • Using the MLflow integration, we can log additional metrics and parameters.

    • We can also enable monitoring and alerting for different failures.

  • Finally, using Responsible AI, we can understand the explainability, reliability, and in-depth details of our models. This helps us build responsible ML models.
