Report Generation

Learn about report generation in the ML pipeline.

Each run of an ML pipeline should export its training metrics so that data scientists can run multiple experiments and compare their results.

Experiment tracking

Consider the following scenario: we’re working on a classification problem, and our baseline model has an accuracy of 92%. To try to improve the accuracy, we change a hyperparameter and retrain the model. This time, we get an accuracy of 91%. Changing another hyperparameter results in an accuracy of 93%. Each of these trials is called an experiment. As data scientists, we need to record each result along with what changed, so we don’t duplicate work by running the same experiment twice or lose track of a successful experiment. This process is called experiment tracking.
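The sketch below shows one minimal way to track the three trials from this scenario: appending each run’s configuration and result to a local CSV file. The file name, function name, and the learning-rate hyperparameter are illustrative assumptions; production trackers such as MLflow or Weights & Biases persist runs to a server or database instead.

```python
import csv
import json
import time
from pathlib import Path

# Hypothetical log file for this sketch.
LOG_PATH = Path("experiments.csv")
FIELDS = ["run_id", "timestamp", "hyperparameters", "accuracy"]

def log_experiment(run_id: str, hyperparameters: dict, accuracy: float) -> None:
    """Append one experiment's configuration and result to a CSV log."""
    new_file = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "run_id": run_id,
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
            # Store hyperparameters as JSON so arbitrary settings can be compared.
            "hyperparameters": json.dumps(hyperparameters),
            "accuracy": accuracy,
        })

# The three trials from the scenario above (hyperparameter values are illustrative).
log_experiment("baseline", {"learning_rate": 0.01}, 0.92)
log_experiment("trial-1", {"learning_rate": 0.001}, 0.91)
log_experiment("trial-2", {"learning_rate": 0.005}, 0.93)
```

Because each row captures both the configuration and the outcome, a quick scan of the log answers the two questions experiment tracking exists to answer: what did we change, and what did it do to the metric?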

Metric visualization

Sophisticated training pipelines export metrics to a database called a metric store. Those metrics can then be visualized in tabular or graphical form. Metrics are also usually written to flat files as part of the training run’s artifacts.
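Here is a minimal sketch of the flat-file side of this: per-epoch metrics are exported as a JSON artifact and then plotted graphically. The metric values, directory layout, and file names are assumptions for illustration; the plotting uses matplotlib.

```python
import json
from pathlib import Path

import matplotlib.pyplot as plt

# Hypothetical per-epoch metrics from one training run.
history = {"epoch": [1, 2, 3, 4, 5],
           "accuracy": [0.78, 0.85, 0.89, 0.91, 0.92]}

# Export the metrics as a flat file alongside the run's other artifacts.
artifact_dir = Path("artifacts/run-001")
artifact_dir.mkdir(parents=True, exist_ok=True)
(artifact_dir / "metrics.json").write_text(json.dumps(history, indent=2))

# Visualize the same metrics graphically.
plt.plot(history["epoch"], history["accuracy"], marker="o")
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.title("Training accuracy by epoch")
plt.savefig(artifact_dir / "accuracy.png")
```

A metric store plays the same role as the JSON file here, but centralizes metrics across all runs so that dashboards can compare experiments side by side.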
