Metrics
Let's explore some metrics that will help evaluate the performance of the vision-based self-driving car system.
Component-level metric
The component of the self-driving car system under discussion here is the semantic segmentation of objects in the input image. In searching for a suitable metric to measure the performance of an image segmenter, the first notion that comes to mind is pixel-wise accuracy. Using this metric, you simply compare the ground-truth segmentation with the model's predicted segmentation at the pixel level. However, this might not be the best idea. For example, consider a scenario where the driving scene image has a major class imbalance, i.e., it consists mostly of sky and road.
📝 For this example, assume the driving scene input image has one hundred pixels (ground truth), and a human expert annotates them as follows: sky=45, road=35, building=10, roadside=10.
If your model correctly classifies all the sky and road pixels but labels every building and roadside pixel as sky or road as well (predicting, say, sky=50, road=50 in total), it still achieves 80% pixel-wise accuracy. However, this is not really indicative of good performance, since the segmenter completely misses the other classes, building and roadside!
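A quick sketch makes this failure mode concrete. The arrays below are hypothetical, constructed only to match the pixel counts in the example above; alongside pixel-wise accuracy, it computes per-class recall, which exposes the classes the model misses entirely:

```python
import numpy as np

# Flattened 100-pixel "image" with the ground-truth distribution from the
# example: 0=sky (45), 1=road (35), 2=building (10), 3=roadside (10).
gt = np.array([0] * 45 + [1] * 35 + [2] * 10 + [3] * 10)

# Hypothetical prediction: every sky and road pixel is correct, but all
# building/roadside pixels are mislabeled as sky or road (sky=50, road=50).
pred = np.array([0] * 45 + [1] * 35 + [0] * 5 + [1] * 15)

# Pixel-wise accuracy: fraction of pixels whose predicted label matches.
accuracy = (gt == pred).mean()
print(f"pixel-wise accuracy: {accuracy:.0%}")  # 80%

# Per-class recall reveals that two classes are completely missed.
for cls, name in enumerate(["sky", "road", "building", "roadside"]):
    mask = gt == cls
    recall = (pred[mask] == cls).mean()
    print(f"{name}: recall = {recall:.0%}")
```

Sky and road each show 100% recall, while building and roadside sit at 0%, even though the overall accuracy looks respectable.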