There are many techniques for evaluating the quality and relevance of generated captions. We’ll briefly discuss four commonly used metrics: BLEU, ROUGE, METEOR, and CIDEr.
All these metrics share a key objective: to measure the adequacy (how well the generated text captures the intended meaning) and fluency (the grammatical correctness) of the text. To compute each of them, we use a candidate sentence and a reference sentence, where the candidate sentence is the sentence or phrase predicted by our algorithm, and the reference sentence is the true sentence or phrase we compare it against.
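To make this setup concrete, here is a toy candidate/reference pair with a crude overlap-based score (the sentences and the unigram-precision calculation below are purely illustrative, not one of the four metrics):

```python
# Hypothetical candidate (model prediction) and reference (ground truth).
candidate = "a dog is running on the beach".split()
reference = "a dog runs along the beach".split()

# A crude adequacy signal: the fraction of candidate words
# that also appear in the reference (unigram precision).
matches = sum(1 for word in candidate if word in reference)
print(matches / len(candidate))  # 4/7 ≈ 0.57
```

The metrics below refine this basic idea of comparing overlapping units between the candidate and the reference.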
BLEU
BLEU (Bilingual Evaluation Understudy) was proposed by Papineni and others in 2002 in the paper “BLEU: A Method for Automatic Evaluation of Machine Translation.”
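Before walking through how the metric works, here is a minimal sketch of computing a BLEU score with NLTK’s `sentence_bleu` (this assumes the `nltk` package is installed; the sentences are hypothetical):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Hypothetical candidate and reference captions, tokenized into words.
candidate = "a dog is running on the beach".split()
references = ["a dog runs along the beach".split()]  # NLTK accepts multiple references

# Smoothing avoids a zero score when a higher-order n-gram has no match,
# which is common for short sentences.
smooth = SmoothingFunction().method1
score = sentence_bleu(references, candidate, smoothing_function=smooth)
print(f"BLEU: {score:.3f}")
```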