05 - More on evaluations


  • Description:: continuing the last lesson on evaluations

Most common measures for verification

  • FAR: False Acceptance Rate

  • FRR: False Rejection Rate

  • EER: Equal Error Rate

  • DET: Detection Error trade-off

  • ROC: Receiver Operating Characteristic curve

  • all such measures depend on the adopted acceptance threshold

  • topMatch(p_j, identity) returns the best match between p_j and the templates associated with the claimed identity in the gallery; s(t1, t2) returns the similarity between t1 and t2

    • it can return more than one result


  • a score is said to be genuine (authentic) if it results from matching two samples of the biometric trait of the same enrolled individual;

  • it is said to be impostor if it results from matching the sample of a non-enrolled individual against an enrolled template.
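The pieces above can be tied together in a minimal sketch. The function names (`s`, `top_match`, `far_frr`) and the toy similarity measure are assumptions for illustration, not the lecture's actual implementation; it assumes similarity scores where higher means a better match.

```python
# Sketch with assumed names: s, top_match, and far_frr are illustrative.

def s(t1, t2):
    # Toy similarity between two feature vectors (higher = more similar).
    return 1.0 / (1.0 + sum((a - b) ** 2 for a, b in zip(t1, t2)))

def top_match(probe, gallery, identity):
    # Best similarity between the probe and the templates enrolled
    # under the claimed identity.
    return max(s(probe, t) for t in gallery[identity])

def far_frr(genuine_scores, impostor_scores, threshold):
    # FAR: fraction of impostor scores accepted (>= threshold);
    # FRR: fraction of genuine scores rejected (< threshold).
    far = sum(sc >= threshold for sc in impostor_scores) / len(impostor_scores)
    frr = sum(sc < threshold for sc in genuine_scores) / len(genuine_scores)
    return far, frr
```

Both error rates are computed from the same two score lists, which is why they both move (in opposite directions) when the acceptance threshold changes.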

The acceptance threshold is crucial and depends on our application needs!

A too low threshold causes many Type I errors: FRR (genuine users rejected)

A too high threshold causes many Type II errors: FAR (impostors accepted)

Therefore, a common good choice is the threshold at which FRR = FAR, i.e., the Equal Error Rate (EER)

Open set

An open-set problem compares a person against the gallery templates without an identity claim on the person's behalf; the subject may or may not be enrolled.

For example: terrorism checks at the airport

  • rank(pj) = the position in the list where the first template for the correct identity is returned

  • DIR (at rank k) (Detection and Identification Rate): probability of correct identification at rank k (the correct subject is returned at position k)

  • FAR or more specifically FPIR (False Acceptance Rate or False Positive Identification Rate) or False Alarm Rate (Watch List): the probability of false acceptance/alarm

  • EER (Equal Error Rate): the point where the two probability errors are equal
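The open-set measures above can be sketched as follows. All function names and the data layout are assumptions for illustration: each enrolled probe carries its true identity plus a candidate list sorted by decreasing similarity, and a probe is "detected" only if its best score passes the threshold.

```python
# Sketch (assumed names and data layout): open-set identification metrics.

def rank_of(true_id, scored_ids):
    # scored_ids: gallery identities sorted by decreasing similarity.
    # Returns the 1-based rank of true_id, or None if it is absent.
    for i, gid in enumerate(scored_ids, start=1):
        if gid == true_id:
            return i
    return None

def dir_at_rank(results, k, threshold):
    # results: list of (true_id, [(gallery_id, score), ...] sorted by score desc)
    # for probes whose true identity IS enrolled.
    # DIR(k): fraction detected (top score >= threshold) with rank <= k.
    hits = 0
    for true_id, candidates in results:
        if candidates and candidates[0][1] >= threshold:
            r = rank_of(true_id, [gid for gid, _ in candidates])
            if r is not None and r <= k:
                hits += 1
    return hits / len(results)

def fpir(best_scores_nonenrolled, threshold):
    # FPIR: fraction of NON-enrolled probes whose best gallery score
    # still passes the threshold (a false alarm on the watch list).
    return sum(sc >= threshold for sc in best_scores_nonenrolled) / len(best_scores_nonenrolled)
```

Note that DIR is estimated only on enrolled probes and FPIR only on non-enrolled ones; the threshold where the two error curves meet gives the open-set EER.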