Who gets held accountable when a facial recognition algorithm fails? And how?

Ellen Broad – Medium
Facial recognition is the next big area where questions about data ownership, data accuracy and algorithmic bias will arise – and indeed are already arising. Some of those questions closely parallel their equivalents in other areas of personal data; others are more distinctive. Discrimination against black people, for example, is endemic in poor algorithm design, but it manifests in some very specific ways in facial recognition. This short, sharp post uses Australia's recent decision to pool driving licence photos into a national face recognition database to explore issues of ownership, control and accountability that have much wider relevance.
