
Abstract

Automated decision systems appear to carry higher risks today than ever before. Digital technologies collect massive amounts of data and evaluate people in nearly every aspect of their lives, such as housing and employment, and this collected information is ranked by algorithms. The use of such algorithms may be problematic. Because algorithmic results are produced by machines, they are often assumed to be immune from human biases. Yet algorithms are the product of human thinking and, as such, can perpetuate existing stereotypes and social segregation. This problem is exacerbated by the fact that algorithms are not held accountable. This Article explores problems of algorithmic bias, error, and discrimination that exist due to the lack of transparency and understanding surrounding a machine’s design or instructions.

This Article deals with the European Union’s legal framework on automated decision-making under the General Data Protection Regulation (“GDPR”) and certain Member State implementing laws, with specific emphasis on French law. It argues that the European framework does not adequately address the problems of opacity and discrimination inherent in machine-learning processing or in the explanation of automated decisions. The Article proceeds by evaluating the limitations of the legal remedies provided by the GDPR; in particular, the absence of a right to an individual explanation of such decisions poses a problem. The Article further argues that the GDPR allows individual Member States too much flexibility, thus failing to create a “digital single market.” Finally, this Article proposes solutions to address the opacity and bias problems of automated decision-making.
