Frank Pasquale (Professor of Law at Brooklyn Law School and world-famous legal scholar, author of The Black Box Society and New Laws of Robotics) and Gianclaudio Malgieri (Associate Professor at EDHEC Augmented Law Institute) have presented their co-authored paper on the notion of “AI Justification”. Their proposed regulatory model goes beyond the traditional “ex post” model (notice, consent, contest) and moves toward an ex ante justificatory model.
Building both on the connection between Articles 5 and 22 of the GDPR and on the regulatory framework for AI in the proposed EU AI Act, the two professors put forward a “licensing model” for high-risk AI. To avoid circumvention and mitigate the risks of troublesome AI, the authors propose a model of “AI unlawfulness until proven otherwise”: under this proposal, a regulator would grant pre-authorisation (following a self-justification notice) to companies that want to produce and commercialise high-risk AI systems.
The EU AI Act seems to move in this more “ex ante” direction, but with some limits and challenges (e.g., a limited list of risks; limited sanctions in case of non-justification of high risks).
Curious which risks you find missing and which sanctions you find limited: 6% or 4% of global turnover?
Thank you for your remarks/questions, Mireille!
Re risks: while Article 7 (the theoretical framework of high risk) is written extremely well, the hierarchy of risks in the interplay between Art. 5, Annex III and Art. 52 is still disputable. AI systems exploiting vulnerability based not on age/disability but, e.g., on individual vulnerabilities, cognitive biases, or contextual power imbalances give rise to many concerns, yet at the moment they fall outside the risk list. We might say the same for manipulative AI producing economic harms (outside both Art. 5 and Annex III). The same goes for emotion recognition AI systems: it is true that if they produce psycho-physical harms or exploit age/disability they are prohibited, and if they are used in scoring or in critical/sensitive contexts they are high risk, but many people may disagree that emotion recognition in personality/brain enhancement tools (e.g. neuro-technological consumer tools), in personalised marketing applications, in covert research, or even in non-scoring scholastic contexts should be considered merely limited risk (with some generic transparency obligations).
The 6% global turnover sanction is certainly not inadequate; however, what the paper will try to argue concerns not the “how much” of the sanction but the “when/how”. Comparing the proposed ex ante (“unlawful until authorised”) model with the proposed AI Act, the only real point of difference would be that the sanction is not a prohibition. But we welcome the Regulation as one of the best possible starting points (not just in the EU, but in the world!).