Liability Rules for AI-Related Harm: Law and Economics Lessons for a European Approach

Shu Li*, Michael Faure, Katri Havu

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

The potential of artificial intelligence (AI) has grown rapidly in recent years, generating value but also creating risks. AI systems are characterised by their complexity, opacity and autonomy in operation. Now and in the foreseeable future, however, AI systems will not operate in a fully autonomous manner. This means that providing appropriate incentives to the human parties involved remains of great importance in reducing AI-related harm. Liability rules should therefore be adapted so as to give the relevant parties incentives to efficiently reduce the social costs of potential accidents. Relying on a law and economics approach, we address the theoretical question of what kind of liability rules should apply to the different parties along the AI value chain. In addition, we critically analyse the ongoing policy debates in the European Union, discussing the risk that European policymakers will fail to adopt efficient liability rules with regard to different stakeholders.
Original language: English
Pages (from-to): 618-634
Number of pages: 17
Journal: European Journal of Risk Regulation
Volume: 13
Issue number: 4
DOIs
Publication status: Published - 16 Sept 2022


