| dc.description.abstract |
In 2021, the European Commission introduced the Artificial Intelligence Act (hereinafter ‘AIA’), a landmark legislative framework aimed at regulating AI systems. This initiative builds on earlier efforts, such as ethical guidelines for trustworthy AI, expert advisory bodies, and regulatory sandboxes like Spain's AI testing environment. The AIA is poised to become a blueprint for global AI governance, addressing growing calls for regulatory oversight amid rapid technological advancement. This article examines liability in the context of artificial intelligence systems, focusing on how responsibility for real-world harm caused by AI can be attributed and on the complexity of legislating to address such harm. Our examination draws on two frameworks used widely in the philosophy of automata: the identity gap and the accountability gap in culpability. The “identity gap” asks whether AI systems can be considered legal persons, a status typically reserved for entities with legal rights and obligations. Arguing that consciousness is not required for liability, we compare AI systems with the artificial personhood granted to corporations, and we cite instances in which non-human entities have been held vicariously or directly liable for utilitarian reasons to support the case for personhood. The article then turns to the accountability gap, analysing policy approaches for assigning responsibility. Examples such as the fatal accident involving an Uber self-driving vehicle and political deepfakes on social media highlight the need for clear regulation. The meteoric rise of AI has posed numerous challenges for policymakers and legal scholars. Scholars such as Prof. Gabriel Hallevy and John Kingston have proposed liability models for AI-based entities, but the adoption of such models across diverse legal systems remains uncertain. |
en_US |