In Search Of The Holy Grail Of AI Accountability: A Brief Primer On The Artificial Intelligence Act


dc.contributor.author Mohanty, S.
dc.contributor.author Avanti, D.
dc.contributor.author Bharatha Chakravarthy, H.
dc.date.accessioned 2026-01-27T06:37:46Z
dc.date.available 2026-01-27T06:37:46Z
dc.date.issued 2025
dc.identifier.uri http://repo.lib.jfn.ac.lk/ujrr/handle/123456789/12118
dc.description.abstract In 2021, the European Commission introduced the Artificial Intelligence Act (hereinafter ‘AIA’), a landmark legislative framework aimed at regulating AI systems. This initiative builds on earlier efforts, such as ethical guidelines for trustworthy AI, expert advisory bodies, and regulatory sandboxes like Spain's AI testing environment. The AIA is poised to become a blueprint for global AI governance, addressing growing calls for regulatory oversight amid rapid technological advancement. This article explores liability in the context of artificial intelligence, focusing on how responsibility for real-world harm caused by AI systems can be attributed and on the difficulty of legislating for such attribution. Our examination draws on two frameworks widely used in the philosophy of automata: the identity gap and the accountability gap in culpability. The “identity gap” asks whether AI systems can be considered legal persons, a status typically reserved for entities with legal rights and obligations. Arguing that consciousness is not a prerequisite for liability, we compare AI systems with the artificial personhood conferred on corporations. Instances in which non-human entities have been held vicariously or directly liable for utilitarian reasons are cited in support of such personhood. The article then turns to the accountability gap, analysing policy approaches for assigning responsibility. Examples such as the fatal accident involving an Uber self-driving vehicle and political deepfakes circulated on social media highlight the need for clear regulation. The meteoric rise of AI has posed numerous challenges for policymakers and legal scholars. Scholars such as Prof. Gabriel Hallevy and John Kingston have proposed liability models for AI-based entities, but their adoption across diverse legal systems remains uncertain. en_US
dc.language.iso en en_US
dc.publisher The Department of Law, Faculty of Arts, University of Jaffna / Surana and Surana International Attorneys, India en_US
dc.subject Artificial intelligence en_US
dc.subject Attribution en_US
dc.subject Legal personhood en_US
dc.subject Corporate personality en_US
dc.title In Search Of The Holy Grail Of AI Accountability: A Brief Primer On The Artificial Intelligence Act en_US
dc.type Conference paper en_US

