An Analysis of the Doctrine of Vicarious Liability as a Basis for Liability Arising from Artificial Intelligence in Iranian Law, Inspired by Common Law Judicial Practice

Article Type: Research Article

Authors

1 Associate Professor, Department of Criminal Law and Criminology, Faculty of Law and Political Sciences, University of Tehran, Tehran, Iran.

2 PhD Student in Criminal Law and Criminology, Faculty of Law and Political Sciences, University of Tehran, Tehran, Iran.

Abstract

Artificial intelligence has confronted legal systems around the world, including the Iranian legal system, with novel questions, perhaps the most prominent of which is how to attribute liability for AI outputs and how to close the resulting "liability gap." Aiming to offer a workable legal framework, the present study analyzes the capacity of the doctrine of vicarious liability to answer this question. Its principal aim is to explain the justificatory grounds for applying this doctrine to persons who deploy intelligent systems and to assess whether its substantive elements can be reconciled with the unique characteristics of artificial intelligence. To that end, using a descriptive-analytical method, the article formulates the foundations and elements of the doctrine in light of the characteristics of AI and maps them onto analogous principles in Iranian law. The analysis of the foundations of vicarious liability confirms that principles such as the principal's superior economic capacity to compensate the victim and manage risk, the legal-economic logic of risk creation by the deploying enterprise and the need to internalize costs, and the instrumental nature of AI, its integration into the principal's activity, and the principal's strategic control over it, together justify the application of vicarious liability in this field. On this basis, the elements of the doctrine are examined, and it is argued that the common law test of a "relationship akin to employment" can effectively explain the legal bond between the principal and the intelligent system, while the "close connection" test establishes the link between the wrongful act arising from the system and that relationship. The study ultimately concludes that the doctrine of vicarious liability provides a coherent, defensible, and efficient framework for attributing liability to AI deployers, one that can serve as a practical basis for AI legislation in the Iranian legal system as well as in other legal systems facing this "liability gap"; on that basis, the authors also offer recommendations in the closing section of the article.

Article Title [English]

An Analysis of the Doctrine of Vicarious Liability as a Legal Basis for Responsibility Arising from Artificial Intelligence Outputs in Iranian Law, Inspired by Common Law Judicial Practice

Authors [English]

  • Abbas Shiri 1
  • Mohammad Reza Barzegar 2
1 Associate Professor, Department of Criminal Law and Criminology, Faculty of Law and Political Sciences, University of Tehran, Tehran, Iran.
2 PhD Student in Criminal Law and Criminology, Faculty of Law and Political Sciences, University of Tehran, Tehran, Iran.
Abstract [English]

Context & Objective: The most significant challenge that artificial intelligence (AI) poses for jurists concerns the attribution of liability for the use of such systems and the closing of the existing "liability gap." This research addresses the problem of AI liability by drawing on the potential of the doctrine of vicarious liability. Its primary aim is to explain why this doctrine should apply to deployers (principals) of intelligent systems, and then to assess whether the doctrine's substantive elements can be established despite the unique characteristics of AI.
Method & Approach: The paper adopts a doctrinal method and a descriptive-analytical approach, relying on library resources, including books, articles, and court decisions. The economic, risk-based, operational, and control-based foundations of the doctrine are extracted, compared with analogous principles in Islamic law, such as the maxim "man lahu al-ghunm faʿalayhi al-ghurm" (he who enjoys the benefits of a thing must also bear its losses), and applied to the case of an AI deployer. The existence of the substantive elements of vicarious liability—namely "the existence of a relationship" and "the connection of the wrongful act to that relationship"—is then examined in the context of AI and its deployer. To establish a legal relationship between the AI system and the principal and to link the wrongful act to that relationship, the "relationship akin to employment" test and the "close connection" test are proposed.
Findings: Analysis of the justifications for vicarious liability shows that its extension to AI deployers is well founded on the basis of: 1) the principal's superior economic capacity to compensate for injuries and absorb risk (deep pockets); 2) the legal-economic principle that the party who creates and benefits from a risk must internalize the costs its commercial venture generates; 3) the instrumental nature of AI and the integration of its operation into the enterprise's core business and value creation; and 4) the principal's control over setting or resetting objectives, training data, and monitoring of the intelligent system, including the ability, in effect, to "pull the plug." The article confirms that when an organization deploys an AI system (for instance, a self-driving taxi) and harm is caused through it, the "relationship akin to employment" test adopted in common law can effectively establish the legal relationship between the organization and the AI, because the AI is "employed" to further the organization's objectives. The "close connection" test, in turn, links the system's wrongful act to that relationship, since the resulting harm is typically the materialization of a risk inherent in the very activity from which the principal has profited by using the AI. Through these principles and requirements, responsibility is placed on the firm, whereas traditional rules struggle to attribute responsibility because of, for example, the autonomous operation of AI, its black-box nature, and the complexity of the causal chain.
Conclusion: The study concludes that the doctrine of vicarious liability, through flexible tests such as the "relationship akin to employment" and "close connection" tests, provides a coherent, justifiable, and practicable foundation for attributing liability to AI deployers and can effectively fill the existing liability gap. Applying this doctrine to AI places responsibility on the party that both controls and benefits from the technology. The proposed model is consistent with recognized maxims of Islamic law, such as "man lahu al-ghunm faʿalayhi al-ghurm" (he who enjoys the benefits of a thing must also bear its losses), and can be employed by the legislator in developing clear rules for artificial intelligence.

Keywords [English]

  • Artificial Intelligence
  • Vicarious Liability
  • Close Connection Test
  • Cost Internalization