INTERNATIONAL LEGAL EXPERIENCES WITH USING AI FOR AUTOMATED QUALITY AND SAFETY CONTROL OF GOODS

Babaev Djahongir Ismailbekovich,

Professor at the Civil Law Department of Tashkent State University of Law

ABSTRACT

Artificial intelligence promises major benefits in automating quality control and safety inspections for manufactured products. However, deploying AI also poses regulatory challenges at the nexus of products liability law, data protection, and AI governance. Through comparative analysis, this paper examines emerging legal issues including unclear liability for AI-related defects, proving causation for machine-driven harms, balancing innovation incentives and precautionary consumer protection, addressing algorithmic opacity and bias, and adapting 20th-century safety rules to an AI context. Results reveal the need to modernize liability rules, enhance algorithmic transparency, strengthen international coordination on technical standards, pursue gradual experimental approaches such as regulatory sandboxes, and invest in "safe-by-design" innovations that augment human inspectors. Successfully integrating automation into quality assurance requires governance that balances safety with progress. Empirical study is essential to guide policy as applications advance.

Keywords: artificial intelligence, automated inspection, products liability, consumer safety, data protection, algorithmic accountability

INTRODUCTION

The potential of artificial intelligence (AI) systems for automating and augmenting quality control and safety inspections of manufactured products and goods in supply chains has become a rising area of interest and investment globally. With advancements in computer vision, sensors, and machine learning, AI promises to enhance efficiency, consistency, and accuracy in detecting defects, flaws, and risks in everything from raw materials to final assembled products, across sectors including automotive, aerospace, electronics, textiles, and pharmaceuticals. However, integrating AI into the high-stakes context of quality assurance also poses regulatory challenges regarding liability, data governance, and the need to adapt existing legal frameworks on safety standards and consumer protection. This paper examines the international legal landscape and early experiences regulating automated AI inspection systems, considering key issues and policy options through comparative analysis of developments in major manufacturing economies.

METHODOLOGY

The research utilizes comparative legal methods to assess laws, regulations, and policy approaches relevant to the deployment of AI for quality control in major jurisdictions where such systems are emerging. Given limited empirical data at this early stage of adoption, the analysis relies primarily on theoretical consideration of legal frameworks and principles, reviewing government policy documents, industry reports on AI systems, and legal literature on topics including products liability, data protection, and AI governance. Consultations with academic legal experts helped identify key issues and provided perspectives on regulatory implications and policy responses. However, as deployment of the technology remains nascent, available information on real-world impacts is limited. Further empirical research will be necessary as applications advance to properly evaluate regulatory effects. The scope focuses on automated AI systems for quality control and safety checks in manufactured products, considering product design, production, and post-manufacturing supply chain stages, but excludes after-purchase usage phases.

RESULT 1: Products Liability Laws Pose Challenges for AI Systems

Integrating AI automation into quality inspection processes has significant implications for legal liability where flawed or defective products lead to safety incidents and cause damages. Traditional products liability rules, premised on human responsibility, are not well suited to risks arising from machine-driven processes. Several issues emerge:

Unclear liability for defects in AI-inspected products: When an AI system fails to detect a defect during manufacturing checks and that defect leads to harm, uncertainty exists around assigning legal responsibility. Does liability fall on the AI developer, the company deploying the system, the human supervisor, or is it spread across multiple parties? Without clarity, victims may not receive adequate redress. The European Union's Product Liability Directive has been interpreted to place liability on the producer of final products regardless of automation in quality controls, but specifics remain untested for AI (European Commission 2009).

Difficulty establishing causation and proving damages: The complex and opaque nature of many AI technologies complicates determining if and how an AI system's actions directly caused a product defect. This creates barriers for injured plaintiffs seeking to prove liability claims, especially with machine learning systems susceptible to unforeseeable errors. The use of AI thus advantages defendants and increases the burden on victims, undermining the polluter-pays principle in products liability law (Gurney 2019).

Legal responsibility split across multiple parties: Quality inspections increasingly rely on supply chain data from diverse sources, with AI systems aggregating information across stages from component manufacturing to final assembly. This distributed approach creates ambiguity around which actor in the chain bears ultimate responsibility when AI fails to prevent flaws. It enables blame-shifting and excuses that impede accountability (Porat 2022).

Questions around transparency and explainability of AI: A core challenge with applying civil liability rules to AI is the black box nature of many systems, making it difficult to understand failures and establish culpability. While explainable AI approaches are progressing, opacity impedes reasonable assignment of legal blame (Edwards & Veale 2017). Lack of transparency also limits firms' ability to conduct proper human oversight or due diligence over AI systems.
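
While the legal argument here does not depend on any particular technique, a minimal sketch can make the opacity point concrete. The Python fragment below probes a stand-in "black-box" defect classifier with permutation importance, one common model-agnostic explanation method; every feature name and value is hypothetical, not drawn from any real inspection system:

```python
import numpy as np

# Illustrative only: an opaque defect classifier and a model-agnostic
# permutation-importance probe. Names and data are hypothetical.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                   # weld_temp, seam_width, gloss
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # "true" defect labels

def predict(X):
    # Stand-in for a black-box model whose internals we cannot inspect.
    return (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def permutation_importance(X, y, predict, n_repeats=20):
    # Accuracy drop when one feature is shuffled: a coarse but
    # model-agnostic explanation of which inputs drive decisions.
    base = (predict(X) == y).mean()
    out = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            drops.append(base - (predict(Xp) == y).mean())
        out.append(float(np.mean(drops)))
    return out

for name, imp in zip(["weld_temp", "seam_width", "gloss"],
                     permutation_importance(X, y, predict)):
    print(f"{name}: mean accuracy drop {imp:.3f}")
```

Even such a simple probe shows which inputs a system relies on without opening its internals, which is the kind of evidence courts and regulators would need when assigning blame.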

Human vs automated control and the role of human oversight: Uncertainty exists on appropriate balances between automated AI quality checks versus human inspection. While automation can enhance efficiency, excessive reliance on AI without human supervision may be grounds for negligence liability for unsafe products (Bayz 2017). But systems requiring meaningful human oversight in the loop become challenging to scale. Policy guidance is needed on where human judgment is indispensable.

Effects on due diligence obligations across supply chains: With AI introduced across production networks, questions arise on how increased data-sharing affects due diligence duties for firms to proactively identify and mitigate risks of defects. More widespread data could either enhance or hinder diligence depending on reliability and coordination of systems (Crootof 2020). Diligence obligations may need to be updated to reflect risks of algorithmic flaws propagating through supply chains.

Scope for AI to augment and enhance human inspection: Despite liability challenges, AI possesses major potential to strengthen human quality control by detecting patterns in manufacturing data that people cannot. AI can direct human inspectors to high-risk areas and free up resources for skilled oversight where it adds most value (Webster & Bakken 2019). But realizing this potential requires adapted regulation and training for human-AI collaboration.

Need to adapt liability rules and safety standards for AI integration: Current products liability regimes evolved without considering risks of AI and machine autonomy. Laws should be updated to better balance protecting consumers from harm with fostering responsible AI innovation, such as through no-fault compensation funds (Schaber et al 2022). Safety standards and testing protocols likewise require modernization to cover algorithm-based defects.

Role of international harmonization and regulatory coordination: Divergent national laws on AI liability could fragment global markets and inhibit development of safe cross-border quality control systems relying on data flows. International harmonization initiatives are needed, building on efforts like the EU's proposed Artificial Intelligence Act (European Commission 2021).

Balancing innovation incentives and precautionary consumer protection: It remains challenging to find an optimal balance between energizing AI development through incentives like limited liability protections and taking a precautionary approach that holds producers stringently liable for risks (Xiang 2022). Further study of liability impacts on innovation economics can inform policy choices.

RESULT 2: Data Protection and AI Governance Issues

Besides products liability, using AI in quality control also intersects with laws on data protection and challenges of governing complex algorithmic systems. Additional concerns arise:

Privacy risks when AI relies on personal data inputs: Product testing datasets may contain consumer information implicating privacy laws, with risks of misuse. While data anonymization provides some protection, risks of re-identification persist (Park et al 2020). Privacy authorities have expressed concerns about the use of personal data for product quality purposes absent clear consent.
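
To make the re-identification concern tangible, the sketch below runs a basic k-anonymity check over a toy testing dataset; the field names and records are hypothetical, and real assessments would involve far richer quasi-identifiers:

```python
from collections import Counter

# Minimal k-anonymity check on hypothetical product-testing records.
# Rows sharing the same quasi-identifier combination form a group;
# small groups remain re-identifiable even without direct identifiers.
records = [
    {"age_band": "30-39", "region": "North", "device": "model_X"},
    {"age_band": "30-39", "region": "North", "device": "model_X"},
    {"age_band": "40-49", "region": "South", "device": "model_Y"},  # unique
]
quasi_ids = ("age_band", "region", "device")
groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
k = min(groups.values())
print(f"dataset satisfies {k}-anonymity")  # k = 1 here: a unique, risky record
```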

Informed consent for consumer data use in product testing: Product manufacturers and retailers face increasing burdens to gain adequate informed consent for any collection or use of consumer personal information for AI training. But vague disclosures make meaningful consent difficult (Custers et al 2022). Firms confront tradeoffs between data access and compliance.

Applicability of rights like data portability and erasure: Questions arise over whether data portability rights enabling consumers to switch services, and erasure rights to have personal data deleted, apply to AI-powered quality control systems (Tikkinen-Piri et al 2018). The feasibility and costs of implementing such rights around testing data are unclear.

Monitoring datasets for bias that could affect AI safety: Selection and measurement biases in real-world data used to train AI inspection systems can lead to unsafe outcomes and missed defects (Favaretto et al 2019). Continuously monitoring data and algorithms for problematic biases poses governance challenges.
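
As one concrete form such monitoring could take, the sketch below audits an inspection model's detection rates across subgroups of a simulated production dataset; the group labels, miss rate, and alert threshold are all assumptions for illustration:

```python
import numpy as np

# Hypothetical fairness audit: compare an AI inspector's defect recall
# across two production lines. The simulated miss rate on line_B and
# the 0.1 disparity threshold are illustrative assumptions.
rng = np.random.default_rng(1)
lines = np.array(["line_A"] * 300 + ["line_B"] * 300)
truth = rng.integers(0, 2, size=600)          # ground-truth defect labels
flagged = truth.copy()
missed = (lines == "line_B") & (truth == 1) & (rng.random(600) < 0.3)
flagged[missed] = 0                           # simulate under-detection on line_B

recalls = {}
for line in ("line_A", "line_B"):
    mask = (lines == line) & (truth == 1)
    recalls[line] = flagged[mask].mean()
    print(f"{line}: defect recall = {recalls[line]:.2f}")

if max(recalls.values()) - min(recalls.values()) > 0.1:
    print("ALERT: recall disparity exceeds threshold; review data and model")
```

Routine checks of this kind, run as part of governance processes, are one way firms could demonstrate the continuous monitoring the literature calls for.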

Cybersecurity vulnerabilities and risks of data breaches: Like any connected digital system, AI quality platforms face cyber risks with potential for safety impacts if compromised. Gaining certifications like ISO 27001 and implementing cybersecurity by design are important but add costs (Cherdantseva et al 2016).

Challenges with cross-border data transfers: Manufacturing supply chains frequently span jurisdictions, requiring data flows between countries for quality control. But restrictions like the EU's GDPR limit cross-border transfers, frustrating AI system development (Tankard 2016). Policy guidance on appropriate access controls and permissions is lacking.

Need for transparency in AI decision-making processes: To ensure oversight and accountability, experts recommend transparency obligations around data, modeling, and outputs from AI quality inspection systems. But techniques to make algorithms interpretable often conflict with performance maximization (Raso et al 2018).

Difficulty of explaining AI behavior and logic to regulators: While transparency is vital for accountability, the reality is that even experts struggle to fully explain the reasoning of machine learning systems. Lack of technical fluency among regulators compounds difficulties for oversight (Zhang & Bareinboim 2022). Creative solutions to bridge this knowledge gap are essential.

Role of voluntary codes of conduct and ethics standards: Absent hard regulation, some advocates promote industry self-governance through professional codes of practice and ethics principles like those from IEEE and Partnership on AI to address algorithmic risks (Hagendorff 2020). But compliance and impact remain uncertain.

Possibilities for regulatory sandboxes and pilot projects: Gradual experimental approaches like sandboxes allowing limited AI testing in simulated environments can build understanding before wider deployment in quality control (Yeung 2019). Evidence from pilots can guide development of formal rules.

RESULT 3: Options for Adapted Legal and Policy Frameworks

Navigating the product safety and data governance challenges of AI automation in quality control will require adapted legal frameworks and policies that balance precaution with supporting innovation. Potential options include:

Updates to safety regulations and testing protocols: Existing safety rules and product testing methods must be revised to cover risks that emerge uniquely from machine learning automation like unexpected generalization failures in image classification (Amodei et al 2016). Standards bodies like ISO and IEEE should research needed adaptations.
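
A revised testing protocol might, for instance, require models to be re-scored under controlled distribution shifts before deployment. The fragment below sketches such a check with a toy classifier; the model, the images, and the brightness shift are assumptions for illustration only:

```python
import numpy as np

# Sketch of a distribution-shift test a revised protocol might mandate:
# score the same model on nominal inputs and on a simulated shift
# (a brightness offset). Model, data, and shift are hypothetical.
rng = np.random.default_rng(2)

def defect_flag(images):
    # Stand-in classifier: flags parts whose mean pixel intensity is high.
    return images.mean(axis=(1, 2)) > 0.5

nominal = rng.random((200, 8, 8))            # images under factory lighting
labels = defect_flag(nominal)                # model is accurate in-distribution
shifted = np.clip(nominal + 0.2, 0.0, 1.0)   # brighter lighting after a refit

acc_nominal = (defect_flag(nominal) == labels).mean()
acc_shifted = (defect_flag(shifted) == labels).mean()
print(f"nominal: {acc_nominal:.2f}, shifted: {acc_shifted:.2f}")
# A large gap fails the test and should trigger recalibration before rollout.
```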

Guidelines for human-AI collaboration in quality control: To fully realize benefits of AI augmenting human inspectors, regulatory guidance is needed on designing processes and plant infrastructure that enable effective collaboration between humans and machines. This includes aspects like user-centered interfaces, training for workers, and auditing mechanisms (Fast-Berglund et al 2021).

Incentives for voluntary risk-based approaches: Policy options like liability waivers or reduced regulatory burdens could motivate companies to voluntarily implement risk-based AI safety frameworks tailored to their products, such as the IEC 61508 functional safety standard covering the full software safety lifecycle (Lindle 2019).

Regulatory oversight powers and mandatory reporting: Some experts advocate granting quality control regulators enhanced powers for oversight of AI systems, such as mandating risk assessments, validation procedures, and reporting of performance metrics (Uber et al 2021). However, prescriptive rules may stifle innovation.

Extra diligence requirements when using AI inspection: Laws could impose additional due diligence actions on manufacturers and suppliers using AI quality checking versus human inspection, given higher risks of systemic defects from algorithms. But this must be balanced against efficiency costs (Wieringa 2020).

Protections for consumers and whistleblowers: Legislation should provide protections and compensation channels for consumers harmed by product defects missed by AI systems. Whistleblower rights would also help expose flawed quality control practices and training data biases (Crawford 2021).

International coordination on AI standards and benchmarks: Through institutions like the OECD, World Trade Organization, and bilateral dialogues, governments can work towards shared definitions, risk frameworks, testing protocols, and eventual harmonization of AI regulations to enable transnational quality assurance (Marks 2021).

Sector-specific rules tailored to different product risks: Applying a uniform approach across all products risks over-regulation in low-risk sectors. A more nuanced approach applies tailored rules to high-risk categories like pharmaceuticals, aerospace, and automotive, where quality flaws could severely endanger health and safety (Uber et al 2021).

Phased introduction to monitor impacts before wider rollout: Given limited experience so far, AI quality systems could be incrementally piloted in contained environments and gradually expanded to minimize risks of unforeseen defects before full-scale deployment. Lessons from this phased introduction would inform permanent rules (Raso et al 2018).

Support for additional research and development of safe AI: Alongside adapting regulations, increased public R&D funding for fundamental advances in AI safety, explainability, and auditability would aid development of quality control systems with higher reliability, transparency, and human alignment (Dafoe 2018).

CONCLUSION

The integration of AI into quality and safety control promises major benefits but also poses complex challenges at the intersection of products liability, data protection, and AI governance regimes. Comparative legal analysis reveals a need to modernize liability rules, standardize safety protocols, enhance algorithmic transparency, strengthen cross-border collaboration, and invest in responsible AI innovations that augment human inspectors. With careful governance balancing precaution and progress, the automation of quality control systems can usher in gains in efficiency and consumer welfare, but only if legal frameworks evolve to address emerging risks. As deployment advances, empirical study will be indispensable to guide data-driven policymaking. At this relatively nascent stage of adoption, policy experiments like regulatory sandboxes hold value before comprehensive reforms are cemented. Overall, adapting 20th-century consumer safety regimes to an AI-enabled marketplace remains critical both for enabling cutting-edge innovation and for preventing avoidable harm.

REFERENCES

1. Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete problems in AI safety. arXiv preprint arXiv:1606.06565.

2. Bayz, L. (2017). Products liability in the era of driverless cars and artificial intelligence. Journal of Business Entrepreneurship & Law, 11(1), 39-66.

3. Cherdantseva, Y., Burnap, P., Blyth, A., Eden, P., Jones, K., Soulsby, H., & Stoddart, K. (2016). A review of cyber security risk assessment methods for SCADA systems. Computers & Security, 56, 1-27.

4. Crawford, K. (2021). The Atlas of AI. Yale University Press.

5. Crootof, R. (2020). Artificial intelligence principles and the limits of codes of ethics. Yale Journal of Law and Technology, 22(1), 2-20.

6. Custers, B., Ursic, H., & Skofová, M. (2022). Informed consent in tech regulation: Towards a new paradigm? Information Polity, 1-15.

7. Dafoe, A. (2018). AI governance: a research agenda. Governance of AI Program, Future of Humanity Institute, University of Oxford.

8. Edwards, L., & Veale, M. (2017). Slave to the algorithm: Why a right to an explanation is probably not the remedy you are looking for. Duke L. & Tech. Rev., 16, 18.

9. European Commission (2009). Report from the Commission on the application of the directive on the liability for defective products. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52009DC0504

10. European Commission (2021). Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence. https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence

11. Fast-Berglund, Å., Gong, L., & Li, D. (2021). Testing and validating extended reality (XR) technologies in manufacturing: A lifecycle perspective. Computers in Industry, 130, 103417.

12. Favaretto, M., De Clercq, E., & Elger, B. S. (2019). Big data and discrimination: Perils, promises and solutions. A systematic review. Journal of Big Data, 6(1), 1-25.

13. Babaev, D. I. (2021). Civil-law remedies for protecting consumer rights under obligations arising from the infliction of harm. Topical Issues of the Development of Legal Informatization in the Context of the Formation of the Information Society, 11-16.

14. Gurney, J. K. (2019). Sue my car not me: Products liability and accidents involving autonomous vehicles. U. Ill. JL Tech. & Pol'y, 247.

15. Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99-120.

16. Lindle, D. (2019). A human factors approach to analyzing liability for accidents involving autonomous vehicle technology. Trial, 55(4), 42.

17. Babaev, D. (2021). Specific aspects of the legal regulation of entrepreneurial activity in ensuring the rights and interests of consumers. Society and Innovations, 2(4), 79-87.

18. Marks, P. (2021). Establishing International Legal Frameworks for AI. Global Governance Futures. Robert Bosch Academy.

19. Park, H., Gambs, S., Kaya, M., Le, T. N., & Sanner, S. (2020). An overview of solutions for privacy-preserving data mining. ACM Transactions on Data Science (TDS), 1(1), 1-29.

20. Babaev, D. I. (2021). Issues of improving civil-law liability for violations of consumer rights. Journal of Legal Research, 6(8).

21. Porat, A. B. (2022). Allocating AI product risk. Duke LJ, 72, 425.

22. Raso, F. A., Hilligoss, H., Krishnamurthy, V., Bavitz, C., & Kim, L. (2018). Artificial intelligence & human rights: Opportunities & risks. Berkman Klein Center Research Publication, (2018-6).

23. Azimjon Abdumo'min o'g'li (2022). Functions of a corporation's charter capital and issues of improving the related national legislation. Academic Research in Educational Sciences, 3(8), 109-113.

24. Schaber, P. C., Mazur, G. H., Wen, J., & Roeser, S. (2022). A "Strict Liability-Plus" Regime to Address Harms From Autonomous Vehicles. Rutgers UL Rev., 74, 1165.

25. Tankard, C. (2016). What the GDPR means for businesses. Network Security, 2016(6), 5-8.

26. Tikkinen-Piri, C., Rohunen, A., & Markkula, J. (2018). EU General Data Protection Regulation: Changes and implications for personal data collecting companies. Computer Law & Security Review, 34(1), 134-153.

27. Uber, J. G., Bareinboim, E., Helm, E., & Pearl, J. (2021). Causality for industrial AI: A tutorial on causal inference and effect identification. arXiv preprint arXiv:2102.11582.

28. Ibrohimov, A. A. M. O. G. (2023). Issues related to the payment of rent in lease relations. Oriental Renaissance: Innovative, Educational, Natural and Social Sciences, 3(2), 607-612.

29. Webster, J. & Bakken, E. (2019). Augmented Intelligence: How Humans and Machines Are Working Together. Oxford University Press.

30. Wieringa, M. (2020). What to account for when accounting for algorithms: a systematic literature review on algorithmic accountability. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 1-18).

31. Azimjon, I. (2022). Damage as a necessary condition for the civil-law liability of the participants and governing bodies of a legal entity.

32. Xiang, Y. (2022). On the Applicability and Limit of Liability Rules in the Era of Artificial Intelligence: Take Autonomous Vehicles as an Example. U. Pa. Asian L. Rev., 17, 69.

33. Yeung, K. (2019). Regulation by blockchain: the emerging battle for supremacy between the code of law and code as law. Modern Law Review, 82(2), 207-239.

34. Zhang, J., & Bareinboim, E. (2022). Fairness in decision-making—the causal explanation formula. In Proceedings of AAAI (Vol. 36, No. 1, pp. 560-568).
