
Legal Issues in the Digital Age. 2024. Vol. 5. No. 1. Вопросы права в цифровую эпоху. 2024. Т. 5. № 1.

Research article. UDC 347

DOI:10.17323/2713-2749.2024.1.37.56

Artificial Intelligence in the French Law of 2024

Alain Duflot

University Jean Moulin, Avenue des Frères Lumière, 69008 Lyon, France, aduflot@arrue-associes.com, ORCID 0009-0000-0112-4564

Abstract

The use of artificial intelligence in France is growing and intensifying in many areas, particularly in the field of justice. French President Macron has made it one of his government's priorities to build on these assets and make France a world leader in AI. In parallel, the French government has made efforts to anticipate the regulatory challenges related to AI, notably through the "National Strategy for Artificial Intelligence" launched as part of "France 2030". As an illustration of the developments in artificial intelligence and its specific regulation, the French parliament passed a law to ensure the proper conduct of the 2024 Olympic and Paralympic Games (Law N° 2023-380 of 19.05.2023). The law permits the use of the experimental "augmented video-protection" technology, which uses cameras equipped with AI systems to detect and report specific events in real time. French regulation is already underway in the area of justice and must continue in the fields of AI liability and intellectual property. AI is a source of fears, particularly regarding respect for human rights, and requires an elaborate legal and ethical environment that is flexible enough not to slow down the development of AI. The EU AI Liability Directive complements the Artificial Intelligence Act by introducing a new liability regime that ensures legal certainty, enhances consumer trust in AI, and assists consumers' liability claims for damage caused by AI-enabled products and services. However, the new European AI Act does not resolve all issues, and some therefore need to be addressed nationally.

Keywords

digital law; AI Act; predictive justice; liability; intellectual property; automated processing of personal data; machine learning; generative AI; DataJust; foundation models; GDPR; Justice Reform Act.

For citation: Duflot A. (2024) Artificial Intelligence in the French Law of 2024. Legal Issues in the Digital Age, vol. 5, no. 1, pp. 37-56. DOI:10.17323/2713-2749.2024.1.37.56

© Duflot A., 2024

This work is licensed under a Creative Commons Attribution 4.0 International License

Introduction

France, like other countries, has embarked on the path of AI, and the COVID-19 pandemic has accelerated both reflection on AI and its use in many areas [Haenlein M., Kaplan A., 2019: 5-14]. French President Macron has made it one of his government's priorities to build on these assets and make France a world leader in AI. In parallel, the French government has made efforts to anticipate the regulatory challenges related to AI [Villani C., 2018: 5-25].

The French AI strategy dates from 2017, the year in which the first Macron government began to reflect on its development. Named the National Strategy for Artificial Intelligence, it was launched as part of "France 2030". This economic support plan has a budget of €100 billion, of which €40 billion is partly financed by the European plan, divided into two phases between 2018 and 2025. The strategy aims to preserve and consolidate the country's economic, technological and political sovereignty in the field of AI. As part of "France 2030", the strategy is endowed with €1.5 billion for the development of a national policy in this area.1

France's ambitions in artificial intelligence continue in 2024. The country launched the French Generative Artificial Intelligence Committee on September 19, 2023, demonstrating its commitment to the development and exploration of AI. In addition, the French Minister of Culture has formed a group of experts composed of professors specializing in intellectual property, digital law, and economic growth and innovation, as well as authors, artists, and entrepreneurs, to study the impact of AI on the cultural sector. These experts will examine various aspects, including the potential of AI to enhance creativity and access to culture, the evolution of the legal framework to protect copyright, the promotion of French and Francophone cultural works and content, and the impact of AI on creative professions and education. Besides, in 2024 France will host the next Summit on the Security of Artificial Intelligence (AI Safety Summit), which testifies to its active involvement in the regulation and security of AI.

Additionally, the French government has been experimenting with AI in certain aspects of governance. In particular, the Courts of Appeal of Rennes and Douai tested predictive justice software on various appeal cases in 2017. The results were not encouraging2 [Benesty M., 2017].

1 Vignaud M. (2021) France 2030: grandes ambitions, petits effets? Le Point, 18 octobre.

2 Coustet T. L'utilisation de l'outil Predictice déçoit la cour d'appel de Rennes. Dalloz actualité, 2017, 16 oct.; Prevost S., Sirinelli P. Madame Irma, Magistrat. Dalloz IP/IT: droit de la propriété intellectuelle et du numérique, N° 11, 2017, p. 557.

France, however, has not yet adopted comprehensive legislation on AI and algorithms because, like all other European Union states, it was waiting for the new European AI regulatory framework: AI is one of the three major priorities for the EU, which wants to become a reference and a world power in this strategic area [Bensamoun A., 2018: 122].

On 8 December 2023 the European Parliament and the Council of the European Union reached an agreement on the text of what will be the world's first law on artificial intelligence (the AI Act). The objectives of this regulatory framework are to:

ensure that AI systems placed on the market are safe and comply with existing fundamental rights legislation, EU values, the rule of law and environmental sustainability;

ensure legal certainty to facilitate investment and innovation in the field of AI;

strengthen governance and the effective enforcement of existing legislation on fundamental rights and safety requirements applicable to AI systems;

facilitate the development of a single market for legal and safe AI applications and prevent market fragmentation.

More specifically, the proposed regulation establishes: the prohibition of certain practices; specific requirements for high-risk AI systems; and harmonized transparency rules applicable to AI systems designed to interact with people, to emotion recognition and biometric categorization systems, and to generative AI systems used to generate or manipulate image, audio or video content.

Consistency is ensured with the European Union Charter of Fundamental Rights, but also with European Union secondary legislation on data protection (GDPR), consumer protection, non-discrimination and gender equality. The proposal complements existing non-discrimination law with requirements that aim to minimize the risk of algorithmic discrimination, including obligations for testing, risk management, documentation and human monitoring throughout the lifecycle of AI systems [Musch S., Borrelli M., 2023].

This very flawed text (the AI Act) is the result of a compromise between those European states that want strict regulation of AI and other countries, such as France, Germany and Italy, intent on protecting very successful European start-ups like Mistral AI and Aleph Alpha [Bensamoun A., Loiseau G., 2019: 38-53]. As a result, the text only concerns high-risk AI systems.3

3 Bertuzzi L. Spanish presidency pitches obligations for foundation models in EU's AI law. Euractiv, 2023, 7 November.

As an illustration of the developments in artificial intelligence and its specific regulation, the French National Assembly passed a law to ensure the proper conduct of the 2024 Olympic and Paralympic Games (Law N° 2023-380 of 19.05.2023). This law permits the use of the experimental "augmented video-protection" technology, which uses cameras equipped with AI systems to detect and report specific events in real time.4

The modalities and safeguards of this system were further specified by a French decree published in August 2023, which states that augmented cameras may only be used to record predetermined events in real time, and that such recordings may only be viewed by authorized agents. The decree therefore provides for:

a restrictive list of predetermined events, for example abandoned objects, use of weapons, failure to respect the common direction of traffic, crossing a sensitive or forbidden area, crowd movements, excessive density of people, starting fires;

a ban on the use of biometric identification systems;

a description of how processing will be carried out during the design and operation phases;

cooperation of the French national cybersecurity agency (ANSSI), which must be "involved in the choice of processing to ensure compliance with cyber-security requirements".

It is noteworthy that augmented cameras are one of the CNIL's priority control themes for 2023, which may lead to investigations into the practices of companies specializing in this field.5

The risks arising from the use of this technology are numerous. Algorithmic surveillance will be used "during the period of the Olympic and Paralympic Games" to provide more security and to detect, in real time, events that present security risks. But the use of this technology has been strongly criticized by several international organizations and associations for the defense of rights in digital spaces. Despite the government's claims that it will not use biometric data to identify people, the algorithms will still assess people's behaviours in public spaces using body data, which is part of personal data. There is therefore an undeniable risk to the right to privacy.6 The CNIL (the French Data Protection Authority) has recognized that France is at a turning point with the arrival of artificial intelligence in the processing of images related to law enforcement and security. In addition, the use of algorithmic video surveillance points towards a more security-oriented state, by giving more powers to the police. One may also fear a certain lack of responsibility on the part of the State in the event of a false arrest, for example, with the blame put on the algorithm, since the system detects actions autonomously without prior human intervention. In addition to the risk of misidentification of a person, the use of these processes also generates a risk of discrimination. The problem has already been noted in the United States, in cases where algorithms confused African Americans with Asians.

4 Lequesne G. La fin de l'anonymat: reconnaissance faciale et droit à la vie privée. Dalloz IP/IT, 2021, p. 309.

5 Commission Nationale de l'Informatique et des Libertés. Comment permettre à l'Homme de garder la main. Rapport sur les enjeux éthiques des algorithmes et de l'intelligence artificielle, 2017, 15 déc., pp. 16-19.

6 Seramour C. L'Assemblée nationale adopte la vidéosurveillance algorithmique aux JO 2024. Le Monde, 2023, 24 mars.

In addition, the Law of 19 May 2023 is not limited to the Olympic and Paralympic Games planned in Paris in 2024. The Games may serve as a pretext for implementing these technologies, since the period of use of algorithmic video surveillance is set to extend until 2025, that is, one year after the end of the Games. According to the decree of October 11, 2023, a committee will be responsible for issuing a report specifying the advantages and disadvantages of this experiment.

Despite this search for balance, algorithmic video surveillance remains suspect in the eyes of organizations that defend rights in digital spaces. By making it possible to detect and curb crowd movements, it can also, from a legal point of view, infringe on the right to freedom of assembly and association in public spaces.

For such reasons, the National Assembly is already discussing an ethical Charter on AI that could be incorporated into the Preamble to the French Constitution and would thus have a value greater than statute law, equal to that of the Universal Declaration of Human Rights.

The proposal is to enshrine in constitutional law that an AI cannot have a legal personality. The notion of artificial intelligence is understood in the charter as "an algorithm that evolves in its structure and learns beyond its initial programming". It sets out principles that AI must respect (such as respecting human orders) and includes requirements for audits and monitoring the evolution of AI towards decision-making autonomy. However, the proposal has not been incorporated into the Constitution and no longer seems to be under consideration.

Considering that the protection of personal data is a major challenge for the design and use of these tools, the CNIL has published its action plan on artificial intelligence; its aims include, among other things, framing the development of generative AI.

Faced with challenges related to the protection of freedoms, the acceleration of AI and recent developments in generative AI, the regulation of artificial intelligence is a main focus of the CNIL's action.

The action plan (2023-2024) is structured around four objectives: understanding the functioning of AI systems and their impact on citizens; promoting and regulating the development of privacy-friendly AI that respects personal data, including the application of the GDPR to AI, especially for the training of generative AI; supporting and collaborating with innovative actors in the AI ecosystem in France and Europe; and auditing and controlling AI systems to protect individuals.

Human priority over AI must also be asserted in the field of intellectual property in AI systems and their results.

French regulation is already underway in the area of justice (1) and must continue in the fields of AI liability (2) and intellectual property (3).

1. French Justice and AI

France has adopted a digital transformation plan that aimed to develop a fully functional digital public justice service by 2022, enabling (among other things) users to follow cases online. It is the citizen who is well and truly at the heart of the project: the transformation is a supplementary means of access to justice, not a substitute for traditional modes of referring cases to courts [Goodman J., 2016].

Digital availability of judicial decisions will also enable the deployment of artificial intelligence. The project is an opportunity both for citizens and law professionals, who will have easier access to case law, and for judges, as artificial intelligence will act as a decision support tool without depriving them of their role [Garapon A., 2018: 22-57].

These goals must be implemented in full respect of private life as guaranteed by Article 8 of the European Convention on Human Rights. In decisions that are published online, any content that might enable identification of the individuals concerned will have to be deleted. Many other principles will have to guide the development of artificial intelligence; those identified by the European Commission for the Efficiency of Justice in its Ethical Charter include respect for fundamental rights, non-discrimination, neutrality, transparency, user control, hosting security and the controlled use of predictive justice [Ferrié S., 2018: 502].

A fundamental debate is needed to critically assess what role, if any, AI tools should play in our justice systems. Increasing access to justice by reducing the cost of judicial proceedings through the use of AI tools may sound like a desirable outcome, but there is little value in increasing access to justice if the quality of justice is undermined in doing so. Therefore, AI tools must be properly adapted to the justice environment, taking into account the principles and procedural architecture underpinning judicial proceedings [Christian B., 2020].

To this end, the following main issues should be considered by courts. The possibility to identify the use of AI: all parties involved in a judicial process should always be able to identify, within a judicial decision, the elements resulting from the implementation of an AI tool. There should be a strict separation between data or results derived from the operation of an AI system and other data in the dispute.

Non-delegation of the judge's decision-making power: the role of AI tools should be defined in such a way that their use does not interfere with the judge's decision-making power. Under no circumstances should the judge delegate all or part of his or her decision-making power to an AI tool. AI tools should neither limit nor regulate the judge's decision-making power, for example in the context of an automated decision. When the judge's decision is partially based on elements resulting from the implementation of an AI tool, this should be properly justified and explained in the judgement.

The possibility to verify the data input and the reasoning of the AI tool: in cases where the decision is likely to be based, in whole or in part, on the data or outcomes the tool provides, it must remain possible to verify them. As a result, "learning software" should only be used to the extent that it is still possible to verify how the machine reached the proposed result and to distinguish the elements resulting from the use of AI from the judge's personal reflection.7

The possibility of discussing and contesting AI outcomes: the parties to the litigation should have the opportunity to discuss the data and conclusions deriving from an automated system. Therefore, the deployment of AI should always be carried out outside the deliberation phase and with a reasonable time for discussion by the parties.

In a startling intervention that seeks to limit the emerging litigation analytics and prediction sector, the French Government has banned the publication of statistical information about judges' decisions — with a five-year prison sentence set as the maximum punishment for anyone who breaks the new law.8

7 Ortega P., Maini V. DeepMind safety team. Medium, 2018, 27 September. Available at: https://medium.com/ (accessed: 12.04.2022)

8 Articles 226-18, 226-24 and 226-31 of the Code pénal.

The new Law of 23 March 2019, or the Justice Reform Act, is aimed at preventing anyone — but especially LegalTech companies focused on litigation prediction and analytics — from publicly revealing the pattern of judges' behaviours in relation to court decisions.

A key passage, Article 33 of the Justice Reform Act, now provides that: 'The identity data of magistrates and members of the judiciary cannot be reused with the purpose or effect of evaluating, analyzing, comparing or predicting their actual or alleged professional practices.'

This is the first example of such a ban anywhere in the world. It is therefore forbidden to use the identity of the judges to model how certain judges behave in relation to particular types of legal matter or argument, or how they compare to other judges.

One study, for example, showed that judgements handed down in the morning were more favourable to the accused. With AI, it may become possible to know what type of evidence or argument works better before a given judge. Another study (carried out within the framework of the Toulouse School of Economics) showed that in the criminal field, sentences were less severe when the judgement was handed down on the defendant's birthday. This "anniversary rebate" amounts to between 1 and 3% in the decisions of the French criminal courts. It can be as high as 15% in the United States, among Louisiana state judges, and it is at its maximum when the accused appears in person rather than being tried in absentia. These examples show that the analysis of court data by AI programmes is likely to reveal overlooked patterns, knowledge of which could be used to improve the functioning of justice. Indeed, in the above case, the strong difference between French and American judges is probably explained by the fact that Louisiana judges are not professional magistrates and have not received training to neutralize or counterbalance cognitive biases and affect [Chen V., Philippe A., 2023].
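To make the nature of such analyses concrete, the following is a minimal sketch, on invented toy data with assumed column names, of how an aggregate "birthday rebate" could be computed from a table of decisions; the studies cited above of course work on full court databases with proper statistical controls.

```python
# Toy sketch of a "birthday rebate" estimate. The numbers and column names
# are invented for illustration; real studies control for many confounders.
import pandas as pd

decisions = pd.DataFrame({
    "sentence_months":  [12, 10, 14, 9, 11, 8, 13, 10],
    "birthday_hearing": [False, True, False, True, False, True, False, True],
})

# Compare mean sentences on defendants' birthdays vs. all other days.
means = decisions.groupby("birthday_hearing")["sentence_months"].mean()
rebate = 1 - means[True] / means[False]
print(f"Estimated birthday rebate: {rebate:.1%}")
```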

Such analysis is now forbidden in France. The law has been criticized and called a complete shame for French democracy. But the legality of the prohibition has yet to be tested. As the criminalization of judicial behaviour research is clearly an interference with free speech, the question is whether it also violates human rights law. If we take as a point of departure the right to freedom of expression in Article 10 of the European Convention on Human Rights, France must demonstrate that the prohibition has a legitimate aim, is necessary, and is balanced in its impact. We are highly doubtful that the law meets these standard requirements in a proportionality test.

By providing a legal framework for the anonymization of magistrates, the law clearly runs counter to the position of the first president of the Court of Cassation and the first presidents of the courts of appeal, who argue that this anonymization is contrary to the principle that the judge dispenses justice in the name of the French people, and that the assessment of risks to the safety of judges was too delicate to carry out and to justify.

AI nevertheless entered French justice through a decree of 27 March 2020 concerning the automated processing of personal data, known as the DataJust decree, adopted in part to respond to the claims of the many victims of COVID-19 who might want to hold health services or administrators liable for mismanaging the consequences of the pandemic.9 The decree is intended to provide courts and administrations with a scale of compensation and supporting documentation for reaching judgements, and to use AI analysis to assess the impact of laws on compensation amounts in order to consider reforms where necessary.10 This data processing is made possible by the Law for a Digital Republic of 7 October 2016, which authorizes the publication of anonymized court decisions in open data [Prévost S., 2016: 2-9].

The project has, however, been abandoned by the Ministry of Justice since 2022. This failure is partly due to the specific form of court decisions: while they do not suffer from ambiguity when read by a human, their form and syntax are too particular for the usual algorithms to extract the relevant information. A decision-support tool would therefore first require drafting rules for court decisions that standardize the essential data (the structure of the decision; the terms used) so that software can detect it without risk of error and learn from that detection.

This project also met with significant criticism from judges, lawyers and victims' associations who feared that compensation would be too standardized to the detriment of complex individual situations.

In France, entrusting the Court of Cassation with the development of its own algorithm allows the State to retain its prerogatives. Chantal Arens, first President of the Court of Cassation, has said that the Court will be attentive to the implementation of control mechanisms and to "the support of judges". She assures that "the risks of errors are well identified", following the recommendations of the Cadiet report.11

9 Prevost S. Justice prédictive et dommage corporel: perspectives critiques. Gaz. Pal. 2018. 30 janvier, N° 312 b3, pp. 43-45.

10 Dufour O. Qui a peur du décret «DataJust»? Actualités juridiques, 2020. Available at: https://www.actu-juridique.fr/sante-droit-medical/qui-a-peur-du-decret-data-just/ (accessed: 16.04.2023)

11 L. Cadiet (dir.) L'Open data des décisions de justice. Rapport au Garde des sceaux. 2018. La documentation française, pp. 3-19.

It is essential that AI does not deliver court decisions; it must only propose solutions. This technology is "a remedy for the slowness of justice" and promotes access to justice and information. However, it should not be given a "performative use" that would push judges to make the same decisions over and over again and call into question the independence of the judge. It is up to the State to guarantee the impartiality of the algorithms used. The role of public authorities is to control LegalTech that can affect our values.

In this respect, the creation of a public and independent authority to regulate the use of algorithms to prevent any excesses of "predictive" justice would be an additional and essential guarantee.

To illustrate a successful French AI project, we can mention the creation of the digital labour code (code du travail numérique).

Announced by Article 1 of Ordinance No. 2017-1387 of 22 September 2017 on the predictability and security of employment relations, the purpose of the Digital Labour Code is, according to the law, to allow, "in response to a request from an employer or an employee on his or her legal situation, access to legislative and regulatory provisions as well as to contractual stipulations, in particular of branch, undertaking and establishment, subject to their publication, which are applicable to it".

The tool is intended directly for the public, and not for legal professionals, to enable them to know their labour rights in an easily accessible and simple to understand way.

The free-language query tool, on the other hand, is genuinely based on AI techniques, since it involves matching a (preferably limited) set of legal texts to a situation described in everyday language. It is therefore not a keyword-based query system, as the user is not expected to have a precise command of legal vocabulary.
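As an illustration, here is a minimal sketch of one way such free-language matching can be built, using sentence embeddings and cosine similarity. The model name and the provision snippets are assumptions for the purpose of the example; the article does not describe the tool's actual architecture.

```python
# Sketch of free-language retrieval over legal texts (not keyword search).
# The embedding model and sample provisions are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

provisions = [
    "Article L3121-27: the statutory working time is 35 hours per week.",
    "Article L1234-1: notice periods applicable in case of dismissal.",
    "Article L1225-17: duration of maternity leave.",
]
corpus = model.encode(provisions, convert_to_tensor=True)

def find_provisions(question: str, top_k: int = 2):
    """Rank provisions by semantic similarity to a plain-language question."""
    query = model.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(query, corpus, top_k=top_k)[0]
    return [(provisions[h["corpus_id"]], round(float(h["score"]), 2)) for h in hits]

# The user needs no legal vocabulary; similarity does the mapping.
print(find_provisions("How many hours am I supposed to work each week?"))
```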

This experience is therefore an example of the successful use of AI in legal matters. It is not a question of providing the decision (and even less of predicting it) but, more modestly, of giving all litigants access to the texts applicable to their situation. An important feature of this tool is that it has legal value in itself. Users can rely on the answers provided by this engine before the legal authorities to which their case may subsequently be presented. In concrete terms, if the answer given by the Digital Labour Code is incorrect, a user in good faith can invoke it against his or her interlocutor, whether between private persons or between private persons and the administration, which gives the answer a greater force than that of simple legal information. The State therefore assumes its own responsibility in the event of incorrect answers.

Since its creation on 1 January 2020, the Digital Labour Code has had a very positive record: more than 22 million visits, more than 2 million searches, and more than 18,000 referenced content items.

There are therefore areas of justice that can naturally be entrusted to AI because they require simple automation, and it would be a shame to forgo the effectiveness of AI in this area, where it can be put at the service of judges so that they can properly perform their functions, or even so that certain functions are simply carried out at all. For AI to enter these areas, it is essential to determine upstream a simple, circumscribed objective and an adapted AI methodology. The open data project is a particularly successful example. To make court decisions publicly available, as professionals have requested for many years, in a manner that is transparent, respectful of individual rights and free of charge, it was necessary to anonymize decisions effectively.
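A minimal sketch of the kind of pseudonymization step this involves is shown below, assuming an off-the-shelf French NER model as a stand-in; the Court of Cassation's actual production pipeline is not described in this article.

```python
# Sketch: replace person names in a decision with a neutral placeholder.
# spaCy's French model is an assumed stand-in for the production system.
import spacy

nlp = spacy.load("fr_core_news_md")  # pretrained French NER model

def anonymize(text: str) -> str:
    """Redact PERSON entities, leaving the rest of the decision intact."""
    doc = nlp(text)
    out, last = [], 0
    for ent in doc.ents:
        if ent.label_ == "PER":
            out.append(text[last:ent.start_char])
            out.append("[X]")
            last = ent.end_char
    out.append(text[last:])
    return "".join(out)

print(anonymize("Madame Martin a interjeté appel du jugement du 3 mai 2021."))
```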

A second field for AI regulation is the question of liability.

2. Liability and AI

From a legal point of view, the new problems that are emerging with AI are of the same nature as in the past [Gautrais V., Moyse P., 2017: 3—39]. Whether the decision is taken by a machine or whether the machine is a decision-making aid of the nominally competent person, the question of liability and its attribution arises. In both cases, it is the result of the process, legal act or legal fact, that the legal system seizes. In both cases, tensions arise between law and technology, between legal informatics, IT law and liability law [Borghetti J.-S., 2019: 9—11].

France, a member state of the EU, must implement European principles in the field.

The European Parliament believes that "there is no need for a complete revision of (...) liability regimes" but only for "specific and coordinated adjustments".

The European Union proposes that liability should in principle rest with the AI system operator (both the frontend operator and the backend operator). For "high-risk autonomous AI-systems", it considers it "reasonable to set up a common strict liability regime" (no-fault liability). This is the approach adopted by the AI Liability Directive, which complements the Artificial Intelligence Act by introducing a new liability regime that ensures legal certainty, enhances consumer trust in AI, and assists consumers' liability claims for damage caused by AI-enabled products and services.

It applies to AI systems that are available on the EU market or operating within the EU market [Bensoussan A., Bensoussan J., 2022: 97].

In fact, the European Commission has on the one hand updated the existing 1985 Defective Products Directive and on the other created the new AI Liability Directive. These two directives complement each other.

The new version of the Defective Products Directive takes AI into consideration.

The proposal for a revised Directive reinforces the current rules, well established for almost 40 years (since Council Directive 85/374/EEC of 25 July 1985), which provide for no-fault liability of manufacturers and compensation for personal injury, damage to property or loss of data caused by defective products. It ensures fair and predictable rules for both businesses and consumers.

The proposed new Defective Products Directive modernizes product liability rules in the digital age, allowing for damage to be repaired when products such as "robots, drones or smart home systems are made unsafe by software updates, AI or digital services necessary for the operation of the product, as well as when manufacturers fail to remediate cybersecurity vulnerabilities." The text provides for a reduction in the burden of proof for victims in complex cases, "such as those involving pharmaceuticals, smart products or products using AI".

The contribution of the new AI Liability Directive completes the arsenal of protection for AI users. While the AI Regulation aims to prevent harm, the AI Liability Directive "establishes a safety net to obtain redress in the event of harm."

The objective of the AI Liability Directive is threefold:

establish uniform rules for access to information and reduction of the burden of proof regarding damage caused by AI systems;

introduce broader protection for victims (whether individuals or businesses);

promote the AI sector by strengthening safeguards.

It will harmonize certain rules for claims for damages outside the scope of the Defective Product Liability Directive, in cases where damage is caused by wrongful conduct (breaches of privacy, damage caused by safety issues, etc.).


More specifically, the AI Liability Directive complements the European civil liability framework, introducing specific rules for damage caused by AI systems, based on two main measures:

access to evidence held by companies or suppliers, when they use "high-risk" AI, as defined in the AI Regulation (Article 3);

the "presumption of causation", which will relieve victims of the obligation to explain in detail how the damage was caused by a specific fault or omission (Article 4).

Indeed, based on the observation that AI systems can be complex and opaque, making it difficult, if not impossible, for the victim to discharge the burden of proof, the European legislator considered that the liability regime must allow effective access to justice, resulting in access to reparation for the victim, in accordance with the Charter of Fundamental Rights of the European Union.

According to the European Commission, the new Directive is also in the interests of companies, which will be better able to anticipate how the existing liability rules will be applied and thus assess and ensure their exposure to liability risks. "This is particularly the case for companies operating cross-border, especially small and medium-sized enterprises (SMEs), which are among the most active in the AI sector."

The Directive will thus harmonize certain rules for claims for damages outside the scope of the Product Liability Directive in cases where damage is caused by wrongful conduct, for example privacy breaches or damage caused by security issues. The new rules will, for instance, make it easier to obtain redress if a person has been discriminated against during a recruitment process involving AI technology.

The AI Liability Directive simplifies the legal process for victims when it comes to proving that a person's fault has caused damage, by introducing two main elements. First, in circumstances where relevant fault has been established and a causal link to the performance of AI seems reasonably likely, the 'presumption of causation' will address the difficulties faced by victims when they have to explain in detail how harm was caused by a particular fault or omission, which can be particularly difficult when it comes to understanding and navigating complex AI systems. Secondly, victims will have more tools to seek redress in court, thanks to the introduction of a right of access to evidence from companies and suppliers, when high-risk AI systems are used.

The liability would cover "violations of the important legally protected rights" to life, health, physical integrity, and property. It should also set out the amounts and extent of compensation, as well as the limitation period.

Artificial intelligence can also be a threat to democratic debate, one example being the propagation of fake news during election periods [Marique E., Strowel A., 2019: 383-398]. The integrity of electoral processes, election campaigns and polling has been undermined in France as elsewhere, leading to the opening of criminal investigations in a number of countries. We must therefore remain extremely vigilant regarding opinion manipulation through the propagation of fake news, often by automated means. It is not a question of attacking freedom of expression but rather of preserving freedom of opinion. With this in mind, France enacted a law against the manipulation of information on 22 December 2018.12 Online platforms now have transparency obligations with regard to content containing sponsored information and the identity of sponsors where significant remuneration (100 euros) is involved. Platforms must also appoint a legal representative on French territory and make their algorithms public. Only the biggest platforms are concerned, i.e. those with over 5 million unique visitors a month. The law also institutes an emergency judicial procedure, known as the "référé anti-infox", an interim ruling to stop the deliberate dissemination of information seeking to undermine the fairness of an election.

When the matter is referred to him or her, the judge hearing the application for interim relief must assess, within 48 hours, whether this false information is disseminated "artificially or automatically" and "massively".

In its decision of 20 December 2018, the Constitutional Council specified that the judge may only stop the dissemination of information if its inaccurate or misleading nature is manifest and the risk of altering the sincerity of the vote is also manifest.

The French political system is built on many elections (municipal, regional, national), not to mention the European elections such as those of 2024, so that France is almost permanently in an electoral period allowing the use of this law. Last but not least is the question of intellectual property and AI.

3. Intellectual Property and AI

A specific human priority over AI must likewise be found in the field of intellectual property in AI systems and their results [Larrieu J., 2013: 125-133].

The attribution of copyright protection to artificial intelligence raises questions. Traditionally, a work protected by copyright is a so-called "original" work, originality being the fundamental criterion for protection. It is also said that the work must bear the imprint of the author's personality. Originality is defined in copyright law as the expression, however minimal, of the human spirit. Copyright is therefore not the right place to grant legal protection over the literary or artistic production of robots, whatever their degree of (artificial) intelligence.

12 Law N° 2018-1201 of 22 December 2018.

This is the principle adopted by the French Intellectual Property Code in its Article L 111-1: "The author of a work of the mind enjoys, by the mere fact of its creation, an exclusive intangible property right over that work, enforceable against all."

It is clear that only the natural or legal person behind the creation of the algorithms could hold intellectual property rights; the AI system, not being a legal person, is deprived of this right in an absolute and definitive way.13

Beyond creations resulting from artificial intelligence processing, two types of "AI creations" can be considered schematically. The first, computer-aided creations, are independent of the software used, with artificial intelligence acting only as a tool in a creative process supervised by a human being. The second, creations generated spontaneously by artificial intelligence, are the product of the software itself, without decisive human intervention, to the point that some believe that in this case it is essentially the programmer and the machine that generate the final work, or even consider that the artificial intelligence contains its own creative process.

In the case of AI-assisted creations where AI is used as a simple tool, it is possible to consider that the mark of the author's personal intervention remains essential. The creation could thus attain the status of a work and be protected by copyright for the benefit of the natural person at the origin of that work [Larrieu J., 2014: 11—43].

With regard to creations spontaneously generated by AI, those in favour of their protection by copyright are divided between those who believe that it is still possible to distinguish in these creations the mark of the subjectivity of the various stakeholders and those who argue for the adoption of an objective conception of the key concepts of copyright, and more particularly the notions of intellectual work and originality to bring these creations under copyright.

In these two cases, the characterization of originality will require a specific analysis of the creations in question, taking into account, depending on the chosen conception, the AI method used, the scope of its intervention, and the latitude left to the user or to whoever, for example, selected the "input" data, adjusted the processing settings or intervened in post-production.

13 Enser N. L'entrée dans le "Paradis" du droit d'auteur: pas sans un être humain à l'origine de la création! Dalloz actualité, 2023, 18 septembre.

As for intellectual property rights, the EU Parliament stressed the importance of having an effective system to further develop AI, including patents and new creative processes. Among the outstanding issues are the problems of determining who owns the intellectual property of something developed entirely by AI.

Accordingly, they suggest that this assessment focus on the impact and implications of AI "under the current system" of patent law, trade mark and design protection, copyright and related rights, including the applicability of the legal protection of databases and computer programmes, and the protection of undisclosed know-how and business information ("trade secrets") against their unlawful acquisition, use and disclosure.

Moreover, considering the development of AI, it is important "to distinguish between AI-assisted human creations and creations autonomously generated by AI". In this connection, "works autonomously produced by artificial agents and robots might not be eligible for copyright protection, in order to observe the principle of originality", which is tied to the human creative spirit and to respect and reward for the expression of human creativity.

On 12 September 2023 eight members of the National Assembly introduced a proposal (Proposed Legislation No. 1630) to amend the first book of the French Intellectual Property Code with respect to copyright. This legislative change has been proposed to address issues such as the use of copyright works in the development and operation of AI systems and the approach to authorship and copyright ownership of works generated by AI systems. Key aspects of the proposal include:

requiring the authorization of authors or right-holders of intellectual works protected by copyright for the incorporation and exploitation of their works by AI systems;

ensuring that, in cases where a work was generated by AI without direct human intervention, the only right-holders of such work are the author(s) or right-holders of the works that enabled its conception;

allowing certain collective copyright management organizations or other collective management organizations to represent right-holders and to collect fees relating to the exploitation of copyright work by AI systems;

requiring all AI-generated works to include the reference "work generated by AI" and the names of the authors of the works that enabled their creation;

imposing a tax on the operators of an AI system where a work was created by the system but the source works cannot be determined; this tax is intended to support the value of creation and is paid to the organization responsible for collective management.

However, the draft seems to lack nuance and understanding of the complexities inherent in generative AI.

The proposal, by requiring authors' permission for the integration of their works into AI systems, seems to ignore the technical reality of machine learning algorithms. These algorithms, especially deep neural networks, require large amounts of data to train. The requirement to obtain authorization for each integrated work could not only hinder technological development, but also pose insurmountable logistical challenges. In addition, this provision could be in contradiction with existing copyright exceptions, such as fair use or use for research purposes, provided for in the articles of the Intellectual Property Code.

Taxation attempts to provide a source of income for creators but is ill-suited to the complexity of AI technology. Taxation, for example, could be seen as a barrier to innovation and could deter companies from pursuing AI projects. In addition, the transparency required by this proposed law could be at odds with the trade secrets and intellectual property rights of the companies developing these technologies.

The new European legislation of the AI Act addresses the subject of copyright by establishing the principle of respect for copyright and the identification of artificial content. The issue of copyright in the AI Act has been the subject of many discussions between European countries and has led to a compromise. The stakes are high because it was necessary to find a balance that was difficult to achieve: to promote innovation and the use of artificial intelligence in Europe while preserving citizens' fundamental rights, in particular copyright.

Generative AIs must now ensure data compliance and copyright compliance, with clear identification of artificial content.

Creators of generative artificial intelligence models will have to comply with several obligations. First, and this is probably the most important one although its wording is vague, they must "make public a sufficiently detailed summary" of the content they use to train their algorithms.

This transparency will then allow for a right to remuneration. In other words, authors, screenwriters, writers, media and artists whose works have been used to train generative AI models could enter into negotiations to be paid.

Another cause for celebration for copyright holders is the obligation for AI companies to respect European copyright law. This may seem trivial, but it was not necessarily self-evident for companies located outside the EU. In particular, AI systems will have to comply with opt-out clauses, a right to object to the use of data by AI systems. The rule already existed, but it was not always respected; this is a way of reaffirming it. Common standards will now have to be defined, and it will not be easy.
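By way of illustration, one existing, if partial, mechanism for expressing such a reservation is the robots.txt convention. The sketch below, with an assumed crawler name and URLs, shows how a compliant training-data crawler might check for an opt-out before fetching a work; it is not a description of any standard mandated by the AI Act.

```python
# Sketch: a compliant crawler checking a robots.txt-style opt-out before
# fetching content for training. The bot name and URLs are illustrative.
from urllib.robotparser import RobotFileParser

robots = RobotFileParser()
robots.set_url("https://example.com/robots.txt")
robots.read()

page = "https://example.com/articles/some-work.html"
if robots.can_fetch("ExampleAITrainingBot", page):
    print("No opt-out expressed: fetching", page)
else:
    print("Opt-out detected: skipping", page)
```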

However, the formulas contained in the text are too imprecise to guarantee the effective protection of intellectual property rights.

Conclusion

Will robots replace judges? The fear of an automatic and dehumanized justice system often comes up in criticisms of artificial intelligence in France.

Foreign experiments are already using software to deliver justice, thereby relieving congestion in the courts and reducing costs. In the Canadian province of Ontario, a "virtual court" is responsible for settling disputes between neighbours or between employees and employers. In another Canadian province, Quebec, software is also used to settle small commercial disputes. In Estonia, a robot is soon expected to establish a person's guilt in "minor" disputes (less than 7,000 euros).

The risk of a "Netflix of law" is a concern. The common law lends itself particularly well to the promises of algorithmic justice but, transposed to France, it could lead to a considerable impoverishment of French legal culture and reduced "room for manoeuvre for legal professionals".14

Ethical questions about the opacity of algorithms and possible biases in their analysis remain unanswered. In North America, lawyers are already denouncing racial bias in algorithms that penalize ethnic minorities.

However, the use of AI in justice can bring considerable benefits. Lawyers must adapt and ensure that ethical rules are respected. The subject matter is by nature evolving; it is at the heart of practice to continually adjust the rule to the concrete realities of the time.

Three issues now seem to guide the future of French justice when it relies on algorithms. First of all, legal certainty, which requires that digital tools be sufficiently reliable to form the basis for predictable decisions without undermining citizens' legitimate trust in public authorities. Secondly, there is the question of compensation for any damage caused by algorithms, through judicial review and appropriate compensation principles. Finally, there is the question of how far the review exercised by the judge, traditionally reluctant to enter into considerations of expertise or morality, should be deepened at the very moment when a regulatory conception based on preventive ethics is developing, one which does not give digital law the superior value it should have in order to frame all legal and judicial activity and to constitute an essential guarantee of the effectiveness of democracy.

14 Harroch J. Déployer une IA éthique sera l'enjeu du siècle qui vient. Le Monde, 2022, 30 décembre.

References

1. Benesty M. (2017) L'open data et l'open source, des soutiens nécessaires à une justice prédictive fiable? Journal of Open Access to Law, vol. 5, no. 1, pp. 1-11.

2. Bensamoun A. (2018) Stratégie européenne sur l'intelligence artificielle: toujours à la mode éthique. Paris: Dalloz, p. 122.

3. Bensoussan A., Bensoussan J. (2022) Harmoniser les règles civiles de responsabilité en matière d'IA en Europe. Revue Lamy droit de l'immatériel, novembre, p. 97.

4. Bensamoun A., Loiseau G. (dir.) (2019) Droit de l'intelligence artificielle. Paris: LGDJ, pp. 38-53.

5. Borghetti J.-S. (2019) Civil liability for Artificial Intelligence: what should its basis be. Romanian Journal of Society and Politics, no. 5, pp. 9-11.

6. Cadiet L. (dir.) (2018) L'Open data des décisions de justice. Rapport au Garde des sceaux. La documentation française, pp. 3-19.

7. Chen V., Philippe A. (2023) Clash of norms, judicial leniency on defendant birthdays. Journal of Economic Behavior & Organization, vol. 211, July, pp. 324-344.

8. Christian B. (2020) The alignment problem: machine learning and human values. New York: W.W. Norton, 476 p.

9. Ferrié S. (2018) Intelligence artificielle: les algorithmes à l'épreuve du droit au procès équitable. Journal of Community Publishing Group, no. 11, p. 502.

10. Garapon A. (2018) La justice digitale. Paris: Presses universitaires de France, pp. 22-57.

11. Gautrais V., Moyse P. (2017) Droit et machine. Montréal: Éditions Thémis, pp. 3-39.

12. Goodman J. (2016) Robots in law: how AI is transforming legal services. London: Ark Group, 148 p.

13. Haenlein M., Kaplan A. (2019) A brief history of artificial intelligence: the past, present and future of artificial intelligence. California Management Review, no. 4, pp. 5-14.

14. Larrieu J. (2013) La propriété intellectuelle et les robots. Journal International de Bioéthique, vol. 24, no. 4, pp. 125-133.

15. Larrieu J. (2014) Le robot et le droit d'auteur. In: Mélanges en l'honneur d'André Lucas. Paris: LexisNexis, pp. 11-43.

16. Marique E., Strowel A. (2019) La régulation des fake news et avis factices sur les plateformes. Revue internationale de droit économique, t. XXXIII, no. 3, pp. 383-398.

17. Musch S., Borrelli M., Kerrigan C. (2023) The EU AI Act as global artificial intelligence regulation. August 23. Available at: https://ssrn.com/abstract=4549261 or http://dx.doi.org/10.2139/ssrn.4549261 (accessed: 15.01.2024)

18. Prévost S. (2016) Loi pour une République numérique: décryptage. Paris: Dalloz, pp. 2-9.

19. Villani C. (2018) Donner un sens à l'intelligence artificielle. Rapport au Premier ministre. La documentation française, pp. 5-25.

Information about the author:

A. Duflot — Master of Law, Lecturer.

The article was submitted to the editorial office 20.02.2024; approved after reviewing 04.03.2024; accepted for publication 04.03.2024.
