
LIMITATIONS OF LEVERAGING AI TECHNOLOGIES FOR COURT DECISIONS PREDICTION

Kaliazin V.,

Docent of Moscow City Teacher Training University,

Kaliazina N.

MSc in Business Information Systems, Corvinus University Budapest

Abstract

The use of AI for court predictions has been widely investigated in recent years. However, there are still doubts about the use of predictive analytics in law. While AI promises to significantly improve the work of legal professionals in many ways, there are major concerns about how beneficial it is for legal forecasting. We identify five main shortfalls that hold regardless of the subject area; ordered from the more practical to the more philosophical, they are challenges related to data, model transparency, AI objectivity, the nature of law, and AI legitimacy.

Keywords: Artificial Intelligence, judicial decisions, court decisions prediction, case forecasting, AI limitations.

Introduction

Court decision prediction is the process of making a statement about a court's verdict, usually in binary terms (affirm or reverse). The first discussions of using computer technologies for this purpose go back to the 1960s, when a pioneer researcher in the field, Lawlor, surveyed the state of studies on the prediction of judicial decisions and applied his own prediction method to the United States Supreme Court (1963).

Today, we still debate how technologies, and AI in particular, can be used in the legal industry and for court decision forecasting. Researchers have learned how to use the increased volume of observational data to build models that can assist people in their daily routine, or even in more sophisticated tasks. These tools allow case identification, pattern extraction, and estimation of the chances of winning litigation. They help us understand judicial decision-making and identify areas for its improvement.

However, some experts have expressed doubts about the use of predictive analytics in law. Limitations of this approach were discussed by Devins, Felin, Kauffman, and Koppl (2017). They noted that predictive models are mostly based on measured dimensions, neglecting the unmeasured ones without knowing whether they are meaningful or not. Additional weaknesses of the approach concern the big data design paradigm and the risk assessment model. Overall, there is a trend towards overestimating the advantages of big data and related technologies, neglecting that they amount, primarily, to an algorithm that cannot be updated without human interaction and, moreover, cannot go beyond its own boundaries. The legal system, by contrast, is abstract and value-oriented; it is constantly developing and is interpreted in different ways by different judges.

This paper identifies five main constraints on using AI for forecasting in the legal sector. They are listed and discussed below.

AI & Data

Determining outcomes of legal cases is demanding, time-consuming, and tedious. Even though there are many publicly available databases of case records, it is not an easy task to aggregate all this data in a systematic way. The first issue that arises is not data insufficiency but that the data is locked away and difficult to access. Court records must be found, read, and classified according to the appropriate branch of law. After that, the data must be cleaned and transformed, with categorical variables converted into binary ones that are easy to analyse and build into a model. During this process, some valuable insights may be lost. Moreover, the problem of missing data can also occur: historical records may omit values that would have a significant influence on the future model. Secondly, a case itself must be defined. Vital case-related information is often not part of the court record. For instance, official court documents rarely indicate whether a monetary settlement has been reached and, if so, on what terms. Instead, the court records simply state that the court affirmed or dismissed the case; why the case was dismissed, or under what conditions it was terminated, is rarely revealed (Harris, Peeples & Metzloff, 2008).
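To make the preprocessing step concrete, here is a minimal sketch of the cleaning and encoding work described above, assuming the pandas library; the court-record columns and values are invented for illustration only.

```python
# A minimal sketch of the preprocessing step described above, assuming a
# hypothetical pandas DataFrame of court records; column names are invented.
import pandas as pd

records = pd.DataFrame({
    "branch_of_law": ["tort", "contract", None, "tort"],
    "judge_experience_years": [12, None, 7, 23],
    "outcome": ["affirm", "reverse", "affirm", "affirm"],
})

# Missing values must be handled explicitly: silently dropping rows may
# discard exactly the cases that would influence the model the most.
print(records.isna().sum())

# Convert categorical variables into binary (one-hot) columns that are easy
# to analyse and feed into a model; nuance in the original text is lost here.
features = pd.get_dummies(records[["branch_of_law"]], dummy_na=True)
target = (records["outcome"] == "affirm").astype(int)  # binary verdict label
print(features.head())
```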

However, if one is determined to find data that supports one's initial hypothesis, one will most probably find it; this is another data-related issue. Many psychological and perceptual experiments show how expectations or initial impressions lead people to seek, perceive, and find evidence that supports their initial ideas (Awh, Belopolsky & Theeuwes, 2012). The fourth downside is the opposite of the previous ones: not missing data, but a large, all-encompassing input data set. As the volume of data increases, the number of correlations expands dramatically too. Some of them uncover previously hidden insights, but others are simply useless. This forces us to select a limited number of features and, therefore, to simplify the model. Eventually, we face a data dilemma: on the one hand, the data is too limited to observe all possible correlations; on the other, there is so much data that we must reduce complexity while accepting the risk of missing important dependencies. How to find the golden mean is a question we still cannot answer.
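The correlation side of this dilemma can be illustrated with a small simulation. The sketch below, assuming NumPy, generates purely random features and counts how many of them nonetheless "correlate" with a random verdict as the feature count grows; all numbers are illustrative, not from the article.

```python
# As the number of candidate features grows, purely random data starts to
# show seemingly significant correlations with the outcome.
import numpy as np

rng = np.random.default_rng(0)
n_cases = 200
outcome = rng.integers(0, 2, size=n_cases)  # random binary verdicts

for n_features in (10, 100, 1000):
    X = rng.normal(size=(n_cases, n_features))  # random, meaningless features
    corrs = np.abs([np.corrcoef(X[:, j], outcome)[0, 1] for j in range(n_features)])
    # Count features whose |correlation| exceeds an arbitrary 0.15 threshold.
    print(n_features, "features ->", int((corrs > 0.15).sum()), "spurious hits")
```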

AI & Model transparency

The use of AI in the legal industry raises questions about the fairness of decisions made by machines. Most AI-based programs are locked in "black boxes". The lack of transparency can cause a lack of trust in the models. Neural networks and other machine learning algorithms are designed to work like a human brain and thus, by their very nature, cannot be transparent. The process is hidden and constantly changing: the models create their own connections and correlations rather than being explicitly programmed. This risks limiting the judge's ability to make a data-driven decision that can be fully explained, and the defender's ability to argue it logically while protecting his clients. While AI methods that allow tracking which input features lead to a specific outcome are gradually being developed, they still only show how certain variables are combined, not why.
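As an illustration of such tracking methods, the sketch below uses permutation importance from scikit-learn on synthetic data; the feature names are hypothetical. It ranks which inputs the model relies on, while the "why" remains hidden.

```python
# Post-hoc attribution: permutation importance reports WHICH inputs the
# model relies on, but not WHY they are combined. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                  # e.g. claim size, precedent count, delay
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # outcome driven by the first two features

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["claim_size", "precedents", "delay"], result.importances_mean):
    print(f"{name}: {score:.3f}")  # ranks the inputs; the "why" stays in the black box
```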

AI is expected to be explainable in domains such as healthcare or self-driving cars in the automotive industry, and the legal field is no exception. It is a question of the "white box", which offers total transparency, versus the "black box", which reveals nothing about how the algorithm works. The expectation is that predictive tools should offer a good level of transparency: an explanation of which case factors, and to what extent, play a role in the decisions made by legal professionals. Such transparency makes it possible to improve the training of legal professionals or to add restrictions to the system in order to avoid possible bias. One study revealed that people tend to scrutinise information more closely when their expectations are violated (Harvard Business Review, 2018). In an experiment conducted by René Kizilcec, a Stanford PhD student, three levels of transparency were tested: students were given a low, medium, or high level of explanation of how a grading algorithm works. While medium transparency increased trust significantly, high transparency, contrary to expectations, reduced trust to a level equal to or even lower than that of low transparency.

Another transparency issue is that it makes algorithms vulnerable. If the full code is released, defendants could adjust the description of the case facts, adding keywords to increase the chances of a favourable outcome.

Nowadays AI technology allows the automation of various tasks using highly complex algorithms. Some of them are so sophisticated that it is impossible to explain them in a simplified manner. Thus, a "white box" approach that reveals all the steps behind the model is often not feasible, but a "black box" is usually not acceptable either. The best approach may be to expose the basic information about the factors that determine algorithmic decisions and how their contribution is analysed, or to integrate multiple AI paradigms into a hybrid solution, combining them with more traditional techniques (Chowdhury & Sadek, 2012).
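A minimal sketch of such a hybrid design follows; the legal rule and the threshold are invented for illustration. An explicit, auditable rule handles the clear-cut part of the decision, while an opaque statistical score handles the rest.

```python
# Hybrid idea: wrap a statistical model with explicit, human-readable rules
# so at least part of the decision path is transparent and auditable.
from dataclasses import dataclass

@dataclass
class Case:
    statute_limit_expired: bool  # hard legal rule, fully explainable
    model_score: float           # opaque probability from a trained model

def predict(case: Case) -> str:
    # Traditional rule-based step: deterministic and auditable.
    if case.statute_limit_expired:
        return "dismiss (rule: limitation period expired)"
    # Statistical step: the score comes from a black-box model.
    return "likely affirm" if case.model_score > 0.5 else "likely reverse"

print(predict(Case(statute_limit_expired=False, model_score=0.73)))
```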

AI as an objective approach

It is commonly believed that AI will bring data-driven decisions into our lives, bringing anomalies to light while eliminating bias and human error. It may encourage impartial court proceedings by highlighting only those variables that should be crucial for a specific case or article of law.

However, this common belief in the objectivity of AI has not been examined in depth. Even though AI relies on mathematical calculations, it is still developed and controlled by humans, so there is a chance that AI may contain bias inherited from its creators. Another fundamental issue is that an AI model is only as fair as its input data sets. Since machine learning models are usually trained on real data, it is natural that real-world deviations are reflected in the final models. For example, researchers from ProPublica found that risk assessment scores are more favourable to white individuals than to black ones (ProPublica, 2016). This phenomenon is known as "machine" or "algorithmic" bias: regardless of the model used, any mathematical prediction relying on pattern recognition will exhibit the same kinds of bias present in the training data. In addition, AI bias is not limited to the data; it also includes model, learner, and system prejudice (Massachusetts Institute of Technology, 2017).
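The kind of audit behind such findings can be sketched in a few lines: compare the false positive rate of a risk model across groups. The data below is fabricated for illustration only.

```python
# Bias audit sketch: compare false positive rates across groups.
import numpy as np

group      = np.array(["a", "a", "a", "b", "b", "b", "b", "a"])
reoffended = np.array([0, 0, 1, 0, 0, 1, 0, 0])  # ground truth
high_risk  = np.array([0, 1, 1, 1, 1, 1, 0, 0])  # model's prediction

for g in ("a", "b"):
    mask = (group == g) & (reoffended == 0)  # people who did NOT reoffend
    fpr = high_risk[mask].mean()             # but were flagged high-risk anyway
    print(f"group {g}: false positive rate = {fpr:.2f}")
```

Equal overall accuracy can still hide unequal error rates of exactly this kind, which is why per-group metrics matter.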

One commonly overlooked source of AI bias is the human interpretation of algorithm results. Data always requires experts to assess the obtained values and insights; findings may be made solely on the basis of correlations while the importance of causation is downplayed. Because of this interdependence, it can be quite hard to determine why an AI system behaved anomalously in a way that hides bias the person who built it might have introduced earlier. Explaining the results, as well as the possible limitations of the solution, then becomes difficult too.

The human-in-the-loop model has its own imperfections. In a 2016 study, Chen, Moskowitz, and Shue found that judges are subject to the "gambler's fallacy": one starts to doubt the correctness of one's decisions after taking several similar ones in a row. For instance, after several consecutive decisions to grant asylum to migrants, a judge tends to deny the next application out of a belief that he has become too indulgent. Other anomalies that correlate with court sentences were listed in a paper by Chen aggregating scientific work in the field (2019). Among them are such arbitrary and unfair factors as the political situation, discriminatory factors (political views, race, masculinity), and others, including the defendant's birthday and name, the weather, courtroom temperature, shared biographies, and even football game outcomes. Judges may be partial, and their bias, combined with bias in case outcome predictions, could lead to even more unfair results.
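For illustration, such a streak effect could be measured in decision data roughly as follows; the decision sequence here is fabricated, not taken from the cited study.

```python
# Does the grant rate drop right after a streak of grants? (1 = grant, 0 = deny)
grants = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1]

after_streak, baseline = [], []
for i in range(2, len(grants)):
    if grants[i - 1] == 1 and grants[i - 2] == 1:  # two grants in a row
        after_streak.append(grants[i])
    else:
        baseline.append(grants[i])

print("grant rate after a streak:", sum(after_streak) / len(after_streak))
print("grant rate otherwise:     ", sum(baseline) / len(baseline))
```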

Static AI & Dynamic Law

A well-grounded definition of law as a metaphor was proposed by Devins, Felin, Kauffman, and Koppl (2017). They saw law as a dynamic field that is constantly evolving. As time passes, legal experts and judges may gradually interpret the law in different ways according to the political and historical situation, since it has a set of affordances, or possible uses and interpretations. Case law also changes over time as new precedents and new social contexts appear. Moreover, every piece of legislation and regulation is based on language, and language has a dual nature too: although it can be used to build logical systems, its essence is vague, contextual, and constantly developing.

The same holds for law: on the one hand, it is systematic and logical; on the other, it is still semantic, undefined, and built on compromise. AI, by contrast, is a set of precise algorithms that are empirical and deterministic. It is not always capable of mining new meanings in law, which provide new patterns of action and, therefore, new outcomes in the form of risk or reward, which in turn stimulate new legal adaptations. Furthermore, an AI model may not be complete enough to follow judges as they find inconsistencies between legal principles or react to unforeseen events. One example of such change is the law of surrogacy. When surrogate motherhood became possible, the justice system could not agree on who "the mother" of the child was (Devins, Felin, Kauffman, & Koppl, 2017). To distinguish the "genetic mother" from the "birth mother", the law had to split the idea of motherhood, an unexpected and previously unneeded distinction.

Natural language processing (NLP) promises to solve the problem of language ambiguity and sentiment. However, NLP models are also limited to specific words or indexing and do not capture more general concepts. They fit perfectly for the automation of legal search, allowing documents to be scanned, retrieved, and ranked based on their wording. For the task of understanding the dual meaning of law, however, they still need to be enhanced.
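The sketch below illustrates this wording-based search, assuming scikit-learn's TF-IDF vectorizer and toy documents: ranking follows shared keywords, so a paraphrase using different legal terms would score poorly.

```python
# Wording-based legal search: TF-IDF ranks documents by keyword overlap
# with the query, without grasping the broader legal concepts involved.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "court affirmed the contract breach claim and awarded damages",
    "asylum application denied on procedural grounds",
    "surrogacy dispute over the legal definition of motherhood",
]
query = ["damages for breach of contract"]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)
scores = cosine_similarity(vectorizer.transform(query), doc_vectors)[0]

# Rank purely by shared wording; synonyms or paraphrases would be missed.
for score, doc in sorted(zip(scores, docs), reverse=True):
    print(f"{score:.2f}  {doc}")
```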

AI & Its legitimacy

Having discussed possible drawbacks of applying AI in the legal field, we now move to the big picture and a more abstract level. It is crucial to keep in mind that decisions made by AI-based applications will raise legal issues, such as liability for civil or even criminal wrongs. The process of AI implementation has already started. In New Jersey, bail hearings have been replaced with algorithmically informed risk assessments (The New York Times, 2017). Anyone eligible for release can avoid cash bail by meeting certain criteria. To ensure objective, scientific decisions, judges rely on machine evaluations and predictions. The automated recommendation serves as guidance and does not replace judicial discretion. However, the program raises questions about the declared neutrality of machine thinking and the wisdom of relying on mechanical judgment. Most models do not reach 99% accuracy; the average likelihood of a correct forecast is about 80%, so nearly every fifth prediction may be wrong (LawGeex, 2018). When an analytical model fails in its prediction, it may be difficult to allocate liability under current legal schemes (Barfield, 2018). Moreover, since these algorithms are proprietary, they are not subject to state or federal open-government laws. This leaves defendants unable to challenge the accuracy of the results and infringes their right to due process.

Conclusion

AI legal solutions are still only assistive tools that cannot function adequately without human interaction.

We have discussed the main reasons why AI may still be a challenge for the judicial system. AI simply cannot consider all the circumstances and nuances of a case and of the law, and this hinders its implementation in the process of justice. It is not yet possible to outsource human values and ethics to empirical and logical models, but AI methods can give judges a set of guidelines to work with. This would create a more secure legal system in which AI keeps judges informed and judges consider data-based forecasts without relying on them unconditionally. In this way, the two sides help one another.

References

1. Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine Bias. There's software used across the country to predict future criminals. And it's biased against blacks. ProPublica. Retrieved from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

2. Awh, E., Belopolsky, A. V., & Theeuwes, J. (2012). Top-down versus bottom-up attentional control: a failed theoretical dichotomy. Trends in cognitive sciences, 16(8), 437-443

3. Barfield, W. (2018). Liability for Autonomous and Artificially Intelligent Robots. Journal of Behavioral Robotics, 9, 193-203

4. Chen, D.L. (2019) Machine Learning and the Rule of Law. Computational Analysis of Law, forthcoming

5. Chen, D.L., Moskowitz, T.J., Shue, K. (2016). Under the Gambler's Fallacy: Evidence from Asylum Judges, Loan Officers, and Baseball Umpires. The Quarterly Journal of Economics, 131(3), 1181-1242

6. Chowdhury, M. & Sadek, A.W. (2012). Advantages and limitations of artificial intelligence. Artificial Intelligence Applications to Critical Transportation Issues, 6, E-C168

7. Devins, C., Felin, T., Kauffman, S., & Koppl, R. (2017). The Law and Big Data. Cornell Journal of Law and Public Policy, 27:2, 3. Retrieved from https://scholarship.law.cornell.edu/cjlpp/vol27/iss2/3

8. Foderaro, L.W. (2017, February 6). New Jersey Alters Its Bail System and Upends Legal Landscape. The New York Times. Retrieved from https://www.nytimes.com/2017/02/06/nyregion/new-jersey-bail-system.html

9. Harris, C.T., Peeples, R. & Metzloff, T.B. (2008) Does Being a Repeat Player Make a Difference? The Impact of Attorney Experience and Case-Picking on the Outcome of Medical Malpractice Lawsuits. Yale Journal of Health Policy, Law, and Ethics, 8, 2 (1)

10. Hosanagar, K., & Jair, V. (2018, July 23). We Need Transparency in Algorithms, But Too Much Can Backfire. Harvard Business Review. Retrieved from https://hbr.org/2018/07/we-need-transparency-in-algorithms-but-too-much-can-backfire

11. LawGeex (2018). Comparing the Performance of Artificial Intelligence to Human Lawyers in the Review of Standard Business Contracts [pdf]. Retrieved from https://www.lawgeex.com/resources/aivslawyer/

12. Lawlor, R.C. (1963). What Computers Can Do: Analysis and Prediction of Judicial Decisions. American Bar Association Journal, 49, 4, 337-344

13. Massachusetts Institute of Technology (2017). The Ethics and Governance of Artificial Intelligence. Class 2: Algorithmic Bias [Video file]. Retrieved from https://www.media.mit.edu/courses/the-ethics-and-governance-of-artificial-intelligence/


SOME CRIMINAL LAW AND CRIMINAL PROCEDURE PROBLEMS OF THE COMPLETION OF THE CRIME PROVIDED FOR BY PART 1 OF ARTICLE 299 OF THE CRIMINAL CODE OF THE RUSSIAN FEDERATION, «BRINGING A KNOWINGLY INNOCENT PERSON TO CRIMINAL RESPONSIBILITY»

Kudryavtsev V.

Doctor of Legal Sciences, Professor of the Department of Criminal Law and Procedure, St. Petersburg Institute (branch) of the All-Russian State University of Justice (RPA of the Russian Ministry of Justice), head of the master's program

St. Petersburg


Abstract

The article is devoted to certain criminal law and criminal procedure problems concerning the completion of the basic corpus delicti of the crime of bringing a knowingly innocent person to criminal liability. Three points of view on the moment at which the crime is completed are presented and analyzed, depending on whether bringing a person in as an accused is considered an action or a process, and taking into account the differentiation of the procedural forms of preliminary investigation. In substantiating his arguments, the author cites not only the provisions of legislation and the theory of criminal law and procedure, but also judicial practice. The article concludes that the basic corpus delicti of bringing a knowingly innocent person to criminal responsibility should be considered completed from the moment when the investigator issues a decision to charge the person as an accused during the preliminary investigation, when the inquiry officer issues an indictment at the end of an inquiry conducted under the general procedure, or when the inquiry officer draws up an indictment resolution at the end of an inquiry conducted in abbreviated form.
