
DETERMINING THE 'RESPONSIBILITY' PARADOX: THE CRIMINAL LIABILITY OF ARTIFICIAL INTELLIGENCE IN THE HEALTHCARE SECTOR

VIDUSHI GOEL1, PROF. (DR.) ADITYA TOMER2

1 Research Scholar, Amity Law School, Amity University Uttar Pradesh; 2 Additional Director, Amity University, Noida, Amity University Uttar Pradesh.

Abstract- As technology develops, people become more advanced. Nowadays, practically everything has been digitalized, to the point where artificial intelligence is used in almost every industry. AI is now essential not only to medical services but to many other industries and enterprises, including healthcare systems, economics, commerce, and industry. AI technology proved useful in hospital facilities during the COVID-19 crisis. Yet even though surgical robots have several benefits, the rise in legal disputes involving artificial intelligence is concerning. By 2035, technology is expected to have changed significantly, but as with all improvements, there will also be problems. Advances in artificial superintelligence appear designed to compete with human intelligence. In the years to come, AI will probably have an increasing impact on healthcare expenditures, and medical malpractice lawsuits already draw on AI-enabled patient records more and more. This article examines the specific legal effects of artificial intelligence (AI) in medical services under tort law, medical negligence, and other laws currently in place.

Keywords- Artificial Intelligence, Medical Sector, Criminal Liability, Healthcare Services.

Table of Contents

1. INTRODUCTION

2. AI APPLICATION IN DIFFERENT INDUSTRIES:

3. INTRODUCTION OF ARTIFICIAL INTELLIGENCE IN HEALTHCARE: APPLICATION OF AI IN MEDICAL SERVICE

4. PITFALLS OF INTRODUCING AI IN MEDICAL SERVICE

5. MISCONDUCT IN HEALTHCARE AND TORT LIABILITY

6. STRICT LIABILITY

7. ACTUS REUS

8. MENS REA

9. DIRECT LIABILITY

10. CASE OF FORESEEABLE EVENTS

11. ETHICAL DILEMMAS OF USING AI IN HEALTHCARE

11.1 Lacking The Idea of Consent

11.2 Data Privacy Breach

11.3 Distortions In Algorithm

12. POTENTIAL LEGAL ISSUES INVOLVING AI

12.1 Cyber Security

12.2 Employability And Labour Regulation

13. CASES INVOLVING AI IN HEALTHCARE

14. CASE STUDIES:

14.1 Use Of AI In Detecting ALS (Amyotrophic Lateral Sclerosis)

14.2 Google Collaborating with Aravind Eye Care Hospitals

15. CHALLENGES OF USING AI IN HEALTH SERVICE

15.1 Data Base

15.2 Why Is It A Matter Of Concern?

15.3 Comprehensibility And Clarity

15.4 Why Is It a Matter of Concern?

16. RECOMMENDATIONS FOR USING AI IN HEALTH SERVICE

16.1 Educating Students About the Developing Technology

16.2 Establishing A Regulatory Framework to Manage AI In India

16.3 Making Consumers Aware to Raise Their Voices Thoughtfully

17. CONCLUSION AND SUGGESTIONS

1. INTRODUCTION

Synthetic artificial intelligence can learn new information from scratch and engage in mental functions, including deductive reasoning and related skills. Such cognitive powers would allow it to form long-term visions of the future.1 The term "artificial intelligence" refers to a created form of intellect that uses algorithms to mimic actual intelligence. Scientists have been attempting to achieve true intelligence, but the field has not yet reached that point.2 Most programs can work freely only in a relatively restricted area, severely limiting their utility. Over the past ten years, artificial intelligence technologies have exploded in this incredibly innovative realm, using highly specialized and complex technology to produce inventive, cunning, and scholarly AI systems. Because of this, the time when these intelligent machines begin making remarkable and useful discoveries on their own, without the help of human minds, is not far off.3 AI has advanced and seen a surge in utilization recently. Every industry is eager to take advantage of AI's promise and invests sizable sums of money in it. The technology has great potential to boost creativity and productivity within a company, but as its use grows, so do its drawbacks.4 Most programmers fail to understand how an AI system grows, adapts to new situations, and makes decisions; if something truly dreadful occurred in this manner, assigning guilt would be nearly impossible.

2. AI APPLICATION IN DIFFERENT INDUSTRIES:

a. Manufacturing:5 AI is used to assess factory workload and needs, help with proper logistics and scheduling for the procurement of materials, and help with overall project timeframes, among other things.

b. Banking:6 The banking system uses AI to analyze credit scores precisely, look for indications of fraud in banking transactions, and perform other activities.

c. Law:7 Technologies used by companies in the legal sector help lawyers quickly identify clauses in agreements, making a repetitive and time-consuming activity far more effective. AI can handle massive, repetitive tasks, such as finding phrases in a document collection or filling out specific paperwork, freeing up lawyers' time for more important work. After AI has concluded its analysis, the attorney can quickly review the document to ensure it is simple for clients to understand. AI thus eases a lawyer's job, freeing them to focus on client negotiations, debates, and presentations rather than tedious, repetitive tasks. AI can also estimate the chances that particular arguments will succeed, allowing the lawyer to lead with the most important pieces of evidence at trial.

d. Retail:8 The section of the Amazon webpage that suggests items you would probably use or buy is the best illustration of AI in retail. These are AI-generated suggestions based on factors such as how long you spent searching for a particular item and how much interest you showed in it and similar items.

3. INTRODUCTION OF ARTIFICIAL INTELLIGENCE IN HEALTHCARE: APPLICATION OF AI IN MEDICAL SERVICE

There is one doctor for every 1,445 people in India, against a population most recently estimated at 135 crore, which falls short of the WHO-recommended ratio of one doctor per 1,000 people.9 This gap could be addressed through collaboration between clinicians and AI in the medical field. AI will increase access to treatment and improve its efficacy. It could revolutionize disease surveillance and monitoring, particularly for infectious diseases, since no additional humans would need to be exposed to the disease. Intelligent surgical machines are not new: they have been on the market since 1980. The PUMA 560 was used to perform neurosurgical biopsies in 1985, and hip replacement operations use ROBODOC, the first intelligent robot approved by the FDA.10 AI is already replacing doctors in some tasks: it can order and interpret diagnostic tests based on symptoms and scan X-rays. Beyond being employed as a "consulting physician," AI has been introduced to carry out intricate surgical procedures. One example of AI in surgery is "FAceMOUSe," which does not use any body-contact device.11

The AI monitors the surgeon's facial movements in real time. The motion of the laparoscope can be precisely and readily controlled by the surgeon's facial expressions, allowing non-intrusive, non-verbal communication during various surgical procedures.12 In 2017, an AI-driven robot was used to perform keyhole surgery at Maastricht University Medical Center in the Netherlands. In a patient with lymphedema (a serious condition, frequently a consequence of chemotherapy, in which accumulated fluid causes swelling), the robotic device sutured capillaries between 0.3 and 0.8 millimeters in diameter. The surgeon's hand movements are performed by "robot hands," which are significantly thinner and more accurate.13 Artificial intelligence (AI) has been useful in cardiac surgery for classifying patients with growing thoracic aortic aneurysms and, more accurately, for identifying potential dangers from congenital heart surgery, including the possibility of death.14 Sepsis prevalence and its associated effects, including death, can be reduced with the aid of AI. Robotic or AI-powered solutions can be created for intensive care units to support the team's therapeutic judgment and analytical skills. An AI-driven surgical robot has also been used to correct a surgeon's shaky movements to ensure a procedure was completed successfully.15

Further applications of AI include hair transplant programs and surgeries requiring the precise placement of small incisions, such as cardio-related procedures: tumor removal, valve replacement, cardiac tissue ablation, and the correction of heart defects all use cardio-robotic surgery. An experienced surgeon has a 98% chance of success without robotic technologies. AI-supported telemedicine services are now used in almost all facets of healthcare. Sharing knowledge with other institutions, preoperative diagnostic testing, post-operative follow-up, and long-term follow-up are some of the key applications of telemedicine in operations. Telerobotics is another new area, albeit one whose applicability is constrained by difficulties with logistics and funding. Beyond the diagnostic uses already described, AI is helpful for early detection and for improving access to health care, especially for the elderly. Building smart homes using AI-enabled sensors and advances in connectivity is crucial, especially for elderly or chronically ill patients who need long-term follow-up. Telehealth and telemedicine are crucial components in this area of AI.16 Robotic surgical equipment can now maneuver around a beating heart on its own.

Pierre Dupont et al. created a robotic catheter of the kind routinely used during procedures to deliver equipment or drugs. They used 2,000 images of the inside of a heart to create an algorithm that controls the catheter's movements. It was then tested in five pigs with leaky implanted heart valves: eighty-three trials were undertaken, and the proper location was reached in 95 percent of them. Benjamin Tee and colleagues are creating synthetic skin with a haptic sense.17 It has been shown to distinguish stress balls from plastic by their stiff and soft forms. Robots may change significantly when equipped with such skin, enabling them to differentiate between healthy tissue and malignancies and take the necessary action. As a result, it is feasible that business practices will alter drastically within the next ten years.18

4. PITFALLS OF INTRODUCING AI IN MEDICAL SERVICE

Dr. Aachi Mithin, a leading orthopedic surgeon at Apollo Hospitals in Secunderabad, predicts it will take another 15 years for robotic and AI technology to operate flawlessly and be integrated into every hospital operating room in India.19 Without a doubt, its significance to healthcare will grow dramatically. Like humans, artificial intelligence (AI) is prone to making mistakes, and however desirable it may seem, AI has a price. Robots lack the unique ability and empathy required for surgery and medical care; compassion and interpersonal contact are distinctive traits that an AI program cannot imitate. Two further limitations are the expense and practicality of using this idea on a large scale.20 Robots might be directly instructed, given the opportunity to observe a procedure, or even given virtual-reality training. Robotic systems can learn new abilities by watching videos or conducting surgery; notably, an autonomous robot may learn new skills considerably faster than a person. Early surgical automation attempts focused mostly on task breakdown and the autonomous execution of simple tasks like wound closure. Even so, as already noted, replicating actual intelligence is difficult, because it requires the ability to acquire sensory inputs and exact knowledge of how to carry out the surgical purpose safely. Dr. Ross, an assistant professor of surgery and medicine at Stanford University School of Medicine in California, has stressed that none of these models, or any others, are meant to take the place of doctors or their decision-making processes with machine learning or its equivalent. "AI was established to enrich mankind," she continued; "it needs to function in the background, discover our weaknesses, and help us make better decisions about patient outcomes."21

5. MISCONDUCT IN HEALTHCARE AND TORT LIABILITY

The first recorded human fatality caused by a robot occurred at a Kawasaki Heavy Industries plant, where a robot had been installed to perform a specific manufacturing task. Kenji Urada, an engineer, failed to shut the robot down while repairing it, and the machine identified him as an obstacle. The robot's powerful mechanical arm ruthlessly drove Kenji into neighboring machinery, killing him instantly.22 Many national legal systems are still unable to establish a strong criminal framework for dealing with situations in which robots are accused of committing a specific crime or hurting a person. The development of AI has given the world access to many new capabilities, along with the need to handle such a rapid rate of change. States must pass legislation outlining the laws governing mishaps and related offenses involving machines and artificially intelligent software.

• Imagine a situation where an AI system identifies an infectious disease and recommends a course of treatment. Based on the patient's information, the AI recommends a drug. The patient, however, has a medication allergy23 that their medical records did not contain. Here it may be concluded that the AI followed medical best practice: it could not plausibly have learned about the sensitivity, just as a human doctor would not have.

• Another option is that the AI system actually has a programming or mechanical bug. An AI programmer may be liable for subpar or irresponsible programming, much as product manufacturers are; liability may also fall on the person or persons in charge of supervision.24

Therefore, healthcare personnel who misuse technology or use it carelessly are just as liable for harm as a doctor who uses unclean tools or overmedicates a patient. The FDA expects doctors who employ robotics to possess specialized abilities, pertinent knowledge, and a high level of assessed competence.25 There must be at least one more surgeon on site who is equally adept at operating the device. Additionally, the FDA requires that producers fully disclose all risks to surgeons and provide them with practical training. Scenarios concerning the employment of AI in medicine are further complicated by genuine debate about the level of training required to qualify a surgeon to use a robotic device; healthcare institutions currently use various accreditation processes. As a result, it may be challenging to identify the specific obligations and liabilities of the several parties involved. Failure of the device does not by itself make the doctor accountable for the treatment. Doctors must explain to patients the procedure, any potential adverse effects, and what would happen if something went wrong. Manufacturers have a duty to warn consumers adequately about the risks associated with their products, and doctors must also disclose these details to patients. If a manufacturer gave the doctor sufficient warnings and instructions, the manufacturer would not be held responsible; if a doctor fails to advise the patient sufficiently about the risks and limitations of the device, the doctor may be charged with medical negligence. The conflicting relationship between the surgeon's responsibility and the manufacturer's duty adds significant complexity to robotic-surgery litigation. Every institution and facility should specify the minimal requirements that each surgical procedure must meet before issuing an approved certification of permission, as well as a systematic in-service training course for the physician who will practice robotic surgery. When determining a surgeon's level of skill, the following elements should be considered: familiarity with the tools and equipment used, time spent performing robotic surgery, overall time spent performing the procedure, approximate blood loss, complications, number of conversions to an open surgical procedure, selection of suitable patients, and adherence to general safety regulations.26

6. STRICT LIABILITY

Strict liability, a doctrine of tort law, holds the maker responsible for a defective product. The strict liability rule allows those who were not even at fault, including the product's manufacturers, distributors, and merchants, to be held liable for an undesirable outcome.27 In other words, despite a sound design and implementation, technology may make mistakes. Consider a well-built production line in which, owing to a mistake or an unfortunate coincidence, a manufacturing error harms someone.28 Strict product liability in this circumstance may result in a judgment against the makers even without any finding of wrongdoing, which can appear an extreme rule. In robotic surgery, the variety, frequency, and volume of cases that would have to be covered by an education program intended to give candidates a certification of approval appear insurmountable. Because they lack the mental capacity required for mens rea—the mental element, the knowledge that one's behavior would suffice for a crime to occur—minors, animals, or artificial intelligence systems that commit crimes are treated as innocent agents.29 This rule also applies in cases of strict liability. Where an innocent agent commits a crime that another person authorized, the trainer or instructor is held accountable. Under this paradigm, the AI would therefore be viewed as innocent, while the creator or instructor would be blamed.30

7. ACTUS REUS

A person's deliberate action or inaction that contributes to the conditions necessary for a criminal offence to occur is referred to as the actus reus. Criminal responsibility for software and AI systems underscores that the rule of law remains the cornerstone of any situation in which a crime has been committed.31 Without an actus reus, it is impossible to establish criminal liability; in the case of artificial intelligence in particular, an actus reus can be established only if the crime committed by such a system can be attributed to a human being, allowing the act itself to suffice to punish and demonstrate that person's criminal liability.31

8. MENS REA

When it comes to mens rea, the prosecution must show that a decision made by an AI was made knowingly by its users to harm the target of the decision. The highest level of mens rea is the question of whether a single person's intent can be established from the direction or supervision of an artificial intelligence robot as it performs a specific action.32 The minimal level of mens rea arises when an AI system user commits a crime that would have been obvious to a reasonable person, grounding strict culpability. AI can recognize specific situations and respond according to what it has been taught to do or what it has learned via observation and training. John Searle33 asserts that after recognizing a scenario, AI either imitates the behavior of those who have encountered a similar situation or merely responds automatically, following the rules without understanding the significance of its behavior.34 Since AI cannot comprehend the significance of its actions and, consequently, their ramifications, it is argued that AI cannot satisfy the mens rea criteria for intentional wrongdoing. This assertion remains the subject of a heated and so far fruitless debate; mens rea therefore cannot be conclusively assigned to AI.35

9. DIRECT LIABILITY

According to this thesis, AI systems are assigned both mens rea and actus reus.36 An AI program's actus reus is simple to ascertain: if the result of any operation by an AI system turns out to be criminal behavior, or a failure to act where a reporting duty existed, that result becomes the actus reus of the accusation. Since finding mens rea in this instance may be difficult, the three-level mens rea approach is applied.37 An AI system may well be held accountable for unlawful behavior when a motive is not established or necessary, as in strict liability offenses: the driver is deemed responsible if an auto-piloted vehicle is involved in a speeding incident. Accordingly, this notion might be used to hold a healthcare department accountable for deploying AI assistance in surgeries.38

10. CASE OF FORESEEABLE EVENTS

This model covers the unintentional activation of an AI program, created with good intentions, in a way that results in the commission of a crime. Gabriel Hallevy, professor of criminal law at Ono Academic College, explains how a collaborator can be held accountable for an operation regardless of whether a conspiracy is proven, so long as the accused's actions were expected results that the collaborators supported or encouraged and they were aware that a criminal scheme was being carried out.39 Section 111 of the Indian Penal Code (IPC) establishes in Indian criminal law the idea that the outcome of an act that is abetted and the act actually carried out need not be the same. Except for the anticipated repercussions of abetment, the abettor is held accountable for the offender's behavior almost exactly as if he had enabled it. There is a consensus that until an act is committed, abetment cannot result in a sentence. In other cases, if the evidence is sufficient to condemn the abettor but insufficient to charge the perpetrator, the abettor is more likely to be found guilty based on the testimony and the circumstances; the offender might then receive a pardon.

As a result, AI intermediaries and developers could be held liable for the actions of AI software if they knew the behavior was a logical or likely result of employing their AI system. When interpreting this principle, it is important to distinguish between AI systems intentionally built for illegal activities and those with legitimate purposes—between systems established with awareness of harmful intentions and those that were not. The first category of AI systems falls squarely under this concept; the second may escape criminal punishment owing to ignorance, even though strict liability would still apply. Given the legislative vacuum, the lack of a clear punishment, and the unsettled criminal and civil liability for acts perpetrated by AI systems, computers, and robotics against other individuals, the Indian courts are the only remaining source of hope in the Indian legal system for resolving these scenarios. Although there has not yet been a groundbreaking court decision on rules for the use of artificial intelligence software or robots, the judiciary is expected, given the rapid progress of artificial intelligence, to shape the regulatory requirements and rulings through which the use of artificial intelligence can be governed by recognizing criminal and civil liability.

11. ETHICAL DILEMMAS OF USING AI IN HEALTHCARE

11.1 Lacking The Idea of Consent

AI health applications and bots are being used increasingly, from analyzing data from sensing devices to providing medical evaluations and supporting treatment adherence. Bioethicists may be concerned with how these apps' terms of use relate to consent. In contrast to the usual written permission process, a user agreement is a legal agreement that a person acknowledges without face-to-face dialogue. Most people routinely ignore user agreements because they do not take the time to read them. Further, because such programs receive regular policy revisions, it is not easy to keep track of the terms of service. In such circumstances it is challenging to frame terms of service that are also morally sound, and it becomes even more complicated when the application continuously monitors the user and uses their data for expert advice.

11.2 Data Privacy Breach

If patients and medical professionals lack confidence in AI, its integration into healthcare will fail. Patients must be properly informed about the processing of their data, and continued communication with them must be encouraged to foster trust. In the context of data sharing and the use of AI, recent incidents illustrating patient-confidentiality issues include the legal case Dinerstein v. Google40 and Project Nightingale by Google and Ascension.41

11.3 Distortions In Algorithm

As previously covered, artificial intelligence is already used in the legal arena, where automated tools have mistakenly indicated that Black defendants are twice as likely to reoffend as White defendants.42 Because artificial intelligence can cause serious harm when it reproduces the latent preconceptions, racism, and prejudices of the individuals who built the algorithms, distorted programs are an important issue. Systemic racial bias was discovered in a study of a healthcare-based clinical risk system with which about 200 million Americans had been assessed: the system predicted which patients would need additional treatment, and white patients were shown to be favored over black patients. The design flaw was that the algorithm had not been thoroughly tested across all relevant racial groups before being deployed.43 Even the use of artificial intelligence to produce a drama relied on stereotypical casting.44 Such distortions could put the healthcare sector at risk and raise ethical questions. The World Health Organization laid out six guiding principles in a set of guidelines for the ethical use of artificial intelligence in healthcare,45 hoping they would be the foundation for how businesses, governments, and authorities handle the innovation. The six principles and their significance are as follows:

• Protecting autonomy: All medical decisions must be made by people, not machines alone, and must always be subject to doctor override. AI should not be used to direct a patient's healthcare services without their consent, and personal data should be protected.

• Promoting safety: Ensuring that all AI technologies operate as intended and cause no harm requires regular monitoring of those technologies.

• Ensuring transparency: Researchers should release the blueprints for AI tool designs to the general public. The technologies are frequently called "black boxes," which makes it challenging for scientists and medical practitioners to comprehend how the algorithms reach their conclusions. The WHO wants enough clarity that authorities and users can properly assess and understand them.

• Encouraging accountability: There must be procedures defining who is responsible when something goes wrong with an AI system, such as when a device's final choice causes a patient injury.

• Promoting equity: One way to promote equity is to ensure that tools are created using diverse data sets and are available in numerous languages. Thorough analyses of common health algorithms in recent years have shown that some involve racial bias; the sketch after this list shows how such a disparity can be measured.

• Encouraging sustainable AI: Programmers should regularly upgrade their technologies, and companies should be able to make changes if a technology looks ineffective. Organizations and enterprises should introduce only solutions that can be maintained and corrected, including in underfunded healthcare systems.
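
The equity principle and the clinical-risk study cited above both reduce to a measurable question: does the model flag patients of comparable clinical need at different rates across demographic groups? The following is a minimal sketch of such an audit; the records, field meanings, and thresholds are all hypothetical, not drawn from any real system.

```python
# Minimal fairness audit: compare how often a risk model flags patients
# for extra care across demographic groups at a comparable level of need.
# All records, fields, and thresholds here are hypothetical.
from collections import defaultdict

patients = [
    # (group, true_care_need in [0, 1], model_risk_score in [0, 1])
    ("A", 0.8, 0.9), ("A", 0.7, 0.8), ("A", 0.3, 0.2),
    ("B", 0.8, 0.5), ("B", 0.7, 0.4), ("B", 0.3, 0.1),
]

FLAG_THRESHOLD = 0.5  # score above which the model recommends extra care
HIGH_NEED = 0.7       # restrict the comparison to similarly high-need patients

rates = defaultdict(lambda: [0, 0])  # group -> [flagged, high-need total]
for group, need, score in patients:
    if need >= HIGH_NEED:
        rates[group][1] += 1
        if score >= FLAG_THRESHOLD:
            rates[group][0] += 1

for group, (flagged, total) in sorted(rates.items()):
    print(f"group {group}: {flagged}/{total} high-need patients flagged")
# A large gap between groups at equal need is the kind of disparity the
# racial-bias study above found and the WHO equity principle targets.
```

In this toy data, every high-need patient in group A is flagged but only half of those in group B are, despite identical need—exactly the pattern such an audit is meant to surface before deployment.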

12. POTENTIAL LEGAL ISSUES INVOLVING AI

AI must operate efficiently and safely. AI can be successfully incorporated into clinical practice by ensuring that databases are reliable and legitimate, distributing software updates regularly, and being transparent about products. Furthermore, appropriate oversight is necessary to ensure the effectiveness and security of AI. The following are a few legal concerns raised by AI:

12.1 Cyber Security

Cybersecurity is the main issue with AI used in the healthcare industry. It is critical to remember that there is fierce competition in every field, and competition among hospitals could result in a software infection being introduced into a facility's AI. In 2016, 88% of such malware infections targeted the US healthcare industry.46 Attackers may target hospital systems, diagnostic tools, trackers, wireless sensors, and medical devices such as artificial intelligence (AI). Patient care may suffer when a device is infected, and an infection can also damage the hospital's reputation. According to SEO specialist Saket Gupta and Great Learning's Akriti Galav, there are three crucial things to do right away:47

• Implement the strictest security measures feasible throughout the whole data realm.

• Ensure that a log of each input used in connection with each AI activity is created as part of the audit trail (a minimal sketch follows this list).

• Create trustworthy network management and verification systems.
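
The audit-trail item above can be made concrete with a thin logging wrapper around every model call. The sketch below is one possible shape, under stated assumptions: `predict_risk`, the record layout, and the log format are invented placeholders, not a prescribed standard.

```python
# Hedged sketch of an AI audit trail: hash and log every input and output
# of a model call so that decisions can be reconstructed later.
# `predict_risk` and the record layout are hypothetical placeholders.
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(message)s")

def predict_risk(record: dict) -> float:
    return 0.42  # stand-in for a real model

def audited_predict(record: dict, model_version: str) -> float:
    score = predict_risk(record)
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # store a digest, not raw patient data, to limit privacy exposure
        "input_sha256": hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest(),
        "output": score,
    }
    logging.info(json.dumps(entry))
    return score

audited_predict({"age": 63, "hba1c": 7.9}, model_version="demo-0.1")
```

Logging a digest rather than the raw input keeps the trail verifiable for later review while limiting what the log itself exposes if it is breached.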

Additionally, companies should keep pursuing longer-term strategic goals, including developing a data protection policy specifically for AI training, educating their personnel on the risks associated with AI and how to recognize falsified results, and maintaining a dynamic, forward-looking threat assessment system.

12.2 Employability And Labour Regulation

India as a whole had an unemployment rate of 7.80% in June 2022.48 This demonstrates that India still has problems filling positions in every area, since there are not enough jobs available. This has been attributed in part to the growing use of artificial intelligence and technology, which have diminished the quantity of labor people perform in various ways. AI may change the quality of the workforce by raising job-efficiency standards and increasing competition for particular positions. However, adopting AI in the workplace may also give rise to new legal defenses and causes of action based on unethical hiring procedures. According to research,49 career opportunities should improve when AI is implemented, and there should be significant demand for new skills. Many professions, including nursing and therapy, call for intense focus and emotional responses, which AI cannot yet replicate. Healthcare firms are integrating AI to enhance care delivery rather than to replace it entirely. Additionally, as AI advances in health care, there will be greater demand for workers with specialized talents. In contrast, another study asserts that robots will eventually take over some jobs held by nurses and other healthcare professionals. Robots that can transfer people and collect blood are two instances of what is already happening: the "RoBear," developed by the Sumitomo Riko Company and the RIKEN-SRK Collaboration Center for Human-Interactive Robot Research, can transport patients from hospital beds to wheelchairs. AI will make it simpler for allied health professionals to meet the demand brought on by an aging population.50

13. CASES INVOLVING AI IN HEALTHCARE

A lawsuit was filed against a doctor for using robotic equipment improperly during a robot-assisted cholecystectomy (a procedure to remove the gallbladder). Additionally, the plaintiff charged the university with failing to accredit the treating physician. Even if they have no experience with robotic systems, gynecologists in the USA must complete an automated-surgery certificate program provided by the manufacturing facility before using the Da Vinci surgical equipment. In a case where a robotic hysterectomy (an operation to remove all or part of the uterus) resulted in a bilateral ureteral injury that was swiftly repaired, a claim was filed because of poor communication: the patient said that had she known beforehand that the doctor was not sufficiently qualified and knowledgeable, she would not have consented to a robotic hysterectomy.51 As already mentioned, clinicians should be educated about using artificial intelligence (AI) in clinics for any purpose. Patients place strong faith in their doctors and assume they are working to advance a healthier society; the doctor must therefore sustain that trust by keeping patients updated on all aspects of treatment and fostering their satisfaction and self-assurance. Many patients might decline care if they weigh the risks of an AI-powered surgery. To reduce the chance of being sued as much as possible, the doctor must win the patient's trust and carry out the surgery with their full consent.

14. CASE STUDIES:

14.1 Use Of AI In Detecting ALS (Amyotrophic Lateral Sclerosis)

In mid-2014, the Ice Bucket Challenge rose to popularity. The challenge was started by a family who regularly nominated one another, as a way to support a family member with ALS (a disorder of the nervous system that weakens muscles and impairs functional ability). The activity consists of nominating a small group of people and pouring an ice-filled bucket of water over them. Nominees can either complete the challenge and make a $10 donation or forgo it and donate $100 instead; the money raised funds treatment for people living with ALS. To make the properties of a particular cell visible to the naked eye or under a microscope, most lab workers use chemicals that can potentially injure the very cells they wish to study, a procedure that can be slow and taxing. The ALS Association Neuro Collaborative, funded by donations connected with the ALS Ice Bucket Challenge, collaborated with Google computer scientists on groundbreaking research that was later published. Dr. Steven Finkbeiner, a senior investigator at the Gladstone Institutes in San Francisco, oversaw the project. This study introduced deep learning as a method for recognizing minute details in images without more laborious and sophisticated techniques. AI can serve ALS patients in several ways—for example, by identifying ALS subtypes and determining which would present major problems—so researchers can learn more about the illness and how to treat it. Thanks to the increasing use of induced pluripotent stem cells, which can be turned into motor neurons (the cells that are missing or have stopped functioning in ALS), scientists were able to match a person's biological cells to clinical data. Ultimately, this strategy may help identify subgroups of ALS patients with comparable biological traits and pair them with the most effective drug for their illness.52

14.2 Google Collaborating with Aravind Eye Care Hospitals


Google has joined with the Aravind Eye Care System, an Indian hospital network working to lower the incidence of cataract-related vision impairment. The hospital provided images to train Google's image-processing systems, assisting in the creation of retinal-screening software. Using images from Google's image-search and information-storage systems, the technique applies deep learning to distinguish between images of human and animal retinas. The process of putting this technology to use is currently underway.53 Lily Peng, product manager for the Google Brain AI research team, asserts that this development will not cost anyone work, because there has not been enough screening talent for retina jobs in recent times. The AI examines retinas and determines the extent of blindness, or the degree to which patients will suffer vision impairment in the future; based on the results, doctors proceed with patient therapy.54

15. CHALLENGES OF USING AI IN HEALTH SERVICE

15.1 Data Base

The difficulty of AI in healthcare is not methodological; rather, it is a matter of access to data. Large healthcare databases can be difficult to obtain for study or other purposes, and because larger players already have access to such data, the sector is particularly challenging for newcomers.55 India has a great deal of data, and the lack of a solid regulatory framework governing the exchange of health information makes it easier for enterprises to acquire huge volumes of data than in jurisdictions with rigorous privacy standards. However, the reliability of this data varies. India has adopted an EHR (electronic health record) policy, but its implementation is not yet standardized; digitizing records (i.e., photographing medical notes), retention strategies and periods, and thorough application across all healthcare data are therefore handled inconsistently. Additionally, India lacks a dedicated regulatory framework for protecting personal information. Given the enhanced freedom this affords enterprises, startups have begun anonymizing the data they acquire before using it for other purposes; a minimal sketch of such a step follows below. According to a 2019 HIMSS Media research report, 36% of robotic systems can quickly recognize language, clinical indicators, and code elements. The healthcare industry should focus on standardizing health information to increase the amount of data available for assessing AI systems.56
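
The anonymization step mentioned above can be sketched as a simple transformation; the field names here are hypothetical, and note that a salted one-way hash yields pseudonymization, which is weaker than true anonymization.

```python
# Sketch of the pseudonymization step described above: strip direct
# identifiers and replace the patient ID with a salted one-way hash.
# Field names are hypothetical; salted hashing is pseudonymization,
# which is weaker than true anonymization.
import hashlib
import os

SALT = os.environ.get("PSEUDONYM_SALT", "demo-salt").encode()
DIRECT_IDENTIFIERS = {"name", "phone", "address"}

def pseudonymize(record: dict) -> dict:
    # drop fields that identify the patient directly
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # replace the stable ID with a salted digest so records still link up
    out["patient_id"] = hashlib.sha256(
        SALT + str(record["patient_id"]).encode()).hexdigest()[:16]
    return out

print(pseudonymize({"patient_id": 1021, "name": "X", "phone": "...",
                    "address": "...", "diagnosis": "E11.9"}))
```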

15.2 Why Is It A Matter Of Concern?

The lack of data restrictions may be a double-edged sword: it makes it easier for entrepreneurs to collect data while creating uncertainty about potential future changes. A healthcare department needs extremely accurate and reliable data, because any inaccurate data puts patients' safety in danger.57 Since most healthcare providers manage patients using databases of previous medical histories, treatments, and services received, a mistake in any of these facts can render the entire therapy process useless and even hazardous.58 If a single patient has multiple entries under their name, each medical report will contain contradictory and insufficient patient data, which could affect the treatment and services provided. Overall, patient safety and care standards would be gravely jeopardized, and the standard of medical treatment would fall short of what is expected.

Additionally, if it emerges that a patient was treated using the medical records of another patient, the owner of those records would be charged for treatments they never received, and their insurance claim might be rejected. An inaccurate database would create many more problems besides; it therefore needs to be kept clean and to contain sufficiently consistent data about each patient's medical history.59

15.3 Comprehensibility And Clarity

Understanding how AI generates judgments or suggestions is vital for the healthcare sector, especially for clinical decision support systems (CDSSs).60 This requires understanding the underlying origins of a decision and the traits that guide it. Traditional AI techniques like artificial neural networks (ANNs) are black-box models, meaning it is challenging to follow the device's logic. Transparency and interpretability are thus two of the primary challenges and limitations of using AI in hospitals as it currently stands. In a 2020 article titled "Three Ghosts of Medical AI," experts concluded that failing to understand how software arrives at its solutions limits iterative improvement of expertise.61
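
One common probe of such a black box is permutation importance: shuffle a single input feature and measure how much predictive accuracy drops. Below is a self-contained sketch on synthetic data with an artificial stand-in model; the features, labels, and the bilirubin-based rule are invented for illustration only.

```python
# Sketch of permutation importance, one common probe of a "black box":
# shuffle one input column and see how much accuracy drops.
# The model is a fixed stand-in rule and the data are synthetic.
import random

random.seed(0)
# synthetic patients: (bilirubin, age); jaundice label driven by bilirubin
data = [(random.uniform(0, 10), random.uniform(20, 80)) for _ in range(200)]
labels = [1 if bili > 3.0 else 0 for bili, _ in data]

def black_box(bili: float, age: float) -> int:
    return 1 if bili > 3.0 else 0  # pretend we cannot read this rule

def accuracy(rows):
    return sum(black_box(b, a) == y for (b, a), y in zip(rows, labels)) / len(rows)

base = accuracy(data)
for col, name in [(0, "bilirubin"), (1, "age")]:
    shuffled_col = [row[col] for row in data]
    random.shuffle(shuffled_col)
    rows = [(v, row[1]) if col == 0 else (row[0], v)
            for row, v in zip(data, shuffled_col)]
    print(f"{name}: accuracy drop = {base - accuracy(rows):.2f}")
# A large drop for bilirubin and almost none for age reveals which
# feature actually drives the model's judgments.
```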

15.4 Why Is It a Matter of Concern?

If AI technology identified the presence of an illness such as jaundice with the aid of lab results, doctors would still need a convincing rationale before recommending any medications or beginning therapy,62 and reaching a decision and starting treatment would take time. Consider a scenario in which a report-screening expert requests assistance from an AI system and receives a response wholly inconsistent with what the expert would typically infer; in such situations, the caregiver may rely on the machine's conclusions rather than their own. The computational procedures behind an artificial intelligence system's outputs must be transparent to prevent such poor decision-making.63 AI algorithms should be developed to alert users to the system's assumptions about the options presented when such a situation arises, as the sketch below illustrates.
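
One way to meet that requirement is a wrapper that returns the model's score together with its stated assumptions and a review flag whenever it conflicts with a transparent rule. The bilirubin threshold, stub model, and field names below are invented assumptions for illustration, not clinical guidance.

```python
# Hedged sketch of a CDSS output that carries its own assumptions and
# flags disagreement with a transparent rule-based check, so a clinician
# can see why the suggestion might differ from their own reading.
# The threshold and model stub are illustrative, not clinical guidance.

def model_predict_jaundice(labs: dict) -> float:
    return 0.15  # stand-in for an opaque learned model

def rule_based_check(labs: dict) -> bool:
    # simple transparent rule: flag if total bilirubin is elevated
    return labs.get("total_bilirubin_mg_dl", 0.0) > 3.0

def cdss_suggestion(labs: dict) -> dict:
    score = model_predict_jaundice(labs)
    rule_flag = rule_based_check(labs)
    return {
        "model_probability": score,
        "rule_flag_elevated_bilirubin": rule_flag,
        "assumptions": ["labs are current", "model trained on adult data"],
        # surface conflicts instead of hiding them behind one number
        "needs_clinician_review": rule_flag != (score >= 0.5),
    }

print(cdss_suggestion({"total_bilirubin_mg_dl": 5.2}))
```

Here the rule and the model disagree, so the output carries an explicit review flag rather than a single opaque score, which is the alerting behaviour the paragraph above calls for.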

16. RECOMMENDATIONS FOR USING AI IN HEALTH SERVICE

16.1 Educating Students About the Developing Technology

It is crucial to equip the current and future labor force with the skills necessary to use AI successfully. It is also vital to include computer technologies like AI in medical school curricula, training doctors in the technical skill sets and ethical standards required to employ AI in their practices.64 IT institutions might offer a course on morality, transparency, responsibility, and related topics to give engineers and programmers a greater understanding of the challenges surrounding the services and technology they build.

16.2 Establishing A Regulatory Framework to Manage AI In India

Governmental oversight is currently absent in this area, and there are worries that overregulation could hinder innovation. This calls for a framework that ensures the integrity and transparency of AI systems while promoting and enabling development, as well as a national regulatory authority to monitor AI breakthroughs.65

16.3 Making Consumers Aware to Raise Their Voices Thoughtfully

Most startups and established enterprises in the medical technology sector rely on customer reviews to succeed, and customers are given preferential treatment during development and decision-making. Without consumer demand, there is no basis for introducing innovations to the market. Customers should therefore use reason when adopting any product. User license agreements must be thoroughly examined and should be approved only if the consumer is fully satisfied.66 If you are concerned that your genetic information might be disclosed to an insurance provider, decline to donate blood. Instead of depending solely on the conclusions reached by the AI, discuss your treatment objectives with your doctor and decide how you would like your therapy to proceed. Share your voice and your data with caution.

17. CONCLUSION AND SUGGESTIONS

The fusion of AI and surgical technology may enable the expansion of surgical expertise to enhance results and increase access to care. AI cannot be held liable for the damage it causes, since it is not currently recognized as a legal entity under national or international law. As a result, the idea outlined in Article 12 of the UN Convention on the Use of Electronic Communications in International Contracts—that the person at whose direction the system acts must ultimately be found responsible for any act committed or signal sent by a particular device—should be extended to AI liability. AI is changing from a nice-to-have item to a necessary part of modern electronic systems. As we rely increasingly on AI for judgment, it is crucial to guarantee that decisions are made ethically and without unfair biases, and we know the significance of transparent, accountable, and reliable AI systems. Artificial intelligence algorithms are employed ever more frequently to improve patient and surgical outcomes, often outperforming people. The advent of artificial intelligence in healthcare is likely to be constrained, to coexist with current systems, or perhaps to replace them; indeed, it can be considered immoral and unreasonable not to use AI.67

REFERENCES

[1] Ajay Agrawal, Joshua S. Gans & Avi Goldfarb, Artificial Intelligence: The Ambiguous Labor Market Impact of Automating Prediction, 33 The Journal of Economic Perspectives 31 (2019), https://www.jstor.org/stable/26621238 (last visited Jan 12, 2023).

[2] J. David Bolter, Artificial Intelligence, 113 Daedalus 1 (1984), https://www.jstor.org/stable/20024925 (last visited Jan 12, 2023).

[3] Id.

[4] Stephan De Spiegeleire, Matthijs Maas & Tim Sweijs, Ai - Today and Tomorrow, 43 (2017), https://www.jstor.org/stable/resrep12564.8 (last visited Jan 12, 2023).

[5] Charles T. Rubin, Artificial Intelligence and Human Nature, The New Atlantis 88 (2003), https://www.jstor.org/stable/43152855 (last visited Jan 12, 2023).

[6] Tracy B. Henley, Natural Problems and Artificial Intelligence, 18 Behavior and Philosophy 43 (1990), https://www.jstor.org/stable/27759223 (last visited Jan 12, 2023).

[7] Vladan Devedzic, Web Intelligence and Artificial Intelligence in Education, 7 Journal of Educational Technology & Society 29 (2004), https://www.jstor.org/stable/jeductechsoci.7.4.29 (last visited Jan 12, 2023).

[8] Christopher Dede, Artificial Intelligence Applications to High-Technology Training, 35 Educational Communication and Technology 163 (1987), https://www.jstor.org/stable/30219891 (last visited Jan 12, 2023).

[9] Raman Kumar & Ranabir Pal, India achieves WHO recommended doctor population ratio: A call for paradigm shift in public health discourse! 7 J Family Med Prim Care 841 (2018), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6259525/ (last visited Jan 12, 2023).

[10] Rong Liu, Yan Rong & Zhehao Peng, A review of medical artificial intelligence, 4 Global Health Journal 42 (2020), https://www.sciencedirect.com/science/article/pii/S2414644720300208 (last visited Jan 12, 2023).

[11] Dr. Liz Kwo, contributed: The power of AI in surgery, MobiHealthNews (2021), https://www.mobihealthnews.com/news/contributed-power-ai-surgery (last visited Jan 12, 2023).

[12] Liu, Rong, and Peng, supra note 10.

[13] G. Fagogenis et al., Autonomous Robotic Intracardiac Catheter Navigation Using Haptic Vision, 4 Sci Robot eaaw1977 (2019).

[14] Rubin, supra note 5.

[15] Id.

[16] Aziz Rezapour, Seyede Sedighe Hosseinijebeli & Saeed Bagheri Faradonbeh, Economic evaluation of E-health interventions compared with alternative treatments in older persons' care: A systematic review, 10 J Educ Health Promot 134 (2021).

[ 17] Fagogenis et al., supra note 13.

[18] Joseph Campbell, Scientists inspired by "Star Wars" create artificial skin able to feel, Reuters, Aug. 3, 2020, https://www.reuters.com/article/us-singapore-skin-idUSKBN24Z13D (last visited Jan 12, 2023).

[19] Simran Arora, Can robots replace a surgeon in an OT? Doctors bank on AI for accuracy, precision in spine surgeries, TimesNow (2022), https://www.timesnownews.com/health/can-robots-replace-a-surgeon-in-an-ot-doctors-bank-on-ai-for-accuracy-precision-in-spine-surgeries-article-96229112 (last visited Jan 12, 2023).

[20] Amit Gupta et al., Artificial intelligence: A new tool in surgeon's hand, 11 Journal of Education and Health Promotion 93 (2022), https://www.jehp.net/article.asp?issn=2277-9531;year=2022;volume=11;issue=1;spage=93;epage=93;aulast=Gupta;type=0 (last visited Jan 12, 2023).

[21] Daniel A. Hashimoto et al., Artificial Intelligence in Surgery: Promises and Perils, 268 Ann Surg 70 (2018).

[22] Robert Whymant, From the archive, 9 December 1981: Robot kills factory worker, The Guardian, Dec. 9, 2014, https://www.theguardian.com/theguardian/2014/dec/09/robot-kills-factory-worker (last visited Jan 12, 2023).

[23] Maria Stefania Cataleta, Humane Artificial Intelligence: The Fragility of Human Rights Facing AI, (2020), https://www.jstor.org/stable/resrep25514 (last visited Jan 12, 2023).

[24] Ulrike Franke, Harnessing Artificial Intelligence, (2019), https://www.jstor.org/stable/resrep21491 (last visited Jan 12, 2023).

[25] Agrawal, Gans, and Goldfarb, supra note 1.

[26] Tomasz Rogula, Pablo Acquafresca & Mohamed Bazan, Training and Credentialing in Robotic Surgery, 13 (2015).

[27] Stephen G. Gilles, Negligence, Strict Liability, and the Cheapest Cost-Avoider, 78 Virginia Law Review 1291 (1992), https://www.jstor.org/stable/1073455 (last visited Jan 12, 2023).

[28] Anita Bernstein, How Can a Product Be Liable? 45 Duke Law Journal 1 (1995), https://www.jstor.org/stable/1372947 (last visited Jan 12, 2023).

[29] Curtis E.A. Karnow, Liability for Distributed Artificial Intelligences, 11 Berkeley Technology Law Journal 147 (1996), https://www.jstor.org/stable/24115584 (last visited Jan 12, 2023).

[30] Id.

[31] Mariano-Florentino Cuellar, A Common Law for the Age of Artificial Intelligence: Incremental Adjudication, Institutions, and Relational Non-Arbitrariness, 119 Columbia Law Review 1773 (2019), https://www.jstor.org/stable/26810848 (last visited Jan 12, 2023).

[32] Ashley Deeks, The Judicial Demand for Explainable Artificial Intelligence, 119 Columbia Law Review 1829 (2019), https://www.jstor.org/stable/26810851 (last visited Jan 12, 2023).

[33] Id.

[34] Amishi Aggarwal, Analysing the Possibility of Imposing Criminal Liability on AI Systems, The Criminal Law Blog (2021), https://criminallawstudiesnluj.wordpress.com/2021/01/19/analysing-the-possibility-of-imposing-criminal-liability-on-ai-systems/ (last visited Jan 12, 2023).

[35] Cuellar, supra note 31.

[36] Sabine Gless, Emily Silverman & Thomas Weigend, If Robots Cause Harm, Who Is to Blame? Self-Driving Cars and Criminal Liability, 19 New Criminal Law Review: An International and Interdisciplinary Journal 412 (2016), https://www.jstor.org/stable/26417695 (last visited Jan 12, 2023).

[37] Liis Vihul & Centre for International Governance Innovation, International Legal Regulation of Autonomous Technologies, 26 (2020), https://www.jstor.org/stable/resrep27510.7 (last visited Jan 12, 2023).

[38] Curtis E.A. Karnow, Liability for Distributed Artificial Intelligences, 11 Berkeley Technology Law Journal 147 (1996), https://www.jstor.org/stable/24115584 (last visited Jan 12, 2023).

[39] Id.

[40] See A.G.S., Annotation, Duty of manufacturer or seller to warn of latent dangers incident to article as a class, as distinguished from duty with respect to defects in particular article, 86 A.L.R. 947 (originally published in 1933); RESTATEMENT (THIRD) OF AGENCY § 5.03 (AM. L.INST. 2006); RESTATEMENT (SECOND) OF AGENCY § 272 (AM. L. INST.1958); RESTATEMENT (THIRD) OF TORTS: PROD. LIAB. § 2 cmt. m (AM. L.INST. 1998); Curtis, Collins & Holbrook Co. v. United States, 262 U.S.215, 222 (1923) ("The general rule is that a principal is charged with the knowledge of the agent acquired by the agent in the course of the principal's business.").

[41] Dinerstein v. Google. No. 1:19-cv-04311; 2019.

[42] Christophe Olivier Schneble, Bernice Simone Elger & David Martin Shaw, Google's Project Nightingale highlights the necessity of data science ethics review, 12 EMBO Mol Med e12053 (2020), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7059004/ (last visited Jan 12, 2023).

[43] Julia Angwin et al., Machine Bias (2022), https://www.taylorfrancis.com/chapters/edit/10.1201/9781003278290-37/machine-bias-julia-angwin-jeff-larson-surya-mattu-lauren-kirchner (last visited Jan 12, 2023).

[44] Ziad Obermeyer et al., Dissecting racial bias in an algorithm used to manage the health of populations, 366 Science 447 (2019), https://www.science.org/doi/abs/10.1126/science.aax2342 (last visited Jan 12, 2023).

[45] Billy Perrigo, An AI Helped Write This Play. It May Contain Racism, Time, https://time.com/6092078/artificial-intelligence-play/ (last visited Jan 12, 2023).

[46] Nicole Wetsman, WHO outlines principles for ethics in health AI, The Verge (2021), https://www.theverge.com/2021/6/30/22557119/who-ethics-ai-healthcare (last visited Jan 12, 2023).

[47] Logix, Will Healthcare and Education Industry remain target for Cyber criminals in 2017? - Logix Blog, (2017), https://blog.logix.in/will-healthcare-education-industry-remain-target-cyber-criminals-2017/ (last visited Jan 12, 2023).

[48] VentureBeat, How to protect AI from cyberattacks - start with the data, VentureBeat (2022), https://venturebeat.com/ai/how-to-protect-ai-from-cyberattacks-start-with-the-data/ (last visited Jan 12, 2023).

[49] Fortune India, India's unemployment rate in July at 6-month low (2022), https://www.fortuneindia.com/macro/indias-unemployment-rate-in-july-at-6-month-low/109157 (last visited Jan 12, 2023).

[50] Mohit Sharma, Impact of AI on Jobs in Healthcare, (2019), https://www.mindfieldsglobal.com/blog/impact-of-ai-on-jobs (last visited Jan 12, 2023).

[51] Abdullah Shuaib, Husain Arian & Ali Shuaib, The Increasing Role of Artificial Intelligence in Health Care: Will Robots Replace Doctors in the Future? 13 Int J Gen Med 891 (2020), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7585503/ (last visited Jan 12, 2023).

[52] Yu L. Lee, Gokhan S. Kilic & John Y. Phelps, Medicolegal review of liability risks for gynecologists stemming from lack of training in robot-assisted surgery, 18 J Minim Invasive Gynecol 512 (2011).

[53] Eric M. Christiansen et al., In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images, 173 Cell 792 (2018).

[54] Soma Basu, A Madurai-based hospital and Google are working together to stop early blindness, The Hindu, Nov. 12, 2018, https://www.thehindu.com/sci-tech/technology/an-ai-for-an-eye/article25476723.ece (last visited Jan 12, 2023).

[55] Rohit Varma, Using artificial intelligence to automate screening for diabetic retinopathy, Ophthalmology Times (2018), https://www.ophthalmologytimes.com/view/using-artificial-intelligence-automate-screening-diabetic-retinopathy (last visited Jan 12, 2023).

[56] The Economist, Artificial intelligence's new frontier, The Economist, 2016, https://www.economist.com/leaders/2022/06/09/artificial-intelligences-new-frontier (last visited Jan 12, 2023).

[57] Konstantin Mirin, AI Challenges in Healthcare: What We Should be Aware of, PostIndustria (2021), https://postindustria.com/implementation-of-ai-in-healthcare-challenges-and-potential/ (last visited Jan 12, 2023).

[58] Anish Bhardwaj, Promise and Provisos of Artificial Intelligence and Machine Learning in Healthcare, 14 J Healthc Leadersh 113 (2022).

[59] George Maliha et al., Artificial Intelligence and Liability in Medicine: Balancing Safety and Innovation, 99 Milbank Q 629 (2021).

[60] Indrajit Hazarika, Artificial intelligence: opportunities and implications for the health workforce, 12 Int Health 241 (2020).

[61] Maliha et al., supra note 59.

[62] Mirin, supra note 57.

[63] Justus Wolff et al., Success Factors of Artificial Intelligence Implementation in Healthcare, 3 Front Digit Health 594971 (2021).

[64] Sandeep Reddy et al., A governance model for the application of AI in health care, 27 J Am Med Inform Assoc 491 (2020).

[65] Vitalii M. Pashkov, Andrii O. Harkusha & Yevheniia O. Harkusha, Artificial Intelligence in Medical Practice: Regulative Issues and Perspectives, 73 Wiad Lek 2722 (2020).

[66] Kamila Kolanska et al., Artificial intelligence in medicine: A matter of joy or concern? 50 J Gynecol Obstet Hum Reprod 101962 (2021).

[67] Julia Amann et al., Explainability for artificial intelligence in healthcare: a multidisciplinary perspective, 20 BMC Med Inform Decis Mak 310 (2020).

[68] Ravi B. Parikh, Stephanie Teeple & Amol S. Navathe, Addressing Bias in Artificial Intelligence in Health Care, 322 JAMA 2377 (2019).
