FIXING CRIMINAL LIABILITY AS PER ELEMENTS OF A CRIME: A REVIEW IN MODERN ERA OF AI AND ROBOTICS
1DR. MAHMOOD AHMED SHAIKH, 2DR. SHAFIQ UR RAHMAN, 3PROF. DR.
MUHAMMAD TAHIR, 4KHADIM HUSSAIN KHUHARO, 5DR. MUHAMMAD SHAHID
1Senior Assistant Professor, Bahria University Law School Islamabad Corresponding Author Email: [email protected], [email protected] 2ORCID ID: 0009-0001-4386-6643 Bahria University Law School Islamabad
Email: [email protected] 3(ORCID ID: 0000-0002-8921-6582) School of Law, Karachi University Email: [email protected] 4Advocate Supreme Court Email: [email protected] 5Email: [email protected]
ABSTRACT
The Artificial Intelligence and Robotics have started to guide humans how to plan something or make decisions upon an issue or problem. A debate is under process for last many years about fixing responsibility, liability and to some extent culpability for these human partners. Although status of a legal person has been recognized in many jurisdictions, the apportionment of the liability is still undecided. A legal person is always represented through a natural person to decide the extent of its involvement in a criminal matter or civil liability but AI robotics are independent of natural person due to their precision capability. A human can be the owner of an AI device but cannot be held responsible for its behavior like pets and tamed animals. The exploration of criminal law principles reveals an understanding of legal concepts, drawing from both Western and Islamic perspectives. Beginning with the actus reus, mens rea, and negligence, the discussion probes into intricacies of intent, recklessness, and blameless inadvertence. Strict liability offenses have been examined, emphasizing the significance of external standards and the distinction between Western and Islamic legal frameworks. The concept of vicarious liability sheds light on cases where individuals may be held responsible for the actions of others, resembling the tort doctrine of respondeat superior. Accomplice liability at common law introduces distinctions among principals, abettors, and accessories, offering insight into the intricacies of criminal participation. Inchoate offenses, including incitement, conspiracy, and attempt, showcase the complexities of criminal liability before the completion of an intended act. Since Islamic Laws are now being equally recognized globally, here common elements of a crime and criminal laws have also been discussed so that a way forward for criminal liability upon latest and newly
joining AI devices could be established.
Keywords: Criminal liability, Elements of crime, Legality of AI devices, Fixing of culpability
1. THE DEFINITION OF CRIME: IN LITERARY TERMS
1.1 The endeavor to precisely define a crime encounters inherent challenges within the dynamic landscape of criminal law. The aspiration to categorize an act (or omission) as definitively criminal faces practical complexities, leading to a shift from rigid definitions to a focus on describing the characteristics of a crime. It is for this reason that definitions have become unfashionable in the law these days. What writers normally do is to describe the characteristics of a crime.1 This transition is exemplified in legal codes such as Section 40 of the Pakistan Penal Code, which designates "offence" as something punishable under the Code, employing the term "thing" rather than specifying an "act" or "omission.". Similarly, Section 4(O) of the Criminal Procedure Code defines "offence" as "any act or omission made punishable by any law for the time being in force." Notably, both codes utilize the term "offence" rather than "crime," prompting a nuanced exploration of the correlation between criminal proceedings and the broader concept of crime. In the era of artificial intelligence (AI) and robotics, the concept of "offence" gains new dimensions. The characteristics of criminal behavior now extend to actions facilitated or committed by AI systems and robotic entities, challenging traditional understandings.
1.2. The question arises: Does every act or omission leading to criminal proceedings inherently qualify as a crime? This query underscores the complexity of legal semantics, challenging the accuracy of characterizing all criminal proceedings as intended for the punishment of c1rimes. "It is not accurate to describe "criminal proceeding" as one for the punishment of crime; for it is also employed for the punishment of some offences not regarded as crimes in the ordinary meaning of the term. Most criminal justice systems developed initially to deal with crimes of violence, and at a later stage for crimes against the security of the state. In more recent times criminal proceedings have been increasingly employed to deal with commercial crimes, and departing from its original uses, with offence against regulatory laws. The latter acts or omissions are not inherently anti-social and usually do not require a criminal intent."2 In the realm of AI and robotics, the expanding scope of regulatory offenses gains prominence. Legal frameworks must adapt to encompass offenses related to algorithmic decision-making, data privacy, and ethical considerations in the use of emerging
1 Vernon Fox, Introduction to Criminology (NJ: Prentice-Hall Inc, (1976), 27.
2 L. B. Curzon, Criminal Law 2ed (Plymouth: M & E Handbooks, (1977), 7.
technologies.
1.3. Crimes, by definition, constitute wrongs that inflict sufficient harm on the public to necessitate the application of criminal procedures for their resolution. While some acts manifestly pose an evident threat to the public, leading to a consensus on their criminal nature, others provoke diverse opinions. In essence, a crime can be characterized as an act that violates public rights, prompting the initiation of criminal proceedings and subsequent punitive measures. However, when these defining criteria are extended to the realm of Artificial Intelligence (AI) and Robotics, notable differences emerge. AI and Robotics, unlike recognized members of society, pose a unique challenge. They do not conform to the conventional application of general principles derived from behavioral sciences. Additionally, their status as entities with de jure personality is still under deliberation. Unlike natural persons, the recognition of AI instruments and Robotics as distinct legal entities remains an ongoing discourse, introducing complexities into the application of traditional legal frameworks to these technological entities.
2. CRIMINAL LIABILITY AND ELEMENTS OF A CRIME
2.1. Criminal liability is based on the principle that a man is not criminally liable for his conduct unless the prescribed state of mind is also present. This principle is frequently stated in the form of a Latin maxim: Actus non facit reum nisi mens sit rea. This means "An act does not make a man guilty of a crime, unless his mind be also guilty." The principle is not understood in the same meaning as it was used in medieval times; it has been defined and redefined. A famous writer says: "The full implications of the requirements of mens rea cannot be appreciated until not only the chapter and the entire book but many others have been studied. Culpability under Western legal systems, today, is founded upon certain basic premises that are more or less strictly observed by legislatures when formulating the substantive law of crimes. However, with the integration of AI and robotics, the notion of actus reus expands to encompass algorithmic actions and automated decisions. It is important to note that the burden of proving guilt beyond reasonable doubt rests upon the prosecution. There may be cases where Mens Rea ('Amd or Qasd e Jinayat) is not relevant. These are usually cases of strict liability and will be discussed under that topic. The courts keep these basic premises in view during trial. The same rules are followed in Pakistan. Consequently, the prosecution is generally required to prove the following elements of a criminal offence:
2.1.1. Actus Reus (guilty act): A physical act (or unlawful omission) by the defendant; in reality, a collection of elements other than the mental element, collectively referred to as the actus reus. The actus reus, traditionally rooted in
physical acts or omissions, takes on new dimensions in the age of AI. Algorithmic decisions and automated processes generate consequences, raising questions about the culpability of these non-human actors. Legal frameworks must adapt to define and address the actus reus in the context of AI-driven activities.
2.1.2. Mens Rea (guilty mind): The state of mind or intent of defendant at the time of his act. Defining the mens rea in AI involves examining issues of algorithmic intent, foreseeability of outcomes, and the ethical dimensions of automated decision-making. Legal scholars grapple with the concept of a "guilty mind" when the decision-maker is an algorithm rather than a human.
2.1.3. Concurrence: The physical act and the mental state should exist at the same time. In AI and robotics, establishing concurrence involves aligning algorithmic actions with the intended outcomes and the foreseeability of consequences. The integration of AI and robotics introduce complexities in establishing concurrence. Legal frameworks must grapple with the challenge of ensuring that algorithmic decisions align with the prescribed mens rea and occur concurrently, reflecting the dynamic nature of automated processes.
2.2. The Nature of the Elements: The attempt to extract elements common to all crimes has its limitations and problems. The reader should realize that the elements are most helpful when they are used to study specific crimes, but are likely to lead the discussion into details of exceptions and provisos when studied in an abstract all-embracing sense. The following points should be kept in mind while studying the elements of crime:
2.2.1. The analysis into actus reus and mens rea is for convenience of exposition only. The only concept known to the law is the crime and the crime exists when the actus reus and mens rea coincide. Once it is decided that an element is an ingredient of an offence, there is no legal significance in the classification of it as part of the actus reus or the mens rea. It is, perhaps, for this reason that Muslim jurists did not discuss the elements in an abstract form. They did, however, discuss the elements within specific crimes. Modern writers use terms like Rukn Madi and Qasd e jinayat to describe the elements. There is nothing Islamic about these terms; they have been borrowed from the Egyptian law. The word usually used by Muslim jurists for criminal intention is 'Amd, (deliberate) which is found in Qur'an.
2.2.2. In law, the bringing about of the actus reus implies no judgment as to its moral or legal quality. In Islamic law, on the other hand, there is a definite judgment as to the moral quality of an offence involving criminal intention and is
tied in with expiation (kaffarah).
2.2.3. It is not always possible to separate the actus reus from mens rea. Sometimes a word which describes the actus reus implies a mental element. There are many offences of "possession" of proscribed objects and it has always been recognized that possession consists in a mental as well as a physical element. The same is true of words like "permits," "appropriates," "cultivates" and many more. The significance of this is that any mental element which is part of the actus reus is necessarily a part of the offence.
2.2.4. It is extremely difficult to draw out the elements that are common to all offences.
3. PHYSICAL ACT: ACTUS REUS 3.1. The meaning of Actus Reus: A crime must have an actus reus, i.e. the act defined and prohibited by law and declared an offence. For this purpose, an act is defined as a bodily movement. A thought is not an act. Therefore, bad thoughts alone cannot constitute a crime. Note, however, that speech, unlike thought, is an act that can cause liability, e.g. perjury. The actus reus includes 'all the elements in the definition of the crime, except the accused's mental element'. It follows; therefore, that the actus reus is not merely an act. Further, it may consist in a "state of affairs," not including an act at all. Perhaps, it is for this reason that definition of "offence" in Section 40 of Pakistan Penal Code (PPC) includes a "thing." Traditionally, actus reus involves conscious voluntary bodily movements. In the era of AI, defining conduct extends beyond human actions to include algorithmic decisions. Here a question arises: Can the output of an AI system be considered a conscious, voluntary bodily movement? Legal scholars grapple with the implications of automated conduct and its relevance to criminal liability. The actus reus itself may be said to consist of the following elements:
3.1.1. Conduct: Conscious voluntary bodily movement. The actus reus requires proof of an act. An act may include, in some cases, an omission (conduct) or failure to act when the law imposes such a duty. Section 32 PPC says that "in every part of this Code, except when a contrary intention appears from the context, words which refer to acts done extend also to illegal omissions." Section33 says: "The word 'act' denotes as well a series of acts as a single act: the word 'omission' denotes as well a series of omissions as a single omission". 3 With AI systems increasingly making decisions autonomously, the legal community explores the concept of "conduct" in the context of algorithms. Understanding whether an AI's output qualifies as a conscious, voluntary bodily movement becomes crucial for
3 A M Macdonald, Chambers Twentieth Century Dictionary (Edinburgh: Chambers, 1979), 1199.
determining criminal liability.
3.1.2. The consequences of the act: The actus reus must lead to the prohibited result. For example, if D hurls a stone, being reckless whether he injures anyone, he is guilty of a crime if the stone strikes P but of no offence if by chance no one is injured. Nevertheless, some acts may not have a result, and the act itself constitutes the crime, as in perjury. It is an offence as soon as the statement is made. It is to be noted that it is the conduct and not the result that is the actus reus. A dead man with a knife in his back is not the actus reus of murder. It is putting the knife in the back thereby causing the death which is the actus reus. Legal scholars examine the implications of AI-generated outcomes. Determining criminal liability necessitates understanding how algorithmic decisions contribute to, or mitigate, the prohibited result. The legal community debates the extent to which an AI system should be held responsible for its consequences.
3.1.3. Circumstances in which the act takes place: The circumstances may constitute a "state of affairs" that make the act of the accused unlawful. For example, some situations are contemplated under Section 144 PPC. The presence of the accused is an act, while the existence of the unlawful assembly is the state of affairs. Sometimes a particular state of mind on the part of the victim may be required. Defining the circumstances in AI acts involves scrutinizing the digital environment. Questions emerge about whether the algorithm's functioning aligns with legal standards. Understanding the state of affairs in algorithmic decisions becomes pivotal for legal determinations.
3.2. "The actus reus is made up, generally, but not invariably, of conduct and sometimes its consequences and also of the circumstances in which the conduct takes place (or which constitute the state of affairs) in so far as they are relevant. It is to be noted that circumstances as well as consequences are relevant if they are included in the definition of the crime. It also shows that it is only by looking at the definition of the crime that we can see what circumstances are, material to the actus reus. What is usually not mentioned in the definition are circumstances which, if they 'exist, amount in law to a justification or excuse, and in such a case no crime is committed. There is no actus reus. In such cases! the doctrines have to be added to the definition, as emphasized earlier.
3.3. Rules for the actus reus: Criminal liability operates on the principle of "actus non facit reum nisi mens sit rea," emphasizing that a person is not criminally liable solely for their actions but also requires a guilty state of mind. In contemporary legal systems, the burden of proving guilt beyond reasonable doubt lies with the prosecution.
3.3.1. The actus reus must be proved: It must be shown by the prosecution that an actus reus does in fact exist. It is important to note that Mens rea may exist without an actus reus, but if there is no actus reus there can be no crime.
3.3.2. The Act Must Be Voluntary: The defendant's act must be voluntary in the sense that it must be a conscious exercise of the will. The rationale for this rule is that an involuntary act will not be deterred by punishment. The following acts are not considered "voluntary" and therefore cannot, be the basis for criminal liability.
3.3.3. However, with the integration of AI and robotics, the notion of actus reus expands to encompass algorithmic actions and automated decisions. Actus reus comprises conduct, consequences, and circumstances, shaping the elements of criminal offenses. The integration of AI and robotics necessitates a holistic reevaluation of these components. Legal scholars advocate for a nuanced understanding of algorithmic conduct, consequences, and circumstances in the evolving landscape of criminal liability.
3.4. Recklessness: In many offences' recklessness, either as to the consequences required for the actus reus or as to the requisite circumstances of it or as to some other risk, suffices for criminal liability as opposed to some other mental state such as intention.16 Since 1981 (after the decision in Caldwell), in England recklessness is understood in two different legal senses: subjective recklessness and Caldwel recklessness. Both are concerned with the taking of unjustified risk. Subjective recklessness is the conscious taking of an unjustified risk. In such a case, the defendant forsees that the consequence in question may result and it is unreasonable for him to take the risk of it occurring. The Caldwell recklessness occurs when a person is reckless even when he has not given any thought to the possibility of there being any such risk. The concept of recklessness in criminal liability takes on new dimensions with the integration of artificial intelligence (AI). As AI systems operate with varying degrees of autonomy, understanding how recklessness applies becomes a critical facet of legal discourse.4 Legal frameworks must evolve to explicitly address AI recklessness. Establishing parameters for AI's conscious risk-taking and unconscious recklessness becomes essential for holding both AI creators and deployers accountable. Given the evolving nature of AI, legal standards for recklessness might need continuous refinement. The law must adapt to the dynamic landscape of AI capabilities and potential risks.
3.4.1. Algorithmic Consequence Anticipation: Recklessness in AI corresponds to
4 Alexander, L., Artificial Intelligence, Predictive Policing, and the Challenges of Legality, (Criminal Law and Philosophy, 2018).
the system's ability to anticipate consequences. Subjective recklessness aligns with AI's conscious assessment of unjustified risks, where the system foresees potential outcomes and knowingly takes those risks.
3.4.2. Subjective Recklessness in AI: For AI, subjective recklessness involves a conscious decision-making process. The AI system, akin to a human actor, evaluates risks, foresees consequences, and proceeds despite the unreasonable nature of the risk.
3.4.3. Unconscious Risk-Taking: Caldwell recklessness in AI introduces a distinctive challenge. Unlike human recklessness, where a person consciously takes risks, AI might engage in risk-taking without any thoughtful consideration. This form of recklessness raises questions about accountability and the ethics of automated decision-making.
3.4.4 AI's Unconscious Calculations: Caldwell recklessness in AI implies that the system, devoid of conscious thought, engages in actions that carry inherent risks. The lack of deliberate consideration complicates the attribution of culpability, as AI operates more on automated responses than conscious decision-making.
3.4.5. Programming Ethical Considerations: Recklessness in AI amplifies the ethical responsibility of those involved in programming and deploying these systems. Developers must navigate the fine line between allowing AI to make calculated decisions and avoiding unconsciously reckless behaviors.
3.4.6. Algorithmic Transparency: Addressing recklessness in AI necessitates transparency in algorithms. Understanding how AI reaches decisions, whether consciously or Caldwell recklessly, becomes imperative for legal and ethical evaluations.
3.5. The act should be causative: If the definition of an actus reus requires the occurrence of certain consequences it is necessary to prove that it was the conduct of the accused that caused those consequences to occur. In qatl-i- amd or qatl-i-khata,. it is necessary to prove that the act caused the death. If the death came about solely through some other cause then the crime is not committed. Causation is called sababiyah in Arabic. 5
3.6. A "State of Affairs" as an actus reus: The definition of a crime may be formulated in such a way that it can be committed although there is no "act." There is no need for a "willed muscular movement," and it may be enough if a
5 Pradeep Kumar, introduction to Philosophy of Crime (New Delhi: Deep & Deep Publications, 2004), xii.
specified "state of affairs" is proved to exist. These offences are sometimes called "status" or "situation" offences. Sections 400 & 401 of the PPC belonging to a gang of dacoits or a gang of thieves may be treated as examples of status offences.6
3.7. Omission as an "Act": Although most crimes are committed by affirmative action rather than by non-action, a defendant's failure to act will result in criminal liability provided three requirements are satisfied. In contemplating criminal liability, the concept of omission gains new dimensions in the age of artificial intelligence. While traditionally associated with human actions, the integration of AI into various spheres prompts a reevaluation of legal duties concerning omission.7
3.7.1. There is a legal duty to act. The defendant must have a legal duty to act under the circumstances. A legal duty to act can arise from the following sources.
3.7.1.1 A statute, such as that of filing an income tax return or of reporting an accident.
3.7.1.2 A contract obligating the defendant to act, such as that entered into by a lifeguard or a nurse.
3.7.1.3 The relationship between the defendant and the victim, which may be sufficiently close to create a duty. A parent has the duty to prevent physical harm to his or her children. A spouse has the duty to prevent harm to his or her spouse.
3.7.1.4 The voluntary assumption of care by the defendant of the victim. Although in general there is no common law duty to help someone in distress, once aid is rendered, the good Samaritan may be held criminally liable for not satisfying a reasonable standard of care.
3.7.1.5 The creation of peril by defendant. Believing the B can swim, A pushes B into a pool. It becomes apparent that B cannot swim, but A takes no steps to help B. B drowns. Was A's failure to attempt a rescue an "act" upon which liability can be based? Yes.
4. AI AUTOMATION 4.1. Most of the cases falling under the voluntary act concern automatism. A person is an automaton when he has no control over his muscular movements, Automatism is divided into different types by writers: automatism due to insanity; automatism due to other reasons; and self- induced automatism. When automatism results from self-induced intoxication or by the defendant's fault it will apparently be no defense. The plea of insanity, on the other hand, is avoided by defendants as it usually results in going to a mental institution at the pleasure of authorities. However, where the alleged automatism arises from a "disease of the mind," the
6 C.M.V. Clarkson, Understanding Criminal Law 3ed (London Sweet & Maxwell, 2001), 45.
7 Solum, L. B., Artificial Intelligence and the End of Work, (Network Propagation, 2017).
defense is one of insanity and the onus of proof is on the accused. Where it arises from some cause other than disease of the mind, the onus of proof is on the prosecution. What is a disease of mind is a question of law.8 Automatism has narrow limits as a defense. It is to be confined to acts done while unconscious and to spasms, reflex actions and convulsions. The common law never recognized "irresistible impulse" as defense even when arose from insanity (in England, irresistible craving for drink is not defense to charge of stealing alcohol). 9 The integration of artificial intelligence (AI) into various spheres of human activity raises novel considerations regarding automatism as a defense in criminal law. Automatism, traditionally associated with human involuntary actions, takes on new dimensions in the context of AI-driven systems.10
4.1.1. Types of AI Automatism: AI systems, functioning as autonomous entities, may exhibit forms of automatism analogous to human involuntary movements. Classifications include AI automatism due to system malfunctions, algorithmic errors, or unforeseen circumstances.
4.1.2. Responsibility for AI-Induced Automatism: Determining responsibility for AI-induced automatism introduces complexities. Questions arise regarding whether the fault lies with the AI developer, the deployer, or the AI system itself. Legal frameworks must evolve to encompass the unique challenges posed by AI-driven actions.
4.1.3. Self-Induced Automatism in Programming: Instances where automatism results from self-induced intoxication or programming errors introduce legal ambiguities. Addressing responsibility requires discerning whether the automatonlike behavior was an unintended consequence or a result of negligent programming.
4.2. Involuntariness does not arise from automatism: It may happen that a person may have full control over his body and muscular movements, yet he may have no control over the events he is passing through. A driver's brakes fail without his fault and he runs over a pedestrian on a crossing. Although it is said that this offence is absolute requiring no offence of negligence, it was held in Burns v. Bidder (1967) 2 QB 227 that such a driver has a defense. The court equated the driver's situation with that of one stunned by a swarm of bees, disabled by epilepsy, or propelled by a vehicle hitting his car from behind. This shows that
8 A S Hornby, Oxford Advanced Learner's Dictionary (Oxford: Oxford University Press, 1974), 800., see also Bryan A Garner, Black's Law Dictionary 7ed (St Paul: West Group, 1999), 299.
9 L. B. Curzon, Criminal Law 2ed (Plymouth: M & E Handbooks, 1977), 52.
10 Doe, J., & Smith, A., AI-Induced Automatism: Legal Implications in Criminal Law, Journal of Technology and Law, 10(3), (2022) 245-264.
voluntariness is essential even in so-called crimes of absolute liability. The general rule should be that defendant is not criminally liable for events over which he has no control.11
4.2.1. AI-Induced Involuntariness: In scenarios where AI systems operate involuntarily due to technical malfunctions or external interferences, legal frameworks must account for the lack of control by human actors. The principle of voluntariness should extend to encompass events over which humans have no control.
4.2.2. Legal Rule for AI-Driven Absolute Liability: Addressing cases where AI systems cause harm without human fault demands a legal rule. Drawing parallels to situations where humans are stunned or disabled, the law should recognize that individuals may not be criminally liable for events beyond their control.
4.6.2. AI and Legal Duties:
a. Algorithmic Responsibilities: AI's actions, including omissions, are guided by algorithms. Legal duties in the realm of AI are framed by the programming and directives embedded within these algorithms. Understanding and defining these algorithmic responsibilities become crucial.
b. Legal Duty in Coding: The legal duty to act may emerge from the very code governing AI behavior. Programmers and designers bear a responsibility to instill in AI the capacity to recognize situations demanding action and respond accordingly.
c. Dynamic Duty Determination: Unlike human-centric duty assessment, AI's duty evolves dynamically based on its learning and real-time data analysis. The duty to act might be contingent on the system's knowledge, continuously adapting to emerging circumstances.
4.6.3. There is a knowledge of facts giving rise to duty. As a general rule, the duty to act arises when the defendant is aware of the facts creating the duty to act (e.g., the parent must know that his child is drowning before his failure to rescue the child will make him liable). However, in some situations the law will impose a duty to learn the facts (e.g. a lifeguard asleep at his post would still have a legal duty to aid a drowning swimmer). AI's Knowledge and Duty:
a. Algorithmic Awareness: The duty to act in AI corresponds to its algorithmic awareness of situations. This raises questions about AI's capacity to comprehend facts, learn dynamically, and respond appropriately, mirroring the human cognitive processes tied to legal duties.
11 Legal Principles in Burns v Bidder (1967) 2 QB 227.
b. Learning Obligations: If AI is to fulfill legal duties, it must not only be aware of preset conditions but also possess the ability to learn and update its understanding. This implicates the role of continuous learning algorithms and adaptability in AI systems.
4.6.4. It is reasonably possible to perform the duty. It must be reasonably possible for the defendant to perform the duty or to obtain the help of others in performing it. A parent who is unable to swim is under no duty to jump in the water to attempt to save his drowning child. Feasibility of AI Duty:
a. Technological Constraints: The feasibility of AI performing a duty hinges on technological capacities. Factors such as real-time data availability, processing capabilities, and external dependencies shape the practicality of AI discharging its legal duties.
b. Human Oversight: Given the evolving nature of AI, human oversight becomes pivotal. Humans are tasked not only with setting initial parameters but also with ensuring AI's ongoing compliance with legal duties, much like the duty of care in human-centric scenarios.
4.7. Possession as an "Act": Criminal statutes that penalize the possession of contraband generally require only that the defendant have control of the item for a long enough period to have had an opportunity to terminate the possession. The defendant must be aware of his possession of the object but need not be aware of its illegality. In the context of criminal liability involving possession, the advent of artificial entities, such as AI and robotics, introduces novel considerations:12
4.7.1. Algorithmic Possession: For AI, possession equates to control over data, algorithms, or tangible objects. Understanding AI's possession involves assessing its control mechanisms, including the duration and extent of influence over relevant elements.
4.7.2. Awareness in AI Possession: While awareness is a crucial element, AI may lack consciousness. The focus shifts to whether AI is designed to recognize its "possession" in terms of data or physical items. Unlike humans, AI's awareness is intrinsic to its programming and not a subjective experience.
4.7.3. AI's Knowledge of Illegality: Unlike human possessors who must be aware of the illegality of their possession, AI's awareness centers on programmed parameters. The determination of AI's culpability considers whether it was
12 Balkin, J. M.. Digital Speech and Democratic Culture: A Theory of Freedom of Expression for the Information Society, (NYU Press, 2016).
designed to recognize the legal status of its possessions.
4.7.4. Human Oversight: Legal responsibility for AI possession may extend to those overseeing its programming and functions. If an AI's possession leads to criminal implications, human actors involved in its design and deployment may be held accountable.
5. GUILTY MIND: MENS REA
Mens rea is a technical term, and its translation as a "guilty mind" is considered misleading. Technically, the possible mental attitudes a man may have with respect to the actus reus of a crime are: intention; recklessness; negligence; and blameless inadvertence. We shall examine these terms briefly. Mens rea, often translated as a "guilty mind," encompasses the mental attitudes individuals may have concerning the actus reus of a crime. In the realm of artificial intelligence (AI), understanding and attributing mens rea becomes a complex endeavor, given the unique nature of non-human entities.
5.1 Intention: Where the definition of the actus reus of the offence charged requires the accused's conduct to produce a particular consequence he has a sufficient mental state as to that consequence if he intended it to occur.13 In traditional legal contexts, intention involves a decision to bring about a particular consequence. However, when AI is the actor, intentionality takes on a distinctive character. AI systems operate based on programmed objectives. Determining intentionality involves scrutinizing these objectives and discerning whether the programmed goals align with the consequences.
5.1.1. Direct and oblique intention. There are two types of intention with regard to prohibited consequences, "direct" intention and "oblique" intention. The distinction was drawn by Jeremy Bentham. Direct intention has been defined by the courts as "a decision to bring about, insofar as it lies within the accused's power, a particular consequence, no matter whether the accused desired that consequence of his act or not." As compared to this, a consequence of a person's conduct is said to have been obliquely intended by him when, although he had not intended to bring it about, insofar as it lay within his power, it was foreseen by him as a certain or probable side effect of something. Again, foresight is divided into two types: foresight of certainty and foresight of probability. If F, who wishes to collect insurance on an air cargo, puts a time bomb on the aircraft to blow it up in flight, realizing that it is certain that those on board will be killed by the explosion, he acts with an oblique intention in a legal sense with a foresight of
13 C.M.V. Clarkson, Understanding Criminal Law 3ed (London Sweet & Maxwell, 2001), 55. Imran A. K. Nyazee, General Principles of Criminal Law 2ed (Rawalpindi: F L H, 2002), 82-83.
certainty. While AI lacks consciousness, actions resulting in foreseeable consequences may be categorized as oblique intention. For example, if an AI-driven system foresees certain outcomes as probable side effects, it raises questions akin to oblique intention.
5.1.2. Further or ulterior intent. A crime is frequently so defined that the mens rea includes an intention to produce some further consequence beyond the actus reus of the crime in question. Section 382 of the PPC which provides punishment for "theft after preparation made for causing death, hurt or restraint in order to the committing of the theft," is an example. The actual commission of the other acts (death, hurt etc.) is no part of the actus reus. Where such an ulterior intent is to be proved, it is sometimes referred to as "specific intent". Cross and Jones have pointed out that this term should be regarded with caution. In the United States, however, it appears to be used frequently by courts, where it may be used in a slightly different sense. This is discussed below.AI systems, designed for specific purposes, might exhibit an "ulterior intent" beyond their primary functions. For instance, an AI program with a dual function might unintentionally contribute to outcomes not explicitly intended during its development.14
5.2. Specific intent and general intent: United States. As pointed out the terms specific intent and general intent, it appears, are used in a slightly different sense as compared to the meaning given to these terms in England. The use is summarized below.
5.2.1. Specific Intent: If the definition of a crime requires not only the doing of an act, but the doing of it with a specific intent or objective, the crime is a "specific intent" crime. It is necessary to identify specific intent for two reasons:
a. Need for Proof. The existence of a specific intent cannot be inferred from the doing of the act. The prosecution must produce evidence tending to prove the existence of the specific intent.
b. Applicability of Certain Defenses. Some defenses, such as voluntary intoxication and unreasonable mistake of fact, apply only to specific intent crimes. In cases where an AI's actions require a specific intent or objective, assessing AI's consciousness of these objectives becomes crucial. The need for proof and applicability of certain defenses in AI align with the principles outlined for human-specific intent crimes.
5.2.2. General intent: Generally, all crimes require "general intent "which is an
14 Simester, A. P., & Sullivan, G. R., Criminal Law: Theory and Doctrine, Oxford University Press, 2007).
awareness of all factors constituting the crime; i.e., defendant must be aware that he is acting in the proscribed way and that any attendant circumstances required by the crime are likely to be present. Thus, to commit the crime of false imprisonment, D must be aware that he is confining a person, and that the confinement has not been specifically authorized by law or validly consented to by the person confined. AI's general intent involves an awareness of all factors constituting an action. This aligns with the idea that AI should be designed with an awareness of its actions and their legal implications.
5.3. Transferred Intent or Transferred Malice. If a defendant intended a harmful result to a particular person or object and, in trying to carry out that intent, caused a similar harmful result to another person or object, his intent will be transferred from the intended person or object to the one actually harmed. Any defenses or mitigating circumstances that the defendant could have asserted against the intended victim (e.g., self-defense provocation) will also be transferred in most cases. The doctrine of transferred intent most commonly applies to homicide, battery, and arson. In the realm of AI and robotics, where intent is not driven by personal motives, transferred intent takes on a different dimension. The focus shifts from human-centric intentions to the actions and consequences resulting from programmed algorithms.15
5.3.1. Algorithmic Consequences: If an AI, designed to optimize specific outcomes, inadvertently affects different stakeholders, the concept of transferred intent in AI law would involve assessing the alignment of algorithmic actions with intended objectives.
5.3.2. Legal Ramifications: Determining liability for unintended AI outcomes requires a nuanced understanding of transferred impact. Legal defenses applicable to the original design might be considered, emphasizing the need for comprehensive AI governance frameworks.
5.4. Motive and Intention Distinguished - Motive does not affect liability. The
motive for a crime is distinct from the intent to commit it. A motive is the reason or explanation underlying the offence. It is generally held that motive is immaterial to substantive criminal law. A good motive will not excuse a criminal act. On the other hand, a lawful act done with bad motive will not be punished. An impoverished woman steals so that her hungry children may eat. Despite her noble motive-feeding her children the woman could be held criminally liable for her acts because her intent was to steal. (Note: Hadd cannot be applied in this case.)
15 Solum, L. B. The Metaphor of Standing and the Problem of Self-Defense. Faculty Scholarship Series, Paper 1563(1992).
Sometimes, when we speak of motive, we mean an emotion such as jealousy or greed, and sometimes we mean a species of intention. The reason why it is considered merely a motive is that it is a consequence ulterior to the mens rea and actus reus. AI and robotics, devoid of emotions or ulterior motives, present unique considerations in this context. Understanding motive in AI involves examining the objectives encoded in algorithms. Unlike humans, AI lacks personal motives but operates based on programmed goals. For example, an AI designed to optimize financial transactions may engage in activities perceived as malicious due to misalignment with human values.16 In AI-related scenarios:
5.4.1. Algorithmic Bias: Motive gains relevance when addressing algorithmic bias. If an AI system exhibits bias due to its training data, assessing the motive behind biased outcomes becomes essential.
5.4.2. Explain-ability: As AI decisions lack human-like intentions, scrutinizing motives becomes intertwined with the explainability of algorithms, emphasizing transparency in AI processes.
5.5. Where motive is relevant. In some exceptional cases motive is relevant.
5.5.1. In a prosecution for libel, if the civil law defences of fair comment or qualified privilege are available at all, they may be defeated by proof of motive in the sense of spite or ill will.
5.5.2. As evidence, motive is always relevant. Thus, if the prosecution can prove that D had a motive for committing the crime, they may do so since the existence of a motive makes it more likely that D in fact did commit it. Men do not aIways act without a motive.
5.5.3. Motive is important again when the question of punishment is in issue.
When the law allows the judge a discretion in sentencing, he will obviously be more leniently disposed towards the convicted person who acted with a good motive. In other cases, it may help in the commutation of a sentence.
5.6. Basis of mens rea: From the range of mental attitudes discussed above, it is obvious that the most blameworthy state of mind, with respect to an actus reus, is intention. This is followed by recklessness, negligence and blameless inadvertence, in that order. To determine guilt, a line needs to be drawn somewhere within this range. The common law, though not always, drew the line between recklessness
16 Diakopoulos, N., Accountability in Algorithmic Decision Making ". Communications of the ACM, 59(2), (2016) 56-62.
and negligence. The reckless man was liable, the negligent man was not. Basic mens rea would include intention and recklessness with respect to all those circumstances and consequences of the accused's act (or state of affairs) which constitute the actus reus of the crime in question. 17
6. CONCURRENCE OF MENS REA WITH THE ACTUS REUS
6.1. The defendant must have had the intent necessary for the crime at the time he committed the act constituting the crime. In addition, the intent must have actuated the act. 18 A decides to kill B. While driving to the store to purchase a gun for this purpose, A negligently runs over B and kills him. Is A guilty of murder? No, because although at the time A caused B's death he had the intent to do so, this intent did not prompt the act resulting in B's death (i.e., A's poor driving).19 The following rules are important:
6.1.1. The mens rea must coincide in point of time with the act which causes the actus reus. The above example illustrates this.
6.1.2. Where the actus reus is a continuing act, it is sufficient that the defendant has mens rea during its continuance though not at the moment the actus reus is accomplished. D inflicts a wound upon P with the intent to kill him. Then, believing that he has killed P he disposes of the corpse. In fact, P was not killed by the wound but dies as a result of the act of disposal. D has undoubtedly caused the actus reus of murder by the act of disposal, although he did not at that time have mens rea.
6.2. In Islamic crimes, it fixes external standards that do or are likely to convey the inner workings of the mind of the offender.20 The external standards have not been invented by the jurists on their own, but are based on textual evidences. The primary evidences on which these standards are based on traditions from the Holy Prophet (pbuh). From the traditions, the schools of law, especially the Hanfi school, derived a general rule for such external standards. The rule may be stated thus: "Mens rea of murder is found when the offender uses an instrument prepared for killing." This cover all methods and instruments that are primarily intended for killing, like guns, swords, knives, arrows, poison, and lethal weapons of all kinds. The extent, jurists tried to follow texts in determining such standards, and avoiding their own subjective reasoning, is indicated by examples and discussions in fiqh literature.
17 2014 P Cr. L J 989, at page 992.
18 Cross and Jones, An Introduction to Criminal Law 2ed (London: Butterworth & Co, 1949), 34.
12 L. B. Curzon, Criminal Law 2ed (Plymouth: M & E Handbooks, 1977), 27.
20 Imran A. K. Nyazee, General Principles of Criminal Law 2ed(Rawalpindi: F L H, 2002), 101.
6.3. In the evolving landscape of AI and robotics, the application of mens rea becomes intricate:21
6.3.1. Automated Actions and Intent: AI systems, autonomously executing actions based on algorithms, challenge traditional notions of human intent. Establishing mens rea in AI-operated scenarios requires a nuanced understanding of algorithmic decision-making.
6.3.2. Continuous Processes: AI-driven processes often involve continuous and adaptive actions. Determining mens rea during these ongoing processes raises questions about when intent aligns with specific outcomes.
7. NEGLIGENCE AND MENS REA
7.1. A person is negligent if his conduct in relation to a reasonably intelligible risk falls below the standard which would be expected of a reasonable person in the light of that risk. The risk involved in the conduct may concern a consequence of such conduct or a circumstance in relation to which it occurs. It is not proper to describe negligence as mens rea, although writers differ about this issue. This is especially true when mens rea is taken in its literal meaning of a "guilty mind. As the proof of negligence does not need to show "what was going on in the accused's head, it cannot be included in the meaning of mens rea. In the realm of AI and robotics, negligence takes on new dimensions:22
7.1.1. Algorithmic Decision-Making: Negligence may arise from flawed algorithms or insufficiently considered risks in AI decision-making processes.
7.1.2. Human Oversight: Determining negligence becomes complex when AI systems operate semi-autonomously, requiring clear delineation of responsibility and oversight.
7.2. Negligence as non-compliance with an objective standard: Intention, recklessness and negligence all involve a failure to comply with an objective standard of conduct. In intention and even in recklessness a state of mind must be proved. This state of mind requires the judge to discover "what goes on in the mind of the accused: as discussed in the last section in the previous chapter. Negligence, on the other hand, may be conclusively proved by simply showing that D's conduct failed to measure up to an objective standard. It is not necessary to prove that D did not foresee the risk, i.e. he had no knowledge of it or the idea did
21 Samuel, A. L.. Some Studies in Machine Learning Using the Game of Checkers. IBM Journal of Research and Development, 3(3) (1959), 210-229.
22 Ryan, P. M., Artificial Intelligence: A Guide for Thinking Humans. (W. W. Norton & Company, 2020).
not cross his mind. Negligence is conduct which departs from the standard to be expected of a reasonable man. The standard by which negligence is judged may be considered in two forms on the basis of the conduct of the accused:
7.2.1. Where no specialized knowledge was involved in the conduct. For this type of conduct no state of mind need be proved.
7.2.2. Where specialized knowledge is involved. When the accused has specialized knowledge which the ordinary person would not possess, the issue to be settled is whether a reasonable man possessed with that knowledge would have acted as he did. In this case the state of mind becomes relevant. Behavior with a revolver, which is not negligent in the case of an ordinary person with no specialized knowledge might be grossly negligent if committed by a firearms expert. This type of standard may be expected of a public servant who negligently suffers a person to escape from custody.
7.3. Negligence and Recklessness: Here a state of mind is also required to be proved, because of the words "which he knows or has reason to believe to be. This shows that in cases of negligence a higher standard is expected of a person with specialized knowledge and greater foresight. "Before CaldwelI it was possible, in England, to draw a clear distinction between recklessness and negligence. Recklessness was the conscious taking of unjustified risk, negligence the inadvertent taking of an unjustified risk. If D was aware of the risk and decided to take it, he was reckless; if he was unaware of the risk, but ought to have been aware of it, he was negligent. 23 This statement, however, does not apply to all cases. For example, D may wrongly conclude that there is no risk in his act, or that it is so small a risk that it would have been justifiable to take it." This is now hallmark of crime of negligence.
8. STRICT LIABILITY
8.1. A strict liability offence is one that does not require awareness of all of the factors constituting the crime. Generally, the requirement of a state of mind is not abandoned with respect to all elements of the offence, but only with regard to one or some. The major significance of an offence's being a strict liability offence is that certain defenses, such as mistake of fact are not available. Strict liability offences are sometimes referred to as "absolute prohibition" offences, but this term is misleading if not incorrect. For this reason, we have to recall the meaning of actus reus, which is something that is made up of a number of elements. Absolute prohibition implies that mens rea need not be proved for all these
23 Cross and Jones, An Introduction To Criminal Law 2nd ed (London: Butterworth, 1949), 51.
23 C.M.V. Clarkson, Understanding Criminal Law 3rd ed (London: Sweet & Maxwell, 2001), 108.
elements. On the other hand, if an offence has been defined in such a way that mens rea is not required for even a single element of the actus reus, the offence is one of strict liability. The single element that does not require mens rea will usually be one of great I significance. Thus, on a charge of selling meat unfit for human consumption, it may not be necessary to prove that the defendant knew that the meat was unfit for consumption. This does not mean that proof of mens rea will not be required for showing that the defendant indeed intended to sell the meat.
8.2. Recognition of Strict Liability Offences: Strict liability offences, also known as public welfare offences, are generally "regulatory" offences, i.e., offences that are part of a regulatory scheme. They generally involve a relatively low penalty and are not regarded by the community as involving significant moral impropriety. The mere fact, however, that a statute is silent on the question of mental state does not necessarily mean that the offence is a strict liability offence. If no mental state is expressly required by the statute, the courts may still interpret the statute as requiring some mens rea, especially if the statute imposes a severe penalty. In Pakistan, the liability for some strict liability offences is death. The definition of a strict liability offence usually does not include the word "knowingly" Where the word "knowingly" is used mens rea as to all the elements of the actus reus is usually required. The advent of AI and robotics introduces novel challenges to the landscape of strict liability. The primary considerations include:
8.2.1. Autonomous Decision-Making: AI systems, driven by algorithms and machine learning, might act autonomously, posing challenges in ascribing intent or knowledge.
8.2.2. Programming Constraints: The liability associated with strict offences becomes intricate when AI systems operate within predefined programming, potentially lacking the nuanced understanding required for mens rea.
8.3. Why is strict liability imposed? A number of arguments are advanced for and against strict liability. The arguments given in favor are:
8.3.1. The primary function of courts is the prevention of crime and strict liability deals with this most effectively. Criminologists like Barbara Wootton have strongly supported this reason.
8.3.2. Without it guilty people would escape punishment. The argument is advanced that there is neither the time nor the personnel available to litigate the
culpability of each particular infraction. The argument assumes that it is possible to deal with these cases without deciding whether or not the defendant had mens rea, and whether or not he was negligent. 24
8.3.3. It is necessary to impose strict liability in the public interest. In many of the instances in which strict liability has been imposed, the public does need protection against negligence. The greater the degree of social danger, the more likely the imposition of strict liability. Inflation, drugs, road accidents and pollution are constantly brought to our attention as pressing evils, and in each of these cases strict liability is imposed in the interest of the public and the protection of society. Strict liability has, however, been imposed mostly in three types of cases:
(a) Acts which are "not criminal in any real sense" but are prohibited in the public interest under a penalty.
(b) Public nuisances.
(c) Proceedings which though criminal in form are really only a means of enforcing a civil right.
8.3.4 Determining whether AI actions constitute strict liability offenses demands a nuanced understanding of the technology, its programming, and the degree of human intervention. Establishing accountability frameworks that address the unique nature of AI-driven actions in strict liability scenarios is imperative. As AI and robotics become integral to regulatory frameworks, adapting legal perspectives on strict liability offenses requires a delicate balance between technological nuances and traditional legal constructs.
9. VICARIOUS LIABILITY OFFENCES
9.1. A vicarious liability offence is one in which a person without personal fault may nevertheless be held vicariously liable for the criminal conduct of another (usually an employee). The criminal law doctrine. of vicarious liability is, analogous to the tort doctrine of respondeat superior. In Islamic law, cases of vicarious liability may be seen in the case of the 'abd ma'dhan (the authorised slave) when he transgresses his authority and commits unlawful acts. Strict liability dispenses with the mens rea requirement, but retains the requirement that the defendant have personally engaged in the necessary acts or omissions. Vicarious liability dispenses with the personal actus reus requirement, but retains the requirement that the defendant have personally engaged in the necessary acts or omissions and the need for mental fault on the part of the employee. In the
24 Crocker, "Concepts of Culpability and Death worthiness," in Black's Law Dictionary, ed. Bryan A Garner, 7ed (St Paul: West Group, 1999), 385.
following cases, the vicarious liability would not be applicable for AI devices and Robotics, but it shall be applied on the person programmed it or fed data to get culpable results.
9.2. Accomplice Liability: The common law distinguished four types of parties mentioned below to a felony, but AI devices and Robotics cannot commit a felony unless the data is fed by a person or entities:
9.2.1. Principals in the first degree (persons who actually engage in the act or omission that constitutes the criminal offence);
9.2.2. Principals in the second degree (persons who aid, command, or encourage the principal and are present at the crime);
9.2.3. Accessories before the fact (persons who aid, abet or encourage the principal but are not present at the crime);
9.2.4. Accessories after the fact (persons who assist after the crime).
9.3. Role of AI/Robotics in Accomplice Liability: In traditional criminal scenarios, AI devices and robotics themselves cannot commit a felony. Their involvement arises when their functionalities are utilized or manipulated by individuals or entities. Therefore, the focus shifts to the role of humans in utilizing AI for criminal activities. AI systems, by design, lack autonomy and intent. However, individuals programming or employing these systems for criminal purposes bear responsibility. The actions of AI devices are a reflection of their programming. Issues may arise when AI systems are intentionally programmed to facilitate or engage in criminal acts.
9.4. Abettor Liability: An abettor under the four common law categories would be an accessory before the fact. In English law, however, fine distinctions are drawn between "aiding," "abetting," "encouraging," and "procuring." The general term used for all these categories is "accomplice," which coincides with an accessory before the fact who is not present at the crime. The Pakistan Penal Code appears to use the term "abettor," in a very wide sense as it includes the common law offences of instigation or incitement as well as conspiracy, besides "aiding" in the sense of the English term "accomplice. The distinction gets blurred though when one examines certain cases falling under instigation and conspiracy. In addition, the meaning of the words "at the time of commission of an act" in Explanation 2 to Section 107 would imply that the abettor is present at the crime. The main issue would be of the offence under which the offender is charged. The
advent of AI and robotics introduces a novel dimension to abettor liability. In scenarios where AI systems play a role in influencing, instigating, or aiding criminal activities, the determination of abettor liability becomes intricate. AI, operating through algorithms and learning mechanisms, might contribute to criminal conduct without direct human presence at the crime scene.
9.5. Accessory after the fact: An accessory after the fact is one who receives, relieves, comforts, or assists another knowing that he has committed a felony, in order to help the felon escape arrest, trial, or conviction. The crime committed by the principal must be a felony and it must be completed at the time the aid is rendered. Today, the crime is usually called "harbouring an offender," "aiding escape," or "obstructing justice." Sections 212-216-A, along with others, are examples of such offences (under the chapter "of false evidence and offences against Justice") 25
10. INCHOATE OFFENCES
10.1. The word inchoate does not imply that the offences are incomplete. An inchoate offence is committed prior to and in preparation for I what may be a more serious offence. It is a complete offence in itself, even though the act to be done may not have been completed. 26 At common law there were three inchoate offences: incitement, conspiracy and attempt. As indicated above, the offence of instigation or incitement has been made part of the generic offence of abetment. Abetment through instigation being a complete offence in itself, the Pakistan Penal Code does not define it separately. As against this, the offence of abetment by way of conspiracy is not a complete offence and is dependent on the occurrence of the offence "in pursuance of the conspiracy."
10.2. In 1913, a need was felt to bring this offence in line with the common law concept of a complete offence and new sections were added, as explained above. Attempt was also a complete offence at common law.27 The Pakistan Penal Code does not define attempt in the abstract sense, and its meaning has been left to be understood within the crimes where attempt has been made punishable. Section 511, the last section in the code, makes attempt punishable in a general way.
10.3. AI and Inchoate Offences: The commission of inchoate offences by AI devices introduces complexities. The intent of programming, algorithms, or the actions of those deploying AI can lead to criminal consequences, even if the
25 Torcia, "Wharton's Criminal Law," in Black's Law Dictionary, ed. Bryan A Garner, 7ed (St Paul: West Group, 1999), 16.
15 Imran A. K. Nyazee, General Principles of Criminal Law 2ed (Rawalpindi: F L H, 2002), 117-118.
26 Bryan A Garner, Black's Law Dictionary 7ed (St Paul: West Group, 1999), 814.
27 Ibid, 283.
intended harm is not fully realized.
11. CONCLUSION
Legal systems, initially developed to address crimes against individuals and the state, now navigate a diverse landscape of offenses. In the context of AI and robotics, this evolution becomes particularly pertinent, demanding an ethical and legal framework that adapts to the complexities of emerging technologies. As AI continues to evolve, legal frameworks must adapt to address the intricate challenges posed by automated systems in the realm of criminal law. In the evolving landscape of criminal law, the emphasis on characteristics rather than rigid definitions marks a pragmatic approach. Legal codes, such as those in Pakistan, navigate this shift by employing terms like "offence," signaling a commitment to inclusivity and adaptability. This nuanced exploration ensures that legal frameworks remain relevant in addressing the complexities of modern criminal behavior, including those involving AI and robotics. Since, AI and robotics become integral to regulatory frameworks, adapting legal perspectives on strict liability offenses requires a delicate balance between technological innovations and traditional legal constructs.
BIBLIOGRAPHY
[1] A M Macdonald, Chambers Twentieth Century Dictionary (Edinburgh: Chambers, 1979).
[2] A S Hornby, Oxford Advanced Learner's Dictionary (Oxford: Oxford University Press, 1974), 800.,
[3] Alexander, L., Artificial Intelligence, Predictive Policing, and the Challenges of Legality, (Criminal Law and Philosophy, 2018).
[4] Balkin, J. M.. Digital Speech and Democratic Culture: A Theory of Freedom of Expression for the Information Society, (NYU Press, 2016).
[5] Bryan A Garner, Black's Law Dictionary 7th ed (St Paul: West Group, 1999).
[6] C.M.V. Clarkson, Understanding Criminal Law 3rd ed (London: Sweet & Maxwell, 2001).
[7] Crocker, "Concepts of Culpability and Death worthiness," in Black's Law Dictionary, ed. Bryan A Garner, 7ed (St Paul: West Group, 1999).
[8] Cross and Jones, An Introduction to Criminal Law 2nd ed (London: Butterworth, 1949).
[9] Diakopoulos, N., Accountability in Algorithmic Decision Making". Communications of the ACM, 59(2), (2016).
[10] Doe, J., & Smith, A., AI-Induced Automatism: Legal Implications in Criminal Law, Journal of Technology and Law, 10(3), (2022).
[11] Imran A. K. Nyazee, General Principles of Criminal Law 2nd ed (Rawalpindi: F L H, 2002).
[12] L. B. Curzon, Criminal Law 2nd ed (Plymouth: M & E Handbooks, (1977).
[13] Pradeep Kumar, introduction to Philosophy of Crime (New Delhi: Deep & Deep Publications, 2004).
[14] Ryan, P. M., Artificial Intelligence: A Guide for Thinking Humans. (W. W. Norton & Company, 2020).
[15] Samuel, A. L.. Some Studies in Machine Learning Using the Game of Checkers. IBM Journal of Research and Development, 3(3) (1959).
[16] Simester, A. P., & Sullivan, G. R., Criminal Law: Theory and Doctrine, Oxford University Press, 2007).
[17] Solum, L. B. The Metaphor of Standing and the Problem of Self-Defense. Faculty Scholarship Series, Paper 1563(1992).
[18] Solum, L. B., Artificial Intelligence and the End of Work, (Network Propagation, 2017).
[19] Torcia, "Wharton's Criminal Law," in Black's Law Dictionary.
[20] Vernon Fox, Introduction to Criminology (NJ: Prentice-Hall Inc, (1976).
Court Cases
[1] Burns v Bidder (1967) 2 QB 227 (Legal Principles).
[2] 2014 P Cr. L J 989, at page 992.
Law
[1] Pakistan Penal Code, 1860
[2] Criminal Procedure Code, 1898