Part I
Alarie, Benjamin, et al. “How Artificial Intelligence Will Affect the Practice of Law.” The
University of Toronto Law Journal, vol. 68, no. 1, 2018, pp. 106–24, https://doi.org/10.3138/utlj.2017-0052.
This article focuses on the use of artificial intelligence as a predictive tool for assessing the merits of a legal case. Although AI cannot provide legal advice the way a lawyer can, it can survey the many judicial decisions relevant to a case, providing lawyers with well-informed and accurate predictions of the outcomes of legal strategies. The authors also argue that even though current AI is effective at some legal tasks, it is unclear how impactful future AI will be in the field: many revolutionary technological developments did not serve as simple substitutes for existing processes but instead led to the creation of new ones. The strength of this text is its thorough, focused analysis of AI and legal strategy prediction, and it makes good use of scholarly articles in the legal-AI discourse as evidence. I did not encounter significant weaknesses.
Aristotle. On Rhetoric: A Theory of Civic Discourse. Translated by George A. Kennedy, Oxford
University Press, 1991.
“Artificial Intelligence (AI).” U.S. Department of State, 21 June 2023,
www.state.gov/artificial-intelligence/#:~:text=“The%20term%20%27artificial%20intelligence%27,influencing%20real%20or%20virtual%20environments.”.
Ashley, Kevin D. Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the
Digital Age. Cambridge University Press, 2017.
Bryson, Joanna J. “Patiency Is Not a Virtue: The Design of Intelligent Systems and Systems of
Ethics.” Ethics and Information Technology, vol. 20, no. 1, 2018, pp. 15–26, https://doi.org/10.1007/s10676-018-9448-6.
Burkert, Andreas. “Ethics and the Dangers of Artificial Intelligence.” ATZ Worldwide,
vol. 119, no. 11, 2017, pp. 8–13, https://doi.org/10.1007/s38311-017-0141-x.
This piece addresses some of the ethical concerns regarding AI and autonomous vehicles. Regarding autonomous vehicles, the author asks: “how does it decide if it has to choose between driving into a group of pensioners or a young mother with a small child?” (11). The response provided by Dr. Hermann Hauser of the Cambridge Institute for Entrepreneurship is that such a situation would never arise in real road traffic, even with human drivers (11). According to Hauser, AI will cause fewer accidents on average than human drivers, mainly because autonomous vehicles can survey their environment with more than a dozen cameras. Nonetheless, the author concludes that ethical principles are essential to the responsible development of AI, especially for autonomous driving. Although I appreciated the work’s focus on AI and self-driving cars, the author could have explored other examples of ethical concerns surrounding AI.
Connell, William J. “Artificial Intelligence in the Legal Profession: What You Might
Want to Know.” Computer and Internet Lawyer, vol. 35, no. 9, 2018, pp. 32–36.
This article provides an overview of AI’s impact on the legal profession. The author argues that although AI will take over lower-echelon functions such as document and contract drafting and analysis, lawyers will still be needed to exercise judgment, provide guidance to clients, and creatively develop legal strategies. He thinks that the development of AI should serve as a wake-up call to incorporate AI into legal education and practice. I appreciated that the author divided the text into sections highlighting the impact of AI on different aspects of law, such as legal education, as well as its time-saving benefits. The author also included relevant examples of AI, like IBM’s Watson system, which can create arguments using a database of research.
Dignum, Virginia. “Ethics in Artificial Intelligence: Introduction to the Special Issue.”
Ethics and Information Technology, vol. 20, no. 1, 2018, pp. 1–3, https://doi.org/10.1007/s10676-018-9450-z.
The author of this article argues that AI should be developed with the ability to consider moral and societal values, thus regulating its choices and ensuring human well-being. She also thinks methods should be put in place for the human regulation and evaluation of AI systems during their use. In addition, she argues that ethical codes of conduct should be enforced for AI developers and users to ensure responsibility. The article cites only three sources, two of which are AI reports and the third a recent book on cultural-moral development. Overall, the author presents her argument clearly, but I was hoping that examples of responsible AI practices would be mentioned.
Ergen, Mustafa. “What Is Artificial Intelligence? Technical Considerations and Future
Perception.” Anatolian Journal of Cardiology, vol. 22, no. Suppl 2, 2019, pp. 5–7, https://doi.org/10.14744/AnatolJCardiol.2019.79091.
This piece explains how AI works through pattern-predicting algorithms modeled after the human brain. To develop artificial intelligence, large amounts of data are necessary, similar to how humans need perception to make logical connections between symbols and to develop idea patterns. For example, machine learning AI uses statistical algorithms to “find patterns in massive amounts of data and then [...] make predictions” (6). The article cites multiple sources on how types of AI are modeled after the brain, making this a succinct and helpful introduction to how AI operates. There was no significant weakness in the text.
Bell, Felicity, and Michael Legg. “Artificial Intelligence and the Legal Profession:
Becoming the AI-Enhanced Lawyer.” University of Tasmania Law Review, vol. 38, no. 2, 2019, pp. 34–59.
The authors argue that AI will not replace lawyers, even though it is more efficient than humans at legal tasks like drafting documents. Lawyers will still be needed to provide clients with legal guidance and judgment, which involves “knowledge, experience, common sense, [...] an understanding of human behavior and social norms, empathy, and the capacity to self-reflect” (54). The work also draws on research on AI and its capabilities, as well as the ethical and professional obligations of lawyers. This piece was insightful because it directly applies to my topic, but I was hoping it would also discuss how a lawyer’s persuasive and rhetorical skills compare to those of AI.
Haenlein, Michael, and Andreas Kaplan. “A Brief History of Artificial Intelligence: On
the Past, Present, and Future of Artificial Intelligence.” California Management Review, vol. 61, no. 4, 2019, pp. 5–14, https://doi.org/10.1177/0008125619864925.
The article defines artificial intelligence as “a system’s ability to interpret external data correctly, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation” (5). It also provides a history of AI from its birth in the 1940s to its present types and uses, citing several historical sources as evidence. It includes a brief section on the weaknesses of expert system AI, which relies solely on collections of rules and thereby mistakenly assumes that human intelligence can be formalized this way. The authors conclude that AI development will result in “unique ethical, legal, and philosophical challenges that will need to be addressed,” citing autonomous vehicle programming as one example (13). Ultimately, this piece was useful for providing a general overview of what AI is. There were no significant weaknesses.
Harris, Laurie A. Generative Artificial Intelligence: Overview, Issues, and Questions for
Congress. [Library of Congress public edition], Congressional Research Service, 2023.
This piece defines generative AI, its capabilities, and concerns regarding its use. According to the text, generative AI uses large volumes of data to generate new content, including text responses, images, videos, and music. People are using GenAI programs like ChatGPT to write texts like speeches, emails, and even essays. Because of its utility, Congress is considering using GenAI in the government workforce. One of the central concerns regarding its use is that it can sometimes engage in what is called “hallucinating,” in which the program generates false information. Because GenAI is trained on data taken from the internet, the generated content can reflect biases and misinformation. Despite being short, the text was helpful in explaining generative AI and its potential dangers, as this is one of the tools that lawyers are now using.
Hinton, Martin, and Jean H. M. Wagemans. “How Persuasive Is AI-Generated Argumentation?
An Analysis of the Quality of an Argumentative Text Produced by the GPT-3 AI Text Generator.” Argument & Computation, vol. 14, no. 1, 2023, pp. 59–74, https://doi.org/10.3233/AAC-210026.
Kienpointner, Manfred. “On the Art of Finding Arguments: What Ancient and Modern Masters
of Invention Have to Tell Us About the ‘Ars Inveniendi.’” Argumentation, vol. 11, no. 2, 1997, pp. 225–36, https://doi.org/10.1023/A:1007738732374.
Müller, Vincent C., and Michael Cannon. “Existential Risk from AI and Orthogonality:
Can We Have It Both Ways?” Ratio (Oxford), vol. 35, no. 1, 2022, pp. 25–36, https://doi.org/10.1111/rati.12320.
This work examines the existential risk posed by artificial intelligence, namely the possibility of the singularity: the point at which AI will be advanced enough to self-improve and develop new AI systems that surpass human capabilities. Some claim that once we reach the singularity, humans will be unable to control AI, opening the possibility of human extinction. According to sources cited by the authors, there is a 50% chance of the singularity occurring by 2040–50. Ultimately, through a series of logical investigations, the authors conclude that although AI can pose a significant threat to humanity if it is “designed or used badly,” the singularity will likely not be the cause (34). Even though the article is extremely academic and at times difficult to understand, it was still useful for background on the main public fear concerning AI.
Rex, Peter. “A Moral Framework Will Protect Humans from the Dangers of Artificial
Intelligence.” Opposing Viewpoints Online Collection, 2022.
The author argues that AI should be developed with a moral framework to prevent it from posing a threat to humanity. He worries that people could one day use AI to censor free speech or economically exploit others, among other things. For this reason, he argues that AI developers should be familiar with Judeo-Christian ethics and American principles in order to develop an AI “Magna Carta” that will prevent AI from violating rights to “speech, privacy, and [...] life, liberty, and the pursuit of happiness.” Although the author lists some of the dangers of AI, I disagree with his conservative moral bias; AI’s moral framework should be guided by the goals we want to achieve, not absolute principles. Another weakness is that the author does not elaborate on how AI could violate “life, liberty, and the pursuit of happiness,” making this section read like empty rhetoric.
Rogers, Justine, and Felicity Bell. “The Ethical AI Lawyer: What Is Required of Lawyers
When They Use Automated Systems?” Law, Technology and Humans, vol. 1, no. 1, 2019, pp. 80–99, https://doi.org/10.5204/lthj.v1i0.1324.
This article aims to outline the “necessary elements for lawyers to engage in professional conduct when utilising AI” (80). To do so, the authors use the Four-Component Model of Morality (FCM) developed by psychologist James Rest, proposing that lawyers adopt this framework to uphold an ethic of integrity and responsibility. Because AI cannot conduct moral reasoning, lawyers must not over-rely on it for case judgments. I found this article convincing and useful for my purposes, but the inclusion of the FCM was unnecessary; perhaps the authors wanted to use psychological research as an authoritative underpinning for their argument.
Rowe, Niamh. “An AI Lawyer Is About to Defend a Human in a U.S. Courtroom.” The Daily
Beast, 14 Jan. 2023, www.thedailybeast.com/ai-lawyers-from-donotpay-will-defend-human-defendants-in-traffic-court.
“The Seven Types of Artificial Intelligence That Everyone Can Use.” CE Noticias Financieras,
English ed., ContentEngine LLC, a Florida limited liability company, 2023.
This periodical provides a descriptive typology of artificial intelligence. Some types of AI, like narrow AI and rule-based AI, surpass human capabilities in limited, specific tasks. Other types, like strong AI and evolutionary AI, can adapt to new situations and solve complex problems, potentially exceeding human intelligence. This piece was useful in providing brief but insightful descriptions of seven different types of AI, helping me become more familiar with the topic. One weakness is that there was no bibliography for further research.
Walton, Douglas, and Thomas F. Gordon. “How Computational Tools Can Help Rhetoric and
Informal Logic with Argument Invention.” Argumentation, vol. 33, no. 2, 2019, pp. 269–95, https://doi.org/10.1007/s10503-017-9439-5.
Part II
Artificial intelligence has been a trending topic over the last few years. More and more people are using AI technology in their daily lives, from phone facial recognition to content generation through programs like ChatGPT. I recently developed an interest in the topic because of a comment made by my law professor. In a casual conversation with his students, he expressed his opinion that as the use of AI for legal tasks like document generation and analysis becomes commonplace, a lawyer’s persuasive and rhetorical skills will become more valuable. Because I plan on becoming a lawyer, this comment sparked my interest, and I wanted to learn more about how AI will affect the profession. Will there be less need for lawyers? Will lawyers become obsolete? These are some of the questions that prompted me to research the matter.
Prior to my inquiries, I was not fully aware of what constituted AI; I knew only of content-generating programs like ChatGPT and the development of self-driving cars. I came to find that any machine-based system that can “interpret external data correctly, [...] learn from such data, and [...] use those learnings to achieve specific goals and tasks through flexible adaptation” is considered AI (Haenlein and Kaplan 5). This means that even facial recognition systems and programs like Google Translate constitute AI. I was surprised to find that there are different types of AI, some of which are rule-based, while others are more adaptive for problem-solving purposes. I was also unaware that some systems, like IBM’s Watson, can craft arguments using databases of research, making this type of generative AI potentially problematic for lawyers (Connell 33). One of the problems with generative AI systems like ChatGPT and IBM’s Watson is that they are trained on data taken from the internet, making them prone to prevalent biases and misinformation. In extreme cases, such technology can even generate false information, engaging in what some have called “hallucinating” (Harris). This is one reason why scholars argue that AI cannot be fully relied on, especially in a legal setting.
Among the works I engaged with, there is consensus that AI will not replace lawyers altogether but that its use will significantly contribute to the profession. One type of AI that was mentioned is machine learning, which uses statistical algorithms to “find patterns in massive amounts of data [...] to make predictions” (Ergen 6). I was surprised to find that lawyers are using machine learning to survey hundreds of case decisions, helping them assess the quality of their cases through AI prediction (Alarie et al. 118). In addition, scholars have pointed out that AI exceeds human capabilities in tasks that most starting lawyers engage in, such as drafting and analyzing provisions and contracts, making AI a threat to the number of entry-level positions in the field (Connell 33). Even so, AI will not make lawyers obsolete, because it cannot self-reflect or provide experienced knowledge, guidance, and judgment like humans can (Bell and Legg 54). For this reason, along with the aforementioned risk of argument-generating AI systems “hallucinating,” lawyers and clients should not rely on this new technology too heavily. Lawyers will still be needed to conduct moral reasoning, employ creativity in their argumentation, and supervise the AI programs they use.
These discoveries were insightful for my purposes, but I was disappointed that there was no discussion of the importance of a lawyer’s persuasive and rhetorical skills in relation to AI. Nonetheless, I found that generative AI’s persuasive argumentation demonstrates obvious weaknesses (Hinton and Wagemans 59). In my future writing, I wish to apply these insights to the discourse of law and AI, exploring how strong oratorical and argumentation skills will become even more valuable in a future dominated by artificial intelligence. For this purpose, I consider the op-ed the best genre, because I do not want the work to be unnecessarily academic and thus unintelligible to casual readers. Although I will still use scholarly evidence, I want to develop a thought-provoking piece for a general audience interested in the relationship between AI and humanity, not only those immersed in the legal field.