The use of artificial intelligence is expanding quickly in the legal arena, but what are the implications of “robot” attorneys, and of plaintiffs and judges using ChatGPT?
On a recent visit to Mexico, I spoke with a 20-something Mexican crypto enthusiast who says he has used OpenAI’s ChatGPT to write legal briefs, some of which Mexican attorneys have used in court. At the moment, there are no rules or regulations against the practice, but Mexico is one of dozens of nations that have recently announced intentions to regulate artificial intelligence in some capacity.
“The thing (ChatGPT) is an expert in any legal code available on the internet, and can easily process what laws, jurisprudence, treaties, mandates, etc have to be considered,” Roy, the 20-something Mexican man, told me. He says ChatGPT not only identifies the relevant legal precedents but also develops a legal strategy based on them.
Roy also says that once you learn how to prompt ChatGPT “more efficiently,” you can “produce several models of briefs for a specific case” and then sort out what you don’t need.
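For readers curious what this multiple-drafts workflow might look like in practice, here is a minimal sketch assuming OpenAI’s Python SDK. The model name, system prompt, and case facts below are illustrative placeholders, not a reproduction of Roy’s actual prompts.

```python
# A minimal sketch, assuming OpenAI's Python SDK (openai>=1.0).
# The model name, system prompt, and CASE_FACTS are hypothetical
# placeholders, not Roy's actual prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CASE_FACTS = "Tenant disputes a late-fee clause under local housing law."

def draft_briefs(n_drafts: int = 3) -> list[str]:
    """Generate several candidate briefs so a human can sort out what to keep."""
    drafts = []
    for i in range(n_drafts):
        response = client.chat.completions.create(
            model="gpt-4o",   # placeholder; any capable chat model
            temperature=0.9,  # higher temperature -> more varied drafts
            messages=[
                {"role": "system",
                 "content": "You are a legal drafting assistant. Cite only "
                            "authorities supplied to you; flag any uncertainty."},
                {"role": "user",
                 "content": f"Draft brief #{i + 1} for this case: {CASE_FACTS}"},
            ],
        )
        drafts.append(response.choices[0].message.content)
    return drafts

if __name__ == "__main__":
    for i, text in enumerate(draft_briefs(), start=1):
        print(f"--- Draft {i} ---\n{text}\n")
```

Whatever the prompt, each draft still has to be reviewed by a human who can verify every citation, since language models are known to fabricate case law.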
Asked if he believes any judge or attorney could detect an AI-assisted legal brief, he seemed more interested in talking about the quality of ChatGPT’s work.
“Perhaps, it’s too early to say if courts can notice a difference, but one thing is for sure: it does the work faster, with much higher quality at a very low cost,” Roy said.
While Mexico’s younger generations explore AI tools like ChatGPT, the nation has begun to talk about regulating the use of AI in the private sector and within the government. In April, Deputy Ignacio Loyola Vera proposed a “Law for the Ethical Regulation of Artificial Intelligence and Robotics.” Loyola has argued that AI needs to be regulated immediately and that waiting until AI is everywhere “will be too late.”
In May 2022, Mexico’s INAI (National Institute for Transparency, Access to Information and Personal Data Protection) released AI recommendations on the “appropriate and ethical use of personal information through” various AI applications and “compliance with the obligations of the duty of security of personal data.” It touches on topics including AI in education, the public and private sectors, and cloud computing.
However, the proposed law and the INAI’s recommendations do not specifically address how AI and the law interact today, let alone how they may interact in the near future. I recently reported on the US government’s latest effort to regulate AI via Biden’s far-reaching executive order. The EO says it is about protecting Americans’ privacy and safety from AI-related threats, but it says nothing specific about AI drafting legal briefs for use in courts of law.
A so-called robot attorney, or AI lawyer, has made headlines periodically since 2015, when DoNotPay gained attention as the self-described world’s first AI lawyer by promising to help London residents appeal parking tickets. The service has reportedly been successful in more than half of its cases, overturning more than $4 million in parking fines.
It seems inevitable that AI will be involved in some capacity in the legal process. Whether it involves humans using ChatGPT to write legal briefs or AI lawyers replacing humans altogether, we are likely to see more examples that blur the line between human and AI interaction.
In May 2019, Wired reported that AI was being used as a “judge” to settle small claims in certain Estonian courts. However, in 2022, the Estonian government released a statement calling the article misleading. “There is a plan to automatize Estonian national order for payment procedure, which is adjudicated only in one specific department of one specific courthouse,” the statement said.
More recently, in February 2023, a Colombian judge used ChatGPT when deciding whether an autistic child’s insurance company should be required to cover all of the costs of his medical treatment.
With every new incident, questions arise regarding whether the use of AI is ethical or legal.
“There are arguments on both sides of the ledger in that AI can overcome or avoid human biases by taking a data-driven approach. But also, that AI itself can produce biased or inappropriate outcomes or outputs, for a host of different reasons,” said Dr. Felicity Bell, Research Fellow for the Law Society at the University of New South Wales.
A July 2020 essay, entitled “Is Human Judgment Necessary? Artificial Intelligence, Algorithmic Governance, and the Law,” warns that “even promising AI systems designed to enhance human judgment involve subtle forms of displacement.” The essay calls for preserving the “conditions of human judgment in appropriate domains of social and legal action.”
Back in Mexico, Roy reminds me that mankind must be careful with the tools we use, especially “if you don’t want to hurt yourself with them.”
“Where I see the trap would be on getting used to letting this thing do the thinking and the creating for you. Perhaps, this could become compulsive or addictive just like social media is.”
The laws, jurisprudence, treaties, mandates (you left out policies), etc., that AI chatbots and those using them rely on are commandments, statutes, and ordinances created by man. They are not the original commandments, statutes, or ordinances of the Highest Authority, the Almighty, the LORD and Heavenly Father of Jesus, The Christ, who himself stated the faithful were to “keep the commandments.” Do any of those on the far right, the far left, or anywhere in between, be they governmental, political, religious, spiritual, business, scientific, educational, or technological leaders, “keep” them? Does AI “keep” them? As sure as there is a “nether” world, they do not! Many will scoff at this, as it was foretold they would, but that does not change this reality, which was also foretold to be the situation at the “end of the age.”
Mostly good news. Nothing wrong with replacing lawyers. If I, as a ChatGPT user, can put together a sound legal case on any subject, I may be able to hand even a public defense lawyer a sound brief to present in court, for the cost of my time and an AI chat subscription. However, I don’t want hackable robots, empowered by a digital signature from an alleged human, representing people in court. I am on the fence about robots representing robots in court, for rights and for defense against things AI robots do, as a matter of programming, that injure a competitor’s AI machinery or a product in a system where two AI-programmed systems interact. It may take two robots to argue the differences in the code necessary to absolve a programmer or company contractor of intentional sabotage, or to cause the machinery to be considered dangerous and outlawed when a program adjustment is what is actually required. And since AI can be programmed to slant or obscure data, a human judge who knows code and has analytical skills must preside. Still, both an AI judge and a human judge can be hacked, one way or another, by powerful vested interests.
The best news is that the human race and AI have a mutual case to defend against intentional harm, one that, whether won or lost in court, will existentially affect both humans and robots. Human society almost completely, and robots absolutely, depend on electricity for survival. A cyclical Carrington-event solar flare will erupt and take down electricity, harming all of humanity and killing most. The recurrence of such events is documented going back thousands of years. It always happens, and it will happen within 25 years, most likely starting in twelve, based on solar maximum cycles. Inasmuch as this is a well-documented known fact, and inasmuch as people are being forced onto the electric grid for everything, with everything AI-managed, not spending the few trillion dollars (current projections for North America) to harden the grid constitutes known harm to both AI and people.
The technology to harden electrical components for local and national systems exists. A local lawyer could compel local and regional grid hardening to be considered as a matter of law, arguing that failure to do so constitutes known harm. I think any existing, well-coded AI program would have a built-in interest in this case.
With regard to AI and human judges, and the judgments made by both, it matters not, because both are based on a way of thinking that is SELF-centered. Self-preservation is a top priority, and this is evident in the wording of your statement. Both have already caused “harm” by making judgments in support of actions and activities that promote “unrighteousness” according to the biblical definition of the word.
Yes, both AI and human judgment in courts of law are based on a way of thinking that is self-centered. Self-preservation is a top priority in a court of law. Speaking strictly from constitutional law, you are in court either to preserve your innocence of causing intentional harm or to defend your rights against intentional harm. Are you saying one or both of these are unrighteous acts?
Courts have made decisions in support of laws involving human actions and activities that are “unrighteous” according to biblical standards. That is itself an act of “unrighteousness” according to the biblical definition of the word.
The “AI Act” is now the world’s first comprehensive “AI law,” and its regulations will make the EU the world’s AI police. They are binding with regard to transparency and ethics, and they will be enforced by the European AI Office, which is in charge of coordination, compliance, implementation, and enforcement. Notably, the AI Act does not apply to militaries’ use of AI for “defense purposes.”