Taiwan’s AI Courts + Taipei Meetup Sept 5
What happens when justice is delivered at the speed of AI
I will be in Taipei starting this Saturday for a week, and am planning to host a ChinaTalk meetup this coming Tuesday, September 5. RSVP here — it would be great to see you!
I’ll also be attending Semicon as well as the Asianometry / Fabricated Knowledge / Semianalysis meetup on Sunday afternoon (sign up here for that event).
The following is a piece by ChinaTalk editor Nicholas Welch.
On Sunday, Taiwan Central News Agency journalist Lin Chang-shun 林長順 reported that Taiwan judges will begin using generative AI to draft court rulings. Chunghwa Telecom and the Judicial Yuan jointly developed the model, which is capable of “automatically filling in the facts of the crime” as well as “composing the rationale for the verdict.” According to Judicial Yuan Information Office director Lai Wuzhi 賴武志, if all goes well, judges will begin using the model to handle drunk driving and fraud cases by the end of September, drug cases by December, and civil cases like car-accident damages and debt cancellation next year.
This development follows the Judicial Yuan’s “Digital Policy 2.0.” Published in July, this four-pronged plan seeks to establish an “eCourt” 電子法院 by “using ICT technologies to establish a highly efficient and convenient judicial system”:
Remote proceedings 遠距法庭 — allowing the parties to participate via webcam is especially important in light of “geographical and epidemiological barriers.”
e-Procedure 電子程序 refers to advancements like real-time document sharing and automatically generated electronic receipts of delivery — all to improve the efficiency of litigation.
e-Management 電子案件管理 systems will use AI in digitizing case data, interpreting and classifying judgments, as well as creating an online appointment scheme.
And i-Justice 智慧司法: through a “further application of AI,” courts will deploy intelligent customer service assistants, automated stenographers — and generative AI will compose press releases for high-profile cases as well as automatically generate judicial decisions 裁判書草稿自動生成.
Page 38 of the Digital Policy 2.0 report explains that the Judicial Yuan seeks to “expand the types of cases for which draft decisions can be automatically generated.” In many areas of civil and criminal law, the report notes, “caseloads are high” and the subject matter is “highly repetitive” 重複性高 — such as theft, hit-and-run, loan repayment requests, consumer liability, and car accident damages.
To be sure, Taiwan judges won’t be the first to use AI in writing decisions. Earlier this year, a judge in Colombia used ChatGPT “to speed up and solidify his decision in a case in which he had to determine whether an autistic child should receive free health services.” A judge in Punjab, India, used ChatGPT to decide a criminal case, asking the chatbot, “What is the jurisprudence on bail when the assailant assaulted with cruelty?” But these stories made headlines of their own — that a single judge’s use of AI on a single case is newsworthy indicates that very few judges in the world use generative AI in writing their opinions (or at least very few publicly acknowledge as much).
Meanwhile, the American legal community, broadly speaking, has looked askance at any use of AI in the courtroom. Notably, two New York lawyers, Steven Schwartz and Peter LoDuca, were busted for using ChatGPT in a brief they submitted to the court: it cited cases that ChatGPT had pulled out of thin air. Judge P. Kevin Castel ordered them to pay $5,000 each in sanctions. In the aftermath of that incident, Texas Judge Brantley Starr now requires lawyers in his court to attest that they did not use generative AI in drafting briefs.
In other words, Taiwan’s decision to roll out generative AI as an integral feature of its courts is a leap into uncharted territory. More concerning, the Digital Policy 2.0 report fails to address any of the risks of introducing generative AI into judicial decisionmaking. Instead, it offers a brief history of AI — when Google’s AlphaGo crushed humans in Go, how ChatGPT has “drastically changed the course of human history” — and then moves right into implementation.
The potential concerns are numerous — I’ll mention just a few:
As frustrating as it may be, the grinding slowness of judicial systems is a feature, not a bug. The passage of time often cools tempers, allowing for a greater chance of out-of-court resolutions. And the slowness itself acts as a deterrent for spurious lawsuits.
Owing to rising professional standards, the quality of legal writing in the US has markedly improved over the past few decades. SCOTUS opinions today, for example, are beautifully written — that wasn’t necessarily so fifty years ago, and certainly not a hundred years ago. A generative AI trained on past rulings would seal the current state of Taiwan’s judicial writing in a time capsule. I hope the Taiwanese public is already satisfied with their judges’ writing prowess.
Moreover, Taiwan’s judicial system is quite new: arguably it began in 1945 after Japanese colonial-era laws were repealed, but its current form wasn’t realized until after its democratic transition in the 1990s. The AI model which will generate court rulings was trained on cases from 1996 to 2021. Twenty-five years isn’t very long.
Taiwan’s legal writing still uses Classical Chinese phrasing and grammar. How well can generative AI handle those idiosyncrasies?
AI can’t reliably detect factual nuance — a deficit already well documented in the business and medical communities. The outcome of even ostensibly simple cases can turn on small technicalities that only a human observer could catch. Yet Chunghwa’s T5 model, as conceded on page 40 of the Digital Policy 2.0 report, currently relies on GPT-2. In theory, Taiwan judges will review each AI-generated decision before publishing it — but if judges grow accustomed to relying on AI more than on their own good sense, legal mistakes could multiply.
Along the same lines, chatbots hallucinate — as New York attorneys Schwartz and LoDuca found out the hard way. Would judges really take the time to manually check every syllogism and citation the AI spits out?
It’s arguably a benefit that fallible humans run the legal system: that way, when mistakes are made, society has someone to blame. How would someone wrongfully convicted go about blaming Chunghwa Telecom?
The idea of a telephone company and lazy-ass, paper-pushing judges shoving people through the criminal system — and “highly efficiently” and “conveniently” at that — just isn’t pretty. Obviously no one wants a highly inefficient and inconvenient system, either. But the chief beneficiaries of AI-generated court rulings in bulk quantity will be government actors, not defendants.
That’s because the slowness of the legal system also forces police and prosecutors to pursue only the most important cases. Basically law-abiding citizens probably won’t enjoy the consequences of a criminal system suddenly capable of processing them for very minor offenses. Further, as of 2016, Taiwan’s criminal conviction rate was 96.7%. While not as high as China’s or Japan’s — both of which exceed 99% — Taiwan’s conviction rate is one of the highest in the world.
The aforementioned judges in Colombia and India didn’t just use ChatGPT to draft their written opinions — they outsourced their own critical thinking, asking the chatbot to decide the cases before them. Now imagine if the lawyers in those courtrooms had also used ChatGPT to draft the briefs they submitted. We would then have a system in which ChatGPT decided the entire case by itself. That’s a bone-chilling, Kafkaesque world right there.
Granted, many of Digital Policy 2.0’s goals are laudable and uncontroversial. There’s no reason why briefs must be submitted in hard copy to the court. More accurate stenography — with no associated labor costs — is an obvious plus. These sorts of proposals stand to benefit the everyday citizen just as much as the government.
But I have deep reservations about the benefits that automated rulings could bring. And the fact that the use of generative AI for judicial decisionmaking was slotted in with a litany of unobjectionable proposals implies that the Taiwan government hasn’t thought this one through very much. Notwithstanding Taiwan’s exceptionally creative digital innovations at the governmental level, it’s not hard for me to imagine a Taiwan in which this expansive “i-Justice” reform brings about far more societal harm than good.