You Can’t Blame AI, My Lord




Trial judge in Andhra Pradesh gets pulled up by the Supreme Court for citing fake judgments generated by artificial intelligence.
By Our Bureau


Just when you had had your fill of how AI, with its accuracy, efficiency, and speed, could trigger major job losses, comes news that it may not be as super-competent and reliable as it is made out to be.

 

Last fortnight, close on the heels of the AI Summit in Delhi, a bench of the Supreme Court comprising Justices PS Narasimha and Alok Aradhe pulled up a trial court judge in Andhra Pradesh for adjudicating a property dispute citing “fake” judgments generated by artificial intelligence.

 

The key portion of the court’s order notes: “We take cognisance of the trial court deploying AI-generated non-existing, fake or synthetic alleged judgments and seek to examine its consequences and accountability as it has a direct bearing on the integrity of the adjudicatory process.”

 

The court added that it regarded this not as an “error in decision making” but as “misconduct” for which “legal consequence shall follow.”

 

It issued a notice to Attorney General R. Venkataramani, Solicitor General Tushar Mehta and the Bar Council of India to examine the matter in more detail.  

 

The Apex Court’s order merely reiterated what experts closely studying the evolution of artificial intelligence have been observing in the past few months—that AI should be regarded as a tool to assist humans and not as their replacement.


 

This view stands in sharp contrast to that of the AI lobby, which has designated several professions, including legal assistants, as endangered.

 

Errors in court rulings, courtesy of AI, are not new. According to a Reuters report, two federal judges admitted in response to an inquiry by U.S. Senate Judiciary Committee Chairman Chuck Grassley that members of their staff used artificial intelligence to help prepare recent judgments that were "error-ridden."

 

U.S. District Judge Henry Wingate in Mississippi and U.S. District Judge Julien Xavier Neals in New Jersey said the judgments did not undergo their chambers' typical review processes before being issued.

 

Errors also creep in when lawyers take AI shortcuts and cite non-existent judgments, or paragraphs not found in existing judgments. In June last year, the High Court of England and Wales reportedly warned lawyers not to use AI-generated case material after a series of cases cited fictitious or partially fabricated rulings.

 

Last year, the Supreme Court of India published a White Paper on ‘Artificial Intelligence and Judiciary.’ The report stressed the need for human oversight and the importance of keeping institutional safeguards "firmly in place" and warned against the risks involved in overdependence and leaving tasks totally to AI.


 

To quote: “Overreliance on AI entails users exhibiting behaviours where they accept suggestions and outputs provided by AI that may be incorrect or hallucinated, without validation, for instance, when AI is used to compile precedents on a specific issue. Due to the limitation associated with any GenAI system, including but not limited to hallucination, the list may contain some cases that do not exist. In such cases, overreliance on AI without proper verification would lead to these cases becoming part of the legal discourse until flagged or reported.”

 

The White Paper also underlined that, finally, judges and lawyers will be held accountable for errors generated by AI. “The ultimate responsibility and accountability associated with using AI shall be attributed to humans; human in the loop in the context of the judiciary means putting the final responsibility, accountability, action or outcomes in the judicial process to the judges and lawyers.  The judiciary is personally responsible for the material that is produced in their name, and thus, it is important to ensure the authenticity and credibility of the document. AI may assist the judges, but cannot replace them.”

 

What steps can a judge or lawyer take to ensure that a cited legal document is not the result of an ‘AI hallucination’, which may produce false or fabricated information along with seemingly legitimate citations?

 

A careful and thorough legal assistant would be invaluable in this process. However, it is crucial to ensure that she/he does not consume any 'AI hallucinogens' before reviewing old case files!


