‘Serious mistake’: B.C. Supreme Court criticizes lawyer who cited fake cases generated by ChatGPT

This is for legal-related news.
White Wolf
Posts: 146
Joined: Mon Apr 14, 2025 1:58 pm


Post by White Wolf »


‘Serious mistake’: B.C. Supreme Court criticizes lawyer who cited fake cases generated by ChatGPT
Written by HR Law Canada, 29 February 2024
A lawyer has been reprimanded by the British Columbia Supreme Court for citing fake legal cases that were generated by ChatGPT — something the court called an “AI hallucination.”

The ruling has brought to light the consequences of relying on artificial intelligence in legal proceedings. The B.C. Supreme Court case involved an application from a divorced father who wanted to take his children to China.

The central issue arose from the father’s counsel, Chong Ke, including non-existent, AI-generated case citations in her legal filings. The opposing party sought special costs, arguing that the insertion of fake case references, supposedly generated by ChatGPT, into the notice of application had caused considerable unnecessary legal work.

The court dismissed the father’s application for overseas parenting time while deliberating on the repercussions for the lawyer due to her reliance on AI for legal case references. Ke admitted to the mistake, highlighting her reliance on ChatGPT and her subsequent failure to verify the authenticity of the generated cases, which she described as a “serious mistake.”


“I had no idea that these two cases could be erroneous. After my colleague pointed out the fact that these could not be located, I did research of my own and could not detect the issues either,” said Ke.

“Regardless of the level of reliability of AI aids, I should have used more reliable platforms for doing legal research and should have verified the source of information that was going to be presented in court and/or exchanged with the opposing counsel,” she said.

“I have taken this opportunity to review the relevant professional codes of conduct and reflected on my action. I will not repeat the same mistake again. I had no intention to mislead the opposing counsel or the court and sincerely apologize for the mistake that I made.”

The ruling highlighted several critical points regarding the use of AI in legal practice. The court emphasized the importance of lawyer diligence, stating, “Citing fake cases in court filings and other materials handed up to the court is an abuse of process and is tantamount to making a false statement to the court.”

Despite acknowledging Ke’s lack of intent to deceive, the court criticized her for not adhering to professional standards, noting the “significant difference between academics and lawyering.”

Ke faced consequences for her actions under the Supreme Court Family Rules, specifically R. 16-1(30), which allows for personal liability for costs due to conduct causing unnecessary legal expenses. The court ordered Ke to personally bear the costs incurred due to her conduct, marking a clear warning against the careless use of AI tools in legal matters.

The court’s final comments underscored the limitations of generative AI in legal contexts, stressing that “generative AI is still no substitute for the professional expertise that the justice system requires of lawyers.”

“Competence in the selection and use of any technology tools, including those powered by AI, is critical,” the court said. “The integrity of the justice system requires no less.”

For more information, see Zang v. Chen, 2024 BCSC 285.


Link:
https://hrlawcanada.com/2024/02/serious ... y-chatgpt/
