The AI Lawyer: opportunity, hallucination and professional risk
Published on March 30, 2026, by Sasika Jayasuriya and Thomas Felizzi
Artificial intelligence is increasingly becoming part of everyday professional life. The legal profession is no exception. Generative AI tools capable of drafting submissions, summarising documents and answering legal questions are now widely available and easily accessible.
For many practitioners, these tools appear to offer a solution to the growing demands of modern legal practice. They promise efficiency, rapid drafting and the ability to process large volumes of information in seconds. However, recent court decisions have demonstrated that reliance on generative AI without appropriate safeguards can lead to serious professional consequences.
A central issue in this emerging area is the phenomenon known as AI “hallucination”.
Understanding AI hallucinations
Generative AI systems produce responses by predicting the most likely sequence of words in response to a user’s prompt. They do not verify information against authoritative sources. Instead, they generate outputs that appear coherent and plausible based on patterns identified during their training.
As a result, these systems may sometimes produce information that appears credible but is entirely incorrect. In legal contexts, this can include fabricated case law, incorrect citations or fictional quotations from judgments. Because the output often resembles genuine legal writing, users may not immediately recognise that the information is false.
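To make the mechanism concrete, the sketch below (in Python, using an invented toy probability table rather than any real model) shows next-token prediction at work: the system repeatedly selects a statistically likely continuation, and at no point does it consult an authoritative source. Every name, number and the case it produces are fabricated by construction.

```python
import random

# Toy stand-in for a language model: a table mapping the last two
# tokens of the context to a probability distribution over possible
# next tokens. A real model learns billions of such associations from
# training text; every value here is invented purely for illustration.
NEXT_TOKEN_PROBS = {
    ("relevant", "authority:"): {"Smith": 0.6, "Jones": 0.4},
    ("authority:", "Smith"): {"v": 1.0},
    ("Smith", "v"): {"Avianca": 0.5, "Qantas": 0.5},
    ("v", "Qantas"): {"[2019]": 0.7, "[2021]": 0.3},
    ("Qantas", "[2019]"): {"FCA": 0.8, "HCA": 0.2},
    ("[2019]", "FCA"): {"1042": 1.0},
}

def generate(prompt, max_tokens=6):
    """Repeatedly sample a statistically likely next token.
    Note what is missing: at no point is the emerging text checked
    against a law report, a court database or any source of truth.
    'Plausible' is the only criterion applied."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        context = tuple(tokens[-2:])
        distribution = NEXT_TOKEN_PROBS.get(context)
        if distribution is None:
            break  # the toy table has no pattern for this context
        choices, weights = zip(*distribution.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

print(generate(["relevant", "authority:"]))
# Possible output: relevant authority: Smith v Qantas [2019] FCA 1042
# The citation is well formed and looks genuine, but nothing in the
# process ever verified that such a case exists.
```

A production model differs from this toy in scale, not in kind: it is vastly better at producing fluent, convincing text, which is precisely why its fabrications are harder to spot.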
The dangers associated with this phenomenon became widely known following the decision in Mata v Avianca Inc (2023) 678 F Supp 3d 443 (SDNY) (Mata).
The Mata decision
The case involved a personal injury claim brought by a passenger who alleged that he had been injured by a serving cart during an international flight.
During the proceedings, the plaintiff’s lawyers filed submissions opposing a motion to dismiss the claim. The submissions contained citations to several cases that appeared to support their argument regarding the limitation period under the Montreal Convention.
However, opposing counsel was unable to locate the authorities cited in the submissions. When the issue was raised with the court, it became apparent that the cases had been generated by an artificial intelligence tool.
The lawyer responsible for drafting the submissions had used ChatGPT to identify relevant case law. The system generated several cases which appeared to support the argument being advanced.
Unfortunately, none of the cases existed. When questioned by the court, the lawyer explained that he had assumed the system was identifying real authorities that were perhaps unpublished or difficult to locate. He had not considered the possibility that the cases themselves had been fabricated. The court ultimately imposed sanctions on the lawyers involved, emphasising that the submission of fictitious authorities had serious implications for the administration of justice.
Why the case matters
The decision highlighted several important risks associated with the use of generative AI in legal proceedings. First, the submission of fabricated authorities wastes time and resources. Opposing parties and courts may expend significant effort investigating citations that ultimately prove to be fictional. Second, reliance on incorrect authorities may undermine legitimate arguments and potentially prejudice the client. Finally, such conduct risks damaging public confidence in the legal profession and the judicial system.
An Australian example
Australian courts have also begun to confront the consequences of generative AI misuse. A clear example is Valu v Minister for Immigration and Multicultural Affairs (No 2).
In that matter, the applicant sought judicial review of a decision of the Administrative Appeals Tribunal. The applicant’s solicitor filed written submissions that cited numerous Federal Court authorities and included multiple quotations said to have been drawn from the Tribunal’s decision.
The difficulty was that none of the cited authorities existed. Judge Skaros recorded that the submissions referred to seventeen Federal Court decisions, complete with convincing case names and citations. Each appeared realistic and legally plausible. However, every one of those authorities and quotations was fictitious.
The issue was identified by the respondent, and the solicitor subsequently filed amended submissions removing the authorities. By that time, however, the Court had already spent considerable time attempting to locate the cases. The hearing was adjourned so that the Court could address the solicitor’s conduct.
The solicitor explained that due to time pressure and health issues he had used ChatGPT to generate a case summary. The authorities produced by the system were incorporated into the submissions without verification because they appeared convincing and professionally written.
The case further illustrates the particular danger posed by generative AI. The text produced by the system was coherent, structured and highly plausible. The fabricated citations used realistic litigant names and historically accurate ministerial titles. It looked authentic. But it was entirely wrong.
Generative AI predicts language patterns rather than verifying truth. It does not know whether a case represents binding authority, whether it remains good law, or whether the case exists at all. When gaps appear in its training patterns, it may simply generate information that fits the structure of a legal citation. These are hallucinations, and they are particularly dangerous because they are presented with confidence.
The Court ultimately found that the solicitor failed to exercise competence and diligence and breached the duty not to mislead the Court. The conduct was referred to the Legal Services Commissioner. The lesson from the decision is straightforward: while AI may assist with drafting and summarising, verification remains a non-delegable professional obligation.
Emerging judicial guidance
Courts and tribunals across Australia have begun issuing guidance regarding the appropriate use of generative AI.
In New South Wales, both the Supreme Court and the District Court have issued practice guidance drawing a clear distinction between AI-assisted drafting and the creation of evidence. AI tools may assist with grammar, language or formatting, but they must not be used to generate affidavits, witness statements, character references or other evidence purporting to reflect a person’s knowledge or belief. Evidence must originate from a human mind, and adopting AI-generated content risks undermining its authenticity.
Importantly, authorities and citations must be independently verified by the practitioner. Courts may also require disclosure of the use of AI systems and the steps taken to confirm the accuracy of the material produced.
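One practical habit follows from this requirement. Because verification cannot be delegated to the tool that produced the draft, a practitioner may find it useful to pull every citation out of a document into a checklist before filing. The sketch below (Python; the regex and the workflow are our illustrative assumptions, not a court-endorsed tool) extracts medium-neutral citations such as [2025] FWC 2289 from a draft. Deliberately, it stops short of automated verification: confirming that each authority exists, says what it is cited for and remains good law is the practitioner's own task.

```python
import re

# Simplified medium-neutral citation pattern, e.g. "[2025] FWC 2289".
# Illustrative only: real citation formats are far more varied than
# this regex captures, so it is an aid, never a guarantee.
MEDIUM_NEUTRAL = re.compile(r"\[\d{4}\]\s+[A-Za-z]+\s+\d+")

def citation_checklist(draft: str) -> list:
    """Extract candidate citations from a draft so each one can be
    looked up by hand in an authoritative source (for example, the
    authorised reports or the court's own database). This builds the
    checklist only; it cannot confirm that any case exists."""
    return sorted(set(MEDIUM_NEUTRAL.findall(draft)))

draft = (
    "The approach in Branden Deysel v Electra Lift Co [2025] FWC 2289 "
    "confirms that submissions must reflect the party's own position."
)
for citation in citation_checklist(draft):
    print(f"[ ] verify against an authoritative source: {citation}")
```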
Similar principles have been reflected in guidance issued by the NSW Industrial Relations Commission. Participants appearing before the Commission, including lawyers, union officials and employer representatives, must understand the limitations of AI tools, ensure the accuracy of all material filed and avoid misleading the Commission. AI-generated witness evidence is expressly prohibited, and legal authorities must be independently confirmed.
Even in jurisdictions where formal guidance is still developing, expectations are becoming clearer. For example, in a recent matter before the Fair Work Commission, Branden Deysel v Electra Lift Co [2025] FWC 2289, a dismissal application was rejected after the Commission identified that the claim had been generated using AI and contained inaccuracies, including misinterpretations of legislation and industrial instruments. The case illustrates that even in relatively informal forums, submissions must accurately reflect the party’s knowledge and position.
Confidentiality and ethical considerations
Another important issue arising from the use of generative AI is confidentiality. Many publicly available AI systems process and store user inputs. Entering client information into these platforms may therefore create a risk that confidential information is disclosed or incorporated into external datasets. Practitioners should treat publicly available AI systems as though they were external recipients of information. If material could not safely be disclosed to a third party, it should not be entered into an AI system.
Avoiding automation bias
One of the key risks associated with AI tools is the tendency for users to trust automated outputs simply because they appear sophisticated. This phenomenon, sometimes referred to as “automation bias”, can lead users to accept incorrect information without applying critical scrutiny. For lawyers, this risk is particularly significant. Legal submissions carry professional obligations and must be supported by genuine authorities. Generative AI can be a useful tool for drafting or brainstorming ideas. However, it should never be treated as a substitute for proper legal research. The responsibility for verifying the accuracy of legal authorities remains with the practitioner.
Ultimately, the guiding principle is simple: lawyers should never allow AI to produce something they could not confidently explain, justify or verify themselves. Efficiency gains offered by new technology must never come at the expense of professional responsibility.
This article was published on 30 March 2026 by Carroll & O’Dea Lawyers and is based on the relevant state of the law (legislation, regulations and case law) at that date for the jurisdiction in which it is published. Please note this article does not constitute legal advice. If you ever need legal advice or want to discuss a legal problem, please contact us to see if we can help. You can reach us on 1800 059 278 or via the Contact us page on our website (www.codea.com.au).