Artificial intelligence in legal practice: professional responsibilities across Australian courts and tribunals
Published on March 6, 2026 by Tim Grellman
The rapid adoption of generative artificial intelligence (AI) in legal practice has prompted courts and tribunals across Australia to clarify the responsibilities of practitioners. While AI can assist in drafting, research, and organisation, it does not replace professional judgment. Practitioners remain personally responsible for all material filed or relied upon, and recent guidance demonstrates that the courts treat AI-generated content in the same manner as work prepared by the lawyer personally.
Federal courts – establishing the national baseline
The Federal Court of Australia has issued a Notice to the Profession on the Responsible Use of Artificial Intelligence, which emphasises that practitioners must verify all AI-generated content before filing. (https://www.fedcourt.gov.au/law-and-practice/practice-documents/notice-to-profession/29-april-2025) The Notice highlighted the risk of fabricated authorities: AI systems can generate citations that appear authentic but do not exist. Filing such material, even inadvertently, may constitute misleading the Court.
Practitioners are expected to verify the existence and accuracy of authorities, quotations, legal propositions, and legislative interpretations. Where AI materially contributes to reasoning, disclosure may also be required. The Court has indicated that technological competence is part of legal competence; using AI tools without understanding their limitations may be professionally problematic.
The Federal Circuit and Family Court does not currently have a formal practice note regulating lawyers’ use of AI. Its AI Transparency Statement (https://www.fcfcoa.gov.au/ai-transparency-statement) relates to the Court’s internal use of AI for administrative purposes. In practice, practitioners appearing in the Federal Circuit and Family Court remain responsible for all material they file, and the Federal Court’s Notice to the Profession on the Responsible Use of AI offers useful guidance: AI may be used as a drafting tool, but the practitioner retains full responsibility for verifying accuracy, authorities, and factual content.
New South Wales Courts – drafting versus evidence
The New South Wales Supreme Court and District Court have issued detailed guidance in Practice Note SC Gen 23 (https://supremecourt.nsw.gov.au/documents/Practice-and-Procedure/Practice-Notes/general/current/PN_SC_Gen_23.pdf) and General Practice Note 2 (https://districtcourt.nsw.gov.au/documents/practice-notes/district-court-pn—general/Gen_AI_Practice_Note.pdf) respectively. This guidance draws a critical distinction between drafting assistance and the creation of evidence. AI may be used to assist with language, grammar, or formatting, but it must not generate:
- Affidavits
- Witness statements
- Character references
- Any material purporting to represent a person’s knowledge or belief
Evidence must come from a human mind, and deponents must confirm that AI has not been used in preparing their affidavits or statements. If a witness adopts content produced by AI, the authenticity of the evidence is undermined. Similarly, practitioners may not rely on AI to verify legal authorities; every citation must be checked using authoritative research tools. Where AI is used for permissible purposes, the Court may require disclosure of the system used and how accuracy was verified.
NSW Industrial Relations Commission
The NSW Industrial Relations Commission Practice Note No 33 (https://irc.nsw.gov.au/documents/practice-notes/Practice_Note_No_33_-_Use_of_GenAI.pdf) addresses the use of AI in tribunal proceedings. The guidance applies to lawyers, union officials, employer representatives, and other advocates appearing before the Commission. It emphasises that AI does not reduce professional responsibility, and participants must understand the limitations of AI, ensure the accuracy of all material, avoid misleading the Commission, and preserve the integrity of evidence.
AI-generated witness evidence remains strictly prohibited, and deponents must confirm that AI has not been used in preparing such material. Legal authorities and propositions must be verified independently. While consequences in the Commission may be more credibility-based than disciplinary, credibility is critical in industrial advocacy, and errors can affect an advocate’s standing in future matters.
Fair Work Commission
Although the Fair Work Commission (FWC) has not yet issued a consolidated practice direction on the use of artificial intelligence, expectations are already emerging through tribunal decisions and member commentary. President Hatcher of the FWC has foreshadowed the introduction of a practice direction. The President has also indicated that the increased use of AI has been a significant driver behind the sharp increase in matters being commenced at the FWC, which necessitates further direction.
In recent cases, the FWC has encountered applications where parties admitted to using AI tools such as ChatGPT to draft their claims. (https://www.hcamag.com/au/specialisation/employment-law/fwc-slams-ai-generated-application-after-chatgpt-drafted-dismissal-claim/545918)
One dismissal claim, for example, was rejected after it became clear that the application had been AI-generated, with the FWC highlighting the obvious risks of relying on artificial intelligence for legal submissions.
These incidents demonstrate that material filed in the Commission must be reliable and accurate.
Common risks include AI-generated misinterpretations of awards, inaccurate summaries of legislation, and potentially misleading content in claims. Even in the relatively informal context of the Commission, practitioners and representatives must ensure factual accuracy and that any evidence or statements genuinely reflect the person’s knowledge. Credibility remains paramount, and errors arising from unverified AI-generated material can materially affect the weight given to submissions and the outcome of proceedings.
Confidentiality and privilege
Across all courts and tribunals, entering client information into publicly available AI systems raises confidentiality and privilege concerns. Public AI platforms may store, process, or incorporate user data into training sets, creating the risk of disclosure. Practitioners should treat public AI tools as external recipients of information. A simple rule is that if information would not be sent to a third party, it should not be entered into an open AI system.
Practical guidance for practitioners
Across all jurisdictions, a consistent practical framework is emerging for the use of artificial intelligence in legal practice. While AI can be a valuable tool for structuring documents, organising information, and assisting with drafting, practitioners must continue to exercise their professional judgment. They are required to perform their own legal reasoning, manually verify all authorities, ensure the accuracy of factual content, avoid using AI to generate evidence, maintain records of any AI assistance, and be prepared to disclose the use of AI if requested. A useful guiding principle is to never allow AI to perform a task that you could not confidently explain under oath. Following this principle ensures that the efficiency gains offered by AI do not come at the expense of professional accountability.
Artificial intelligence is becoming an increasingly common and powerful tool in legal practice, offering clear efficiency benefits. However, courts and tribunals have consistently emphasised that AI does not reduce the professional responsibilities of practitioners. Lawyers remain personally accountable for the content of filings, witness statements, and submissions. Guidance from the Federal Court, the Federal Circuit and Family Court, the New South Wales courts, the NSW Industrial Relations Commission, and the emerging expectations of the Fair Work Commission demonstrates that verification, supervision, and transparency are central to the responsible use of AI in practice.
As technology continues to evolve, understanding the limits of AI and maintaining professional diligence will remain essential for meeting both the ethical and procedural obligations of legal practice in Australia. By treating AI as a tool rather than a substitute for analysis, practitioners can leverage its efficiency while ensuring the integrity, credibility, and accountability of the legal process.
This article was published on 6 March 2026 by Carroll & O’Dea Lawyers and is based on the relevant state of the law (legislation, regulations and case law) at that date for the jurisdiction in which it is published. Please note this article does not constitute legal advice. If you ever need legal advice or want to discuss a legal problem, please contact us to see if we can help. You can reach us on 1800 059 278 or via the Contact us page on our website (www.codea.com.au).