Federal Court of Australia Issues New Mandates to Curb Unacceptable AI Use
Chief Justice Debra Mortimer warns Australian lawyers of penalties for AI-generated errors, mandating disclosure and verification of all generative AI content.
By: AXL Media
Published: Apr 16, 2026, 9:29 AM EDT
Source: The Guardian

Strict Disclosure and Verification Protocols
The new guidance requires lawyers and solicitors to maintain rigorous oversight when employing AI tools for pleadings, written submissions, and other official filings. Legal professionals must explicitly disclose at the start of a document whether generative AI was used to summarize information, analyze data, or generate media evidence. Crucially, the court now mandates that all practitioners verify that cited legal authorities actually exist and directly support the propositions made, a direct response to a global surge in AI "hallucinations" appearing in court records.
Accountability and Financial Consequences
The Chief Justice emphasized that while the court "embraces" technological advancement, misuse will lead to severe professional repercussions. Lawyers who fail to comply with these transparency rules can expect adverse costs orders (financial penalties that shift legal fees to the non-compliant party) as well as potential investigations into breaches of their professional obligations. The warning follows at least 73 identified instances in Australia where AI-generated content led to false citations or fabricated quotes being presented as legitimate legal precedent.
Strategic Analysis: Safeguarding Professional Standards
The Federal Court's intervention marks a shift from passive observation to active regulation of legal technology. By mandating that affidavits and expert reports reflect "recollection, knowledge, or experience" rather than algorithmic output, the court is reinforcing the human element of legal testimony. This move acts as a critical firewall against the "unsustainable phase" of AI use described by High Court Chief Justice Stephen Gageler, in which judges have found themselves acting as "human filters" for unreliable, machine-generated arguments.
Related Coverage
- West Virginia University Study Finds Judges Adopting Generative AI for Administrative Support While Protecting Human Authority
- ACM TechBrief Warns of Security and Reliability Risks in Rapidly Rising Vibe Coding Trend
- Singapore Prison Service Launches Massive 81-Cubicle Video Court Facility at Changi to Digitalize Justice
- Disgraced Immigration Adviser Qian Yu Faces Further $25,000 in Fines for "Blatant" Deception