Carroll & O'Dea


Artificial Intelligence and Defamation


Published on May 24, 2023 by Grace Brophy and Daniel O’Brien

As the popularity of Artificial Intelligence (“AI”) continues to rise, when it comes to defamation we need to ask ourselves, “who is responsible for the publication of material and what are the risks involved?”

Defamation law is a complex area of law, and the emergence of AI models such as ChatGPT and Bing Chat, together with their mainstream use, has raised many rapidly evolving legal questions.

However, before we address the early impact of AI on defamation law within Australia, we must first define defamation.

What is Defamation?

Defamation is broadly defined as the publication of false and damaging statements about an individual (and sometimes a corporation) that cause significant harm to their reputation.

The legal remedies for defamation include damages for non-economic loss (reputational damage), damages for economic loss (such as loss of future earnings) and, in some cases, an interlocutory or interim injunction requiring the removal of the defamatory material and an order preventing further publication.

Recent Cases

Below we examine a number of Australian and international cases that have raised significant questions involving AI and the dissemination of false information with potentially defamatory outcomes.

Professor Jonathan Turley  

In April 2023, Jonathan Turley, Professor of Public Law at George Washington University, discovered that ChatGPT had falsely reported that he had been accused of sexual harassment in a 2018 Washington Post article. [1]

ChatGPT was asked to cite “five examples” of “sexual harassment” by U.S. law professors with “quotes from relevant newspaper articles” to support them.

According to Professor Turley, “five professors came up, three of those stories were clearly false, including my own”. [2]

In this instance, ChatGPT had concocted a Washington Post story, and fabricated a quote, which falsely accused Professor Turley of sexual harassment, something he has categorically denied. There were a number of inaccuracies rendering the accusation false. These included references to:

  1. the wrong law school;
  2. a trip to Alaska that had never occurred; and
  3. a false allegation of sexual harassment.

Brian Hood, Mayor of Hepburn Shire, VIC

In this matter, ChatGPT published material falsely claiming that Hepburn Shire Mayor, Brian Hood, had served time in prison for bribery.

Mr Hood was the whistleblower in a corruption scandal involving a company partly owned by the Reserve Bank of Australia. Several people were charged, but Mr Hood was not among them.

In spite of this, an article was generated by ChatGPT, which falsely accused Mr Hood of being jailed for bribing foreign officials to win currency printing contracts.

Following the publication of the article, Mr Hood threatened to commence defamation proceedings against OpenAI (the owner of ChatGPT). This would be the first defamation lawsuit against the automated service. [3]

What are the consequences of AI for defamation law in Australia?

The main question that needs to be addressed is whether the law recognises the liability of AI models that publish defamatory material.

At this point in time, opinions on this topic remain mixed.

Associate Professor Jason Bosland, Director of the Media and Communications Law Research Network at the University of Melbourne, is quoted as saying:

“The prospects of successfully suing [them] are very slim, almost zero, and that’s because the claimant might be able to obtain a judgment in their favour against OpenAI in a local court, but it would need to be enforced in the US”. [4]

It is possible that AI companies could add disclaimers to their products to avoid liability. That liability could then pass to users, who might be pursued for spreading false and defamatory claims despite being aware that AI models do not always produce accurate information.

However, at this stage, the issues are still up for debate and we encourage our clients and readers to closely monitor any developments on this important and evolving issue.

If you have been defamed or have had an allegation of defamation made against you, contact Grace Brophy on (02) 8226 7320 and / or Daniel O’Brien on (02) 9291 7185 for a free initial consultation to discuss your legal options.

Daniel O’Brien, Partner – dobrien@codea.com.au
Grace Brophy, Lawyer – gbrophy@codea.com.au

 


[1] https://www.washingtonpost.com/technology/2023/04/05/chatgpt-lies/

[2] https://www.washingtonpost.com/technology/2023/04/05/chatgpt-lies/

[3] https://www.theguardian.com/technology/2023/apr/06/australian-mayor-prepares-worlds-first-defamation-lawsuit-over-chatgpt-content

[4] https://lsj.com.au/articles/worlds-first-chatgpt-defamation-lawsuit-may-come-out-of-australia/
