Australian Academics Apologise for False AI-Generated Allegations Against Big Four Consultancy Firms

Henry Belot Guardian Australia 3 November 2023

Summary

  • False accusations by AI: A group of academics used an AI tool called Google Bard to generate false allegations of misconduct against the big four consultancy firms in a submission to a parliamentary inquiry.
  • Reactions from the firms: Two of the firms, KPMG and Deloitte, were outraged by the inaccurate information and demanded corrections. They said the false accusations unfairly tarnished the reputation of their staff and the industry.
  • Apology from the academics: The academics admitted their error and offered an unreserved apology to the firms and the committee. They said they were unaware of the limitations of AI and regretted using it to create case studies.
  • Implications for AI and parliament: This incident raised questions about the reliability and ethics of using AI to generate information that is covered by parliamentary privilege. It also showed the need for checking sources and facts before making submissions.


Full Article

A group of academics has offered an unreserved apology to the big four consultancy firms after admitting they used artificial intelligence to make false allegations of serious wrongdoing in a submission to a parliamentary inquiry.

The accusations have been met with scorn by the firms, which are concerned that inaccurate information has been given parliamentary privilege and will unfairly tarnish the reputation of their staff.

The academics, who specialise in accounting, were urging a parliamentary inquiry into the ethics and professional accountability of the consultancy industry to consider broad regulation changes, including splitting up the big four.

Part of the original submission relied on the Google Bard AI tool, which the responsible academic had begun using only that same week. The AI program generated several case studies about misconduct that were highlighted in the submission.

The original submission falsely accused KPMG of being complicit in a “KPMG 7-Eleven wage theft scandal” that led to the resignation of several partners. It also accused KPMG of auditing the Commonwealth Bank during a financial planning scandal. KPMG never audited the Commonwealth Bank.

Deloitte’s general counsel, Tala Bennett, also expressed concern about the submission wrongly accusing her firm of being sued by the liquidators of the collapsed building company Probuild for allegedly failing to properly audit its accounts. Deloitte never audited Probuild.

The submission raised concerns about a “Deloitte NAB financial planning scandal” and wrongly accused the firm of advising the bank on a scheme that defrauded customers of millions of dollars. Deloitte told the Senate there was no such scandal.

It also accused Deloitte of falsifying the accounts of a company called Patisserie Valerie. Deloitte had never audited the company.

“Deloitte supports academic freedom and constructive discourse in relation to those matters currently before the committee, however, it considers that it is important to have factually incorrect information corrected,” Bennett said.

“It is disappointing that this has occurred, and we look forward to understanding the committee’s approach to correcting this information.”

KPMG has written to the academics’ employers to state that while it respects academic freedom, it will seek to publicly correct the record to protect the reputation of its staff.

This is believed to be the first time a parliamentary inquiry has grappled with the issue of artificial intelligence generating false accusations that are subsequently covered by parliamentary privilege, which prevents defamation lawsuits.

The sections of the submission that contained false information generated by artificial intelligence will now be removed. A new document is expected to be uploaded to the Senate inquiry website.

In a letter to the Senate, emeritus professor James Guthrie claimed responsibility for the error and absolved the other academics of blame.

“Given that the use of AI has largely led to these inaccuracies, the entire authorship team sincerely apologises to the committee and the named Big Four partnerships in those parts of the two submissions that used and referenced the Google Bard Large Language model generator,” Guthrie said in the letter.

“Given we are also accounting academics, we are deeply invested in the public interest and ensuring accountability and transparency in the sector – which is why we unreservedly offer our apologies and deep regret.”

Guthrie said the factual errors were “regrettable” but insisted “our substantive arguments and our recommendations for reform remain important to ensure a sustainable sector built on shared community values”.

“I now realise that AI can generate authoritative-sounding output that can be incorrect, incomplete or biased,” Guthrie said.

The Liberal senator Richard Colbeck, who chairs a separate inquiry into the consulting industry, said the incident was “a salient reminder to everyone to check your sources”.

“This shows how out of control things can get if basic facts are not correct.”