APAA e-Newsletter (Issue No. 40, April 2024)

AI and the Law in Australia – What Every Business Needs to Know in 2024

Joy Atacador, Peter Divitcos, Matthew Hennessy - Dentons Australia (Australia)


Businesses across Australia are rapidly deploying AI-powered technologies to increase productivity, augment labour and process high volumes of data. The business case for AI, and for sound AI governance, is clear. Every business must understand how to navigate the risks associated with the use of AI.

What laws currently regulate the use of AI in Australia?

There is no single piece of legislation in Australia that regulates AI. Instead, AI is regulated by existing legislation directed at specific harms. Broadly speaking, this patchwork of laws can be divided into the following six categories:

  • The misuse of personal information or data in connection with a prompt or response generated by an AI model is generally addressed by the Privacy Act 1988 (Cth), the Security of Critical Infrastructure Act 2018 (Cth), and intellectual property and confidentiality obligations.
  • A false output generated by an AI model (including inaccurate advice or information) may be addressed by the misleading and deceptive conduct and false representation provisions, product liability regime and consumer guarantees under the Australian Consumer Law, being Schedule 2 of the Competition and Consumer Act 2010 (Cth) (ACL).
  • Consumers aggrieved by an AI system that results in an unfair or unreasonable outcome can rely on the unconscionable conduct and consumer guarantee provisions in the ACL.
  • Where an AI model discriminates on the basis of a protected attribute (such as age, disability, race, sex, intersex status, gender identity or sexual orientation), recourse may be sought in certain circumstances under Australia’s anti-discrimination laws, including the Age Discrimination Act 2004 (Cth), Disability Discrimination Act 1992 (Cth), Racial Discrimination Act 1975 (Cth) and Sex Discrimination Act 1984 (Cth).
  • Recourse for physical, economic or other harm arising from the impact of an AI system may be found in Australia’s work, health and safety laws, the ACL and the tort of negligence.
  • Sector-specific laws, such as the Therapeutic Goods Act 1989 (Cth), which regulates medical devices utilising AI.

This fragmented regulatory environment is likely to leave gaps in coverage and expose Australia to some vulnerabilities.

The Australian Government’s interim response to its consultation on Safe and Responsible AI in Australia indicates that new laws will distinguish between high-risk and low-risk AI use cases.

Under this approach, stricter preventative measures such as testing, transparency and accountability requirements would be introduced for high-risk AI use cases, whereas a minimalist approach would be adopted for low-risk AI developments.

This risk-based approach to AI regulation raises several challenges. First, the level of risk attached to different AI use cases must be assessed. In addition, the specific context in which an AI system is used may not be comprehensively captured. Further, any regulation must be adaptable to rapid technological development.

Australian Government’s copyright reform proposals

Over 2023, the Australian Government’s Attorney-General’s Department held a series of roundtables with stakeholders on copyright priorities and emerging issues.[1] Late in 2023, it was announced that a copyright AI reference group would be established to prepare for future copyright challenges emerging from AI.

The reference group is likely to consider issues including the authorship and ownership of IP rights in AI-generated content and liability for AI-generated content.

An AI platform’s terms of use typically include provisions concerning the ownership of IP rights in AI-generated content. Businesses should check these terms, including the licence provisions, to ensure they are not assigning rights in their inputs to the AI platform.

Australian courts are also yet to consider the question of copyright liability for AI-generated content. Liability for AI-generated outputs may rest with:

  • The developers of the AI system, including its language models and algorithms.
  • The provider of the AI system to end-users.
  • The end-user that inputs prompts which instruct the AI system to generate an output.

Where an AI system is developed or programmed to directly copy the data in its learning models, irrespective of the end-user’s prompts, the AI developer and provider are more likely to be responsible for any infringing output. A fair dealing exception for text and data mining would likely benefit AI developers and providers in this regard.

However, where an AI system is developed or programmed to use data to create original content, but the end-user’s prompts nonetheless result in an infringing output, the AI developer and provider are less likely to be held responsible.

The reference group will need to provide recommendations on who is responsible for the use of AI to create copycat works and when AI-generated works should receive copyright protection.

How to navigate the risks of using AI in your business

  1. Keep a list of AI platforms used in your business.
  2. Review the Terms of Service of AI tools used in your business, including IP ownership and indemnity provisions. Check that your business has the right to input data and use AI-generated content.
  3. Ensure the legal team is consulted about your business’ AI strategy and projects.
  4. Engage your Board in an overhaul of your AI governance framework.
  5. Establish an AI Governance Committee and adopt AI Guiding Principles.
  6. Maintain AI risk registers for projects.
  7. Implement acceptable AI use policies and training programs for staff.
  8. Maintain a bank of checklists and protocols that must be complied with when engaging AI vendors. Create template AI clauses and incorporate them into supplier and customer contracts. Obtain warranties from third-party contractors confirming the originality of content.
  9. Do not input commercially sensitive, confidential, legally privileged or private personal information, trade secrets or valuable IP into an AI tool.
  10. Continually audit, monitor and review AI use cases, transactions and incidents.


[1] Australian Government Attorney-General’s Department, Ministerial Roundtable on Copyright, https://www.ag.gov.au/rights-and-protections/copyright/ministerial-roundtable-copyright.