Artificial intelligence and workplace safety: NSW extends WHS duties to digital systems
Published on April 17, 2026 by Tim Grellman and Jessica Duan
The increasing use of artificial intelligence and algorithm-driven technologies in modern workplaces has created new efficiencies for businesses, but it has also introduced new categories of workplace risk. From gig-economy platforms allocating jobs through automated systems to warehouses using algorithms to drive productivity targets, technology is increasingly shaping how work is organised and performed.
Recognising these developments, the New South Wales Parliament has recently enacted legislative reforms clarifying that work health and safety obligations extend to risks created by artificial intelligence and digital workplace systems. The amendments represent a significant step in adapting workplace safety laws to the realities of an increasingly digitised workforce.
Legislative reform
The changes were introduced through the Work Health and Safety Amendment (Digital Work Systems) Act 2026 (NSW), which amends the existing framework under the Work Health and Safety Act 2011 (NSW) (https://legislation.nsw.gov.au/view/html/inforce/current/act-2011-010).
While the WHS Act has long imposed a primary duty on employers to ensure, so far as is reasonably practicable, the health and safety of workers, the amendments extend this duty to hazards arising from “digital work systems”. The concept of digital work systems is broad and captures a range of technologies, including artificial intelligence, algorithmic management tools, automated monitoring systems and digital platforms that influence how work is allocated or performed.
The reforms were introduced in response to the growing role of automated decision-making in workplaces and the recognition that safety risks may arise from software systems as much as from physical equipment or workplace environments. Importantly, the amendments clarify that the use of technology does not diminish or shift an employer’s legal responsibility for workplace safety.
Algorithmic management and workplace risk
One of the central issues addressed by the reforms is the rise of “algorithmic management”. In many industries, work allocation, performance monitoring and productivity expectations are increasingly determined by digital systems rather than direct human supervision.
This model is particularly prevalent in gig-economy platforms, where drivers or delivery workers receive tasks through a platform’s app, but it is also becoming common in logistics, warehousing, retail and other sectors where digital monitoring tools track worker productivity.
While such systems can improve efficiency, they may also generate safety risks. For example, algorithmically generated performance targets may encourage workers to maintain unsustainable work rates, increasing the likelihood of fatigue, repetitive strain injuries or other physical harm. Similarly, automated scheduling systems may allocate workloads without adequately accounting for human limitations or workplace conditions.
Digital technologies may also create psychosocial hazards. Continuous monitoring of worker performance, location or productivity can contribute to workplace stress, anxiety and fatigue, particularly for workers who feel constant pressure to meet automated performance metrics. The reforms recognise that these risks are genuine workplace hazards that must be managed within the existing work health and safety framework.
Clarifying employer responsibility
The legislative amendments make clear that employers cannot avoid liability by attributing decisions to automated systems or third-party software providers.
Under the WHS framework, the person conducting a business or undertaking remains responsible for ensuring that the systems used within the workplace operate safely. This means that where an organisation deploys artificial intelligence or digital management tools, it must ensure those systems do not create foreseeable risks to worker health and safety, including:
- excessive or unreasonable workloads for workers at work in the business or undertaking,
- the use of excessive or unreasonable metrics to assess and track the performance of workers at work in the business or undertaking,
- excessive or unreasonable monitoring or surveillance of workers at work in the business or undertaking, and
- unlawful discriminatory practices or decision-making in the conduct of the business or undertaking.
In practical terms, this may require employers to undertake risk assessments when implementing digital systems, review how algorithmic tools allocate work, and ensure that automated productivity targets do not expose workers to unreasonable pressure or unsafe conditions. The reforms reinforce the principle that workplace safety obligations apply to the system of work, regardless of whether that system is controlled by human supervisors or digital technology.
Implications for employers
For organisations adopting artificial intelligence or automated management tools, the reforms highlight the importance of incorporating safety considerations into technology governance and procurement processes.
Businesses may need to review how digital systems influence workload allocation, performance monitoring and workplace expectations. This may involve assessing whether algorithmic systems produce unreasonable workloads, ensuring that workers are not subject to excessive digital surveillance, and consulting with employees about the impact of new technologies. Employers may also need to engage more closely with technology providers to understand how digital systems operate and whether they create unintended safety risks.
From a governance perspective, boards and senior management should recognise that the deployment of AI and automated workplace systems is not merely a technological decision but also a legal and compliance issue. Organisations that fail to properly assess the safety implications of digital tools may face regulatory scrutiny under the WHS regime.
A changing regulatory landscape
The amendments represent one of the more direct legislative attempts to address the intersection of artificial intelligence and workplace safety.
As technology continues to reshape the nature of work, regulators are increasingly recognising that traditional safety frameworks must evolve to address new forms of risk. In this context, the NSW reforms signal a broader shift toward treating digital systems as integral components of workplace infrastructure, subject to the same safety obligations as machinery, equipment and physical work environments.
The extension of work health and safety obligations to digital work systems reflects the growing influence of technology in modern workplaces. While artificial intelligence and algorithmic management tools may deliver operational efficiencies, they also have the potential to shape workloads, behaviour and working conditions in ways that affect worker wellbeing. By clarifying that employers remain responsible for the risks created by these technologies, the new legislation reinforces a fundamental principle of workplace safety law: innovation does not displace responsibility. Regardless of whether instructions originate from a human supervisor or an algorithm, employers must ensure that the systems directing work are safe.
This article was published on 17 April 2026 by Carroll & O’Dea Lawyers and is based on the relevant state of the law (legislation, regulations and case law) at that date for the jurisdiction in which it is published. Please note this article does not constitute legal advice. If you ever need legal advice or want to discuss a legal problem, please contact us to see if we can help. You can reach us on 1800 059 278 or via the Contact us page on our website (www.codea.com.au). If you or a loved one has been injured, use our Personal Injury Claim Check now.