
Canberra to develop guardrails for high-risk AI

Technology

Consultation paper response outlines a targeted regulatory approach to ensure low-risk use continues unimpeded.

By Christine Chen

The government will develop mandatory guardrails for the use of AI in high-risk settings, such as robot-assisted surgery or self-driving cars, but will refrain from the one-size-fits-all regulation adopted in Europe, it says.

Ahead of possible legal changes, it proposed three voluntary measures for “immediate action”: developing a set of safety standards, creating mechanisms for labelling AI-generated material, and establishing an expert group to thrash out mandatory rules for AI use.

Minister for Industry and Science Ed Husic said the government wanted “safe and responsible thinking baked in early” as AI was designed, developed and deployed.

“Australians understand the value of artificial intelligence, but they want to see the risks identified and tackled,” he said.

“We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI. The Albanese government moved quickly to consult with the public and industry on how to do this, so we start building the trust and transparency in AI that Australians expect.”

The proposals come in an interim response to the consultation paper Safe and Responsible AI in Australia, and outline an approach that contrasts with the EU’s single regulatory law to ensure low-risk AI use “continues to flourish largely unimpeded”.

Indicators of high-risk activities included “systemic, irreversible or perpetual” impacts, the paper said, such as using AI-enabled robots for surgery or the use of AI in self-driving cars. However, a comprehensive definition of the term was still in development.

The interim response was the product of 510 online submissions from industry stakeholders, along with roundtables and a virtual town hall event.

“Almost all submissions called for the government to act on preventing, mitigating and responding to the harms of AI”, it said.

The paper said the National AI Centre would “create a single source for Australian businesses seeking to develop, adopt or adapt AI”.

Other measures included developing labelling mechanisms for AI-generated materials and a temporary expert advisory group to develop mandatory “guardrails”.

The paper said submissions recognised that “voluntary commitments from companies to improve the safety of systems capable of causing harm were insufficient”, but views differed on the most appropriate form of regulation.

Potential mandatory “guardrails” could involve AI product testing before release, transparency around model design and the data underpinning AI applications, and training for developers and deployers of AI systems, it said.

It also considered possible forms of certification and clearer expectations of accountability for organisations developing, deploying and relying on AI systems.

RMIT research fellow Nataliya Ilyushina criticised the government’s delayed response to the paper, noting that the consultation process closed six months ago.

“Australia’s unacceptable delay in developing AI regulation represents both a missed chance for its domestic market and a lapse in establishing a reputation as an AI-friendly economy with a robust legal, institutional and technological infrastructure globally,” she said.

Ms Ilyushina emphasised the importance of striking the right balance: managing AI’s risks without stifling its benefits, especially for small businesses.

“The adoption of AI is affordable and accessible, which is particularly essential for the growth of small businesses – the cornerstone of the Australian economy. Employing AI to augment human jobs has demonstrated a capacity to enhance productivity, providing a direct solution to Australia's challenges of stagnant productivity growth, the cost-of-living crisis and labour shortages,” she said.

“While businesses prefer voluntary codes and frameworks, other stakeholders – especially those working on risks related to cybersecurity, misinformation, fairness and biases – seek more stringent regulations.”

AI writer Tracey Spicer said she was disappointed by the government’s “weak” regulatory response.

“Australia had a tremendous opportunity to be a world leader in this area. Instead, it’s all about a soft, voluntary approach. Big Tech has won, like Big Tobacco in the past,” she wrote on X.

In addition to developing AI regulation, the government committed $75.7 million to AI initiatives in the 2023–24 federal budget, including creating SME support centres, expanding the National AI Centre and funding AI graduate programs.

Christine Chen

AUTHOR

Christine Chen is a graduate journalist at Accountants Daily and Accounting Times, the leading sources of news, insight, and educational content for professionals in the accounting sector.

Previously, Christine has written for City Hub, the South Sydney Herald and Honi Soit. She has also produced online content for LegalVision and completed internships at EY and Deloitte.

Christine has a commerce degree from the University of Western Australia and is studying a Juris Doctor degree at the University of Sydney. 
