New nationally representative research from CHOICE highlights the wide gulf between consumer expectations of AI regulation and the current state of play in Australia. The survey of more than 1000 people found that almost 4 in 5 Australians believe that businesses should have to ensure their artificial intelligence system is fair and safe before releasing it to the public, yet no such requirements exist in Australia.
The survey also found strong support for the government's role in ensuring AI systems are fair and safe, with 77% agreeing that the government should require businesses to assess the risks of an AI system before it is released to the public, and 3 out of 4 agreeing that businesses should also be required to prevent those risks before release.
Notably, consumers expect AI risks to be assessed and managed before products enter the market and see a clear role for the government in ensuring businesses comply.
"The message couldn't be clearer," says CHOICE consumer data advocate Kate Bower. "Australians want the government to place obligations on business to mitigate the risks of AI systems."
"AI systems are notoriously opaque, making it difficult for consumers to know the risks simply by interacting with a product or service. These results show that Australians understand this and they want government and businesses to do more to make sure that AI systems are used fairly and responsibly."
The figures were released on World Consumer Rights Day (15 March), with the theme for 2024 being fair and responsible AI for consumers. Consumers International, the membership organisation for consumer groups worldwide, chose the theme as an acknowledgment of the rising tide of AI systems globally and the "serious implications for consumer safety and digital fairness" that entails.
Government takes first steps towards reform
In January, the federal government announced an interim response to last year's Safe and Responsible AI consultation, undertaken by Minister for Industry and Science Ed Husic and the Department of Industry, Science and Resources. The government acknowledged that AI regulation in Australia is seriously lagging behind comparable countries and that Australians want legal protections.
In the response, the government committed to several actions, including:
- the establishment of a temporary AI advisory group to consider mandatory guardrails for high-risk uses of AI
- the creation of a voluntary AI safety standard
- exploring the merits of watermarking for AI-generated content.
However, the government intends to leave most uses of AI, such as those in low- and medium-risk categories, to be governed by existing laws.
Serious gaps in our current laws
Multiple existing laws do apply to AI systems. For instance, negligent or misleading business practices are illegal regardless of which technology is used. But experts have pointed to the myriad gaps in existing legal regimes, including consumer protection, privacy, discrimination, and copyright law.
Dr Zofia Bednarz, an Associate Investigator at the ARC Centre of Excellence for Automated Decision-Making and Society, says "The laws we have currently in place were not prepared with AI models in mind, and they often fail to address new practices enabled by AI. Coupled with often ineffective enforcement of the existing rules, this makes our legal and regulatory system likely unable to deal with many of the new challenges posed by AI."
Bower says that, while government action is welcome, the response so far falls well short of consumer expectations.
"Australians want more than the bare minimum protection, just regulating for the highest level of risk is not going to cut it," says Bower.
"Three-quarters of Australians believe the government should require businesses to actually prevent AI risks before they release a product into the market and more than two-thirds support an independent third party assessment of the risks, such as through an AI regulator.
"It's clear the government is going to have to seriously beef up their response if they want to meet consumer expectations and restore consumer trust."
CHOICE's submission to the Safe and Responsible AI consultation made several recommendations to the federal government, such as the establishment of a well-funded AI commissioner with a range of civil and criminal powers, including a new product intervention power to stop harmful AI products before they reach the market. CHOICE is also urging the government to enshrine in legislation requirements for AI systems to be fair, safe, reliable, transparent and accountable.
"With the World Consumers Rights Day theme for 2024 being fair and responsible AI for consumers, now is the perfect time to address the serious implications artificial intelligence poses to consumer safety and digital fairness by introducing stronger laws," says Bower.
Do you have questions or concerns about AI? We'd like to hear from you.
Our survey
CHOICE Consumer Pulse January 2024 is based on an online survey designed and analysed by CHOICE. A total of 1,058 Australian households responded to the survey, with quotas applied to ensure coverage across all age groups, genders and locations in each state and territory, across metropolitan and regional areas. The data was weighted to be representative of the Australian population, based on 2021 ABS Census data. Fieldwork was conducted from 16 January to 5 February 2024.