The federal government says the adoption of artificial intelligence (AI) and automation could add hundreds of billions of dollars to the Australian economy by 2030, but that high-risk uses of the technology need safety standards and guardrails.
Releasing the government's interim response to the Safe and Responsible AI in Australia consultation, Minister for Industry and Science Ed Husic says Australians see the value of the technology, but also want to see its risks identified and tackled.
"We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI," he says.
"We want safe and responsible thinking baked in early as AI is designed, developed and deployed."
A risk-based response
In its response paper, the government says it will target high-risk settings where the harms of AI would be difficult to reverse, and that legislation mandating protective guardrails may be introduced in these areas.
'High-risk' applications of AI technology include the collection of biometric information such as facial recognition; medical devices; critical infrastructure such as water, gas and electricity; systems that determine access to education or employment; and law enforcement.
The government says it wants to make sure businesses engaged in 'low-risk' uses of AI are allowed to flourish unimpeded.
Businesses can't regulate themselves
Rafi Alam, CHOICE's senior consumer data campaigns advisor, says he is pleased to see the government take the first steps towards regulating the use of AI. He adds that consultations with consumer groups should continue, and points out that history shows businesses can't be left to regulate themselves.
"The government's interim response has recognised the inadequacy of our current laws to address consumer concerns about high-risk AI systems, and we welcome their commitment to putting in mandatory guardrails on the development and deployment of these systems," he says.
"CHOICE's investigations have shown how AI can be used to harm consumers, from biased algorithms that increase prices based on age and sexuality, to facial recognition technology that can be used to monitor and discriminate against certain customers. Our submission to the consultation argued the need to implement a risk-based approach to these and other AI-based technologies."
In CHOICE's submission to the consultation late last year, we argued for a risk-based approach to AI legislation and called for strong enforcement, including a well-funded AI Commissioner with a range of civil and criminal penalty powers.
We also urged Australia to look to the European Union and Canada as jurisdictions where stringent regulation requires businesses to guarantee that AI systems are safe, fair, transparent, reliable and accountable before releasing them.