Australia is taking a significant step in the governance of artificial intelligence (AI) by establishing a new advisory body. The move, announced by the government, aims to address the growing risks associated with AI technologies.

The initiative will involve collaboration with industry bodies to implement a series of guidelines. These will include measures like labeling and watermarking AI-generated content, indicating a proactive approach to fostering transparency and trust in AI use.

Ed Husic, the Science and Industry Minister, highlighted AI's economic potential but also pointed to its uneven adoption across businesses and the underlying trust issues that need to be addressed. The new body marks a progression in Australia's AI regulation, building on its pioneering establishment of the world's first eSafety Commissioner in 2015. Even so, Australia has lagged other nations in this domain: the guidelines initially proposed will be voluntary, in contrast to the mandatory regulations taking shape in places like the European Union.

Australia's consultation on AI, which received over 500 responses last year, is part of this broader approach. The government is keen to distinguish 'low risk' AI applications from 'high risk' ones, such as deepfakes, with a full response to the consultation expected later this year.