Use of AI could worsen racism and sexism in Australia, human rights commissioner warns
AI risks entrenching racism and sexism in Australia, the human rights commissioner has warned, amid internal debate over how to respond to the emerging technology.
Lorraine Finlay said the push for productivity gains from AI should not come at the cost of discrimination, which she warned could result if the technology is not properly regulated.
Finlay's comments follow the Labor senator Michelle Ananda-Rajah calling for all Australian data to be "freed" to technology companies, to prevent AI entrenching overseas biases and to ensure the technology reflects Australian life.
Ananda-Rajah opposes a dedicated AI act, but believes content creators should be paid for their work.
The productivity gains promised by AI will be discussed at the federal government's economic summit next week, with unions and industry bodies raising concerns about copyright and privacy protections.
Media and arts groups have warned of "rampant theft" of intellectual property if big technology companies are allowed to mine their content to train AI models.
Finlay said a lack of transparency about which datasets are used in AI tools makes it difficult to determine what biases they may contain.
"Algorithmic bias means that bias and unfairness are built into the tools we use, so the decisions those tools produce will reflect that bias," she said.
"When you combine algorithmic bias with automation bias – where people rely more and more on the decisions of machines, almost substituting them for their own thinking – there is a real risk that what we are actually creating is discrimination and bias in a form so entrenched that we don't even know it is occurring."
The Human Rights Commission has consistently called for an AI act, alongside strengthening existing legislation, including privacy law, and rigorous bias testing of AI tools. Finlay said the government urgently needed to put new legislative safeguards in place.
"Bias testing and auditing, and making sure there is proper human oversight and review – you need that whole variety of different measures," she said.
There is growing evidence of algorithmic bias overseas in areas such as medicine and recruitment.
An Australian study published in May found that job candidates interviewed by AI recruitment tools risked being discriminated against if they spoke with an accent or lived with a disability.
Ananda-Rajah, who was a doctor and AI researcher before entering parliament, said it was important that AI tools were trained on Australian data, or they would carry over biases from overseas.
While emphasising the need to protect intellectual property, she warned that failing to open up domestic data would mean Australia would be "forever renting [AI] models from overseas tech giants" with no oversight of, or insight into, their models or platforms.
"AI must be trained on as much data as possible or it will amplify biases, potentially harming the very people it is meant to serve," Ananda-Rajah said.
"We need to free our own data in order to train the models so that they better represent us.
"I am all for monetising content as well as freeing the data. I think we can offer an alternative to the pillaging and plundering happening overseas."
Ananda-Rajah pointed to skin cancer screening as one example where the tools used for detection have shown algorithmic bias. She said the way to overcome bias or discrimination against particular patients was to "train these models on as much diverse data from Australia as possible", with appropriate safeguards for sensitive data.
Finlay said any release of Australian data would need to be done fairly, but she believes the focus should be on regulation.
"It is absolutely good to have diverse and representative data … but it's only one part of the solution," she said.
"We need to ensure this technology is being introduced in a way that is fair to everyone and genuinely recognises the work and contributions that people make."
Judith Bishop, an AI expert at La Trobe University and a former data scientist at an AI company, said more Australian data could help train AI tools more appropriately – warning that tools developed overseas on international data may not reflect the needs of Australians – but that this was only a small part of the solution.
"We need to make sure that a system originally developed in other contexts actually works for the [Australian] population, and that we are not relying on US models that have been trained on US data," Bishop said.
The eSafety commissioner, Julie Inman Grant, is also concerned about the lack of transparency around how AI tools use data.
In a statement, she said technology companies needed to be transparent about their training data, develop reporting tools and use diverse, accurate and representative data in their products.
"The opacity of generative AI development and use is deeply problematic," Inman Grant said. "This raises important questions about the extent to which LLMs [large language models] could reinforce, even accelerate, harmful biases – including narrow or harmful gender norms and racial prejudices.
"With the development of these systems concentrated in a few companies, there is a real risk that certain experiences, voices and perspectives could be overshadowed or suppressed in generative outputs."