Australian police want AI facial recognition to fight child abuse

Members of Australian law enforcement have called for changes to privacy legislation that would allow them to use AI-powered facial recognition and decryption to more easily identify victims and perpetrators of child sexual abuse material (CSAM).

Warning: This story contains references to child sexual abuse.

Speaking at the International Center for Missing and Exploited Children (ICMEC) Australia’s inaugural ‘Safer AI for Kids Summit’ in Sydney on Tuesday, representatives from federal and state police agencies expressed concern that privacy laws and public perception were preventing them from using AI technologies.

Several speakers acknowledged that the public has a negative view of law enforcement’s use of AI, in part due to earlier controversies over police secretly using facial recognition software created by the controversial American company Clearview AI.

Members of Australian police agencies were caught using Clearview AI’s facial recognition tool in 2020, after the company had collected biometric data from the internet and social media platforms without individuals’ consent.

The software is also used by other law enforcement agencies around the world.

Police ‘scared’ after Clearview AI debacle

Simon Fogarty, director of Victoria Police’s victim identification team, said there was now “massive resistance” to police using AI, though he did not mention Clearview AI by name.

“There’s a lot of concern in the law enforcement community about the use of AI, especially considering some of us got burned pretty badly a few years ago with the use of some products,” he said.

Fogarty argued that there is a significant difference between online data scraping tools and AI search tools that can be used offline.

He called for national police messaging around AI to “give front-end users some comfort about what they can and can’t do”, and for a more positive public narrative about the technology.

Fogarty said he believes police should gain access to AI tools through improved policies, but clear moral and ethical considerations are needed around facial recognition.

Paul Griffiths, director of child victim identification at Queensland Police, said the use of AI in biometrics was “essential”, but a lack of strong regulation had contributed to some police officers “abusing the powers” of AI, resulting in law enforcement losing those powers.

Hiro Noda, the Australian Federal Police’s (AFP) AI and emerging technology coordinator, said police needed to be “very transparent” about how they were using AI and how it would benefit society.

He said the AFP was “very keen” to work with various jurisdictions to improve their public messaging about law enforcement’s use of AI, rather than relying on “defensive” communications following media controversy.

In 2021, the Office of the Australian Information Commissioner (OAIC) found that Clearview AI had breached Australia’s Privacy Act by collecting citizens’ facial images and biometrics.

The OAIC ordered Clearview AI to stop collecting these images and to delete them, but the company, co-founded by Australian entrepreneur Hoan Ton-That, has not confirmed whether it deleted the images of Australians or stopped collecting them.

In August, the OAIC announced it would not take any further legal action against Clearview AI, which is already facing investigations and lawsuits in other countries.



Queensland Police’s Paul Griffiths said a lack of regulation was contributing to police ‘abusing the powers’ of AI. Photo: Tom Williams / Information Age

‘We need artificial intelligence’

AFP acting commissioner Ian McCartney said generative AI allows criminals to quickly create artificial CSAM, while the technology could also help investigators sift through large amounts of media and data efficiently, “improving investigation outcomes and minimizing officers’ exposure to disturbing content”.

McCartney said generative AI allows criminals to both create new content and manipulate real photos, making it harder to tell which CSAM images are real.

He argued there was an “opportunity” for AI to help identify victims through facial recognition, but said law enforcement must remain “responsible and accountable” in any use of the technology.

McCartney told parliament’s joint committee earlier this month that “more legislation and more support” was needed to help law enforcement adopt AI technologies.

Adele Desirs, a victim identification analyst who analyzes CSAM at Queensland Police, told the ICMEC summit that analysts like her were facing “a tsunami” of material to examine, and said she wanted to use AI to make her job easier, faster and more accurate, as some other countries are doing.

“I understand the need for privacy, but when is the need for privacy more important than the right of children to be protected from sexual abuse?” she said.

Desirs questioned why the federal government had not introduced regulations allowing law enforcement to use certain AI and facial recognition technologies in CSAM cases in the three years “since they pulled the plug”.

“We need AI,” she said, adding that transparency about its use would still be needed.

Long-awaited reforms to the Australian Privacy Act are expected to improve protections for children by forcing online services to comply with some new privacy obligations.



AFP acting commissioner Ian McCartney said police must be ‘responsible and accountable’ in any use of artificial intelligence. Photo: Tom Williams / Information Age

The encryption problem

AFP’s McCartney said end-to-end encryption technology used by many communications platforms, including Telegram and Meta’s WhatsApp, Instagram and Facebook Messenger systems, had “significantly impacted” law enforcement’s ability to detect CSAM offenders.

“We remain deeply and profoundly concerned that end-to-end encryption is being implemented in a way that undermines the work of law enforcement,” he said.

Mia Garlick, Meta’s regional policy director for Australia, defended the company’s use of encryption on its platforms, telling the summit that Meta assists law enforcement and also detects malicious activity in the unencrypted areas of its services.

Queensland Police’s Desirs said most CSAM was shared and discussed on encrypted platforms, making law enforcement’s job more difficult.

She called on tech companies to “go further” to help authorities track down CSAM offenders and victims.

Griffiths, also from Queensland Police, said AI was “the best way to decrypt” encrypted communications and files.

He suggested the technology could also be used for real-time monitoring and predictive policing by analyzing what people share online, but said more regulation would be needed to make this possible.

If you need someone to talk to, you can call: