
The Future of Crime with AI Facial Recognition? No Need for Trials



China introduces “AI prosecutor” that can automatically charge citizens with a crime

While in the West it is mostly people’s speech and movement that are policed through automated “AI” censorship and surveillance systems, in China work appears to be well under way on a machine that would act as an AI-powered prosecutor.


The product, which has already been tested by the busy Shanghai Pudong prosecutor’s office, achieves 97 percent accuracy in charging people suspected of eight criminal offenses, the researchers developing it claim.


According to the South China Morning Post, the cases that the “AI prosecutor” is allegedly highly competent in handling involve crimes such as credit card fraud, dangerous driving, gambling, intentional injury, obstructing officials, and theft, as well as something called “picking quarrels and provoking trouble.”


The last one is considered particularly “problematic” since its definition, or lack thereof, can cover different forms of political dissent.


And now the plan is to introduce a machine that would be given decision-making powers, such as whether to file charges and what sentence to seek, on a case-by-case basis.

That, said Professor Shi Yong, who heads the Chinese Academy of Sciences’ big data and knowledge management lab behind the project, is what sets the new tool apart from other “AI” tools that have already been in use in China for years. One of them is System 206, whose tasks are limited to assessing evidence, the danger a suspect poses to the public, and the conditions under which they may be apprehended.


But the tech behind the new artificial prosecutor looks to be both far more ambitious and more advanced. What has been disclosed is that it can run on a desktop PC, processing 1,000 traits extracted from case descriptions filed by humans and, based on those, pressing a charge.


It’s unclear whether the database of 17,000 cases spanning five years that was used to train the algorithms is enough to consider the project true AI – or whether the same result could be achieved by rule-based algorithms.
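To make the description concrete, here is a minimal sketch of the kind of text-classification pipeline being described: a fixed number of features extracted from a written case description, and a model trained on past cases that suggests a charge. Everything in it – the library choice, the placeholder data and the labels – is an illustrative assumption, not the researchers’ published design.

```python
# Minimal, illustrative sketch only: nothing here reflects the actual
# Chinese Academy of Sciences system, whose design has not been published.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# The article names seven of the eight offences the tool reportedly handles.
CHARGES = [
    "credit card fraud", "dangerous driving", "gambling", "intentional injury",
    "obstructing officials", "theft", "picking quarrels and provoking trouble",
]

# Placeholder training data; the reported project used roughly 17,000
# human-written case descriptions spanning five years.
train_texts = [
    "suspect used a cloned bank card at several ATMs",
    "driver sped through a checkpoint and endangered pedestrians",
]
train_labels = ["credit card fraud", "dangerous driving"]

model = make_pipeline(
    TfidfVectorizer(max_features=1000),   # ~1,000 text features per case
    LogisticRegression(max_iter=1000),    # simple classifier over those features
)
model.fit(train_texts, train_labels)

# Given a new case description, the model outputs a suggested charge.
print(model.predict(["suspect withdrew cash with a stolen card"])[0])
```

A rule-based alternative, by contrast, would encode the charging criteria as explicit if-then conditions rather than learning them from past cases.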


Either way, not all human prosecutors are thrilled about having part of their workload replaced in this way – although precisely that has been given as the motive for developing the tech.


“The accuracy of 97 per cent may be high from a technological point of view, but there will always be a chance of a mistake. Who will take responsibility when it happens? The prosecutor, the machine or the designer of the algorithm?” one Guangzhou-based prosecutor noted, speaking on condition of anonymity.


Source: https://reclaimthenet.org/china-introduces-ai-prosecutor/


Clearview AI’s controversial facial recognition tech is involved in 84 Toronto criminal cases

Officers used it without even getting permission.


Clearview AI, a poster child for the controversies surrounding facial recognition software used for mass surveillance, and particularly popular among US law enforcement agencies, has also been used in Canada.


CBC News reported on this, citing an internal document it obtained through an access-to-information request, which shows that Toronto police used Clearview in 84 criminal investigations.


In the US, the startup’s product, which has caused a huge backlash among privacy advocates, is said to have been used by more than 300 local, state and federal agencies. The Canadian figures seem low in comparison, but could be only “the tip of the iceberg,” since they concern only one city and cover only the period from October 2019 to February 2020.


Clearview works by scraping, without people’s consent, billions of images posted around the world on Facebook, Instagram, YouTube, and what are described as “millions” of other websites.


These images are then put into a database. When a customer such as a police agency uploads its own photos to identify a person, facial recognition tech compares them against the existing Clearview database collected from the web without permission and returns likely matches.
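As a rough illustration of that matching step – and not Clearview’s actual code, which is proprietary – the sketch below shows the common embedding-and-nearest-neighbour pattern: every scraped photo is reduced to a numeric face “embedding” and indexed, and an uploaded photo is embedded the same way and ranked against the index by similarity. The `embed_face` function and the file names are hypothetical stand-ins.

```python
# Generic illustration of embedding-based face matching; not Clearview's code.
import numpy as np

def embed_face(image_path: str) -> np.ndarray:
    """Hypothetical stand-in for a real face-embedding model that maps a
    photo to a fixed-length vector describing the face."""
    raise NotImplementedError("plug in an actual face-embedding model here")

# Index of embeddings built from scraped photos (random placeholders here).
gallery = {
    "scraped_profile_001.jpg": np.random.rand(512),
    "scraped_profile_002.jpg": np.random.rand(512),
}

def best_matches(query_vec: np.ndarray, top_k: int = 5):
    """Rank indexed photos by cosine similarity to the query embedding."""
    scores = {
        name: float(np.dot(query_vec, vec)
                    / (np.linalg.norm(query_vec) * np.linalg.norm(vec)))
        for name, vec in gallery.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# A customer's uploaded photo would be embedded and compared the same way:
# best_matches(embed_face("uploaded_suspect_photo.jpg"))
```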


In Toronto, the document shows, officers uploaded over 2,800 photos to Clearview’s database to match suspects, victims and witnesses in the 84 now-confirmed cases, during investigations carried out over three and a half months.


Aware of the dark cloud of controversy hanging over the US startup, the Toronto police first denied using its services, then admitted that the technology had been used – without, at the time, providing any further details.


The internal document that has now come to light reveals that Clearview AI’s free trial was apparently so appealing that police officers started using it without first speaking to one another, or to their supervisors.


“When you’re enforcing the law, your first obligation is to comply with it,” commented the Canadian Civil Liberties Association’s Brenda McPhail. Canada’s privacy commissioners have described Clearview AI as a mass surveillance tool that breaks the country’s privacy laws.


Source: https://reclaimthenet.org/clearview-ais-controversial-facial-recognition-tech-is-involved-in-84-toronto-criminal-cases/
