Friday, 15 October 2021 12:58

EU - Facial recognition: the European Parliament says stop


On 6 October, for the first time, the European Parliament officially took a stand against facial recognition systems and the automated analysis of behavioural traits and biometric details (individual characteristics such as gait, fingerprints, DNA, voice, etc.). It called on the Commission to permanently ban the use of biometric video surveillance in public spaces in order to protect privacy and human dignity and to prevent discrimination.

MEPs approved a resolution urging the Commission also to block funding for biometric research and to ban the use of private face databases such as Clearview AI, of iBorderCtrl (a sort of 'truth machine' that would detect people's emotions), of social scoring, and of biometrics at borders as well as in public places.

The resolution also calls for a stop to the use of any technology that can lead to mass surveillance in public places and that penalizes minorities, dissenters and disadvantaged communities.

The EU Parliament calls for transparent technology that does not violate people's privacy and fundamental rights, and for more controls on the use of Artificial Intelligence by law enforcement agencies. People should only be monitored if they are suspected of a crime, and final decisions should always be left to human beings. MEPs say no to predictive policing based on behavioural data and call for infringement proceedings against member states and, if necessary, a moratorium on AI that infringes fundamental human rights.

AI systems must always be supervised and developed with transparent, documented and traceable algorithms. The main problem is that many advanced technologies still make frequent errors of identification and classification, discriminating against people belonging to certain ethnic or racial groups, LGBTQ+ people, children, the elderly and women.

Facial recognition: the core of the EU Parliament Resolution

The Resolution is not binding, as it has not yet been incorporated into the final text of the AI Act, the Commission's proposal for a European regulation on Artificial Intelligence. However, the resolution just adopted could mark the way for the future EU law, applicable to both the public and private sectors, that should land in the halls of Brussels and Strasbourg. The issue, however, is far from closed in Europe: the Commission often does not take the parliamentary vote into account.

On the one hand, transparent algorithms are demanded; on the other, a clear legal framework, an ad hoc law that puts these demands on paper, does not yet exist.

It will be up to the Commission to transpose this resolution into a specific law, but it is not known when.

The conclusions of the Resolution approved by the European Parliament state that, despite the benefits brought by Artificial Intelligence, it also involves potential risks: discrimination, non-transparent decision-making processes, intrusion into people's privacy, and dangers for the protection of personal data, human dignity, and freedom of information and expression. In the field of criminal justice and law enforcement, these risks can affect the presumption of innocence, the fundamental rights to liberty and security of the individual, and the effective exercise of the right to a fair trial.

It is necessary to respect the rights enshrined in the Charter of Fundamental Rights of the European Union and the rights to privacy and data protection, in particular as laid down in the 'Police' Directive (EU) 2016/680.

Before implementing and deploying any AI system, mandatory checks are needed on its impact on fundamental rights, on possible critical issues, and on the transparency and traceability of its algorithms, in order to gain people's trust in its use by law enforcement agencies and criminal justice authorities.

Algorithms should be transparent, traceable and sufficiently documented: public authorities should disclose their applications as open-source software.

Facial recognition: even the UN said 'no'

In mid-September 2021, the UN High Commissioner for Human Rights, Michelle Bachelet, called for a temporary halt to the use of facial recognition technologies and other AI-based surveillance systems, at least until there is awareness of the risks and rules that can protect fundamental rights (privacy, freedom of movement and expression). The High Commissioner made the request while presenting a new study on AI commissioned by the UN Human Rights Council.

The study found that AI systems are often used hastily by states and companies without assessing the consequences: the resulting violations could prove devastating and have catastrophic effects on people's rights.

In particular, the use of facial recognition should be prohibited or severely restricted.

The UN study reported that AI systems can lead to unjustified arrests of innocent people and to discrimination in access to public assistance or in the granting of loans. Real-time remote facial recognition used by law enforcement agencies can lead to inequities due to profiling and automated decision-making technologies. These systems rely on large amounts of data collected, processed, analysed and shared in an opaque, non-transparent way: the data can contain errors or discriminatory bias, or be obsolete, thereby violating human rights.

Pending an ad hoc law, Member States should introduce a moratorium suspending the use of the riskiest technologies.

In August, UN experts also called for a global moratorium on the sale of all "potentially lethal" spy software, in connection with the 'Pegasus case'. The Pegasus spyware, produced by the Israeli company NSO Group for intelligence services, has reportedly been sold in various countries, including to some authoritarian regimes. According to Amnesty International, Pegasus has been used by some governments to spy on human rights activists, journalists, political dissidents, executives and political leaders in various parts of the world. NSO has always denied the allegations.

Facial recognition: while the EU Parliament blocks it, Finland uses Clearview

While the European Parliament blocks biometric surveillance in public spaces (a block not yet included in the final text of the AI Act), in Finland the police adopted, without authorization, Clearview's facial recognition technology to find potential perpetrators of child abuse, a task for which it has proved unsuitable.

The National Police Board was informed and reported the incident to the Finnish Privacy Guarantor. The Guarantor stressed that it is the duty of the National Police Board to instruct staff on the processing of personal data (especially biometric data). In addition, it ordered that Clearview delete the data transmitted by the police and that the data subjects be notified of the personal data breach caused by the unauthorized searches.

Clearview AI, an American company that has built the largest image database in the world by scraping photos from social networks, today claims to have over 10 billion faces in its database. The lawsuits in the US and Europe to block Clearview have not been enough to stop the company, which now also promises the recognition of masked and blurred faces (to be processed with its machine learning and AI system). According to a BuzzFeed investigation, until February 2020 Clearview AI offered its technology to 88 law enforcement agencies in at least 24 countries outside the US.

Clearview (whose unlawful practices have been condemned by the Canadian privacy authority) denies this, declaring that it has no intention, at the moment, of selling its technology outside the US or to private individuals.

According to Stanford University's annual report on AI, European companies have invested just $2 billion in AI, compared with more than $23.6 billion invested in the US and $9.9 billion in China.

Big Tech is scarce in Europe. In short, while the EU Parliament is concerned with banning wide-ranging use of the most advanced technologies, Europe should be talking not only about bans but also about its serious technological delay.

Francesco Ciano – ANCDV Advisory Board member

© all rights reserved
