By akademiotoelektronik, 27/02/2023
Artificial intelligence: understanding the new European framework
What is Europe's vision for AI?
The European vision of artificial intelligence is part of a broader project: a digital Europe that makes technology an ally, not a threat. The proposal, introduced by Margrethe Vestager, executive vice-president of the Commission for digital affairs, follows the Commission's white paper on artificial intelligence published in February 2020, which had already laid the groundwork for new rules on artificial intelligence.
The EU wishes to regulate the key sectors of the digital ecosystem and is making every effort to do so. The past year has been prolific: the Digital Markets Act and the Digital Services Act, unveiled in December, then a text on cybersecurity, and trials of a central bank digital currency... The EU is active on all fronts, and artificial intelligence is no exception. "Whether it is a question of precision agriculture, more reliable medical diagnoses or secure autonomous driving, artificial intelligence will open up new worlds. But these worlds also need rules," declared Ursula von der Leyen in her State of the Union address. In short, the European approach is to supervise AI so as to avoid sliding toward a technology that threatens human rights, as in China, while preserving its potential for the economy. "This can only be done if the technology is of high quality, and developed and used in ways that earn people's trust. (...) A strategic EU framework based on EU values will give citizens the confidence to accept AI-based solutions, while encouraging businesses to develop and deploy them," reads the official page of the European proposal.
Margrethe Vestager, executive vice-president of the European Commission, during the presentation of the Digital Services Act. Screenshot: Siècle Digital / European Parliament.
What does the text prohibit?
The European Commission draws lines between unacceptable, high-risk, limited-risk and minimal-risk AI systems. The text clearly proposes to ban social scoring systems (which rate a person's trustworthiness based on observed social behavior or personality traits), such as those launched in China and widely criticized. In addition to social credit systems, the text would prohibit systems that use "subliminal techniques", exploit people with disabilities, or "materially distort a person's behavior".
In general, AI technologies intended for "indiscriminate surveillance applied in a generalized manner to all natural persons" will be prohibited, unless the State deploys them to guarantee public security, as in the fight against terrorism. This means that, in principle, remote biometric identification in public places is prohibited, except in the cases qualified as "exceptions": use by the State, subject to authorization by a judicial body, for a purpose guaranteeing public security (tracking a terrorist, locating a missing person), and limited in time and geography.
Margrethe Vestager, during the press conference accompanying the unveiling of the text, insisted that there was "no place for mass surveillance in Europe". The nuance is fine, therefore, since the use of facial recognition technology in public places is authorized in exceptional cases. Any other use of systems with the potential to exploit and target the most vulnerable, such as mass surveillance, is banned.
Overall, the proposal also seeks to ban AI systems that manipulate people's behavior, opinions or decisions. Targeted here are, for example, children's toys using voice assistance that encourage dangerous behavior.
What does the EU tolerate?
Among the AI systems the text tolerates are chatbots, forms of artificial intelligence used in messaging services, particularly in customer relations. These will be subject to minimal transparency obligations: they will simply have to inform users that they are not real people. The same requirement will apply to deepfakes, whose services will have to be labeled as producing false content. Likewise, AI-enabled video games and anti-spam filters, which present only minimal risks to the safety of European citizens, are tolerated.
What are high-risk systems generally, and how are they regulated?
The bill establishes new rules for providers of high-risk AI systems, and defines them. AI considered "high risk" encompasses all technologies which, by their key role in society, their potential uses and the associated risks, could seriously affect citizens' rights.
These technologies include algorithms used to scan CVs, applications that manage public infrastructure networks, and systems used to assess a customer's creditworthiness in the banking sector, to distribute social security services, to process asylum and visa applications, or to help lawyers and judges make crucial decisions for justice. They also include systems used during schooling, such as Parcoursup, which can determine access to education and the professional course of a person's life.
All these systems will be strictly supervised throughout their life cycle, the text tells us. They will have to undergo a conformity assessment by third-party organizations before being placed on the market. In addition, these systems will be registered in an EU database ensuring traceability, and will receive a declaration of conformity and a marking. This marking will be necessary to access the European market. For each change made to the system, the text indicates that the AI system will have to undergo a new conformity test. All these systems will be required to show an appropriate level of human oversight of their products' operation, and to comply with quality requirements for the data used to build the software.
What does the text say about data and algorithmic biases?
The EU wishes to subject high-risk systems to maximum requirements regarding the quality of the data used to develop a product using artificial intelligence. The data feeding an artificial intelligence program can indeed be the source of racial and sexist bias, which hampers the development of the technology. One of the Commission's requirements in the project is that data sets "do not incorporate any intentional or unintentional bias" that could lead to discrimination. For this reason, the conformity test will be essential: AI systems deemed high-risk will be inspected, forcing their creators to demonstrate that they were trained on unbiased data sets.
What are the sanctions for non-compliance with the rules?
Companies that do not comply with the new rules could face fines of up to 20 million euros, or 6% of their turnover.
In order to monitor and support Member States in adopting the new rules, the text also proposes creating a European Artificial Intelligence Board, comprising one representative per EU country, the EU data protection authority, and a representative of the European Commission. The Board will oversee the application of the law and will be responsible for "issuing relevant recommendations and opinions to the Commission regarding the list of prohibited artificial intelligence practices and the list of high-risk AI systems". These rules are therefore not immutable, but subject to the change and dynamism of a sector as innovative as AI.
What flaws in the text have associations and experts pointed out?
First of all, the most controversial part of the text is clearly the one that authorizes mass biometric surveillance by public authorities in particular contexts. "The list of exemptions is incredibly wide" and "somehow goes against the goal of claiming that something is a ban," said Sarah Chander, senior policy adviser at European Digital Rights, as reported by the Wall Street Journal.
Some digital rights activists, while welcoming many parts of the bill, said certain elements seemed too vague and offered too many loopholes to companies using AI, in particular Big Tech. Other associations argued that the rules proposed by the EU would give an advantage to companies in China or Russia, which would not face them. There are fears, in particular, of a negative effect on the AI market in Europe, where overly complex legislation could discourage innovation and businesses.
In addition, associations point to the lack of a role for citizens in the text. The association EDRi (European Digital Rights) notes the absence of any provision allowing citizens or consumers to file a complaint with the supervisory authority, or to seek redress if they have been victims of a breach of the regulation.
When could the text apply?
Like any new legislative proposal, and like the very recent Digital Services Act (DSA) and Digital Markets Act (DMA), this new regulation still has a long way to go before becoming effective, and may not do so for several years.
This text on AI will first go through the European Parliament, where it will be debated and possibly amended, then through the Council, which can do the same. There can thus be back-and-forth between the two institutions (up to three readings), which can prolong the process and delay its entry into application.
Keep in mind that the debates over the General Data Protection Regulation (GDPR), another cornerstone of digital Europe, took four years. Once adopted, the regulation will be directly applicable throughout the EU. It is up to companies to be ready.