The European Commission is about to publish its White Paper on Artificial Intelligence; one part in particular, the proposed moratorium on facial recognition, has been widely echoed. But it is mainly the ethical issues of AI that are at the centre of attention, not only for the EU but also for the Vatican and Silicon Valley.
“The use of facial recognition technologies by the public or private sector should be prohibited for some time (three to five years), during which a methodology to assess the impact of these technologies and possible measures to mitigate the risks can be identified and developed.” This is the most significant passage, highlighted not by chance by the media, of the European Commission’s draft ‘White Paper’ on Artificial Intelligence. The text, due to be presented at the end of February, is the result of the public discussion on how to address the challenges posed by this technology as a whole.

The High-Level Group on AI

In a report presented in June 2019, the European Commission’s High-Level Group on AI (composed of 52 experts, including Luciano Floridi, Stefano Quintarelli and Andrea Renda) indicated that the EU should seriously consider the need for rules to protect against the negative impact of, in particular, biometric identification (such as facial recognition), the use of lethal autonomous weapon systems (such as military robots), the profiling of children by artificial intelligence systems, and the impact of AI on fundamental human rights.

The draft document that came to light (published by Euractiv; you can read it here) consists of 18 pages. The full version, which the Commission is expected to release towards the end of February, presents five regulatory options for AI:

1. Voluntary labelling
2. Sectoral requirements for public administration and facial recognition
3. Mandatory requirements for high-risk applications
4. Security and accountability
5. Governance

The Commission is likely to formally adopt a ‘mix’ of options 3, 4 and 5, Euractiv stresses.

The options for the new rules

Here is what they foresee. On point 3, the document states that “the risk-based approach would focus on areas where the public is at risk or an important legal interest is at stake”, e.g. health care, transport, policing and the judiciary.
Security and accountability, including cyber threats

Point 4 deals with the safety and liability issues that may arise from the future development of Artificial Intelligence and provides for “targeted changes” to EU legislation such as the General Product Safety Directive, the Machinery Directive, the Radio Equipment Directive and the Product Liability Directive. The “risks of cyber threats, risks to personal security, privacy and data protection” should also be identified. On the liability side, “adjustments may be needed to clarify the responsibility of AI developers and to distinguish them from producer responsibilities”. Even before that, it will have to be established whether artificial intelligence systems should be considered “products”.

Finally, with regard to governance, the Commission states that an effective and strong public control system, involving national authorities and cooperation between the Member States, is essential.

The Vatican and the ethics of AI

“RenAIssance. Per un’intelligenza Artificiale umanistica” (“For a Humanistic Artificial Intelligence”): the Pontifical Academy for Life is also addressing the issue, with a conference in February that will see the participation of Brad Smith, president of Microsoft, John Kelly III, executive vice president of IBM, the president of the European Parliament David Sassoli, and the director-general of the FAO, Qu Dongyu. On that occasion, Microsoft and IBM will sign a ‘Call for Ethics’ committing companies to a process of evaluating the effects of technologies connected to artificial intelligence, the risks they involve, and possible ways of regulating them, also at an educational level.
“We are committed to this sector,” explains Monsignor Vincenzo Paglia, President of the Pontifical Academy for Life, “because with the development of Artificial Intelligence there is a risk that access to and processing of information will become selectively reserved for large economic holding companies, public security systems, and political governance actors. In other words, equity in searching for information or in maintaining contact with others is at stake, if the most sophisticated services are automatically withheld from those who do not belong to privileged groups or lack particular skills.”