The European Union has taken a significant step in regulating Artificial Intelligence (AI): on December 8, 2023, the Council and Parliament reached a political agreement on legislation aimed at addressing the challenges associated with its deployment. The new regulation, designed to ensure ethics and transparency in the development and deployment of AI systems, represents a key milestone in the ongoing effort to balance technological innovation with the protection of fundamental rights and security. Perspectives on this work nonetheless diverge: some consider it premature, given the still-limited understanding of the technology's current limits and future advances. After reviewing the regulation agreed by the EU, the following key points emerge:
Definition of High-Risk Areas: The regulation specifically identifies high-risk areas where the deployment of AI systems could have significant impacts. Notable areas include:
- Public Health: AI applications in medical devices and diagnostic systems with direct implications for human health.
  - Incorrect Diagnoses: The misinterpretation of data by AI systems could lead to erroneous medical diagnoses, compromising patient health.
  - Patient Privacy: The improper collection and handling of sensitive medical data could result in patient privacy breaches, raising ethical and legal concerns.
  - Equity in Healthcare: Poorly trained AI algorithms could introduce biases, affecting equity in access to medical services and the quality of care across different demographic groups.
  - Cybersecurity Risks: The connectivity of AI-powered medical devices could expose them to cybersecurity risks, potentially leading to unauthorized access or manipulation of medical data.
  - Excessive Dependence on Technology: Blind trust in AI systems for medical decision-making could lead to excessive dependence, undermining human clinical experience and judgment.
  - Ethics in Research and Development: Ethical issues may arise during the research and development of AI applications in healthcare, particularly concerning informed consent and transparency in data collection.
  - Unequal Access to Technology: Challenges may exist in ensuring that AI technology in healthcare is available and accessible equitably, thereby avoiding disparities in medical care.
- Education: Implementation of tutoring and educational assessment systems that impact the learning and development process of individuals.
  - Bias in Assessment: AI algorithms used in student assessment may contain inherent biases, potentially affecting the fairness and accuracy of grades, particularly with respect to variables such as gender, race, or socioeconomic status.
  - Lack of Transparency: The opacity of decision-making algorithms can lead to a lack of understanding regarding how certain educational decisions are made, undermining the trust of students, parents, and educators in the system.
  - Excessive Personalization: Extreme personalization of instruction based on algorithms could lead to a lack of diversity in skill and knowledge acquisition, limiting exposure to different perspectives and educational approaches.
  - Unequal Access to Resources: The implementation of AI technologies in educational environments could widen the digital divide and result in unequal access to educational resources, particularly for marginalized or resource-limited communities.
  - Student Privacy: The large-scale collection of student data to feed AI systems raises concerns about privacy, particularly if adequate measures are not implemented to protect students' personal information.
  - Job Displacement: The automation of certain educational tasks could have labor implications, displacing education professionals and raising questions about how AI can complement rather than replace educators.
  - Ethical Challenges in Machine Learning: The use of advanced machine learning techniques in educational decision-making may raise ethical challenges related to the interpretation of results and accountability in cases of erroneous decisions.
  - Human Disconnection: Excessive reliance on technology could lead to a loss of human connection in the educational process, which is essential for the holistic development of students.
- Employment and Human Resources: Use of AI algorithms in personnel selection and evaluation processes.
  - Discrimination in Personnel Selection: AI algorithms used in recruitment and personnel selection processes may inherit biases present in training data, potentially leading to unjust discrimination based on characteristics such as gender, race, or age.
  - Lack of Diversity in AI Models: The lack of diversity in training datasets may result in AI models that are not representative of the general population, affecting equity in access to employment opportunities.
  - Opacity in Decision-Making: The lack of transparency in algorithms used for employment-related decision-making may generate distrust and hinder understanding of how certain decisions are reached.
  - Employee Privacy: The massive collection and analysis of employee data for AI-driven decision-making raises concerns about worker privacy, particularly regarding sensitive information and personal behaviors.
  - Job Displacement: The automation of routine tasks through AI may have implications for employment, leading to the displacement of workers in traditional roles and raising the need for reskilling and labor adaptation.
  - Biased Performance Evaluation: Algorithms used to evaluate employee performance may contain inherent biases, affecting fairness in the allocation of rewards and development opportunities.
  - Lack of Human Control: Excessive reliance on AI in labor decision-making could result in a loss of human control, generating concerns regarding accountability and ethics in the workplace.
  - Impact on Employee Experience: The introduction of AI in human resource management could affect the overall employee experience, either through perceptions of injustice in automated decisions or changes in workplace dynamics.
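The hiring-bias concerns above are often audited in practice by comparing selection rates across demographic groups. The following sketch illustrates one common heuristic, the "four-fifths rule," which flags a selection-rate ratio below 0.8 as potential adverse impact; the function names, groups, and outcome data are hypothetical, not part of the regulation.

```python
# Illustrative disparate-impact check for an automated screening step.
# The "four-fifths rule" flags selection-rate ratios below 0.8.

def selection_rate(decisions):
    """Fraction of candidates marked as selected (True)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical screening outcomes for two demographic groups.
group_a = [True, True, False, True, False, True, True, False]     # 5/8 selected
group_b = [True, False, False, False, True, False, False, False]  # 2/8 selected

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.40
if ratio < 0.8:
    print("Potential adverse impact: ratio below the four-fifths threshold.")
```

A check like this is only a screening heuristic; a ratio near parity does not by itself establish that a model is free of the biases the regulation targets.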
- Individual Identification and Evaluation Systems: Development of facial recognition technologies and social scoring systems that impact privacy and individual freedom.
  - Bias in Identification: AI-based identification algorithms may inherit biases present in training data, leading to incorrect identifications or discrimination based on characteristics such as gender, race, or age.
  - Lack of Accuracy in Evaluation: Individual evaluation systems may be prone to errors, potentially resulting in incorrect or unjust decisions, particularly when used in critical contexts such as public safety or justice.
  - Violation of Privacy: The large-scale collection and processing of biometric data for identification purposes pose significant risks to individual privacy, and the mishandling of this information can lead to serious consequences.
  - Security Threats: The vulnerability of AI-based identification systems to cyberattacks may compromise the security of biometric data and endanger the integrity of the system.
  - Lack of Informed Consent: In cases where biometric identification is used without the informed consent of individuals, ethical and legal concerns arise regarding autonomy and control over personal information.
  - Difficulty in Correcting Errors: In situations where errors occur in the identification or evaluation of individuals, it may be difficult to remedy the damage and correct stored information, thereby affecting the reputation and lives of those affected.
  - Interconnection of Databases: The interconnection of biometric databases can amplify security and privacy risks, particularly if adequate measures are not implemented to ensure information protection.
  - Social Discrimination: The implementation of identification systems based on biometric traits may lead to ethnic and social discrimination, disproportionately affecting certain societal groups.
  - Lack of Regulations and Standards: The absence of clear regulations and standards in the development and use of biometric identification systems may result in inconsistent practices and a lack of accountability.
  - Risk of Identity Spoofing: Biometric technology, although advanced, is not immune to identity spoofing risks, and systems must be sufficiently robust to detect and prevent such attempts.
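The identification errors listed above are conventionally quantified as a trade-off between false acceptances (impostors or spoofing attempts matched as genuine) and false rejections (genuine users turned away), controlled by a match threshold. A minimal sketch, using hypothetical similarity scores rather than any real matcher:

```python
# Sketch of the threshold trade-off in a biometric matcher: raising the
# threshold lowers the false acceptance rate (FAR) but raises the false
# rejection rate (FRR). Scores below are hypothetical.

def error_rates(genuine_scores, impostor_scores, threshold):
    """FAR: fraction of impostors accepted; FRR: fraction of genuine users rejected."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

genuine = [0.91, 0.85, 0.78, 0.95, 0.66, 0.88]   # same-person match scores
impostor = [0.35, 0.52, 0.61, 0.44, 0.72, 0.28]  # different-person scores

for threshold in (0.5, 0.7, 0.9):
    far, frr = error_rates(genuine, impostor, threshold)
    print(f"threshold={threshold}: FAR={far:.2f} FRR={frr:.2f}")
```

The point of the sketch is that no threshold eliminates both error types at once, which is why the "difficulty in correcting errors" concern matters in critical contexts.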
- Security: Implementation of surveillance and control systems with direct implications for public safety.
  - Vulnerabilities to Cyberattacks: AI-based security systems may be susceptible to cyberattacks, potentially compromising the integrity and confidentiality of sensitive information.
  - Bias in Decision-Making: AI algorithms used in security systems may inherit biases present in training data, resulting in discriminatory or unjust decisions.
  - False Positives and Negatives: Lack of accuracy in detection algorithms may lead to false positives (incorrect alerts) or false negatives (failure to detect real threats), affecting system effectiveness.
  - Risk of Manipulation: Manipulation of AI-based security systems, whether through the introduction of false data or interference with algorithms, can compromise system effectiveness.
  - Privacy and Excessive Surveillance: Large-scale implementation of AI-based surveillance technologies may raise privacy concerns, particularly if clear boundaries and control mechanisms are not established.
  - Technological Dependence: Exclusive reliance on AI-based security systems may create significant vulnerabilities if these systems fail or are circumvented.
  - Integration with Existing Systems: Integrating new AI-based security technologies with existing systems may present technical and interoperability challenges.
  - Incorrect Feedback: Incorrect feedback from security systems can contribute to the reinforcement of biased algorithms, exacerbating ethical and discrimination issues.
  - Lack of Transparency: The lack of transparency in AI algorithms used in security systems can hinder the understanding and evaluation of decisions made by such systems.
  - Failure Mode Testing: A lack of robustness and consistency of security systems in the face of unexpected scenarios or intentional attacks can compromise overall safety.
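The false-positive/false-negative concern above is normally measured with precision and recall computed from a confusion matrix. A minimal sketch, assuming hypothetical evaluation counts:

```python
# How false alerts and missed threats are quantified when evaluating
# a detection system. Counts are hypothetical, for illustration only.

def precision_recall(tp, fp, fn):
    """Precision penalizes false alerts (fp); recall penalizes missed threats (fn)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical evaluation: 90 true detections, 30 false alerts, 10 missed threats.
precision, recall = precision_recall(tp=90, fp=30, fn=10)
print(f"precision={precision:.2f}  recall={recall:.2f}")  # precision=0.75  recall=0.90
```

Because the two metrics pull in opposite directions, a system tuned to minimize missed threats tends to generate more incorrect alerts, and vice versa.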
- Energy: Implementation of AI in management and decision-making within the energy sector.
  - Infrastructure Security: Cybersecurity is a major concern, as network-connected and AI-based energy systems may be vulnerable to attacks, potentially leading to severe consequences.
  - Demand and Supply Management: The AI algorithms used to forecast demand and manage energy supply must be accurate and reliable to prevent disruptions and imbalances in the grid.
  - Predictive Maintenance: The implementation of AI-based predictive maintenance systems must address issues of reliability and accuracy to prevent undetected failures or unnecessary maintenance.
  - Smart Grid Management: Creating a smart electrical grid involves coordinating various AI-based devices and systems, which can be complex and require interoperability standards.
  - Implementation Costs: The initial investment in AI technologies and the need for staff training and upskilling can increase costs, posing a challenge for energy sector companies.
  - Adoption of Emerging Technologies: The rapid evolution of AI technologies can make the adoption and updating of systems a constant challenge for companies in the energy sector.
  - Resilience to Natural Disasters: AI-based energy systems must be resilient and capable of rapidly recovering from natural disasters or other adverse events to ensure continuous supply.
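Forecast accuracy of the kind the demand-and-supply bullet calls for is commonly reported as mean absolute percentage error (MAPE) against observed load. A minimal sketch, with hypothetical demand figures:

```python
# Illustrative accuracy check for a demand forecast using mean absolute
# percentage error (MAPE). All load figures are hypothetical.

def mape(actual, forecast):
    """Mean absolute percentage error, in percent (lower is better)."""
    errors = [abs(a - f) / a for a, f in zip(actual, forecast)]
    return 100 * sum(errors) / len(errors)

actual_load_mw = [480, 520, 610, 575]  # observed hourly demand (MW)
forecast_mw = [500, 510, 580, 590]     # model forecast (MW)

print(f"MAPE: {mape(actual_load_mw, forecast_mw):.1f}%")  # MAPE: 3.4%
```

An operator would track this metric continuously, since a forecast that drifts even a few percent can translate into costly grid imbalances.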
Requirements for Providers, Evaluation, and Certification: Clear requirements are established for providers of AI systems operating in areas designated as "high-risk," who must comply with specific measures to ensure transparency in the design, development, and deployment of their technologies.
- Technical Documentation: The provider shall supply all technical information related to the AI system intended for market introduction, including design, development, and operational documentation.
- Provider Identification: The provider shall supply identifying data: name, registered address, partners, activity classification, legal form, and balance sheet.
- Declaration of No Simultaneous Application: The provider shall declare that no application has been submitted to any other authority for the purpose of obtaining EU approval or certification. Institutions in the member states are expected to be responsible for collecting and evaluating the information submitted by providers.
- Review of Technical Documentation: The technical documentation shall be evaluated to determine whether to grant the certificate to the provider.
- Access to Source Code: The institution delegated to evaluate the technical documentation may request access to the source code when such access is justified and under specific conditions.
- Additional Testing at the Request of the EU: The provider shall accept that additional tests may be conducted to gather evidence regarding the functioning of the AI system.
- EU Certificate of Technical Documentation Assessment: If the requirements of the EU AI Regulation are met, and after the provider's system has been verified and validated, a certificate is issued enabling deployment within the European Economic Area.
- System Change Management: The provider is obligated to notify any changes made to the AI system and provide the relevant documentation.
Risk Management and Post-Market Surveillance: Providers are required to implement risk management systems and establish continuous post-market surveillance mechanisms to evaluate performance and address potential issues.
Fundamental Rights and Non-Discrimination: The legislation focuses on protecting individuals' fundamental rights, with particular attention to non-discrimination: any form of discrimination arising from the use of AI is prohibited.
Increased Transparency and Accountability: Providers must supply detailed information about the operation of their AI systems, from design through to the post-market phase. Transparency and accountability are fundamental pillars of the regulation.
Interoperability Framework: A framework is established for interoperability between EU information systems in areas such as borders, visas, police and judicial cooperation, asylum, and migration.
This legislation reflects the European Union’s commitment to an ethical, citizen-centered approach to the development of Artificial Intelligence. Aiming to balance technological innovation with the protection of individual rights, the European Union sets a significant precedent in AI regulation, sending a clear signal about the importance of addressing the ethical and social challenges associated with this disruptive technology.
Criticism of the EU’s AI Regulation
Although the European legislation on Artificial Intelligence represents a significant effort to address emerging challenges, like any regulatory framework it is open to criticism. The main objections are:
- Overregulation: Some critics argue that the regulation is excessive and could hinder innovation in the field of AI. Regulatory rigidity might discourage investment in research and development, limiting technological progress. It could even deter the establishment or growth of specialized companies, leading to a brain drain and loss of talent to countries with more flexible approaches to developing these technologies, such as the US, Russia, and China.
- Potential Lack of Agility: AI technology is advancing rapidly, and some argue that legislation could become obsolete before it even takes effect. A lack of flexibility may hinder the ability to adapt to swift changes in the technological landscape, resulting in missed opportunities within a highly dynamic sector where non-EU competition is strong and well-developed. This could lead to stagnation in this economic sector and foster technological dependence on external actors.
- Complexity and Compliance: The legislation is extensive and complex, potentially complicating its implementation and compliance—particularly for smaller enterprises that may lack the resources to understand and adhere to all detailed provisions. This could also discourage the creation of new businesses and, consequently, the evolution of novel AI ideas and models within the European sphere, favoring large corporations and supranational organizations instead.
- Risk of Discouraging Innovation: By imposing strict requirements and restrictions, there is a risk that companies may avoid developing high-risk AI technologies due to the regulatory burden, thereby limiting innovation and competition. This would not halt the use of AI in these high-risk areas, which could instead be dominated by non-European firms, posing a significant risk.
- Challenges in Compliance Assessment: Compliance assessment, particularly in terms of risk, can be subjective and challenging. Ambiguity in certain criteria may lead to varied interpretations, creating uncertainty for providers. It could even become a source of corruption, given the room for partiality and subjectivity among the evaluators, experts, and officials responsible for accreditation and oversight.
- Lack of Global Harmonization: The absence of global standards and regulations could result in a fragmented landscape, making it difficult for companies operating across multiple jurisdictions to comply with diverse regulatory requirements. In other words, regulations in the rest of the world differ from those outlined in the EU, leading to a complex array of adaptations that would not equally benefit all companies. This would ultimately impact employment opportunities in this sector, future investments, and more.
- Emphasis on Technical Documentation: An excessive focus on technical documentation may not be sufficient to address the fundamental ethical and social issues associated with AI, such as algorithmic discrimination and privacy. At the same time, it could generate resistance and suspicion among companies unwilling to share their source code, proprietary operating secrets, or innovative techniques, which could be copied or reused by European firms. In other words, it could open the door to industrial espionage against which companies would not be protected.
References
- Artificial Intelligence Act: Council and Parliament Reach Agreement on the World’s First Rules on Artificial Intelligence. https://www.consilium.europa.eu/es/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/
- EU AI Act: First Regulation on Artificial Intelligence. https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
- Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (AI Act) and amending certain legislative acts of the Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206
- The EU AI Regulation, in Summary. https://portal.mineco.gob.es/en-en/digitalizacionIA/sandbox-IA/Documents/20220919_Resumen_detallado_Reglamento_IA.pdf
- Artificial Intelligence Act: The Council and the European Parliament reach an agreement on the world's first rules to regulate AI. https://administracionelectronica.gob.es/pae_Home/pae_Actualidad/pae_Noticias/Anio2023/Diciembre/Noticia-2023-12-11-Ley-de-inteligencia-artificial-el-Consejo-y-el-Parlamento-Europeo-llegan-a-un-acuerdo-sobre-las-primeras-normas-en-el-mundo-para-regular-la-Inteligencia-Artificial.html
News
- Experts welcome the first AI regulation but highlight significant gaps. https://www.infobae.com/espana/agencias/2023/12/09/los-expertos-celebran-la-primera-regulacion-de-la-ia-pero-senalan-importantes-lagunas/
- Spain aims to finalize the European AI Regulation during its EU Presidency. https://www.infobae.com/espana/2023/06/14/espana-quiere-cerrar-el-reglamento-europeo-de-ia-durante-su-presidencia-de-la-ue/
- Consumers demand "no erosion of rights" in the European AI Regulation "in the face of pressure from the technology industry." https://www.infobae.com/america/agencias/2023/11/16/consumidores-piden-no-rebajar-derechos-en-el-reglamento-europeo-de-ia-frente-a-presiones-de-la-industria-tecnologica/
- The European Parliament took the first step toward regulating artificial intelligence tools in the region. https://www.infobae.com/america/mundo/2023/05/11/el-parlamento-europeo-dio-el-primer-paso-para-regular-las-herramientas-de-inteligencia-artificial-en-la-region/
- Countries around the world agree on standards to develop artificial intelligence responsibly. https://www.infobae.com/tecno/2023/11/06/paises-de-todo-el-mundo-acuerdan-normas-para-crear-inteligencia-artificial-de-forma-responsable/
- The European Commission assesses whether Microsoft’s investment in OpenAI must be reviewed under EU competition rules. https://www.infobae.com/america/agencias/2024/01/09/la-ce-comprueba-si-la-inversion-de-microsoft-en-openai-debe-revisarse-segun-competencia-ue/
- Nations are losing a global race to bring the dangers of AI under control. https://www.infobae.com/america/the-new-york-times/2023/12/06/las-naciones-estan-perdiendo-una-carrera-global-para-tener-bajo-control-los-peligros-de-la-ia/
- Consumers welcome the European Parliament’s veto on AI facial recognition. https://www.infobae.com/espana/2023/06/14/consumidores-celebran-el-veto-del-parlamento-europeo-al-reconocimiento-facial-de-la-ia/
- The EU’s “sovereignty” in AI, which invests ten times less than the U.S., “is at stake.” https://www.infobae.com/america/agencias/2023/10/30/la-soberania-de-la-ue-en-ia-que-invierte-10-veces-menos-que-eeuu-esta-en-juego/
- How Europe is building its regulations for artificial intelligence. https://www.infobae.com/america/agencias/2023/05/11/como-construye-europa-sus-normas-para-inteligencia-artificial/
- CNMC calls for greater precision in selecting projects to test the European AI Regulation. https://www.infobae.com/espana/agencias/2023/08/28/cnmc-pide-mas-precision-para-elegir-los-proyectos-que-probaran-el-reglamento-europeo-de-ia/
- The EU agrees on the world’s first artificial intelligence law. https://www.infobae.com/espana/2023/12/09/la-union-europea-pacta-la-primera-ley-de-inteligencia-artificial-del-mundo/
- The five most important points of the new AI law in the European Union. https://www.infobae.com/tecno/2023/12/10/los-cinco-puntos-mas-importantes-de-la-nueva-ley-de-ia-en-la-union-europea/
- The European Parliament approved a proposal to regulate the use of Artificial Intelligence. https://www.infobae.com/america/mundo/2023/06/14/el-parlamento-europeo-aprobo-un-proyecto-para-regular-el-uso-de-la-inteligencia-artificial/
- This is the EU law to regulate AI: a world pioneer in protecting against AI risks. https://es.euronews.com/2023/12/09/asi-es-la-ley-de-la-ue-para-regular-la-ia-pionera-en-el-mundo-para-proteger-de-los-riesgos
- The EU approves (finally) the world's first law regulating artificial intelligence. https://www.businessinsider.es/ue-aprueba-primera-ley-mundo-inteligencia-artificial-1347466