European AI Law

On June 14, 2023, the European Parliament adopted its negotiating position on the forthcoming European Artificial Intelligence (AI) Act, which is anticipated to be in place by the end of 2023. The regulation is intended to ensure that AI developed and used within Europe fully adheres to the rights and principles of the EU.

The European Union (EU) acknowledges the urgency for a regulatory framework given the rapid progression of Artificial Intelligence and our escalating reliance on these technologies. While AI holds the potential to deliver immense societal benefits, it also poses potential risks. For AI to be widely accepted and adopted, it's imperative that citizens trust its ethical and responsible use. The EU aspires to be a global leader in controlled and trustworthy AI, and this new law is pivotal in that pursuit.

The law outlines several principles that AI systems must adhere to, such as transparency, non-discrimination, and accountability.


  • Transparency: AI systems should be designed and operated in a manner that allows users to easily understand their workings, decision-making processes, and the criteria upon which they act. For instance, if a bank employs an AI algorithm to assess an applicant's creditworthiness, the algorithm should be transparent enough for the applicant to comprehend how the decision was made, including the factors considered and their respective weightings.



  • Non-discrimination: AI should be designed and programmed to prevent biases and unjust decisions. It shouldn't favor or disadvantage any specific group based on protected characteristics like gender, race, or religion. For example, if an AI system is used for job candidate selection, it shouldn't prioritize or disadvantage candidates based on their ethnic background. If the algorithm disproportionately selects candidates from a specific ethnic group, this would indicate potential bias that needs rectification.



  • Accountability: AI system creators and operators must be accountable for the decisions and actions of their systems. They should be able to explain, rectify, and, if necessary, compensate for any malfunction or harm caused by the AI. For instance, if an AI system in the medical field recommends an incorrect treatment for a patient, resulting in health complications, the entity that implemented and used this system should be able to trace the error, correct the algorithm, and assume responsibility, which might include compensation for the affected patient.
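The transparency and non-discrimination principles above can be made concrete with a small sketch. The factor names, weights, and the four-fifths threshold below are illustrative assumptions only (the threshold comes from US employment-selection guidance, not from the AI Act): the first function exposes how each factor contributes to a credit score, and the second flags disproportionate selection rates between groups.

```python
# Illustrative sketch only: factor names, weights, and the 4/5ths
# threshold are hypothetical examples, not requirements of the AI Act.

def explain_credit_decision(applicant, weights):
    """Return the score and each factor's contribution, so the
    applicant can see how the decision was reached (transparency)."""
    contributions = {f: applicant[f] * w for f, w in weights.items()}
    return sum(contributions.values()), contributions

def selection_rate_disparity(outcomes):
    """Compare selection rates across groups (non-discrimination).
    outcomes maps group -> (selected, total); returns the ratio of
    the lowest selection rate to the highest."""
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    return min(rates.values()) / max(rates.values())

weights = {"income": 0.5, "debt_ratio": -0.3, "payment_history": 0.2}
applicant = {"income": 0.8, "debt_ratio": 0.4, "payment_history": 0.9}
score, parts = explain_credit_decision(applicant, weights)
print(score, parts)

outcomes = {"group_a": (40, 100), "group_b": (20, 100)}
ratio = selection_rate_disparity(outcomes)
print(ratio)  # 0.5: group_b is selected at half group_a's rate
```

A ratio well below 1.0 (for instance under the hypothetical 0.8 threshold) would be the kind of disproportionate outcome the non-discrimination example describes as needing rectification.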


The law prohibits the use of AI for discriminatory purposes or those that infringe upon fundamental rights, such as freedom of expression or data protection.


  • Freedom of Expression: AI shouldn't be used to monitor, analyze, or suppress individuals' free expression or to manipulate information in a way that silences or distorts specific viewpoints. Consider a social media platform using AI algorithms to systematically filter and remove particular political opinions, thereby limiting the diversity of voices on its platform. Such algorithmic censorship would contravene freedom of expression protections.



  • Data Protection: AI systems must be designed and operated in compliance with existing data protection regulations, ensuring users' personal information is handled with privacy and security. For instance, if a health app uses AI to predict potential future illnesses based on a user's health data, not only should this data be processed transparently and with user consent, but it should also be safeguarded against potential security breaches or unauthorized uses.
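The health-app example above can be sketched as consent-gated processing. This is a hypothetical illustration, not a structure prescribed by the AI Act or the GDPR; the field names, the purpose label, and the stubbed prediction are all invented for the example.

```python
# Hypothetical sketch: the record fields, purpose label, and stub score
# are invented; real compliance requires far more than this check.

class ConsentError(Exception):
    """Raised when processing is attempted without recorded consent."""

def predict_illness_risk(record, consents, purpose="health_prediction"):
    """Run the (stubbed) prediction only if the user has consented
    to this specific purpose; otherwise refuse to process the data."""
    if (record["user_id"], purpose) not in consents:
        raise ConsentError(f"no consent recorded for {purpose!r}")
    # A real system would run the model on record["health_data"] here.
    return 0.0  # placeholder risk score

consents = {("u1", "health_prediction")}
print(predict_illness_risk({"user_id": "u1", "health_data": []}, consents))
try:
    predict_illness_risk({"user_id": "u2", "health_data": []}, consents)
except ConsentError as e:
    print(e)  # processing is refused, as the bullet above requires
```

The point of the sketch is that consent is checked per user and per purpose before any data touches the model, mirroring the bullet's requirement that data be processed transparently and with user consent.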


The law establishes a clear and predictable regulatory framework that fosters AI innovation.


  • Regulatory Clarity: A clear regulatory framework provides businesses and developers with a defined set of rules and guidelines to follow, eliminating ambiguity or uncertainty. For example, a start-up wishing to develop a new AI-based virtual assistant would no longer have to guess or assume acceptable practices. With clearly defined regulations, they would know in advance the requirements in terms of transparency, data protection, etc.



  • Stimulating Innovation: With clear and fair rules in place, companies can confidently invest in research and development, knowing that if they adhere to the regulations, their products and services will be accepted and adopted. For instance, a company wanting to introduce a new AI tool in the European educational market might be more inclined to invest in its development if they have clarity on what's permitted and what's not, thus reducing risks of regulatory rejection or future legal issues.


The European AI Law will not only influence how technology is developed and used in Europe but could also set a precedent for regulations elsewhere in the world. The law's success and global influence will hinge on how the EU strikes a balance between user protection, commercial interests, and adaptability to AI's constant technological evolution. We at Ravenwits will be monitoring developments closely to ensure full compliance.

If these topics interest you, please don't hesitate to get in touch. The best way to reach us is by leaving a comment on our LinkedIn channel: LinkedIn Link.