
First Legal Regulation for Artificial Intelligence Systems: AI Act

Artificial intelligence, which has become one of the most important components of modern technology, has rapidly transformed how we interact with technology, integrating seamlessly into many areas of our daily routines.

AI technologies, which offer significant advancements and innovative solutions in fields such as medicine, finance, transportation, education and many others, have also raised sensitivities around critical issues such as ethics, security and privacy. As a result, the need to establish regulatory frameworks ensuring the safe and ethical use of AI technologies has emerged.

The European Union, through the Artificial Intelligence Act (AI Act), has introduced a set of regulations governing the development, marketing, and use of artificial intelligence systems. These regulations aim to ensure the ethical development of AI and the protection of personal data, thereby safeguarding the general safety, rights and freedoms of users and society. The AI Act, which includes comprehensive regulations covering the entire life cycle of AI systems, was approved by the European Parliament on March 13, 2024. On May 21, 2024, the Council of the European Union gave its final green light to the AI Act, which stands as the world’s first legal regulation related to artificial intelligence.

Risk-Based Approach

The AI Act, which stands as a pioneering piece of legislation, adopts a “risk-based” approach: the higher the risk of an AI system causing harm to society, the stricter the regulations it must comply with. Aiming to establish a comprehensive legal framework for AI technologies, the AI Act sets out technical and operational requirements to ensure the safety and reliability of AI systems. It introduces transparency measures to help users understand how AI systems work, establishes mechanisms for the responsible use and oversight of AI systems, and imposes strict rules on how AI systems process and protect personal data.

Key Regulations

Some of the important regulations introduced by the AI Act include the following points:

- Users of AI systems must be informed about how these systems work, what types of data they use, and how this data is processed, so that they can understand the systems and make informed decisions.

- The decision-making processes of AI systems should be traceable, ensuring that the reasons behind certain decisions made by the systems can be understood.

- Rules ensuring compliance with ethical standards during the development and use of AI systems have been introduced. These rules include fundamental ethical principles such as preventing discrimination, fair usage, and social responsibility.

- Data subjects are given the right to question and object to decisions made by AI systems about them. Additionally, they have the right to know whether their personal data is being processed by AI systems and to access this data.

- Regulations have been introduced to allow regulatory bodies to inspect AI systems and check for compliance.

- Certain public service providers must assess the impact on fundamental rights before using high-risk AI systems. Increased transparency requirements have been introduced for the development and use of high-risk AI systems.

Extraterritorial Effect

The AI Act will apply to AI system providers who will market or provide services in the European Union, regardless of whether they are resident in the European Union. It will also apply to users of AI systems located in the European Union, and to providers and users of AI systems located outside the European Union if the outputs of these systems are used within the European Union. Therefore, the AI Act is not limited to the European Union; it will also apply to public and private persons outside the European Union to the extent that the use of AI systems affects persons in the European Union.

Implementation Timeline

The AI Act is expected to be published in the Official Journal of the European Union in the coming days. The new Act will be implemented gradually within 24 months of its publication, with certain exceptions for specific provisions.

  • After 6 months from the effective date, Member States of the European Union will be required to phase out prohibited systems.
  • After 12 months from the effective date, the obligations imposed on general-purpose AI systems will come into effect.
  • After 24 months from the effective date, the obligations related to high-risk systems, as defined in the list of high-risk use cases, will come into effect.
  • After 36 months from the effective date, the obligations related to high-risk systems already subject to other European Union legislation will come into effect.

As many countries are currently working on developing national policies and strategies on artificial intelligence and related issues, the AI Act could set a global standard for countries like Turkey, which do not yet have any legislation in this area. It provides an innovation-friendly legal framework while also ensuring a controlled environment, thus striking an appropriate balance between innovation and the ethical, safe use of AI.

Special thanks to Kerem Elmas for his contributions.
