Recommendations from the Personal Data Protection Authority on the Protection of Personal Data in the Field of Artificial Intelligence
Artificial intelligence (AI) technologies, as one of the fastest-developing and most widely debated fields today, provide significant benefits to individuals and society while also posing various risks in terms of personal data protection. Recently, the widely discussed DeepSeek has been subjected to review by the European Commission and relevant data protection authorities.
The Personal Data Protection Authority (“Authority”) has issued the following recommendations for developers, manufacturers, service providers, and decision-makers to mitigate risks in AI applications concerning personal data protection. The need for detailed regulations in this area continues to grow with each passing day.
Human-Centred Artificial Intelligence
In terms of protecting fundamental rights and freedoms, AI applications should be managed with a human-centred approach.
In this recommendation, human-centred artificial intelligence refers to AI systems that do not fully automate decision-making processes but instead incorporate a structure that allows for human intervention. Such systems should align with ethical principles and human dignity. When making decisions, AI must respect an individual's autonomy and will, avoiding discrimination and bias.
Accountable Algorithms
Algorithms that ensure accountability in compliance with data protection laws should be adopted from the design phase of products and services and throughout their entire lifecycle.
The transparency and accountability of AI systems are critical for ensuring compliance with data protection laws. From the design stage, the adherence of products and services to data protection principles must be guaranteed, and algorithms should be developed accordingly. Ensuring that algorithms are auditable and traceable will help individuals trust decision-making processes.
Involvement of Individuals and Society in Risk Assessments
Risk assessments should be conducted with the participation of individuals and groups likely to be affected by the applications.
Individuals and groups potentially impacted by AI systems should be involved in the process to ensure comprehensive risk assessments. Particular attention should be given to factors such as the social impacts of AI applications, the risk of discrimination, and the possibility of incorrect decision-making. A more inclusive approach can be adopted by involving independent experts and civil society organisations in these assessment processes.
Limitation of the Scope of Automatic Processing
Products and services should be designed to ensure that individuals are not subjected to decisions affecting them based solely on automated processing, without their views being taken into consideration.
Individuals should not be impacted solely by decisions based on automated processing. For instance, in a job application process, instead of relying entirely on an AI-driven decision-making system, a process that includes human evaluation should be implemented. This would allow individuals to exercise their right to appeal decisions made about them. Since AI systems that process personal data can directly affect individuals' fundamental rights, their participation in decision-making processes is crucial.
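The job-application example above can be sketched as a simple human-in-the-loop routing rule. This is an illustrative sketch, not a prescribed implementation; the function and field names are hypothetical, and the threshold is an assumed parameter.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str       # "accept" or "pending_review"
    automated: bool    # whether the outcome was produced without human input
    appealable: bool   # the applicant can always contest the result

def screen_application(model_score: float, threshold: float = 0.8) -> Decision:
    """Route borderline or negative automated outcomes to a human reviewer
    instead of issuing a final decision based solely on the model score."""
    if model_score >= threshold:
        # Even clearly positive automated outcomes remain appealable.
        return Decision("accept", automated=True, appealable=True)
    # Never auto-reject: a human evaluator makes the final call.
    return Decision("pending_review", automated=False, appealable=True)
```

The key design choice is that the system can only issue a favourable outcome automatically; any adverse outcome is deferred to human evaluation, preserving the individual's right to appeal.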
Use of Anonymised Data
If the same outcome can be achieved without processing personal data in the development of artificial intelligence technologies, the data should be processed in an anonymised form.
When developing AI systems, obtaining results without processing personal data, where possible, is a crucial aspect of personal data protection. If processing personal data is unavoidable, anonymisation should be the preferred approach. Anonymisation is an effective method to safeguard individuals' privacy, and anonymised data should be periodically reassessed to ensure that its level of anonymity is maintained.
Design in Accordance with Data Protection Principles
In artificial intelligence initiatives, all systems should be developed in accordance with the principle of data protection from the design phase onwards.
AI systems must be aligned with data protection principles from the initial design stage. The "Privacy by Design" and "Privacy by Default" approaches should be adopted to ensure that systems are built to protect individuals' privacy from the outset. This proactive approach helps safeguard personal data throughout the entire lifecycle of AI technologies.
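In code, "Privacy by Default" can be as simple as making the most protective options the defaults, so that broader processing requires an explicit opt-in. The settings below are hypothetical examples, not a mandated configuration.

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    # Privacy by Default: the most protective choices are the defaults;
    # the user must actively opt in to any broader processing.
    share_analytics: bool = False
    personalised_ads: bool = False
    retention_days: int = 30  # shortest retention period by default
```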
The Role of Human Intervention in Decision Making Processes
Human intervention should be established in decision-making processes, and individuals' freedom not to rely on the outcomes of AI-generated recommendations should be preserved.
According to the Authority’s recommendation, AI systems should be designed to incorporate human intervention in decision-making processes. Additionally, individuals must retain the freedom to question and reject AI-generated recommendations. This is essential for maintaining individuals' independence from technological systems. The accuracy and reliability of AI-driven decisions should be continuously monitored and assessed.
Minimal Data Usage and Model Accuracy Monitoring
The quality, nature, source, and quantity of personal data used should be assessed to ensure minimal data usage, and the accuracy of the developed model should be monitored.
According to the Authority’s recommendation, a preliminary assessment should determine that only necessary data is processed. Additionally, the accuracy and impartiality of AI models should be continuously tested. Errors or biases in datasets can lead to incorrect decisions by the system. To ensure that the model operates fairly, impartially, and accurately, ongoing monitoring and testing processes must be implemented.
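The two obligations described above can be sketched together: a data-minimisation filter that retains only the fields a model actually needs, and a simple accuracy monitor that flags the model for review when performance drifts below a baseline. Field names, the baseline, and the tolerance are illustrative assumptions.

```python
def minimise(record: dict, required_fields: set[str]) -> dict:
    """Retain only the fields the model actually needs (data minimisation)."""
    return {k: v for k, v in record.items() if k in required_fields}

def accuracy(predictions: list[int], labels: list[int]) -> float:
    """Fraction of predictions matching the ground-truth labels."""
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

def check_model_health(predictions: list[int], labels: list[int],
                       baseline: float, tolerance: float = 0.05) -> bool:
    """Flag the model for review when accuracy drifts below the baseline."""
    return accuracy(predictions, labels) >= baseline - tolerance
```

Running `check_model_health` on a recurring schedule is one way to operationalise the "ongoing monitoring and testing processes" the recommendation calls for.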
Conducting a Privacy Impact Assessment
If high risks are anticipated in terms of personal data protection in AI projects, a privacy impact assessment should be carried out, and the lawfulness of data processing activities should be evaluated within this framework.
Privacy impact assessments are a critical step in analysing the potential risks of the system and determining whether it complies with legal requirements. These assessments should not only be conducted at the initial stage but should also be performed periodically throughout the system's operational lifetime.
Supporting Awareness and Training Activities
To raise awareness of personal data protection, training and information initiatives on data privacy should be encouraged.
Through such efforts, both individuals and organisations can place greater emphasis on protecting data privacy in the use of artificial intelligence. Collaboration between AI developers, legal professionals, and policymakers should be established to conduct awareness activities on the ethical and legal dimensions of AI.
Ultimately, regardless of the vast opportunities that AI technologies offer, the responsibility to protect individuals' fundamental rights and freedoms must always remain a priority.
The Authority’s recommendations highlight key principles for ensuring that AI applications are developed in an ethical, transparent, and reliable manner until specific legislation is enacted in this field. Adhering to these principles will not only protect individuals’ privacy but also enhance trust in technology.
In Turkey, it is essential to establish a legal framework regulating the safe and ethical development, deployment, and use of AI systems. In shaping such regulations, the European Union AI Act, as the first legislative framework in this field, should be utilised to adopt a risk-based approach. Additionally, the principles outlined in Article 22 of the GDPR should be incorporated into legal regulations, transforming recommendations into legally binding rules to ensure comprehensive protection.
Special thanks to Ufuk Ege Uçar for his contributions.