A Comparative Analysis of the US and the EU's Approaches to Artificial Intelligence in the Context of Human Rights

Abstract

Artificial Intelligence (AI) has become a significant force in today's rapidly evolving world, attracting substantial global attention and driving countries into a competitive race. AI has the potential to transform various sectors, including health, education, transportation, energy, and finance, heralding a comprehensive transformation of our lives. AI research began in the 1950s and has been shaped by important milestones and turning points. This study compares the approaches of the United States (USA) and the European Union (EU) to protecting human rights in AI development and usage, analyzing their AI regulations and identifying strengths and weaknesses.

AI's development has seen fluctuations, and the technology poses potential risks such as violations of personal data privacy and data protection, cybersecurity breaches, discrimination, and prejudice. The EU emphasizes data protection and privacy, exemplified by the General Data Protection Regulation (GDPR), whereas the US adopts more flexible and voluntary guidelines. The analysis of these diverse perspectives leads to policy recommendations for advancing transparency and accountability in AI systems. These include establishing and regularly reviewing ethics committees, enhancing international cooperation, and increasing investment in AI technologies. These measures aim to ensure that AI development aligns with human rights protection, fostering a responsible and ethical AI landscape.

Keywords: Artificial Intelligence, Human Rights, Human-Centered Artificial Intelligence, European Union, United States of America

Preface

This thesis examines the impact of artificial intelligence (AI) on human rights and conducts a comparative analysis of the legal regulations in the European Union and the United States. The reason for choosing this topic is to understand how rapidly advancing AI technologies affect individuals' rights and freedoms, and how the regulatory frameworks in this area are being shaped. This study aims to guide policymakers, researchers, and individuals interested in the subject in developing AI in a manner that respects human rights.

Throughout the thesis writing process, analyzing and comparing data obtained from various sources was often complex and challenging. However, I would like to thank those who supported me in every way during this period.

First and foremost, I extend my deepest gratitude to my advisor, Prof. Dr. Emel Parlar Dal, for guiding me through the research process with valuable suggestions, support, and understanding. I would also like to thank all the professors in the International Relations Department who made significant contributions to the determination and development of my thesis topic.

Finally, I extend my gratitude to my family, my partner Gökhan, and my friends who supported me at every stage of this thesis. Their moral support gave me strength and motivation. I hope that this thesis will contribute to future studies in the field of AI and human rights.

Introduction

Subject and Purpose of The Thesis

In recent years, Artificial Intelligence (AI) has been the subject of increasing interest in the literature. This technology, which touches almost every aspect of our lives through digitalization, has made it imperative for countries and governments to take action in this field. The aim of this study is to compare the approaches of the United States of America (USA) and the European Union to the protection of human rights in the development and use of artificial intelligence. It will evaluate the human rights approach and ethical practices of these two important actors in the race for artificial intelligence.

The main aim of the study is to show how ethical and human rights practices differ in the legal regulations of the US and EU approaches to artificial intelligence. To this end, it seeks answers to the questions: 'How does AI affect human rights and the use of AI technologies in this area?', 'At what points do the regulations of the two major powers and other countries diverge?', 'What role does the approach of the two actors play in the formation of a global AI standard?', and 'What proposals can be developed for human rights-oriented AI practices?'. The increasing use of AI in all fields and its incorporation into decision-making processes has highlighted the need to examine all possible impacts, taking into account the benefits for human beings.

In researching this topic, it is important to evaluate the countries that are leading and shaping the development of artificial intelligence. Because the United States and the European Union play a decisive role in the development of this technology and the formation of global standards, their regulatory choices will influence the legislative processes of other countries. Given that privacy and freedom of expression are among the core human rights at stake, any development that affects them must be examined in detail in order to prevent possible future risks.

This paper explores the complex relationship between artificial intelligence and human rights, and aims to guide policymakers, researchers and individuals interested in the issue to develop artificial intelligence technologies in a way that respects human rights.

Literature Review

Artificial Intelligence has become a topic of research in various disciplines in recent years. The literature in this field evaluates the technical aspects, social impacts and ethical concerns of artificial intelligence from many different perspectives. The scope of this thesis is to evaluate, through a comparative analysis, the similarities and differences between the US and the EU in terms of human rights in AI regulation. Both actors have detailed human rights regulations. This study takes as its main sources the laws of both actors that directly or indirectly regulate artificial intelligence and the work of academics in the field. In order to understand an actor's perspective on an issue, it is important to know how that issue is regulated by laws and other instruments.

Artificial intelligence underpins many of the technologies we use in daily life and facilitates important aspects of it. Alongside its many benefits, the fact that AI poses risks such as algorithmic opacity, cybersecurity vulnerabilities, injustice, discrimination and prejudice has prompted governments to develop policies on the issue.

The first part of this study focuses on the definition and development of artificial intelligence, its benefits and potential risks. Artificial Intelligence, which has been present in the literature since the 1950s, builds on the contributions of many experts such as Alan Turing and John McCarthy and today extends from health to finance and from education to transport. In this field, the book "Artificial Intelligence: Structures and Strategies for Complex Problem Solving", written by George F. Luger and published in 2002, has served as a reference for understanding the algorithms and the historical evolution of artificial intelligence. Although there is still no single definition of artificial intelligence today, the book provides an important perspective on what these two words mean together.

Artificial intelligence has had many ups and downs from the 1950s to the present day. During these important periods, there have been studies that both criticized and contributed to the field. To convey the importance of the most significant milestones, such as the Dartmouth Conference and the Turing Test, references are drawn from Luger's book. In addition, for understanding machine learning, the starting point of today's generative AI, the 2022 book "Machine Learning and Other Artificial Intelligence Applications" by Reza Forghani is important: it explains artificial intelligence algorithms that improve performance through exposure to data and fills a gap in the foundations of today's artificial intelligence technology.

The impact of artificial intelligence on our daily and professional lives is significant. Assessing its risks from different perspectives is important for developing artificial intelligence in a way that is compatible with human rights. The second part of the thesis covers the regulations of the European Union and the United States of America. The aim here is to understand what kinds of restrictions or freedoms both powers provide for individuals, given the complexity of artificial intelligence and the importance attached to technological investment. The European Union's approach to artificial intelligence over the last decade is examined, and the General Data Protection Regulation (GDPR) is analyzed in detail as the instrument meant to ensure the transparency and reliability of artificial intelligence technologies in the European Union. This regulation allows individuals to understand why they face a particular decision in automated decision-making processes. It contributed to this study by defining the boundary between human rights and artificial intelligence, regulating areas such as the collection and storage of personal data.

The 'Ethics Guidelines for Trustworthy AI' (2019) were examined to support the rules on the human rights dimension of artificial intelligence regulated in the GDPR, and they provided a basis for comparative analysis by clarifying the difference between the US and EU approaches. The White Paper on Artificial Intelligence: a European approach to excellence and trust, published by the European Union in 2020, contributed to the thesis by providing a comprehensive analysis of the process of establishing trust in the field of artificial intelligence and maintaining European leadership in this field. Subsequently, the 'EU Artificial Intelligence Act (2021)' and the 'Artificial Intelligence Liability Directive (2022)' were used, as they facilitate the understanding of the EU's approach in this field. Because these regulations aim to protect human rights by imposing liability for harm caused by artificial intelligence systems, they provided insight into the development of a human-centered approach to AI and into policy recommendations. Finally, the AI Act, adopted by the European Parliament on 13 March 2024, provided an opportunity to closely evaluate the EU's most recent approach to artificial intelligence.

The United States of America's regulations in the field of artificial intelligence have also been examined from a broad perspective, using various sources. Firstly, the 'Executive Order on AI', which sets out the priorities of US President Biden's administration in the field of artificial intelligence, is discussed; it provided a clearer understanding of US views on issues such as human rights and ethics in relation to AI. Together with the 'Blueprint for an AI Bill of Rights', it offered important guidance on how the US intends to protect the rights and freedoms of individuals in AI applications. To understand the US human rights approach to AI, various views from academic studies as well as legislation were used to substantiate the position taken in the thesis. After examining the importance given to human rights in the above documents, the 'National Artificial Intelligence Initiative Act of 2020', which sets out the US government's initiatives to promote research, development and education in artificial intelligence, was used to apply these considerations to specific areas. This act helped to clarify US investments and priorities in the field of artificial intelligence, and it strongly supports the conclusion that the US takes a more liberal approach than the EU. It is also important for understanding the US's goal of maintaining global leadership in this field and the steps taken to that end. The American Artificial Intelligence Initiative made it possible to analyze this US claim to leadership and the policies that support it.

The third part of the study analyses the positions of other countries in the global race for artificial intelligence. The examples of China, India and Australia, which offer different perspectives, are included. The thesis analyses what sets these three countries apart from the EU and the US and their approaches to the human rights dimension of AI from different perspectives. In this context, the article 'Artificial Intelligence Governance and Ethics: Global Perspectives' shed light on the AI approaches of many different countries from a broad perspective, and this document was used in selecting the countries to be assessed in this section (Daly et al. 2019: 11-24). The analysis of Australia was guided by documents describing the methodologies developed by government agencies and other stakeholders in their efforts to develop the field. In particular, 'The Office of the Australian Information Commissioner' shed light on the areas in which Australia should improve its approach to AI and human rights, its shortcomings and its strengths.

During the course of this study, US and EU legislation was extensively discussed, and the strengths and weaknesses of both were identified in the comparison section. In defining AI, past conferences and published reports were used, and current issues were also addressed when looking at countries on a global scale. In the comparison of AI and human rights, the regulations on human rights and AI formed the basis of the study, and the differences between the US and EU approaches in this field were examined from different perspectives and through examples such as education, the economy, and health.

Methodology

The first part of this thesis is an analysis of the current legal frameworks, policy documents and ethical guidelines of the US and the EU in the field of artificial intelligence. These documents form the basis of both actors' approaches to AI technologies. The study will analyse legislation, policy documents, academic articles, official reports, news and other relevant sources related to artificial intelligence in the US and the EU. Reliable and up-to-date sources will be used in the data collection process, and different perspectives will be taken into account. The analysis of the collected data will be carried out using a comparative method, identifying the similarities and differences, strengths and weaknesses of the US and EU approaches to AI, and will be used to assess the impact of AI technologies on human rights.

The study will address the relationship between artificial intelligence and human rights within a theoretical framework, drawing on fields such as human rights law, ethics and technology law. The study will be based on document analysis. The aim is to contribute to the development of a human-centered artificial intelligence system. Although this thesis mainly focuses on US and EU legislation, the work of many expert academics has been incorporated, enriching the data collection process.

By synthesising and analysing these different documents, the research will provide a nuanced understanding of the regulatory, ethical and strategic dimensions that shape the approaches of the European Union and the United States.

Human Rights and Artificial Intelligence

Definition and History of Artificial Intelligence

Artificial Intelligence (AI) is being used more and more widely today and has a significant impact in many industries, from business to health. However, the concept of AI is quite broad and includes many different techniques and applications. Therefore, there is no standard definition of AI, and it can be interpreted in different ways among different experts and disciplines. Artificial Intelligence refers to machines that can perform tasks requiring human-level intelligence (Kissinger, Schmidt, Huttenlocher 2022: 14). Machine learning (ML) is a subset of AI encompassing algorithms that enhance their performance through exposure to data. Today, it is being implemented and developed in many fields such as health, defense and transportation. Scientists using Deep Learning, a subset of Machine Learning, achieve remarkable feats of perception, language understanding, and problem-solving, contributing significantly to the advancement of AI capabilities.
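The defining property of machine learning, improving performance through exposure to data, can be made concrete with a short sketch. The following minimal example (a hedged illustration assuming the scikit-learn library; the dataset and training-set sizes are chosen purely for demonstration) trains the same classifier on progressively more examples and reports its accuracy on unseen data:

```python
# Minimal sketch: a model's performance improves with exposure to data.
# Assumes scikit-learn; dataset and sizes are illustrative only.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 200, 800):  # progressively larger training sets
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])
    print(f"trained on {n} examples -> test accuracy {model.score(X_test, y_test):.3f}")
```

Accuracy is expected to rise across the three runs, which is precisely the "learning from data" that distinguishes ML from hand-coded rules.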

Artificial intelligence technology, an indispensable part of modern life, is used effectively everywhere from smartphones and watches to home automation systems, from autonomous driving systems to financial transactions. The combination of different disciplines and the rapid change and development of artificial intelligence are the reasons for the lack of a standard definition. George F. Luger, for example, defines artificial intelligence as "the collection of problems and methodologies studied by artificial intelligence researchers" (Luger, 2002: 1-35). Others take a broader perspective, defining artificial intelligence as machines that "think like humans", "act like humans" or "act intelligently", or as the capacity for adaptive learning.

With advancing technology and scientific research, the field of artificial intelligence (AI) has made great progress. In the history of AI, certain turning points and important milestones have shaped the growth and evolution of the field. The creation of the first artificial intelligence programs, considered the beginning of research in the field, and the Dartmouth Conference, led by scientists such as John McCarthy and Marvin Minsky, stand out. John McCarthy, who is considered the father of Artificial Intelligence, defined the field as "the science and engineering of making intelligent machines" (Manning, 2020). At this conference in 1956, artificial intelligence was officially born. The conference provided a platform where the basic concepts of the field were identified and the principles and goals of the discipline were discussed. Following the conference, artificial intelligence research accelerated, and significant developments were made in the field. Any account of Artificial Intelligence must also address the 1940s, the era of World War II and the first emergence of computers. McCulloch and Pitts' 1943 work is considered among the earliest attempts at artificial intelligence; it proposes a theoretical model of how neurons work and how they can compute. Their work played an important role at the intersection of neuroscience and computer science by contributing to the development of artificial neural networks and artificial intelligence models.

Another milestone in the field of Artificial Intelligence is the Turing Test, which Alan Turing proposed in 1950 to probe the concept of "intelligence" in artificial intelligence research. Turing, who proposed the test to question whether machines have the ability to think, measures the intelligence of a computer against that of a human (Luger 2002: 35). The test involves a scenario in which a human communicates in writing with a machine. If the machine is able to fool the human, that is, if its responses are indistinguishable from those of a real human, then the machine is considered capable of thinking. The test measured how close artificial intelligence could come to human intelligence and guided subsequent work in the field. In this respect, it also helps frame ethical issues such as the interaction of artificial intelligence with humans and the effort to understand human emotions and thoughts.

The years 1956-1970 are regarded as the period when a new era in artificial intelligence studies began and progress accelerated. In this period, an artificial intelligence program was developed for the first time to imitate human thought. Developed in 1956 by Allen Newell, Herbert A. Simon and J.C. Shaw, Logic Theorist was designed to prove mathematical theorems. It is a program that takes steps to prove mathematical theorems and performs these steps using basic sets of logic. Logic Theorist demonstrated the ability of artificial intelligence to handle logical and mathematical skills. It was not only the first example of an automated reasoning system but also demonstrated the importance of search strategies and heuristics in a reasoning program (Luger 2002: 565).

In 1957, just after Logic Theorist, the General Problem Solver (GPS) was developed by Allen Newell and Herbert A. Simon. Like Logic Theorist, GPS is an important milestone in the field of artificial intelligence and was designed to simulate human problem-solving capabilities. GPS functions as a general-purpose problem-solving tool: regardless of the domain of the problem, the program applies various problem-solving strategies based on a set of rules provided by the user. GPS has served as a model for understanding and emulating human problem-solving abilities.

In 1958 Frank Rosenblatt developed a learning algorithm for a type of single-layer network called a perceptron (Luger 2002: 474). The Perceptron is an important model and one of the basic building blocks of artificial neural networks. With its simple neural structure, it tries to imitate biological neurons. Artificial neural networks used for tasks such as classification and recognition in artificial intelligence, machine learning and data mining are built on the basic principles of the Perceptron.
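Rosenblatt's learning rule is simple enough to show in a few lines. The sketch below (an illustration with invented toy data and parameters, not code drawn from the sources) nudges the weights in proportion to each prediction error until the single-layer network separates the two classes:

```python
# Minimal sketch of Rosenblatt's perceptron learning rule.
# Toy data and hyperparameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Linearly separable toy data: label 1 if x1 + x2 > 1, else 0.
X = rng.uniform(-1, 2, size=(100, 2))
y = (X.sum(axis=1) > 1).astype(int)

w = np.zeros(2)   # connection weights
b = 0.0           # bias term
lr = 0.1          # learning rate

for _ in range(20):                        # training epochs
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)         # threshold activation
        error = target - pred              # -1, 0 or +1
        w += lr * error * xi               # Rosenblatt's update rule
        b += lr * error

accuracy = ((X @ w + b > 0).astype(int) == y).mean()
print("learned weights:", w, "bias:", b, "training accuracy:", accuracy)
```

On linearly separable data such as this, the rule converges; problems that are not linearly separable are what later motivated multi-layer networks and the backpropagation algorithm discussed below.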

On the other hand, John McCarthy developed List Processing (LISP) in 1958, a programming language for symbolic artificial intelligence whose influence continues to the present day. The main feature of LISP is its ability to operate on lists, which gives programmers the flexibility to create and manipulate dynamic data structures. The language evolved into many different versions over time and became one of the most widely used in AI research, especially through Common LISP. Other computer languages such as FORTRAN and COBOL were also invented during this period (Sharma, Garg 2022: 21-38).

Artificial intelligence developed steadily from 1957 to 1974, largely because computers became more accessible, faster and able to store more information. But in the early 1970s reality hit. The evolution of AI can be thought of as a roller coaster: there have been many ups and downs over time, and the early 1970s marked one of the downs. In the United States, a committee called ALPAC (Automatic Language Processing Advisory Committee) was formed to evaluate language technologies and artificial intelligence research. Its reports provided a critical assessment of the language processing technologies and methods of the time and discussed their future potential. The impact of this report was to end substantial funding of machine translation research in the United States for two decades (Hutchins 1996: 9-12). Subsequently, in the early 1970s, the British government also stopped funding AI research, as no significant results had been achieved.

Between 1970 and 1980, artificial intelligence studies increased, and Expert Systems played an important role in the development of knowledge-based systems. Expert Systems are artificial intelligence applications representing a rule-based approach that aims to emulate expert knowledge on a particular subject. They have a wide range of applications in medicine, engineering, finance, law and many other fields, as well as industrial and commercial uses. The development of Expert Systems marked a transformative period in AI history, with Allen Newell, a prominent figure in AI, acknowledging their emergence as a major advance in the field (Brock 2018: 3-15). For example, they can be used to diagnose diseases in medicine and to determine investment strategies in finance. However, since these systems are often costly and difficult to maintain, they are generally found more useful as complements to human experts than as replacements.

Another pivotal advancement in the field of artificial intelligence is the Backpropagation Algorithm, introduced in 1986 by Rumelhart, Hinton, and Williams (Sargano et al. 2014; Nguyen, 2019). Backpropagation is an optimization algorithm widely used in the training of artificial neural networks. By enabling the efficient adjustment of connection weights based on the gradient of the error function, the algorithm made it possible to learn complex models and representations, an important milestone in the training of artificial neural networks (Nguyen, 2019). Today, the Backpropagation Algorithm continues to play an important role in the development of artificial intelligence technologies.
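The mechanics can be sketched briefly: the error gradient computed at the output is propagated backwards through the chain rule, and each layer's weights take a small step against that gradient. The example below is a hedged, self-contained illustration (network size, toy data and learning rate are all invented for the demonstration), not a reconstruction of the 1986 paper's experiments:

```python
# Minimal sketch of backpropagation for a one-hidden-layer network.
# Data, architecture and hyperparameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                         # toy inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]    # XOR-like target

W1 = rng.normal(size=(2, 8)) * 0.5; b1 = np.zeros(8)  # input -> hidden
W2 = rng.normal(size=(8, 1)) * 0.5; b2 = np.zeros(1)  # hidden -> output
lr, n = 1.0, len(X)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)                 # hidden activations
    out = sigmoid(h @ W2 + b2)               # network output

    # Backward pass: propagate the error gradient via the chain rule
    d_out = (out - y) * out * (1 - out)      # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)       # gradient at the hidden layer

    # Gradient-descent adjustment of the connection weights
    W2 -= lr * (h.T @ d_out) / n; b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * (X.T @ d_h) / n;  b1 -= lr * d_h.mean(axis=0)

print("training accuracy:", ((out > 0.5) == (y > 0.5)).mean())
```

Because the target here is XOR-like, a single perceptron could not fit it; the hidden layer trained by backpropagation is what makes the problem learnable.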

The match between Deep Blue and Garry Kasparov is another important milestone in artificial intelligence research. It not only showcased the capabilities of AI but also influenced the path of AI research and its applications in contemporary society. In the 1997 chess match between IBM's Deep Blue and Kasparov, Deep Blue's victory demonstrated the potential of artificial intelligence, succeeding in complex tasks that were considered exclusive to human intelligence (Naraine, Wanless 2020: 49-61). This event highlighted the progress made in AI research and showcased the capabilities of machine learning algorithms in strategic decision-making (Bory 2019: 627-642).

Benefits and Risks of Artificial Intelligence

Artificial Intelligence offers significant benefits across various sectors of society. Increased productivity is one of its most remarkable advantages, and a further key advantage is its scalability and increased autonomy compared to traditional IT systems (Mäntymäki et al. 2022: 603-609). In the health sector, hospital staff are using AI to identify problems and diseases: AI-powered systems help analyze medical images such as X-rays and MRIs with great accuracy, which in turn helps detect diseases at an earlier stage. The integration of AI in the health sector is driving innovation in medical research and practice.

AI fosters innovation by enabling the development of new products and services that address today's emerging challenges and meet consumer needs. From self-driving cars to home security systems, AI ultimately leads to cost reductions and resource optimization. Moreover, AI facilitates enhanced decision-making by leveraging vast amounts of data to generate insights and predictions. In particular, AI's learning and predictive power contributes to more informed and accurate decision-making processes (Shrestha et al. 2019: 66-83). AI's role in enabling rapid responses, automated processes and early predictions is crucial for dynamic social environments, empowering businesses and institutions to make more informed choices and to remain competitive in the global market. The digital and financial economies have improved with the integration of AI, blockchain, and big data; in this scope, AI has increased the efficiency of financial services by enabling digital platforms. Every day, people use many applications on their phones and can complete in moments operations that once took a long time.

Artificial intelligence is improving rapidly and contributes to environmental sustainability by optimizing resource use, reducing waste, increasing efficiency in farming and helping to minimize the ecological footprint of human activities. Furthermore, AI can simplify environmental analysis, facilitate building calculations, and contribute to environmental management in various sectors, including the oil and gas markets (Chutcheva et al. 2022: 10; Goussous 2020: 1350-1358). However, it is essential to ensure the responsible and ethical use of AI in environmental applications. AI offers broad environmental benefits, from sustainable agriculture to biodiversity protection; used responsibly and ethically, it can help promote environmental sustainability and tackle urgent environmental issues.

In spite of the significant benefits it offers, AI also presents risks and challenges that must be addressed. Potential risks such as gender discrimination, malicious use of AI, job displacement and violations of data privacy are among the most pressing concerns; they span ethical concerns, societal impacts and environmental implications. As AI systems become capable of performing tasks once performed by humans, fears of widespread unemployment and economic disruption are growing. The rise of AI has the potential to fundamentally alter workplaces and professions, leading to uncertainties and anxieties about job security and future career prospects (Mirbabaie et al. 2021: 73-99). In addition, it is important to understand the negative effects that AI can have on children's development. The use of AI in education can affect adolescents' social adaptation and behavior. Beyond this, AI-supported education systems may worsen educational opportunities for groups with limited access to technology. At this point, international co-operation may be needed to ensure educational equality.

On the other hand, ethical concerns about AI are also widely discussed. These include privacy breaches, discrimination, unemployment due to automation, and security risks (Huang et al. 2023: 799-819). Considering that many companies store personal data, the extent of the risk increases. The lack of sufficient transparency in artificial intelligence applications opens the door to manipulation and error, and AI systems may evolve over time and behave in ways that are not fully understood or predicted by humans. Therefore, the areas where artificial intelligence will be entrusted with any decision-making role should be carefully selected.

Another factor is that AI algorithms can cause discrimination by learning and reflecting biases in training data sets. If an AI learns wrong information from a training data set, it may make incorrect decisions, which can lead to injustice and unfairness in decision-making processes. At the same time, this complex process may create accountability gaps: when algorithms make it difficult to clearly identify who is responsible for a decision, legal problems become harder to resolve.
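How bias in training data propagates into decisions can be shown concretely. The sketch below (entirely synthetic data, assuming scikit-learn; illustrative only) trains a model on "historical" hiring records in which one group was systematically disadvantaged, and the model duly reproduces that disadvantage:

```python
# Minimal sketch: a model trained on biased historical data reproduces the bias.
# All data is synthetic and invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)        # a protected attribute (0 or 1)
skill = rng.normal(size=n)           # the legitimate signal

# Biased history: group 1 was hired less often at equal skill.
hired = (skill + np.where(group == 1, -0.8, 0.0)
         + rng.normal(0, 0.3, n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two identical candidates who differ only in group membership:
candidates = np.array([[0.5, 0], [0.5, 1]])
print("predicted hiring probabilities:", model.predict_proba(candidates)[:, 1])
```

The model assigns the group-1 candidate a markedly lower probability despite identical skill, and simply dropping the protected attribute does not guarantee fairness if other features act as proxies for it.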

Since artificial intelligence also closely affects people's lives, studies show that it may have negative effects on individuals' personal lives. Although recently released Large Language Models such as GPT-4o can now chat with people at a level where they understand facial expressions and adjust their tone of voice, such interaction may cause depression, emotional collapse or burnout in the long run. At the same time, the development of AI has the potential to fundamentally change professions, leading to uncertainties and concerns about job security and future career prospects.

The integration of AI technologies raises concerns related to inequality, data-intensive business models, and the need for a deeper engagement with scientific and technological debates to address the negative impacts on human rights (Bakiner 2023). For example, the European Union's proposal to regulate Artificial Intelligence aims to prevent AI from causing harm while using it for societal benefit. It emphasizes the importance of human rights as a guiding framework in the development and use of AI systems. The EU's approach recognizes the potential threats to human rights posed by rapid technological progress and emphasizes the need for legislative measures to address these risks.  Moreover, the responsible development of AI requires a human rights-based approach to mitigate long-term risks and uncertainties associated with AI technologies, emphasizing the need for proactive measures to protect human rights in the evolving landscape of AI applications (Lane 2023).

In terms of national security, research shows that fully automated decision-making mechanisms can lead to costly errors and fatal consequences (Osoba, Welser 2017: 1-23). With the emergence of artificial intelligence agents, information can be manipulated and turned into a cyber-attack tool, and the empowerment with artificial intelligence of malware such as Mirai, which targets IoT devices, can increase the impact of cyber-attacks (Osoba, Welser 2017: 1-23). Artificial intelligence is trained on a specific data set, referred to as its "data diet". It has been argued that errors in these data sets can cause artificial intelligence systems to make mistakes, and that adversaries can exploit this vulnerability to feed false information into artificial intelligence systems. An artificial intelligence system can likewise be used maliciously to create disinformation. The use of artificial intelligence in cyber-attacks makes it very difficult to identify the people involved and can be used to manipulate political discourse, especially on social media (Osoba, Welser 2017: 1-23).

Artificial intelligence can also be used in social engineering activities by analyzing political and social networks. It has the potential to manipulate political outcomes by connecting isolated groups or influencing different segments of society to achieve specific political goals. For example, AI can be used to create public opinion in favor of a particular political party or candidate, to direct social reaction on a particular issue, or to support a particular political action. Such social engineering activities can threaten national security by weakening democratic processes and institutions.

“The Risks of Artificial Intelligence to Security and the Future of Work”, written by Osonde A. Osoba and William Welser IV, focuses on the effects of artificial intelligence on employment. The potential of artificial intelligence to substitute for human labor is analyzed in terms of income inequality. Because investments in AI technologies are concentrated among large technology companies and firms with access to data, the productivity gains brought by AI-driven automation accrue mainly to these "superstar firms", which has the potential to further deepen income inequality (Osoba, Welser 2017: 1-23). At this point, it has been deemed preferable to automate individual tasks rather than entire jobs, even though nearly half of the jobs in the US could be automated in the future.

Human Rights and Ethical Concerns

Today, artificial intelligence, whose influence is steadily increasing, touches every aspect of human life. From human health to smart vehicles, AI is present wherever the human hand reaches; it provides a wide range of benefits but also raises human rights concerns. The harmonious progress of human rights and artificial intelligence is seen as a necessity. It is important to examine and evaluate the potential impacts of AI systems on human rights in order to provide recommendations for the development and use of AI. Understanding the relationship between AI and human rights will therefore enable the development of a more transparent and fair system in the future.

Artificial intelligence can be used in the diagnosis and treatment of diseases in health services to protect the right to life. It can help eliminate discrimination among people receiving education by creating equal opportunities in education. It can support technologies that reduce poverty and help increase global prosperity by supporting economic productivity (Singil 2022: 128-153). In agriculture, artificial intelligence can help increase production by using arable land in the most efficient way when planting seeds, which benefits humanity in terms of food security and economic development.

Artificial intelligence can be used to ensure gender equality. As an example, the artificial intelligence robot named RAAJI, developed as a joint project with UNESCO in Pakistan, has the ability to chat with women on issues such as reproductive health (Singil 2022: 128-153). Artificial intelligence that will help empower women can be useful in this sense. Therefore, artificial intelligence should be considered not only as an element that will affect economic development but also contribute to society.

Today, at a time when politicians can start a war over the social media platform X, unfounded news can spread at any moment. Deepfake applications, which create disinformation or fake videos depicting people doing things they never did, undermine freedom of expression and people's freedom to make their own decisions. Artificial intelligence can contribute to this violation by playing a role in the creation and dissemination of such content.

In the medium term, changes in the nature of work may call into question the participatory status of many people in society. Risse expresses concern that, although technological change is generally assumed to be good for society and humanity, AI could exclude millions, potentially weakening their presence in the political community (Risse 2019: 1-16). This is a concern even in developed countries such as the USA. The long-term effects raise more philosophical questions, such as the emergence of complex societies where machines and humans coexist (Risse 2019: 1-16). This raises questions such as the moral status of machines, the blurring of the distinction between humans and machines, and even whether the Universal Declaration of Human Rights (UDHR) should be applied to some machines.

The EU expects AI to be human-centered, facilitating human life and contributing to society and the economy. In this process, the protection of fundamental rights is the EU's top priority. In addition, developing the EU's technological and industrial capacity and being prepared for socio-economic changes is another priority (Misuraca, Hasselbalch 2022). With the 2021 coordination of EU artificial intelligence policy, the European Commission and Member States aim to promote the EU's vision of sustainable AI to the world (Misuraca, Hasselbalch 2022).

The EU believes that AI should serve people and avoid potential risks such as discrimination, prejudice and invasion of privacy. However, some researchers believe that the EU's human-centered approach may be idealistic and that these risks cannot be completely eliminated. For example, China's social credit system, built by one of the pioneers of AI, draws attention to the ways AI may violate human rights (Cataleta, Cataleta 2020: 41-59). Although the EU has introduced strict rules on the development and use of artificial intelligence, it is difficult for these rules to gain universal acceptance. In countries where individualism is more prominent, such as the US, AI technologies are governed in a more libertarian spirit, while countries such as China take a more restrictive view of AI's impact on human rights, making international consensus harder to reach. The rapid development of AI technologies also makes it difficult for regulatory and ethical frameworks to keep pace. These ethical risks show that the EU's human-centered approach needs to be constantly updated.

The EU's approach to human rights in AI technologies is anchored in regulations such as the General Data Protection Regulation (GDPR), with a strong emphasis on data privacy and individual rights, while the US prioritizes more flexible and voluntary guidance. Yet both approaches recognize the risks of bias and lack of transparency. In addition, while both seek to maximize the benefits of AI, the timeliness of guidance and its applicability to individuals and institutions need continuous improvement. Stronger international co-operation and regulation are necessary in a period when technology is developing so rapidly.

The US recognizes the potential impact of artificial intelligence on human rights and is working to ensure that this technology is compatible with human rights. The 'Blueprint for an AI Bill of Rights' is an important step towards this goal (The White House Office of Science and Technology Policy, 2022). The blueprint envisions that as artificial intelligence technologies are developed, there will be ongoing collaboration and testing to reduce risks. One of the biggest potential negative impacts of artificial intelligence is discrimination, so algorithmic precautions should be taken to prevent all kinds of linguistic, religious and racial discrimination. Data collection is another risk; to address it, the US has given individuals more control over matters such as the collection or deletion of their data, prioritizing respect for individual choices and showing a more flexible attitude than the EU towards dangers that may arise from data.

The fact that artificial intelligence is developing more and more every day, and that we are using this technology individually in many areas, makes the need for harmonization between human rights and AI imperative. The approaches of the US, which is a pioneer in AI, and the EU, which wants to ensure safer progress, are similar in their emphasis on ethical concerns and respect for fundamental rights. Where they diverge is in the difficulty of reconciling the rapid development of AI with ethical frameworks. International cooperation, continuous updates and evaluation processes are therefore needed to realize the vision of human-centered AI.

Comparative Analysis of EU And US Approaches to AI

European Union’s Regulation on Artificial Intelligence

The European Union's stance on artificial intelligence is characterized by a human-centered perspective. While describing the potential benefits of AI, the EU also addresses the risks and potential harms of this technology. The primary focus of the EU's AI strategy is ensuring the safety and protection of individuals while concurrently fostering the development of a robust AI ecosystem capable of competing globally. The EU, itself part of the process through which artificial intelligence is developing, began with discussions at the European Commission in 2018, recognizing that it needed to take a step that reflected its own values and could respond to risks in its own way.

The Copenhagen Criteria, set at the EU summit in Copenhagen in 1993, are an important element in the human-centered approach followed by the European Union in its Artificial Intelligence regulations. They are the basic criteria that candidate countries for EU membership must fulfil: candidate countries must meet basic political, economic and legal requirements in order to apply for EU membership, and they are expected to have democratic institutions and to protect human rights, including freedom of the press, freedom of expression and the right to a fair trial. The EU's AI policies promote compliance with the EU's core values, while the Copenhagen Criteria ensure that this technology is developed and used in a human-centered manner.

Proclaimed in 2000, the EU Charter of Fundamental Rights is an important document reflecting the fundamental values of the Union and its endeavor to protect human rights. The Charter has had a direct impact on AI regulations because it emphasizes that human dignity must be respected and human rights must not be violated during the development of AI technologies. AI technologies largely process data, which makes it essential to protect personal data, one of the core values of the EU. The Charter also requires measures to be taken against artificial intelligence technologies, such as deepfakes, that carry the risk of manipulating personal data and breaching privacy. Therefore, AI regulations should be compatible with the data protection principle of the Charter of Fundamental Rights. The Charter stipulates that the Union's objectives include improving the welfare of its citizens and establishing a peaceful environment for their residence (Göçen 2023: 9-11).

On the other hand, the Lisbon Treaty, which came into force in 2009 and affects the EU's competences and decision-making processes, reinforces the EU's commitment to safeguard and secure its fundamental values. It thus encourages the use of artificial intelligence in a safe manner that respects human rights. The General Data Protection Regulation, adopted in 2016, plays a crucial role in underscoring the EU's ethical considerations and human-centered orientation in the race for Artificial Intelligence. The GDPR, an important part of EU data protection law, addresses AI systems, focusing especially on the 'right to an explanation' for decisions made automatically (Rudin, 2019). Since the EU's approach to AI centers on safeguarding fundamental rights such as human dignity and data privacy, the EU underscores fairness, model explainability, and accountability as core tenets. Furthermore, the EU's regulatory framework for AI prioritizes trustworthiness and aims to prevent the infringement of fundamental rights by AI operators. Thus, the GDPR's requirements for transparency and accountability in decision-making systems, especially automated ones, align with explainable AI.

To foster excellence and trust, the EU published the White Paper on Artificial Intelligence in 2020. In order to ensure that the EU remains globally competitive while preserving European values, the paper provides insights into the risks and opportunities that may emerge with AI technologies. It also highlights that AI systems should be considered from different perspectives, both individual and societal. To achieve this while preserving European culture, the EU plans to establish a regulatory framework compatible with its core values of fundamental rights. As in every document on Artificial Intelligence published by the EU, the White Paper includes policies to ensure that AI systems are human-centered and to protect against discrimination. However, the use of remote biometric identification, such as facial recognition in public places, has sparked a debate among European states.

The White Paper on Artificial Intelligence introduces the concepts of an "ecosystem of trust" and an "ecosystem of excellence" and discusses policy solutions to encourage the development of trustworthy AI. The paper also discusses the need for investment and innovation in AI research and development across various sectors. In healthcare, it states that AI can enhance diagnostics, disease prevention and treatment. Agriculture is on everyone's agenda for implementing AI systems, and the White Paper argues that AI can optimize farming practices and help develop solutions to climate change. Although the European Union states in this paper that artificial intelligence should be applied at many points in the developing age, and that it will achieve successful results when done in accordance with ethical values, it is aware that measures must also be taken against the existing risks. These risks include cyberattacks and deepfakes. In decision-making processes, AI systems can also reach decisions that are difficult for a human to understand, which can cause discrimination and even amplify existing biases. Autonomous applications such as self-driving cars can likewise pose safety risks in unforeseen circumstances. The White Paper emphasizes that addressing these challenges is important for building trust in AI.

Section 5 of the White Paper discusses the types of requirements that could be imposed on high-risk AI applications, such as training-data requirements, record keeping, transparency, non-discrimination and fairness, and human agency and oversight. The paper emphasizes the need to minimize these risks especially in areas that affect human rights directly, such as law enforcement and the judiciary. In this respect, it highlights the challenges posed by the opacity and complexity of AI in verifying compliance with existing laws and ensuring their effective enforcement. In sum, the White Paper holds that AI should work for people and be a force for good in society, strengthening the competitiveness of European industry and improving citizens' well-being.

The EU AI Act, proposed by the European Commission in 2021, is one of the EU's most comprehensive attempts to regulate AI. The Act establishes a harmonized legal framework for the development, marketing and use of artificial intelligence technologies. Defining AI as software that can produce outputs such as content, predictions, recommendations or decisions for a set of human-defined goals, the Act aims to be future-proof. The purpose of the regulation is to ensure that AI systems are technically advanced and comply with the law while respecting democratic values, human rights and the rule of law. The proposal adopts a risk-based approach, categorizing AI systems into unacceptable risk, high risk, limited risk and minimal risk. For risky AI systems the Act contains a set of rules: artificial intelligence systems that may harm people, children and especially disabled individuals are singled out, and such practices are declared contrary to EU values. It proposes banning certain applications, such as general-purpose social scoring by public authorities and real-time biometric identification in public places by law enforcement agencies. Systems that may endanger human health and safety are subjected to strict conformity assessments before being placed on the market, covering details such as cybersecurity, transparency, human oversight, data quality and technical documentation.

The Artificial Intelligence Liability Directive (AILD) aims to harmonize liability rules for damages caused by artificial intelligence systems, alongside a revised Product Liability Directive (PLD). Artificial intelligence technologies are being used more frequently to improve decision-making in various fields such as health, mobility and agriculture; however, the risks associated with their use, in particular the applicable liability rules, pose challenges. The current EU liability framework consists of the Product Liability Directive 85/374/EEC (PLD) and national liability rules applied in parallel (Madiega 2023). The proposed AI liability directive aims to increase legal certainty for damages caused by AI, thereby increasing consumer confidence and supporting successful innovation across the EU.

Lastly, with the European Parliament legislative resolution of 13 March 2024, the Artificial Intelligence Act aims to ensure a high level of respect for fundamental rights while advocating the adoption of human-centered artificial intelligence. The Act protects the right to privacy by prohibiting the creation or expansion of facial recognition databases through untargeted collection from the internet or CCTV footage. Because the use of artificial intelligence in education carries the risk of discrimination, the Act aims to prevent the exclusion of individuals. In addition, it requires a prior fundamental rights impact assessment of the effects of AI systems on fundamental rights, and it ensures that individuals, in particular people with disabilities and workers, are informed about and have access to high-risk AI systems in the workplace.

The AI Act aims to ensure that artificial intelligence is not only beneficial for individuals but also for the environment. Therefore, it emphasizes that environmental concerns and sustainability should be considered in the development of artificial intelligence. It also encourages member states to support and incentivize access for people with disabilities and the elimination of socio-economic inequalities. Member states should support the development of AI solutions by allocating sufficient resources to these areas. EU funds can also be allocated as resources. AI developers are also encouraged to support interdisciplinary projects that affect a wide range of areas. By encouraging AI innovation, there will be not only commercial gains but also social and environmental benefits that align with the EU's values.

The European Union has established a human-centered and trustworthy foundation for AI development. The EU wants to ensure that AI helps people, is developed according to ethical rules and avoids unjust biases; it therefore supports research and provides funding in this field, and it stresses transparency and accountability. It is important that technological progress is compatible with human rights and social benefit, and the EU stresses that all parts of society should work together to realize this potential. Overall, the EU's regulatory approach to AI is a multi-pronged one that aims to ensure safety and consumer rights while fostering innovation. However, some concerns remain, such as the effectiveness of the proposed rules, their impact on innovation and their interaction with national laws.

United States’ Regulations on Artificial Intelligence

As a pioneer in the AI race, the United States' approach to regulating AI is evolving, with an increasing emphasis on encouraging innovation while reducing potential risks and ensuring that development and deployment are consistent with ethical principles. Artificial intelligence systems have shown the potential to revolutionize many sectors in recent years, especially health, finance, transport and education. This rapid development of technology has also brought responsibilities in ethical, legal and social terms. The USA, as a pioneer in the field of artificial intelligence, has developed regulations and policies to manage the development and the potential dangers of this field.

In the absence of laws directly regulating AI systems, existing consumer protection and anti-discrimination laws have been applied to AI systems where appropriate. For example, the US Federal Trade Commission Act of 1914 plays an important role in policing the fairness and reliability of AI systems, particularly in sectors such as healthcare, by prohibiting unfair and deceptive trade practices. The Civil Rights Act of 1964 prohibits discrimination based on race, color or sex, which is critical for preventing discrimination in recruitment processes. These laws directed AI developers to act ethically and responsibly in that period and sought to ensure that AI systems did not harm society.

The 1950s and 1960s, when artificial intelligence research first sprouted, were the period when studies began in the USA. In this period, the USA made significant investments in artificial intelligence research, especially in defense and scientific studies. The Dartmouth Summer Research Project on Artificial Intelligence, held at Dartmouth College in 1956 and one of the first official steps in the field, brought together many artificial intelligence researchers. In the 1960s, the US Department of Defense's Advanced Research Projects Agency (DARPA) made significant investments in areas such as robotics, natural language processing and machine translation.

Towards the end of the 1960s, advances in artificial intelligence research and discussions on the potential effects of this technology led to a focus on the ethical and social implications of artificial intelligence and on regulation. In this period, the aim was to develop artificial intelligence in accordance with ethical and safety standards and to increase the transparency of research. The 1970s and 80s, known as the AI winter, were a period when few AI-specific regulations were made in the USA. However, regulations of that time, such as the Privacy Act of 1974, helped to address areas such as the privacy of personal data, discrimination and exclusion.

Since the 2000s, the US has produced more studies on the impact of AI on human rights. The Algorithmic Accountability Act, proposed in 2019, would require companies to detect potential bias and discrimination in AI systems used in high-risk decision-making processes. The US has generally supported the development of AI technologies and made its regulations sector-specific. Nevertheless, AI regulation has remained limited at the federal level, and some states have enacted their own laws governing the use of AI technologies. Recently, as awareness of AI's potential negative impacts on human rights has grown, California in particular has enacted strict privacy and anti-discrimination laws regarding the use of AI.

“The American Artificial Intelligence Initiative: Year One Annual Report” outlines the US strategy to maintain and enhance its leadership in artificial intelligence. According to the annual report, published in 2020, US President Donald Trump launched the American Artificial Intelligence Initiative (AAII) in 2019 to protect American leadership and ensure economic and national security. The initiative also promotes the use of AI in government services. The National AI R&D Strategic Plan was updated to realize the AAII's main goals: investing in AI research, expanding access to data and computing resources, removing innovation barriers, and training an AI-ready workforce. These efforts demonstrate the United States' commitment to maintaining its leadership in AI and bringing the technology's benefits to all segments of society.

The 2022 “Blueprint for an AI Bill of Rights” provides a framework for protecting the rights of the American people in the design and use of AI systems. It emphasizes that individuals should be able to opt out of automated systems and have access to a person who can resolve their problems. Individuals should also know how AI is being used and have a say in how their data is used. The framework responds to concerns that AI systems could threaten the American public's access to critical resources and opportunities, proposing measures such as proactive equity assessments and the use of representative data to prevent discrimination. In the health domain, commentators argue that such broad principles must be supplemented with sensitivity and specificity to ensure health equity and to enable effective proactive algorithm monitoring (Sendak et al. 2023).
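To make the idea of a proactive equity assessment concrete, the following minimal sketch illustrates one widely used screening heuristic, the four-fifths rule, applied to an automated decision system's outcomes. The data, group labels, function names and threshold are illustrative assumptions for demonstration only; they are not part of the Blueprint itself.

# Minimal sketch of a disparate-impact screen of the kind a proactive
# equity assessment might include. All names and data are hypothetical.

from collections import defaultdict

def selection_rates(decisions):
    """Compute the favourable-outcome rate per demographic group.

    decisions: iterable of (group_label, approved) pairs, where approved
    is True if the automated system granted the opportunity.
    """
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the conventional four-fifths rule)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical audit sample: (group, decision) pairs.
audit_sample = [("A", True), ("A", True), ("A", False),
                ("B", True), ("B", False), ("B", False)]
print(selection_rates(audit_sample))         # {'A': 0.667, 'B': 0.333}
print(disparate_impact_flags(audit_sample))  # {'A': False, 'B': True}

A check of this kind is only a first screen; a fuller assessment would also examine the representativeness of the underlying data, as the Blueprint recommends.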

Executive Order (E.O.) 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued by the Biden administration on 30 October 2023, establishes a government-wide effort to guide the development of AI and marks a significant step toward regulating and promoting the responsible use of AI technologies (Tuğaç 2023: 74-94). The Executive Order sets forth numerous requirements for federal agencies, whose implementation can be complex and resource-intensive. And while AI innovation is encouraged, the reporting and regulatory burden could overwhelm small start-ups.

The development of artificial intelligence in America has always been encouraged, and work in the field continues apace. For example, at the artificial intelligence fair held in Las Vegas in January 2024, education- and security-oriented AI attracted considerable attention; in particular, the Weapon Detection System developed by Bosch uses AI to help prevent armed attacks in schools. This permissive environment also gives rise to concerns, however. With the presidential election approaching, AI-generated media content is being used to spread disinformation, and deepfake videos, images and audio recordings demonstrate the potential for electoral manipulation.

US regulations in the field of AI thus try to minimize negative effects on society while allowing the technology's potential to unfold. For AI to be managed not only as an innovation tool but also with a human-oriented approach, it must be built on basic principles such as transparency and accountability. In particular, the rapidly changing nature of AI technologies and the new ethical and legal issues that arise will require regulatory frameworks to remain dynamic and flexible. As AI permeates all aspects of social life, the US will revise and improve its regulations in order to maintain its leading position in the field.

Comparative Analysis of European Union and United States’ Approaches

One of the 21st century's most transformative technologies, artificial intelligence is attracting great global interest and investment. AI, with the potential to revolutionise medicine, education, transport, energy, finance and many other fields, heralds a transformation that will affect every aspect of our lives. Behind this rapid development, however, lie ethical and legal problems, and comprehensive regulations are needed to minimise the potential risks while maximising the benefits. As we move deeper into the technological era, we use AI across many facets of our everyday routines. This presents an immense opportunity to solve global problems and better people's lives; it also carries dangers that could lead to violations of human rights. The impact of AI on human rights is therefore highly complex: on the one hand, AI can help combat discrimination, broaden access to health services and ease our lives through automation; on the other, the large data sets it relies on may pose serious threats to society, such as data privacy violations and unemployment.

Analyses arguing that artificial intelligence (AI) poses enormous challenges to human rights typically divide those challenges into short-term, medium-term and long-term. Short-term challenges involve technology already interfering with human rights. Medium-term challenges include changes in the nature of work that may call into question many people's participatory status in society. In the long term, humans may have to live with machines that, however speculative this remains, are intellectually and perhaps even morally superior to them. AI also gives new meaning to moral debates that used to seem mysterious to many.

In this area, the United States of America (USA) and the European Union (EU) stand out as two important powers competing for global leadership in the development, use and governance of AI technologies. Both value AI's potential for innovation and economic growth, yet they focus on different priorities and methods in their regulation. Broadly, the US takes a more flexible and market-oriented approach, while the EU's approach is oriented toward safety, fundamental freedoms and social values. The EU aspires to be a major global player in AI and is using the technology to make its economy stronger and more competitive (Hälterlein 2022). The American approach to governing AI, by contrast, is best described as more hands-off, as reflected in White House documents setting out its views on AI readiness (Cath et al. 2017).

The EU has taken proactive steps to establish a "human-centric" approach to AI, as demonstrated by documents such as the "Ethics Guidelines for Trustworthy AI" and the White Paper on AI (Mobilio 2023). Moreover, the EU has been recognized as well-positioned to lead in using AI to address climate change, with specific suggestions to harness AI for environmental sustainability (Cowls et al. 2021). In healthcare AI, the EU and the US follow different data governance approaches, each with its own advantages and disadvantages (Bak et al. 2022). The EU's focus on mitigating AI's potential societal impacts aligns with its broader approach to AI governance, which centers on ethical considerations and societal well-being.

Although both the US and the European Union (EU) aim to lead in the development and use of AI, their approaches differ. In domestic AI research and development, the US is widely regarded as China's primary competitor. In matters of ethics, governance and regulation, however, the US had been institutionally less proactive than China and the EU. This changed with the Trump administration's Executive Order on Maintaining American Leadership in Artificial Intelligence, issued in February 2019 (Daly et al. 2019: 11-24). With this order, the American Artificial Intelligence Initiative was launched and a Select Committee on Artificial Intelligence was established within the National Science and Technology Council. In the EU, the General Data Protection Regulation (GDPR), the key piece of legislation in this field, entered into force in 2018 and addresses data protection and privacy in AI applications. In the US, by contrast, the human rights perspective is balanced against other factors such as national security and economic interests.

Differing from the EU, the US focuses more on national security and economic interests than on human rights in its AI strategies. Its heavier investment in military applications of AI raises human rights concerns: although the US Department of Defense states that it will develop ethical and safety principles for the military use of AI, it is unclear how compatible these principles will be with international human rights standards. The European Union's objective is to guarantee the ethical development and use of AI products, with a focus on upholding human rights (Brand 2022: 130-150), and it has adopted a thorough, enforceable framework designed to tackle the human rights challenges that accompany the rising use of AI (Chatterjee and N.S. 2021: 110-134). Some scholars argue, however, that the EU's insistence on core values such as fairness, explainability and accountability is causing it to fall behind in the AI race.

The US takes a sector-specific, more liberal approach to fostering innovation, whereas the EU has built a strict framework centred on human rights and ethics. The EU's Artificial Intelligence Act categorises AI systems according to their risks. Systems that manipulate individuals' cognitive behaviour, such as voice-activated toys that encourage dangerous behaviour in children, systems that classify people according to their personal characteristics or socio-economic status, and systems such as facial recognition are deemed unacceptable risks and a threat to humans (European Parliament 2023: 1). Although there are exceptions for some applications in the unacceptable-risk category, a very strict regime applies. Systems that jeopardise safety or fundamental rights are classified as high risk; this category covers AI used in products falling under the EU's product safety legislation as well as AI systems in specified areas that must be registered in an EU database. In contrast, the USA does not yet have a comprehensive federal law. Although AI systems therefore appear to develop faster in the US, this speed brings many concerns about human rights and ethical values.

The US legal landscape regarding AI is evolving, with discussion of potential changes in the law to accommodate AI as a standard of care (Jassar et al. 2022: 185-189). When an AI-related question arises, the US tries to address it by relying on existing legal principles and tort law, and attention focuses in particular on possible liability consequences in the health sector.

One of the goals of the European Commission chaired by Ursula von der Leyen for 2019-2024 is to make the EU fit for the digital age. Under the motto of working with new-generation technologies and helping people adapt to the digital age, work in this field is advancing rapidly. Alongside this digital transformation, the EU aims to be climate-neutral by 2050 (European Commission 2020: 1). Trustworthy AI touches many aspects of human life, from health to safer and cleaner transport, and the aim is to integrate the technology into people's lives. In developing these systems, however, a human-centred and ethical approach, reflecting the EU's core values, is imperative.

To develop trustworthy AI systems, the EU is funding many projects. In its 2020 “Report on safety and liability implications of AI, the Internet of Things and Robotics”, the Commission stated that a number of changes would be made to address the challenges posed by new technologies such as the Internet of Things (IoT) and robotics (European Commission 2020: 1). These changes are intended to protect consumers, encourage innovation and promote safe and responsible use. By ensuring that AI systems are verifiable at every step, the EU keeps individuals' physical and mental safety at the highest level, indirectly protecting fundamental rights such as the right to life.

Some argue that the EU's prioritization of data protection and privacy and its adoption of a human-centered approach are the reasons it is lagging in the global race. Several factors highlighted in the literature support this view. The EU's emphasis on ethical AI regulation, while commendable, may have slowed the pace of AI development and adoption compared with the US, where a more permissive regulatory environment has fostered rapid innovation (Malmborg 2022: 757-780). The EU's approach to regulating AI, as seen in the proposed AI Act, prioritizes the safety and ethical use of AI systems; while this is a noble goal, it could stifle innovation and make it harder for European companies to compete with their US counterparts, who enjoy a more relaxed regulatory environment (Ronanki 2023: 10). Yet both the US and the EU still lack sufficient capacity to formulate comprehensive AI policies.

Global AI Landscape and Human Rights

Artificial intelligence has emerged as a major force in today's fast-developing technological age, and countries have entered a race to master it. China, the US and the European Union stand out as the leading actors in this race, investing heavily in AI research and developing comprehensive AI strategies (Castro, McLaughlin 2021: 5-30). While AI's benefits ease our lives economically, socially and in many other ways, its applications also pose ethical risks. In this context, the approaches of China, the US and the European Union to AI are shaping both their domestic policies and their international cooperation efforts.

China

China has strategically positioned itself in the global landscape of artificial intelligence (AI) development, with a clear focus on policy, ethics and regulation (Roberts et al. 2020: 59-77). Supporting AI research since 2015, China stands out as a country with ambitious goals, attaching great importance to the growth and development of the technology. The Next Generation Artificial Intelligence Development Plan, published by the Chinese government in 2017, shows that the country will invest heavily in the AI sector in the coming years and aims to become a world leader in AI innovation by 2030 (Wu et al. 2020: 312-316). The plan, which sets out new laws and regulations, also includes the establishment of an ethical and normative policy for AI.

China's approach to AI is framed as being beneficial to humans and nature. It promotes global cooperation in this field and encourages research on human-AI collaboration. However, uncertainty about how China's AI initiatives translate into policy raises some concerns (Castro, McLaughlin 2021: 5-30). China, which grounds AI in socialist values, professes a commitment to social harmony; yet allegations about the use of AI for repression and surveillance, especially against the Uyghur population in the Xinjiang Uyghur Autonomous Region, call that commitment to AI ethics into question (Dixon 2022: 2-81).

In the context of human rights, the intersection of AI and ethics has attracted global attention, with calls for comprehensive policies to regulate AI and protect human rights (Chatterjee and N.S. 2021: 110-134). China's approach to AI, unlike that of Western democracies, prioritizes public safety and social cohesion over human rights.

This raises concerns that human rights may be ignored in the use of AI. China's social credit system is a case in point. Developed to regulate the behavior of citizens and businesses (Dixon 2022: 2-81), the system collects data across many areas, from individuals' social media use to their financial transactions, and is used to grant or restrict certain social and economic privileges; it can even limit citizens' ability to find a job or travel. The data used in the Social Credit System violates human rights and data confidentiality, and the system has the potential to restrict privacy and freedom of expression (Daly et al. 2019: 11-24).

China's approach to AI in the field of human rights is also intertwined with its geopolitical ambitions and security considerations (Obeid et al. 2020: 42-52). Particularly in minority regions, AI technologies raise serious concerns about human rights violations. China has also been criticised over data privacy: although it lacks a regulation comparable to the EU's GDPR, it has taken steps in this direction in recent years. Its data protection approach, however, focuses on protecting consumer rights rather than limiting government access to data. While this facilitates government access to data for AI purposes, it also carries the risk of leaving individuals' privacy unprotected.

India

India's approach to AI is largely shaped by national-level initiatives such as Digital India, Make in India, and the Smart Cities Mission, which aim to turn India into an empowered knowledge economy and a leading force in AI by creating smart cities (Marda 2018). A task force set up by the Government of India under the Ministry of Commerce and Industry to harness AI for beneficial purposes across various sectors provides avenues to support development in this area (Chatterjee and N.S. 2021: 110-134). On 20 March 2018, the task force published a report detailing the necessary next steps, which will inform the formulation of an AI policy in India (Chatterjee and N.S. 2021: 110-134).

The report identifies many areas, such as health, education and retail, where India can benefit from AI applications. India's use of AI technologies to serve citizens better and manage resources more effectively would have positive economic effects. India recognises that realising these gains requires raising education levels; it therefore supports university projects in the field, establishes research centres and backs entrepreneurs.

The use of AI in the context of human rights requires careful consideration of ethical principles and regulatory frameworks in India. Given existing data protection concerns, the 2018 report states that India needs to encourage data collection while keeping that data fully protected and accessible, ensure data security within an appropriate framework, and advance digitalisation through the IoT (Chatterjee and N.S. 2021: 110-134).

Australia

Australia is the only Western democracy without a comprehensive and enforceable bill of human rights, lacking comprehensive constitutional protection of rights (Daly et al. 2019: 11-24). As AI technology has become more influential in recent years, however, Australia's interest in technology's impact on human rights has grown. In June 2019, the Australian federal government published a discussion paper setting out its perspective on AI governance and developments in the area, signalling its willingness to take human rights into account.

The discussion paper focuses on ethical considerations in the development and use of AI, particularly algorithmic bias, accountability and transparency. The Tech Council of Australia, an industry body, likewise advocates a responsible and safe approach to AI development; it notes that while Australia has a strong technology sector and some AI capabilities, it lags behind other Western countries in the AI race (Tech Council of Australia Submission 2023: 2-29). Australia recognizes AI's role in driving innovation and increasing productivity, but investment in the sector is low compared with other countries, reducing research capacity in this area.

Australia, which has adopted a risk-based approach, is aware of the potential risks AI may pose. Although its safe, risk-based regulatory stance appears to enable the field's development, it has not yet paved the way for targeted regulation that avoids stifling innovation. Another shortcoming is that it remains undetermined how existing legal regulations will be integrated with AI technology and what their impact will be.

On the other hand, Australia recognizes that AI contributes to economic and social development and that work in this area should be accelerated. The Office of the Australian Information Commissioner (OAIC) has noted significant community concern about the use of personal information in the development of AI, and the two main issues Australia emphasizes in AI development are trust and transparency. Current regulatory laws have been found not to provide sufficient assurance against the potential risks arising from AI's use of personal data. Australia's privacy framework is based on the Privacy Act 1988 and aims to minimize privacy risks (Law Council of Australia 2023: 1-39).

Australia also recognizes the need to streamline the regulatory regime governing individuals' use of AI. In this context, it emphasizes the need to be mindful of issues such as bias and discrimination and to monitor and assess them carefully. It likewise acknowledges that certain AI applications, such as facial recognition technology, may raise concerns about privacy, civil liberties and abuse (Tech Council of Australia Submission 2023: 2-29). While Australia takes its commitment to human rights seriously, many developments must still be followed for it to progress in the AI race.

Policy Recommendations for Human-Centered AI

Because artificial intelligence is now present in every aspect of human life, governments need to adopt policies in this area. Comparing the USA and the EU on AI in the context of human rights, as this thesis set out to do, reveals both points of divergence and similarity. Both powers attach importance to human rights and apply the regulations they have developed accordingly; the EU has implemented particularly strict rules and sought to develop a human-centred AI system.

China, another major power in the AI race, takes a more socialist perspective in its approach. The issues each country prioritises in the development of AI thus differ: the US leaves freer space for innovation, while the EU places great importance on the protection of personal data. Despite its benefits, however, AI carries many potential risks in the context of human rights, and developing human-centred AI is important for addressing them.

The absence of globally shared AI ethics guidelines highlights the need for a wide range of stakeholders to establish principles and policies guiding the ethical use of AI (Jobin, Ienca 2019: 389-399). The first recommendation, therefore, is to encourage global collaboration and knowledge sharing. By sharing the experiences and practices of different countries, common standards for human-centred AI can be developed; this requires continuous negotiation and exchange of ideas between countries. The value placed on human rights is not the same in every country, so some flexibility should initially be provided to help countries adapt, after which they can be supported in improving both their AI regulation and the value they accord to human rights.

Equal opportunity in technology does not extend to every country, so the development of AI should be advanced through cooperation. Furthermore, AI's role in achieving the Sustainable Development Goals underlines the need for supportive policies and regulations to guide its rapid development (Vinuesa et al. 2020). UNESCO and the OECD have published principles and recommendations for the responsible development and use of AI, highlighting the importance of an ethical and human-centred approach (Ronanki 2023: 10).

The second recommendation concerns transparency and openness. The US, the EU and China all pay attention to this issue, but more attention should be paid to high-risk AI, the category in which the EU concentrates the risks of artificial intelligence. AI operators should be open throughout all testing processes and provide reports that document them clearly, and these reports should be constantly reviewed, updated and amended. Techniques should also be developed to increase the accountability of complex AI systems, especially those known as 'black boxes'.
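As an illustration of what such accountability techniques can look like in practice, the sketch below applies permutation importance, one model-agnostic method for probing which inputs an opaque model relies on. The model, the synthetic data and the specific library choices are illustrative assumptions, not a prescribed standard.

# Minimal sketch of a post-hoc transparency technique: permutation
# importance treats the model as a black box and measures how much
# predictive performance drops when each input feature is shuffled.
# Data and model below are placeholders for a real high-risk system.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical tabular data standing in for an automated decision system.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and record the accuracy drop;
# larger drops indicate features the opaque model leans on more heavily.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: mean accuracy drop {score:.3f}")

Outputs of this kind could feed the regularly reviewed reports recommended above, giving regulators and auditors a concrete, repeatable artefact rather than an unverifiable assurance.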

The introduction of AI into many sectors, such as education, health and transport, makes it difficult to set a general standard. Therefore, the third policy recommendation is to establish sector-specific ethical rules. Independent ethics committees should be set up to determine these ethical standards. These committees should include people with different areas of expertise, such as academics, bureaucrats and NGOs, and should establish comprehensive rules that meet the needs of each sector, taking into account different perspectives.

One of the policies proposed by Ren Bin Lee Dixon in the 2022 study ‘Artificial Intelligence Governance: A Comparative Analysis of China, the European Union, and the United States’ is to ‘Invest in AI Research’ (Dixon 2022: 77). Investment in research recurs at several points in that document. Dixon's work stresses that US investment in AI research and development serves the country's goal of maintaining AI leadership and supporting economic development, pointing in particular to US efforts to expand access to data and computing resources, remove innovation barriers, and train an AI-ready workforce (Dixon 2022: 77).

In the EU context, the White Paper emphasised investment in AI research as necessary for the EU to increase its global power in this field (European Commission 2020). On the US side, ‘The American Artificial Intelligence Initiative: Year One Annual Report’ presents investment in the field as part of the country's goal of maintaining AI leadership and supporting economic development. Australia's motivation for investing is attributed to its lagging behind other Western countries in the AI race (Tech Council of Australia Submission 2023: 2-29).

The development and use of artificial intelligence (AI) technologies has the potential to have a profound impact on human rights. Potential risks such as prejudice and discrimination, privacy violations, unemployment due to automation and the undermining of human dignity raise significant human rights concerns. Therefore, the development of human-centred AI policies is crucial for the protection and promotion of human rights. When formulating policies, the main objective should be to ensure that AI is used in a fair, transparent, accountable and dignity-respecting manner, so that the technology is used for the benefit of humanity. Adopting a human-centred approach will contribute to the creation of a sustainable and equitable AI ecosystem that aims to maximise the social benefits of AI while minimising its potential harms.

Conclusion

This thesis comparatively analyses the perspectives of the United States of America and the European Union on the protection of human rights in the development and use of artificial intelligence technologies. The study examined the regulations of both actors in the field of artificial intelligence and human rights, and aimed to reveal the differences by analysing various policies and ethical approaches in this field.

The study started by addressing the historical development and periods of artificial intelligence, followed by its benefits and potential risks. While the potential of artificial intelligence to revolutionise many sectors such as health, education and transportation is emphasised, attention is also drawn to its potential dangers such as discrimination, unemployment, data privacy violations and security risks. In this context, the importance of a human-centred approach to artificial intelligence that respects human rights was emphasised.

The attitudes, regulations and policies of the European Union and the United States on artificial intelligence were analysed in detail, drawing on a range of sources, and the strengths and weaknesses of both powers in leading AI were identified. The differences between the EU's regulations focusing on data protection and privacy, such as the General Data Protection Regulation (GDPR), and the United States' approach based on more flexible and voluntary guidelines were analysed in detail.

The study then analysed China, at the forefront of the global AI race; Australia, which is still developing in this field; and finally India. These countries' approaches were examined in the context of human rights to give a picture of the steps taken to date. China, despite its more socialist perspective, raises concerns through a social credit system that can restrict human rights. Australia, recognising the importance of investment in AI, has drawn on private-sector initiatives and is moving forward with reporting exercises. India aspires to AI leadership through national-level initiatives such as Digital India, Make in India and the Smart Cities Mission, but it needs to improve its methods for collecting data safely in the context of human rights.

After analysing the countries' different perspectives, policy recommendations were developed for this evolving field: to increase transparency and accountability in AI systems, ethics committees should be established and regularly reviewed, international cooperation should be strengthened, and investment in AI technologies should be increased. Since establishing international cooperation around this ever-changing technology is a process requiring extensive negotiation, future studies should focus on this area.

In conclusion, this thesis aims to contribute to debates in the field of AI ethics and law by comparatively analysing the impact of AI technologies on human rights and the different regulatory approaches in this field. The study shows that ethically sound policies should be developed with human rights in mind throughout the development, use and updating of AI systems, and it provides a basis for understanding the complex relationship between human rights and AI research. The more moderate or stricter attitudes of the two leading AI powers toward human rights on the international stage have been revealed, shedding light on the areas requiring regulation.

Although the study examined five major actors at the global level, more countries' approaches to AI should be studied in order to establish common standards. Moreover, the rapid development and ever-changing nature of AI technologies show that research in this area must continue and current developments must be closely monitored. Future studies can contribute to the body of knowledge by examining the impact of AI technologies on human rights in more specific areas and in different cultural contexts. In particular, the detection and prevention of bias and discrimination, the transparency and explainability of AI decisions, and AI technologies' respect for human dignity are important topics for future research.


References

  • Alic, D. (2021). The Role of Data Protection and Cybersecurity Regulations in Artificial Intelligence Global Governance: A Comparative Analysis of the European Union, the United States, and China Regulatory Framework (C. Ashraf, Ed.) [Master Thesis]. https://www.etd.ceu.edu/2021/alic_dalia.pdf
  • Bak, M. A. R., Madai, V. I., Fritzsche, M., Mayrhofer, M. T., & McLennan, S. (2022). You Can’t Have AI Both Ways: Balancing Health Data Privacy and Access Fairly. Frontiers in Genetics. doi:10.3389/fgene.2022.929453
  • Bakiner, O. (2023). The Promises and Challenges of Addressing Artificial Intelligence With Human Rights. Big Data & Society. doi:10.1177/20539517231205476
  • Bory, P. (2019). Deep New: The Shifting Narratives of Artificial Intelligence From Deep Blue to AlphaGo. Convergence the International Journal of Research Into New Media Technologies. doi:10.1177/1354856519829679
  • Brand, D. (2022). Responsible Artificial Intelligence in Government: Development of a Legal Framework for South Africa. Jedem - Ejournal of Edemocracy and Open Government. doi:10.29379/jedem.v14i1.678
  • Brock, D. C. (2018). Learning From Artificial Intelligence’s Previous Awakenings: The History of Expert Systems. AI Magazine. doi:10.1609/aimag.v39i3.2809
  • Cataleta, M. S., & Cataleta, A. (2020). Artificial Intelligence and Human Rights, an Unequal Struggle. CIFILE Journal of International Law, 1(2). https://doi.org/10.30489/cifj.2020.223561.1015
  • Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2017). Artificial Intelligence and the ‘Good Society’: The US, EU, and UK Approach. Science and Engineering Ethics. doi:10.1007/s11948-017-9901-7
  • Castro, D., & McLaughlin, M. (2021). Who Is Winning the AI Race: China, the EU, or the United States? — 2021 Update. Center for Data Innovation. https://itif.org/publications/2021/01/25/who-winning-ai-race-china-eu-or-united-states-2021-update/
  • Chatterjee, S., & N.S., S. (2021). Artificial intelligence and human rights: a comprehensive study from Indian legal and policy perspective. In International Journal of Law and Management (Vol. 64, Issue 1, pp. 110-134). Emerald. https://doi.org/10.1108/ijlma-02-2021-0049
  • Chutcheva, Y. V., Kuprianova, L. M., Seregina, A. A., & Kukushkin, S. (2022). Environmental Management of Companies in the Oil and Gas Markets Based on AI for Sustainable Development: An International Review. Frontiers in Environmental Science. doi:10.3389/fenvs.2022.952102
  • Daly, A., Hagendorff, T., Hui, L., Mann, M., Marda, V., Wagner, B., Wang, W., & Witteborn, S. (2019). Artificial Intelligence Governance and Ethics: Global Perspectives (Version 1). arXiv. https://doi.org/10.48550/ARXIV.1907.03848
  • Data Management, Analytics and Innovation. (2020). In N. Sharma, A. Chakrabarti, & V. E. Balas (Eds.), Advances in Intelligent Systems and Computing. Springer Singapore. https://doi.org/10.1007/978-981-13-9364-8
  • Dixon, Ren Bin Lee. (2022). Artificial Intelligence Governance: A Comparative Analysis of China, the European Union, and the United States. Retrieved from the University Digital Conservancy, https://hdl.handle.net/11299/229505.
  • European Commission. (2020). Report On The Safety And Liability Implications Of Artificial Intelligence, The Internet Of Things And Robotics (COM(2020) 64 final), EC: Brussels.
  • European Commission. (2020). White Paper On Artificial Intelligence - A European Approach To Excellence And Trust (COM(2020) 65 final), EC: Brussels.
  • European Commission. (2021). Regulation Of The European Parliament And Of The Council Laying Down Harmonised Rules On Artificial Intelligence (Artificial Intelligence Act) And Amending Certain Union Legislative Acts (2021/0106 (COD)), EC: Brussels.
  • European Commission. Directorate General for Communication. (2020). Excellence and trust in artificial intelligence: shaping Europe’s digital future. Publications Office. https://doi.org/10.2775/988466
  • European Commission. Joint Research Centre. (2020). AI Watch, historical evolution of artificial intelligence: analysis of the three main paradigm shifts in AI. Publications Office. https://doi.org/10.2760/801580
  • European Parliament. (2024). European Parliament legislative resolution of 13 March 2024 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)), EP: Brussels.
  • Forghani, R. (2020). Machine Learning and Other Artificial Intelligence Applications, An Issue of Neuroimaging Clinics of North America, E-Book: Machine Learning and Other Artificial Intelligence Applications, An Issue of Neuroimaging Clinics of North America, E-Book. Retrieved from https://books.google.com.tr/books?id=GpYFEAAAQBAJ
  • Gonsalves, T. (2019). The Summers and Winters of Artificial Intelligence. doi:10.4018/978-1-5225-7368-5.ch014
  • Goussous, J. (2020). Artificial Intelligence-Based Restoration: The Case of Petra. Civil Engineering and Architecture. doi:10.13189/cea.2020.080618
  • Göçen, I. (2023). European Union’s Approach to Artificial Intelligence in The Context of Human Rights [Master’s Thesis]. https://avesis.deu.edu.tr/yonetilen-tez/4097bca0-ce80-4549-a3c2-43f1273d3d2a/european-unions-approach-to-artificial-intelligence-in-the-context-of-human-rights
  • Hälterlein, J. (2022). Technological Expectations and the Making of Europe. Science & Technology Studies. doi:10.23987/sts.110036
  • Harris, L. A., & Jaikaran, C., (2023). Highlights of the 2023 Executive Order on artificial intelligence for Congress, [Report], Highlights of the 2023 Executive Order on Artificial Intelligence for Congress (2023). Washington, D.C; Congressional Research Service.
  • Huang, C., Zhang, Z., Mao, B., & Yao, X. (2023). An Overview of Artificial Intelligence Ethics. IEEE Transactions on Artificial Intelligence. doi:10.1109/tai.2022.3194503
  • Jassar, S., Adams, S., Zarzeczny, A., & Burbridge, B. (2022). The Future of Artificial Intelligence in Medicine: Medical-Legal Considerations for Health Leaders. Healthcare Management Forum. doi:10.1177/08404704221082069
  • Jobin, A., & Ienca, M. (2019). The Global Landscape of AI Ethics Guidelines. Nature Machine Intelligence. doi:10.1038/s42256-019-0088-2
  • Jones, K. (2023). AI governance and human rights: Resetting the relationship. Royal Institute of International Affairs. https://doi.org/10.55317/9781784135492
  • Kissinger, H., Schmidt E., Huttenlocher D. (2022), “The Age of AI”, Great Britain: Hachette UK Company.
  • Lane, L. (2023). Preventing Long-Term Risks to Human Rights in Smart Cities: A Critical Review of Responsibilities for Private AI Developers. Internet Policy Review. doi:10.14763/2023.1.1697
  • Law Council of Australia. (2023, August). Safe and responsible AI in Australia. Retrieved from https://lawcouncil.au/resources/submissions/safe-and-responsible-ai-in-australia
  • Liaropoulos, A. (2020). Fostering EU’s Digital Autonomy: Different Perspectives in the Transatlantic Community (Working Paper Series Νο. 3). University of Piraeus, Laboratory of Intelligence & Cyber-Security. Retrieved May 1, 2024, from https://www.des.unipi.gr/files/lab-ics/wps/wps3.pdf
  • Luger, G. F. (2008). Artificial Intelligence: Structures and Strategies for Complex Problem Solving (6th ed.). USA: Addison-Wesley Publishing Company.
  • Madiega, T. (2023). Artificial Intelligence Liability Directive. European Parliamentary Research Service (PE 739.342 – February 2023). https://www.europarl.europa.eu/RegData/etudes/ATAG/2024/757605/EPRS_ATA(2024)757605_EN.pdf
  • Malmborg, F. af. (2022). Narrative Dynamics in European Commission AI Policy—Sensemaking, Agency Construction, and Anchoring. Review of Policy Research. doi:10.1111/ropr.12529
  • Manning, C. (2020). Artificial Intelligence Definitions [Review of Artificial Intelligence Definitions]. Stanford University, Human-Centered Artificial Intelligence. https://hai.stanford.edu/sites/default/files/2020-09/AI-Definitions-HAI.pdf
  • Mäntymäki, M., Minkkinen, M., Birkstedt, T., & Viljanen, M. (2022). Defining Organizational AI Governance. AI and Ethics. doi:10.1007/s43681-022-00143-x
  • Martinez, D., Malyska, N., & Streilein, B. (2019). Artificial intelligence: short history, present developments, and future outlook, final report. In MIT Lincoln Laboratory (DFARS Part 252.227-7013). Massachusetts Institute of Technology. Retrieved May 15, 2024, from https://www.ll.mit.edu/r-d/publications/artificial-intelligence-short-history-present-developments-and-future-outlook
  • Mirbabaie, M., Brünker, F., Möllmann, N. R. J., & Stieglitz, S. (2021). The Rise of Artificial Intelligence – Understanding the AI Identity Threat at the Workplace. Electronic Markets. doi:10.1007/s12525-021-00496-x
  • Misuraca, G., & Hasselbalch, G. (2022). International Outreach for human-centric artificial intelligence initiative. Shaping Europe’s digital future. https://digital-strategy.ec.europa.eu/en/policies/international-outreach-ai
  • Mobilio, G. (2023). Your Face Is Not New to Me – Regulating the Surveillance Power of Facial Recognition Technologies. Internet Policy Review. doi:10.14763/2023.1.1699
  • Nadimpalli, M. (2017). Artificial Intelligence Risks and Benefits, 6.
  • Naraine, M. L., & Wanless, L. (2020). Going All in on AI. Sports Innovation Journal. doi:10.18060/23898
  • Nirenburg, S., Somers, H.L., & Wilks, Y. (2003). ALPAC: The (In)Famous Report. https://aclanthology.org/www.mt-archive.info/90/MTNI-1996-Hutchins.pdf
  • Obeid, H., Hillani, F., Fakih, R., & Mozannar, K. (2020). Artificial Intelligence: Serving American Security and Chinese Ambitions. Financial Markets Institutions and Risks. doi:10.21272/fmir.4(3).42-52.2020
  • Osoba, O.A., & Welser, W. (2017). The Risks of Artificial Intelligence to Security and the Future of Work.  https://www.rand.org/pubs/perspectives/PE237.html.
  • Risse, Mathias. (2019). Human rights and artificial intelligence: an urgently needed agenda. Human Rights Quarterly, 41(1), 1-16.
  • Roberts, H., Cowls, J., Morley, J., Taddeo, M., Wang, V., & Floridi, L. (2020). The Chinese Approach to Artificial Intelligence: An Analysis of Policy, Ethics, and Regulation. AI & Society. doi:10.1007/s00146-020-00992-2
  • Salgado-Criado, J., & Fernandez-Aller, C. (2021). A Wide Human-Rights Approach to Artificial Intelligence Regulation in Europe. In IEEE Technology and Society Magazine (Vol. 40, Issue 2, pp. 55–65). Institute of Electrical and Electronics Engineers (IEEE). https://doi.org/10.1109/mts.2021.3056284
  • Samsun, A. T. (2021). Yapay Zekâ Yarışında Avrupa Birliği’nin Konumu. EURO Politika(8), 24-31.
  • Sargano, A. B., Sarfraz, M. S., & Haq, N. (2014). An Intelligent System for Paper Currency Recognition With Robust Features. Journal of Intelligent & Fuzzy Systems. doi:10.3233/ifs-141156
  • Science and Technology Policy Office. (2020). American Artificial Intelligence Initiative: Year One Annual Report. [Government]. Science and Technology Policy Office. https://www.govinfo.gov/app/details/GOVPUB-PREX23-PURL-gpo136646
  • Science and Technology Policy Office. (2022). Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. [Government]. Science and Technology Policy Office. https://www.whitehouse.gov/ostp/ai-bill-of-rights/
  • Sendak, M., Balu, S., & Hernandez, A. F. (2023). Proactive Algorithm Monitoring to Ensure Health Equity. JAMA Network Open. doi:10.1001/jamanetworkopen.2023.45022
  • Sharma, L., & Garg, P. K. (2021). Artificial Intelligence: Technologies, Applications, and Challenges. Retrieved from https://books.google.com.tr/books?id=PdSLzgEACAAJ
  • Shein, E. (2024). Governments Setting Limits on AI. In Communications of the ACM (Vol. 67, Issue 4, pp. 12–14). Association for Computing Machinery (ACM). https://doi.org/10.1145/3640506
  • Shrestha, Y. R., Ben-Menahem, S. M., & Krogh, G. von. (2019). Organizational Decision-Making Structures in the Age of Artificial Intelligence. California Management Review. doi:10.1177/0008125619862257
  • Siau, K., & Wang, W. (2018, May). Artificial Intelligence: A Study on Governance, Policies, and Regulations. http://aisel.aisnet.org/mwais2018/40
  • Singil, N. (2022). Yapay Zekâ ve İnsan Hakları. In Public and Private International Law Bulletin (Vol. 0, Issue 0, pp. 121-158). Istanbul University. https://doi.org/10.26650/ppil.2022.42.1.970856
  • Szczepański, M. (2024). United States Approach to Artificial Intelligence. European Parliamentary Research Service (PE 757.605, January 2024). https://www.europarl.europa.eu/RegData/etudes/ATAG/2024/757605/EPRS_ATA(2024)757605_EN.pdf
  • Tallberg, J., Lundgren, M., & Geith, J. (2023). AI Regulation in the European Union: Examining Non-State Actor Preferences. arXiv. https://doi.org/10.48550/ARXIV.2305.11523
  • Tech Council of Australia. (2023). Supporting Safe and Responsible AI. Retrieved April 30, 2024, from https://techcouncil.com.au/wp-content/uploads/2023/08/Tech-Council-of-Australia-AI-Submission-vF.pdf
  • Tuğaç, Ç. (2023). Climate Change and Artificial Intelligence: Opportunities and Challenges. Hitit Sosyal Bilimler Dergisi. doi:10.17218/hititsbd.1240744
  • Vinuesa, R., Azizpour, H., Leite, I., Balaam, M., Dignum, V., Domisch, S., … Nerini, F. F. (2020). The Role of Artificial Intelligence in Achieving the Sustainable Development Goals. Nature Communications. doi:10.1038/s41467-019-14108-y
  • Wu, F., Lu, C., Zhu, M., Chen, H., Zhu, J., Yu, K., … Pan, Y. (2020). Towards a new generation of artificial intelligence in China. Nature Machine Intelligence, 2(6), 312–316. doi:10.1038/s42256-020-0183-4