Related research and data

  • Artificial intelligence has advanced despite having few resources dedicated to its development – now investments have increased substantially
  • Affiliation of research teams building notable AI systems, by year of publication
  • Annual attendance at major artificial intelligence conferences
  • Annual global corporate investment in artificial intelligence, by type
  • Annual granted patents related to artificial intelligence, by industry
  • Annual industrial robots installed
  • Annual patent applications related to AI per million people
  • Annual patent applications related to AI, by status
  • Annual patent applications related to artificial intelligence
  • Annual private investment in artificial intelligence (NetBase Quid)
  • Annual private investment in artificial intelligence (CSET)
  • Annual private investment in artificial intelligence, by focus area (NetBase Quid)
  • Annual professional service robots installed, by application area
  • Annual reported artificial intelligence incidents and controversies
  • Artificial intelligence: Performance on knowledge tests vs. dataset size
  • Artificial intelligence: Performance on knowledge tests vs. number of parameters
  • Artificial intelligence: Performance on knowledge tests vs. training computation
  • Chess ability of the best computers
  • Computation used to train notable AI systems, by affiliation of researchers
  • Computation used to train notable artificial intelligence systems
  • Countries with national artificial intelligence strategies
  • Cumulative AI-related bills passed into law since 2016
  • Cumulative number of notable AI systems by domain
  • Datapoints used to train notable artificial intelligence systems
  • Domain of notable artificial intelligence systems, by year of publication
  • Employer of new AI PhDs in the United States and Canada
  • GPU computational performance per dollar
  • Global views about AI's impact on society in the next 20 years, by demographic group
  • Global views about the safety of riding in a self-driving car, by demographic group
  • How worried are Americans about their work being automated?
  • ImageNet: Top-performing AI systems in labeling images
  • Industrial robots: Annual installations and total in operation
  • Market share for logic chip production, by manufacturing stage
  • Newly-funded artificial intelligence companies
  • Parameters in notable artificial intelligence systems
  • Parameters vs. training dataset size in notable AI systems, by researcher affiliation
  • Protein folding prediction accuracy
  • Scholarly publications on artificial intelligence per million people
  • Share of artificial intelligence jobs among all job postings
  • Share of companies using artificial intelligence technology
  • Share of computer science PhDs specializing in artificial intelligence in the US and Canada
  • Share of notable AI systems by domain
  • Share of notable AI systems by researcher affiliation
  • Share of women among new artificial intelligence and computer science PhDs, United States and Canada
  • Top performing AI systems in coding, math, and language-based knowledge tests
  • Training computation vs. dataset size in notable AI systems, by researcher affiliation
  • Training computation vs. parameters in notable AI systems, by domain
  • Training computation vs. parameters in notable AI systems, by researcher affiliation
  • Views about AI's impact on society in the next 20 years
  • Views about the safety of riding in a self-driving car
  • Views of Americans about robot vs. human intelligence

THE AI INDEX REPORT

Measuring trends in Artificial Intelligence

AI Index Annual Report

Welcome to the 2023 AI Index Report

The AI Index is an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), led by the AI Index Steering Committee, an interdisciplinary group of experts from across academia and industry. The annual report tracks, collates, distills, and visualizes data relating to artificial intelligence, enabling decision-makers to take meaningful action to advance AI responsibly and ethically with humans in mind. The AI Index collaborates with many different organizations to track progress in artificial intelligence. These organizations include the Center for Security and Emerging Technology at Georgetown University, LinkedIn, NetBase Quid, Lightcast, and McKinsey. The 2023 report also features more self-collected data and original analysis than ever before. This year’s report included new analysis on foundation models, including their geopolitics and training costs, the environmental impact of AI systems, K-12 AI education, and public opinion trends in AI. The AI Index also broadened its tracking of global AI legislation from 25 countries in 2022 to 127 in 2023.

TOP TAKEAWAYS

  • Industry races ahead of academia.

Until 2014, most significant machine learning models were released by academia. Since then, industry has taken over. In 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia. Building state-of-the-art AI systems increasingly requires large amounts of data, compute, and money, resources that industry actors inherently possess in greater amounts compared to nonprofits and academia.

  • Performance saturation on traditional benchmarks.

AI continued to post state-of-the-art results, but year-over-year improvement on many benchmarks continues to be marginal. Moreover, the speed at which benchmark saturation is being reached is increasing. However, new, more comprehensive benchmarking suites such as BIG-bench and HELM are being released.

  • AI is both helping and harming the environment.

New research suggests that AI systems can have serious environmental impacts. According to Luccioni et al., 2022, BLOOM’s training run emitted 25 times more carbon than a single air traveler on a one-way trip from New York to San Francisco. Still, new reinforcement learning models like BCOOLER show that AI systems can be used to optimize energy usage.
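
To make the comparison above concrete, here is a minimal back-of-the-envelope sketch. The two emission figures are assumptions, not data from this report: roughly 25 tCO2e for BLOOM’s training run and roughly 1 tCO2e per passenger for a one-way New York–San Francisco flight, which are the approximate values commonly attributed to Luccioni et al., 2022.

```python
# Illustrative check of the "25x" comparison above. Both inputs are rough
# assumptions (approximate values attributed to Luccioni et al., 2022),
# not authoritative figures from this report.
bloom_training_tco2e = 25.0   # assumed: BLOOM training run, tonnes CO2-equivalent
nyc_sf_flight_tco2e = 1.0     # assumed: one passenger, one-way New York -> San Francisco

ratio = bloom_training_tco2e / nyc_sf_flight_tco2e
print(f"BLOOM's training emitted roughly {ratio:.0f}x the per-passenger flight emissions")
```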

  • The world’s best new scientist … AI?

AI models are starting to rapidly accelerate scientific progress and in 2022 were used to aid hydrogen fusion, improve the efficiency of matrix manipulation, and generate new antibodies.

  • The number of incidents concerning the misuse of AI is rapidly rising.

According to the AIAAIC database, which tracks incidents related to the ethical misuse of AI, the number of AI incidents and controversies has increased 26 times since 2012. Some notable incidents in 2022 included a deepfake video of Ukrainian President Volodymyr Zelenskyy surrendering and U.S. prisons using call-monitoring technology on their inmates. This growth is evidence of both greater use of AI technologies and awareness of misuse possibilities.

  • The demand for AI-related professional skills is increasing across virtually every American industrial sector.

Across every sector in the United States for which there is data (with the exception of agriculture, forestry, fishing, and hunting), the share of job postings that are AI-related increased on average from 1.7% in 2021 to 1.9% in 2022. Employers in the United States are increasingly looking for workers with AI-related skills.

  • For the first time in the last decade, year-over-year private investment in AI decreased.

Global AI private investment was $91.9 billion in 2022, which represented a 26.7% decrease since 2021. The total number of AI-related funding events as well as the number of newly funded AI companies likewise decreased. Still, during the last decade as a whole, AI investment has significantly increased. In 2022 the amount of private investment in AI was 18 times greater than it was in 2013.
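
As a rough illustration of how those figures relate, the short sketch below back-calculates the implied 2021 and 2013 investment totals from the stated 2022 value; the derived numbers are approximations implied by the quoted percentages, not figures taken from the report itself.

```python
# Back-of-the-envelope check of the investment figures above. Inputs are the
# stated 2022 value and the stated decline/growth factors; outputs are
# implied approximations only.
investment_2022_bn = 91.9        # stated global private AI investment in 2022, USD billions
yoy_decrease = 0.267             # stated 26.7% decrease from 2021
factor_since_2013 = 18           # stated "18 times greater than in 2013"

implied_2021_bn = investment_2022_bn / (1 - yoy_decrease)   # ~125.4
implied_2013_bn = investment_2022_bn / factor_since_2013    # ~5.1
print(f"Implied 2021 investment: ~${implied_2021_bn:.1f}B")
print(f"Implied 2013 investment: ~${implied_2013_bn:.1f}B")
```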

  • While the proportion of companies adopting AI has plateaued, the companies that have adopted AI continue to pull ahead.

The proportion of companies adopting AI in 2022 has more than doubled since 2017, though it has plateaued in recent years between 50% and 60%, according to the results of McKinsey’s annual research survey. Organizations that have adopted AI report realizing meaningful cost decreases and revenue increases.

  • Policymaker interest in AI is on the rise.

An AI Index analysis of the legislative records of 127 countries shows that the number of bills containing “artificial intelligence” that were passed into law grew from just 1 in 2016 to 37 in 2022. An analysis of the parliamentary records on AI in 81 countries likewise shows that mentions of AI in global legislative proceedings have increased nearly 6.5 times since 2016.

  • Chinese citizens are among those who feel the most positively about AI products and services. Americans … not so much.

In a 2022 IPSOS survey, 78% of Chinese respondents (the highest proportion of surveyed countries) agreed with the statement that products and services using AI have more benefits than drawbacks. After Chinese respondents, those from Saudi Arabia (76%) and India (71%) felt the most positive about AI products. Only 35% of sampled Americans (among the lowest of surveyed countries) agreed that products and services using AI had more benefits than drawbacks.

Chapter 1: Research and Development

This chapter captures trends in AI R&D. It begins by examining AI publications, including journal articles, conference papers, and repositories. Next it considers data on significant machine learning systems, including large language and multimodal models. Finally, the chapter concludes by looking at AI conference attendance and open-source AI research. Although the United States and China continue to dominate AI R&D, research efforts are becoming increasingly geographically dispersed.

  • The United States and China had the greatest number of cross-country collaborations in AI publications from 2010 to 2021, although the pace of collaboration has since slowed.
  • AI research is on the rise, across the board.
  • China continues to lead in total AI journal, conference, and repository publications.
  • Large language models are getting bigger and more expensive.

Chapter 2: Technical Performance

This year’s technical performance chapter features analysis of the technical progress in AI during 2022. Building on previous reports, this chapter chronicles advancement in computer vision, language, speech, reinforcement learning, and hardware. Moreover, this year the chapter features an analysis of the environmental impact of AI, a discussion of the ways in which AI has furthered scientific progress, and a timeline-style overview of some of the most significant recent AI developments.

  • Generative AI breaks into the public consciousness.
  • AI systems become more flexible.
  • Capable language models still struggle with reasoning.
  • AI starts to build better AI.

Chapter 3: Technical AI Ethics

Fairness, bias, and ethics in machine learning continue to be topics of interest among both researchers and practitioners. As the technical barrier to entry for creating and deploying generative AI systems has lowered dramatically, the ethical issues around AI have become more apparent to the general public. Startups and large companies find themselves in a race to deploy and release generative models, and the technology is no longer controlled by a small group of actors. In addition to building on analysis in last year’s report, this year the AI Index highlights tensions between raw model performance and ethical issues, as well as new metrics quantifying bias in multimodal models.

  • The effects of model scale on bias and toxicity are confounded by training data and mitigation methods.
  • Generative models have arrived and so have their ethical problems.
  • Fairer models may not be less biased.
  • Interest in AI ethics continues to skyrocket.
  • Automated fact-checking with natural language processing isn’t so straightforward after all.

Chapter 4: The Economy

Increases in the technical capabilities of AI systems have led to greater rates of AI deployment in businesses, governments, and other organizations. The heightening integration of AI and the economy comes with both excitement and concern. Will AI increase productivity or be a dud? Will it boost wages or lead to the widespread replacement of workers? To what degree are businesses embracing new AI technologies and willing to hire AI-skilled workers? How has investment in AI changed over time, and what particular industries, regions, and fields of AI have attracted the greatest amount of investor interest? This chapter examines AI-related economic trends by using data from Lightcast, LinkedIn, McKinsey, Deloitte, and NetBase Quid, as well as the International Federation of Robotics (IFR). This chapter begins by looking at data on AI-related occupations and then moves on to analyses of AI investment, corporate adoption of AI, and robot installations.

  • Once again, the United States leads in investment in AI.
  • In 2022, the AI focus area with the most investment was medical and healthcare ($6.1 billion); followed by data management, processing, and cloud ($5.9 billion); and Fintech ($5.5 billion).
  • AI is being deployed by businesses in multifaceted ways.
  • AI tools like Copilot are tangibly helping workers.
  • China dominates industrial robot installations.

Chapter 5: Education

Studying the state of AI education is important for gauging some of the ways in which the AI workforce might evolve over time. AI-related education has typically occurred at the postsecondary level; however, as AI technologies have become increasingly ubiquitous, this education is being embraced at the K–12 level. This chapter examines trends in AI education at the postsecondary and K–12 levels, in both the United States and the rest of the world. We analyze data from the Computing Research Association’s annual Taulbee Survey on the state of computer science and AI postsecondary education in North America, Code.org’s repository of data on K–12 computer science in the United States, and a recent UNESCO report on the international development of K–12 education curricula.

  • More and more AI specialization.
  • New AI PhDs increasingly head to industry.
  • New North American CS, CE, and information faculty hires stayed flat.
  • The gap in external research funding for private versus public American CS departments continues to widen.
  • Interest in K–12 AI and computer science education grows in both the United States and the rest of the world.

Chapter 6: Policy and Governance

The growing popularity of AI has prompted intergovernmental, national, and regional organizations to craft strategies around AI governance. These actors are motivated by the realization that the societal and ethical concerns surrounding AI must be addressed to maximize its benefits. The governance of AI technologies has become essential for governments across the world. This chapter examines AI governance on a global scale. It begins by highlighting the countries leading the way in setting AI policies. Next, it considers how AI has been discussed in legislative records internationally and in the United States. The chapter concludes with an examination of trends in various national AI strategies, followed by a close review of U.S. public sector investment in AI.

  • From talk to enactment—the U.S. passed more AI bills than ever before.
  • When it comes to AI, policymakers have a lot of thoughts.
  • The U.S. government continues to increase spending on AI.
  • The legal world is waking up to AI.

Chapter 7: Diversity

AI systems are increasingly deployed in the real world. However, there often exists a disparity between the individuals who develop AI and those who use AI. North American AI researchers and practitioners in both industry and academia are predominantly white and male. This lack of diversity can lead to harms, among them the reinforcement of existing societal inequalities and bias. This chapter highlights data on diversity trends in AI, sourced primarily from academia. It borrows information from organizations such as Women in Machine Learning (WiML), whose mission is to improve the state of diversity in AI, as well as the Computing Research Association (CRA), which tracks the state of diversity in North American academic computer science. Finally, the chapter also makes use of Code.org data on diversity trends in secondary computer science education in the United States. Note that the data in this subsection is neither comprehensive nor conclusive. Publicly available demographic data on trends in AI diversity is sparse. As a result, this chapter does not cover other areas of diversity, such as sexual orientation. The AI Index hopes that as AI becomes more ubiquitous, the amount of data on diversity in the field will increase such that the topic can be covered more thoroughly in future reports.

  • North American bachelor’s, master’s, and PhD-level computer science students are becoming more ethnically diverse.
  • New AI PhDs are still overwhelmingly male.
  • Women make up an increasingly greater share of CS, CE, and information faculty hires.
  • American K–12 computer science education has become more diverse, in terms of both gender and ethnicity.

Chapter 8: Public Opinion

AI has the potential to have a transformative impact on society. As such, it has become increasingly important to monitor public attitudes toward AI. Better understanding trends in public opinion is essential in informing decisions pertaining to AI’s development, regulation, and use. This chapter examines public opinion through global, national, demographic, and ethnic lenses. Moreover, we explore the opinions of AI researchers, and conclude with a look at the social media discussion that surrounded AI in 2022. We draw on data from two global surveys, one organized by IPSOS, and another by Lloyd’s Register Foundation and Gallup, along with a U.S.-specific survey conducted by Pew Research. It is worth noting that there is a paucity of longitudinal survey data related to AI asking the same questions of the same groups of people over extended periods of time. As AI becomes more and more ubiquitous, broader efforts at understanding AI public opinion will become increasingly important.

  • Chinese citizens are among those who feel the most positively about AI products and services. Americans … not so much.
  • Men tend to feel more positively about AI products and services than women. Men are also more likely than women to believe that AI will mostly help rather than harm.
  • People across the world and especially America remain unconvinced by self-driving cars.
  • Different causes for excitement and concern.
  • NLP researchers … have some strong opinions as well.

Past Reports


AI Papers by Country

  • By Olive Marshal
  • Published November 23, 2019
  • Updated December 14, 2023
  • 13 mins read

Artificial Intelligence (AI) research has been rapidly growing over recent years, with numerous countries contributing to this development. In this article, we will examine the number of AI papers published by different countries and gain insights into their respective contributions to this field.

Key Takeaways

  • AI research is a global phenomenon, with countries around the world actively participating.
  • China and the United States lead in terms of AI paper production, followed by other major research hubs.
  • Collaboration between countries is an important aspect of AI research, leading to increased knowledge sharing.

**China** has emerged as a frontrunner in AI research, producing a vast number of papers in recent years. With its strong governmental support and investment in technology, the country aims to become the global leader in AI by 2030. *The rapid growth of AI research in China is a testament to the nation’s commitment to technological advancement.*

The **United States** has long been a leader in AI research and development. It boasts a number of prestigious institutions, such as Stanford University and MIT, which contribute significantly to the field. *The United States continues to be at the forefront of groundbreaking AI research, pushing the boundaries of what is possible.*

Overview of AI Research Publications by Country

Other countries, such as **India** and **Canada**, have also made significant contributions to AI research. India, with its growing tech industry and skilled workforce, is steadily increasing its presence in the field. *India’s research and development efforts in AI are expected to further accelerate in the coming years.* Meanwhile, Canada is known for its expertise in deep learning and hosts leading research institutions like the Vector Institute. *Canada’s AI research has gained international recognition, attracting top talent from around the globe*.

Collaboration and Knowledge Sharing

International collaboration plays a crucial role in driving AI research forward. Researchers from different countries often collaborate on projects, combining their expertise and resources to tackle complex problems. Such collaboration leads to knowledge sharing and accelerates advancements in the field. Additionally, research conferences and journals provide platforms for researchers to present their findings and exchange ideas. *These platforms foster collaboration and facilitate the dissemination of knowledge in the AI community.*

Global AI Research Collaboration

  • In 2019, the most common international collaboration in AI research was between the United States and China.
  • Other prominent collaborations include China with the United Kingdom, Germany, and Australia.
  • International collaboration strengthens the ecosystem of AI research, fostering innovation and global progress in the field.

AI research is a global effort, with countries around the world actively contributing to the field. China and the United States lead in terms of the number of AI papers published, but collaborations between different countries are also instrumental in advancing AI knowledge and driving innovation. As AI continues to evolve, so too will the contributions of different nations, shaping the future of this transformative field.

Common Misconceptions

1. AI Papers by Country

There are several common misconceptions surrounding the topic of AI papers by country. One misconception is that only developed countries produce high-quality AI research. However, this is not true as countries with emerging economies, such as India and China, have also made significant contributions to the field. Another misconception is that the number of AI papers published by a country reflects its overall expertise in AI. While a high number of papers can indicate a strong research community, it does not necessarily reflect the quality or impact of the research. Furthermore, some may assume that AI research is primarily dominated by academic institutions, but there is also a growing presence of research contributions from industry players and collaborations between academia and industry.

  • Emerging economies contribute to AI research
  • Number of papers doesn’t guarantee expertise
  • Industry players also contribute to AI research

2. AI Research Competition

Another misconception is that AI research is a competition among countries to establish dominance. While there might be some level of competitiveness, the AI research community operates on a collaborative basis. Researchers from different countries often collaborate and share knowledge to advance the field collectively. The global nature of AI research encourages the exchange of ideas and fosters innovation. Additionally, AI research benefits from diversity and different perspectives, making international collaboration crucial for its progress.

  • AI research community is collaborative
  • Countries often share knowledge and collaborate
  • International collaboration fosters innovation

3. Research Output and AI Leadership

There is a misconception that the number of AI research papers produced by a country is directly proportional to its AI leadership. While research output can be an indicator of a country’s research activity, AI leadership involves various factors such as investment in AI technology, talent pool, infrastructure, and government policies. Some countries may prioritize the application of AI over research, leading to a higher implementation rate without necessarily publishing a large number of research papers. It is important to have a holistic view of a country’s AI ecosystem rather than solely relying on research output.

  • Research output doesn’t determine AI leadership
  • AI leadership involves multiple factors
  • Prioritizing AI application may not reflect in research output

4. Bias and AI Research

There is a misconception that AI research is entirely unbiased and objective. However, AI systems are developed by humans and can inherit biases from the data they are trained on or the algorithms used. Researchers are increasingly aware of this issue and are actively working on mitigating biases in AI systems. AI research explores techniques for fair and ethical AI, but it is an ongoing challenge. Understanding and addressing bias in AI research is vital to ensure AI systems are inclusive and do not perpetuate discrimination or inequality.

  • AI research can contain biases
  • Researchers are working on mitigating biases in AI systems
  • Fair and ethical AI is an ongoing challenge

5. AI’s Impact on Employment

One common misconception is the belief that AI will replace human workers entirely, leading to widespread unemployment. While AI technologies may automate certain tasks and job roles, it is unlikely to replace all human workers. AI has the potential to augment human capabilities and enable humans to focus on higher-value tasks that require creativity, empathy, and critical thinking. Moreover, as AI technologies advance, new job opportunities will emerge in sectors related to AI development, implementation, and maintenance. Preparing the workforce for these changes is important to ensure a smooth societal transition.

  • AI augments human capabilities
  • New job opportunities will arise in AI-related sectors
  • Preparing the workforce is essential for a smooth transition

AI Research Papers by Country: A Global Perspective

In recent years, the field of artificial intelligence (AI) has witnessed significant growth, with research papers emerging from various countries around the world. This article presents a global perspective on AI research, highlighting the number of papers published by different countries. The tables below showcase the contributions of each country, shedding light on their role in shaping the advancements of AI technologies.

United States: Pioneers in AI Research

As a leader in technological advancements, the United States has played a pivotal role in AI research. The table below presents the top five states within the US that have produced a substantial number of AI research papers.

China: Rapid Growth in AI Research

China has rapidly emerged as a major contributor to the field of AI, with significant investments in research and development. The following table highlights the top five provinces within China that have made remarkable contributions to AI research.

United Kingdom: Solid Contributions to AI Research

The United Kingdom has a rich history of scientific research and has made notable contributions to AI. The table below showcases the top five universities in the UK that have contributed significantly to AI research.

Canada: Advancing AI Technologies

Canada has emerged as a leading country in AI research, fostering a collaborative and innovative environment. The following table highlights the top five cities within Canada that have made significant contributions to AI research.

Germany: Influential Contributions to AI Research

Germany has been instrumental in shaping AI research through its commitment to scientific excellence. The following table highlights the top five research institutions in Germany that have produced a significant number of AI research papers.

Australia: Thriving AI Research Scene

Australia has witnessed a thriving AI research scene, with universities and research institutions making significant contributions. The table below showcases the top five universities in Australia that have excelled in AI research.

India: Emerging Hub for AI Research

India has rapidly emerged as an important hub for AI research, with a growing number of research papers being published. The following table highlights the top five cities in India that have made significant contributions to AI research.

Japan: Contributions to Cutting-Edge AI Research

Japan has a rich history of technological innovation and has played a vital role in AI research. The table below showcases the top five research institutions in Japan that have contributed significantly to AI advancements.

This article sheds light on the global landscape of AI research, showcasing the contributions made by various countries and institutions. The United States, China, the United Kingdom, Canada, Germany, Australia, India, and Japan have emerged as key players in shaping the field of AI. Through substantial investments, collaborations, and scientific contributions, these countries have driven innovations and breakthroughs in AI technologies. As research in AI continues to expand, it is crucial for countries and institutions worldwide to foster cross-border collaborations and interdisciplinary approaches in order to further advance the potential of AI and its positive impact on society.

Frequently Asked Questions

What are the top AI research papers from the United States?

Some notable AI research papers from the United States include ‘A Few Useful Things to Know About Machine Learning’ by Pedro Domingos, ‘DeepFace: Closing the Gap to Human-Level Performance in Face Verification’ by Yaniv Taigman et al., ‘Playing Atari with Deep Reinforcement Learning’ by Volodymyr Mnih et al., and ‘Generative Adversarial Networks’ by Ian J. Goodfellow et al.

Artificial intelligence national strategy in a developing country

  • Open access
  • Published: 01 October 2023

  • Mona Nabil Demaidi (ORCID: orcid.org/0000-0001-8161-4992)

Artificial intelligence (AI) national strategies provide countries with a framework for the development and implementation of AI technologies. Sixty countries worldwide have published AI national strategies, and more than 70% of them are developed countries. The approach of AI national strategies differs between developed and developing countries in several aspects, including scientific research, education, talent development, and ethics. This paper examined AI readiness assessment in a developing country (Palestine) to help develop and identify the main pillars of the AI national strategy. The AI readiness assessment was applied across the education, entrepreneurship, government, and research and development sectors in Palestine (the case of a developing country). In addition, it examined the legal framework and whether it is coping with trending technologies. The results revealed that Palestinians have low awareness of AI. Moreover, AI is barely used across several sectors, and the legal framework is not coping with trending technologies. The results helped develop and identify the following five main pillars that Palestine’s AI national strategy should focus on: AI for Government; AI for Development; AI for Capacity Building in the private, public, technical, and governmental sectors; AI and the Legal Framework; and International Activities.

1 Introduction

Artificial intelligence (AI) is a cutting-edge technology (Chatterjee et al. 2022 ). Its applications can be found in many fields including computer science, banking, agriculture, and healthcare (Pham et al. 2020 ; Zhang et al. 2020 ). AI has two domains: Weak AI and Strong AI. Weak AI is specialized for specific tasks, while Strong AI aims to create machines with human-like general intelligence. Developed countries lead in these advancements with advanced technologies and ample resources (Tizhoosh and Pantanowitz 2018 ).

Nations have recognized the transformational potential of AI (Fatima et al. 2020). Therefore, more than 60 countries have published AI national strategies in the past five years, following Canada, which was the first to publish its strategy in 2017 (Vats et al. 2022; Zhang et al. 2021). The majority of the countries that have launched AI national strategies (more than 70%) are developed countries (Holon IQ 2020).

The approach of AI national strategies differs between developed and developing countries. Developed countries have advanced economies and strong technological infrastructures, focusing on leveraging AI to maintain their competitive advantage and drive economic growth. They are at the forefront of both Weak and Strong AI. In the domain of Weak AI, they utilize specialized systems for practical applications in various industries such as healthcare, finance, and transportation, resulting in efficiency gains and economic advantages. In Strong AI endeavors, their significant research infrastructure, financial resources, and access to top talent propel innovation. These countries prioritize the development of ethical AI frameworks, invest in education and workforce development, and foster global collaborations to sustain their AI leadership. Their dedication to AI innovation places them at the vanguard of technology, shaping the future of AI-driven industries and applications.

Developing countries, on the other hand, are navigating the AI landscape with varying degrees of progress in Weak and Strong AI. In Weak AI applications, they often rely on cost-effective solutions, such as chatbots or data analytics, to address local challenges like healthcare access and agriculture optimization. However, resource limitations hinder their full adoption. The pursuit of Strong AI remains a challenge due to inadequate research infrastructure and funding constraints. Developing nations prioritize capacity building, fostering local talent, and seeking international collaborations to bridge the AI technology gap. While progress is gradual, their commitment to AI development is a crucial step toward unlocking future socio-economic benefits. The difference between developed and developing countries is expected since developing countries are consumers of technologies produced by developed countries (Monasterio Astobiza et al. 2022 ). Moreover, developing countries have low awareness of applications of AI across several fields (Kahn et al. 2018 ). This increases the gap of AI technology development between developed and developing countries (Kahn et al. 2018 ).

Several developed and developing countries in the MENA region are coping with AI and developing their AI national strategies. According to a Google report, the potential economic impact of AI on the Middle East and North Africa (MENA) region is estimated at 320 billion USD by 2030 (Economist Impact 2022). Currently, 7 of the 19 countries in the MENA region have launched AI national strategies: United Arab Emirates, Qatar, Saudi Arabia, Egypt, Oman, Tunisia, and Jordan.

Following other countries in the MENA region, in 2021 the Ministry of Telecom and Information Technology in Palestine identified the need for an AI national strategy. Palestine has good infrastructure: 92% of Palestinian households have home Internet access, and an optical fiber network was established in the country in 2022. Therefore, this paper aims to identify the pillars of an AI national strategy for Palestine, a case of a developing country in the MENA region. To achieve this, the paper assessed the AI status across the education, entrepreneurship, government, and research and development sectors in Palestine. In addition, it examined the legal framework and whether it is coping with trending technologies.

The paper is structured as follows. Section 2 provides a brief review of AI national strategies in developed and developing countries. The AI readiness assessment in Palestine, as a case of a developing country, is explained in Sect. 3. Section 4 presents the results obtained, which are essential to identify the main pillars of the Palestinian AI national strategy. Finally, conclusions and future work are presented in Sect. 5.

2 Literature review

In recent years, many countries have developed national strategies for AI, which provide a framework for the development and implementation of the technology. These strategies have focused on different pillars, depending on the specific country and its needs (Jorge et al. 2022 ; Economist Impact 2022 ; Kazim et al. 2021 ; Escobar and Sciortino 2022 ).

The use of AI benefits both developed and developing countries (Makridakis 2017 ). However, the ways in which these countries approach AI can be quite different. In general, developed countries have the resources and infrastructure necessary to support the development and implementation of advanced AI technologies (Mhlanga 2021 ). As a result, AI national strategies in these countries focus on using technology to improve efficiencies and productivity in various industries, such as healthcare, finance, and transportation (Ahmed et al. 2022 ; Wahl et al. 2018 ; Kshetri 2021 ; Abduljabbar et al. 2019 ). In contrast, developing countries have more limited resources and infrastructure, so their AI national strategies tend to focus on using technology to address specific needs in their communities. For example, a developing country prioritizes using AI to improve access to education or healthcare or to promote economic growth (Ahmed et al. 2022 ; Guo and Li 2018 ). Additionally, developing countries are focused on using AI to help bridge the gap between themselves and developed countries, in terms of technological advancement and economic growth (Goralski and Tan 2020 ). Overall, the AI national strategies of developed and developing countries tend to differ in terms of their focus and priorities.

This section provides a comprehensive review of existing AI national strategies, and how they differentiate in developed and developing countries.

2.1 AI national strategies in developed countries

Many developed countries have recognized the potential of AI to drive economic growth, improve public services, and advance scientific research. As a result, they have developed national strategies to support the development and deployment of AI technologies in a way that is responsible, ethical, and beneficial to society (Zhang et al. 2021 ).

The United States released a national AI strategy called the “American AI Initiative” in 2019, which focused on promoting public–private partnerships, investing in AI research and development, and increasing access to data and computing resources for AI researchers (Johnson 2019). The initiative is based on the following five key pillars:

Investing in research and development: The United States is investing in AI-focused research institutions and incubators, and is providing support for businesses that are developing AI-related products and services.

Fostering public–private partnerships: The United States is promoting collaboration between government agencies, academia, and the private sector to advance AI research and development.

Promoting the responsible and ethical use of AI: The United States is implementing policies and initiatives to promote the responsible and ethical use of AI by engaging with stakeholders and addressing potential negative impacts of AI.

Supporting the growth of the AI industry: The United States is providing support for businesses that are developing AI-related products and services, and is implementing policies to support the growth of the AI industry.

Building the technological infrastructure and capabilities needed to enable the use of AI: The United States is investing in the development of the technological infrastructure and capabilities needed to enable the use of AI, by implementing policies to support the growth of the AI industry.

Canada has implemented the Pan-Canadian Artificial Intelligence Strategy, which is focused on supporting the growth of the AI industry, and on using AI to address challenges in areas such as healthcare and transportation (Escobar and Sciortino 2022 ). The strategy is based on the following four key pillars:

Investing in research and development: Canada is investing in AI-focused research institutions and incubators, and is providing support for businesses that are developing AI-related products and services.

Supporting the growth of the AI industry: Canada is providing support for businesses that are developing AI-related products and services, and is implementing policies to support the growth of the AI industry.

Using AI to address challenges: Canada is using AI to address challenges in areas such as healthcare and transportation, by implementing AI-powered solutions and initiatives.

Building the technological infrastructure and capabilities needed to enable the use of AI: Canada is investing in the development of the technological infrastructure and capabilities needed to enable the use of AI, by implementing policies to support the growth of the AI industry.

The United Kingdom also launched its strategy, the “AI Sector Deal,” in 2018. This strategy includes a number of initiatives to support the growth of the country’s AI industry, including investments in AI research and development, the establishment of an AI skills institute, and the creation of an AI advisory council to help develop ethical guidelines for the use of AI (Bourne 2019). The strategy is based on key pillars similar to those of the United States.

In Europe, the European Union has also been working on a comprehensive AI strategy “EU AI Strategy”, which includes initiatives to support the development and deployment of AI technologies, as well as measures to ensure the responsible and ethical use of AI (European Commission 2020 ; Cohen et al. 2020 ). The EU AI Strategy is based on three key pillars:

Investing in research and development: The European Union is investing in AI-focused research institutions and incubators, and is providing support for businesses that are developing AI-related products and services.

Supporting the growth of the AI industry: The European Union is providing support for businesses that are developing AI-related products and services, and is implementing policies to support the growth of the AI industry.

Addressing ethical and societal concerns related to AI: The European Union is implementing policies and initiatives to address ethical and societal concerns related to AI, by engaging with stakeholders and promoting the responsible and ethical use of AI.

Other developed countries, such as Japan and South Korea, are also taking steps to develop national AI strategies. Japan has developed the Society 5.0 initiative, which aims to use AI and other emerging technologies to drive economic growth and social development (Fukuyama 2018; Shiroishi et al. 2018). The Society 5.0 initiative is based on four key pillars similar to Canada’s.

South Korea has adopted the AI National Development Plan, which is focused on investing in AI research and development, supporting the growth of the AI industry, and promoting the use of AI in various sectors (Chung 2020 ). The AI National Development Plan is based on three key pillars:

Investing in research and development: South Korea is investing in AI-focused research institutions and incubators, and is providing support for businesses that are developing AI-related products and services.

Supporting the growth of the AI industry: South Korea is providing support for businesses that are developing AI-related products and services, and is implementing policies to support the growth of the AI industry.

Promoting the use of AI in various sectors: South Korea is promoting the use of AI in various sectors, by implementing AI-powered solutions and initiatives in areas such as healthcare and transportation.

AI is also increasingly adopted by a number of developing countries in MENA region, including the United Arab Emirates (UAE), Saudi Arabia, and Qatar (Radu 2021 ; Malkawi 2022 ; Ghazwani et al. 2022 ; Alelyani et al. 2021 ). These countries have made significant investments in the development and use of AI technologies, and have implemented a number of initiatives and policies to support the growth of the AI industry (Sharfi 2021 ). For example, the UAE has established partnerships with leading tech companies to develop AI-powered healthcare solutions, and has launched initiatives to support the use of AI in education (Dumas et al. 2022 ; Bhattacharya and Nakhare 2019 ). Saudi Arabia has also invested heavily in research and development in AI, and has implemented policies to support the growth of the AI industry (Bugami 2022 ).

2.2 AI national strategies in developing countries

Many developing countries are still in the early stages of developing and implementing AI national strategies, as the technology is relatively new and can be expensive to implement (Radu 2021 ; Sharma et al. 2022 ). In addition, developing countries face challenges such as limited access to technology and funding, as well as a shortage of skilled workers with expertise in AI (De-Arteaga et al. 2018 ; Sharma et al. 2022 ). As a result, it is likely that the adoption of AI in developing countries will be slower compared to more developed countries.

Regardless of the limited resources and slow adoption of AI, several developing countries have launched AI national strategies following developed countries for several reasons. First and foremost, AI has the potential to benefit developing countries, by providing innovative solutions to challenges and needs in these countries (Strusani and Houngbonon 2019 ). For example, AI-powered healthcare systems can help to improve the availability of medical services in underserved communities (Ilhan et al. 2021 ).

Additionally, developing countries aim to participate in the global AI ecosystem. As AI becomes more prevalent, there is an increasing demand for skilled AI professionals, and developing countries can play a significant role in meeting this demand (Su et al. 2021 ; Squicciarini and Nachtigall 2021 ; Millington 2017 ). By investing in AI education and training, developing countries can help to develop a skilled workforce that is capable of participating in the global AI industry (Millington 2017 ; Sharma et al. 2022 ).

India, Brazil, Mexico, and South Africa developed their AI national strategies which are focused on using AI to address challenges in areas such as healthcare, agriculture, and education, and on building the technological infrastructure and capabilities needed to enable the use of AI (Chatterjee 2020 ; Malerbi and Melo 2022 ; Criado et al. 2021 ; Arakpogun et al. 2021 ). China has implemented the “Next Generation Artificial Intelligence Development” Plan, which is focused on investing in AI research and development, supporting the growth of the AI industry, and promoting the use of AI in various sectors.

Developing countries in the MENA region also launched their AI national strategies or recognized the importance of AI and are currently in the process. Three out of thirteen developing countries in the MENA region (Egypt, Tunisia, and Jordan) launched their AI national strategies (Ministry of Communications and Innovation Technology (Egypt) 2021 ). Their strategies focused on the following pillars:

  • Building human capacities, expertise, and spreading awareness on AI (develop the capabilities of senior government and private sector leaders in the field of AI).
  • Importance of participating in AI international and regional conferences and seminars.
  • Promoting the use and adoption of artificial intelligence and its applications in the public sector and building the necessary partnerships with the private sector.
  • Integrating AI in entrepreneurship and business.
  • Upskilling employees working in the technology field.
  • Conducting training for government agencies.
  • Develop policies related to ethical guidelines, legislative reforms, and standardization.
  • Develop AI educational courses that could be taught at schools and universities.

As mentioned earlier, developing countries recognize the potential benefits of AI and are taking steps to incorporate it into their economies and societies. This is similar to the current situation in Palestine, as in 2021 the Ministry of Telecom and Information Technology identified the need for an AI national strategy. To develop the strategy, an AI readiness assessment is needed to examine the status of AI in the educational sector (schools, universities), the entrepreneurship sector, research and development, the governmental sector, and privacy and data protection. In Palestine, no such data were available. Therefore, Sect. 3 illustrates the research methodology, which explains in detail the experimental setup needed to identify the main pillars of the Palestinian AI national strategy.

3 Methodology

This paper aims to present AI readiness assessment in Palestine to help develop and identify the main pillars in the AI national strategy. This section describes the experiment questions, the experimental setup, and the participants.

3.1 Research questions

This experiment aims to answer the following main questions:

  • Do Palestinians have awareness of artificial intelligence?
  • What is the status of AI across the education, entrepreneurship, government, and research and development sectors in Palestine?
  • What are the main pillars of the Palestinian AI national strategy?

3.2 Experimental setup

To address the research questions mentioned above, the AI readiness assessment was examined across the educational sector (schools, universities), entrepreneurship sector, research and development sector, governmental sector, and privacy and data protection in Palestine. No data are available in Palestine related to this topic. Therefore, the following data collection methodologies were applied:

  • One-to-one interviews with experts from the private, public, government, and educational sectors inside and outside Palestine were conducted between 1/9/2021 and 30/8/2022. The experts were presented with a set of interview questions that focused on the current status of AI in their domain and the opportunities and challenges of applying AI in Palestine.
  • Exploratory research to analyze the higher education BSc and MSc programs, and identify AI courses across universities in Palestine. The data were retrieved from the Ministry of Higher Education in Palestine.
  • Exploratory research to analyze tech-based educational courses at schools in Palestine. The material taught to school students between fifth grade and twelfth grade was analyzed to assess their coverage of AI-related topics.
  • Focus groups to assess school students’ and teachers’ awareness of artificial intelligence and identify the existing gaps.
  • Focus group with MSc students enrolled in AI-related topics.
  • Questionnaire to assess the Palestinian community’s awareness of AI and identify the existing gaps. The questionnaire consisted of 25 questions to assess participants’ knowledge of AI, AI applications, and gaps in applying AI in Palestine. The questionnaire focused on awareness of Weak AI.

3.3 Participants

Three different groups of participants were involved in the study and informed consent was obtained. The first group included 45 key experts (45+ interview hours) from the private, public, government, and educational sectors inside and outside Palestine. Experts included ministers, chief executive officers from private companies, banks, non-governmental organizations (NGO), incubators, and accelerators in Palestine.

The second group included the following three focus groups:

  • Ten MSc students enrolled in AI-related programs.
  • Eight school teachers teaching the technology course.
  • Forty school students (42.8% females and 57.2% males).

The third group consisted of a sample of 240 participants (44% male and 55.2% female), representing the Palestinian community, as it included representatives from the educational, governmental, and private sectors.

4 Results and discussion

This section illustrates the research questions and presents the results obtained.

4.1 Awareness of Palestinians about artificial intelligence

Figure 1: What is your assessment of the level of awareness of the following aspects of AI in Palestine?

To assess the level of AI awareness among Palestinians, interviews were conducted with 45 experts and 3 focus groups, and a questionnaire was distributed to a sample of 240 people. The results revealed that Palestinians have low awareness of the concept and applications of AI in public, private, educational, leadership, innovation, and research and development sectors. The results of the questionnaire also confirmed that the following topics were not discussed in the field of AI in Palestine (Fig. 1 ):

  • Opportunities and risks of AI in the government digital transformation.
  • AI opportunities and risks in addressing climate change, water management, and natural disaster risk reduction.
  • The opportunities and risks of AI in teaching and learning.
  • Opportunities and risks of AI in creating jobs and contributing to economic growth.
  • The opportunities and risks of AI on creativity, language, media, and journalism.
  • Implications for human rights, such as privacy, discrimination, and equality.

4.2 AI in education

This section aims to examine the integration of AI into the Palestinian educational curriculum at schools and universities.

4.2.1 AI in Palestinian schools

Palestinian schools introduced a technology course that is taught to students from the fifth grade to the twelfth grade. The topics related to AI in each grade are summarized in Table 1 .

To assess the knowledge of school students about AI concept and teachers’ perspective on the importance of adding educational materials focusing on AI topics to the Palestinian curriculum, the following two focus groups were carried out:

  • A focus group with 40 school students (57.2% of participants were male and 42.8% were female) enrolled in grades 5 up to 12 (Fig. 2).
  • Eight teachers teaching the technology course at schools.

Figure 2: Percentage of school students participating in focus groups

The results revealed that 42% of students stated that they know the definition of AI (Fig. 3). This is expected, since the definition is introduced in the educational curriculum. However, there is a gap in students’ understanding of the practical applications of AI. More than 50% of school students did not recognize the practical applications of AI. Figure 4 shows that only 15% of school students knew that AI is used in social media applications such as TikTok (De Leyn et al. 2021). This indicates that students have low awareness of the applications of AI.

On the other hand, 90% of the students participating in the study expressed interest in learning more about AI and its applications. Teachers had a similar opinion, as they strongly agreed that adding AI-related topics to the Palestinian curriculum is necessary, since minimal information is provided in the current curriculum.

Fig. 3: School students' knowledge of the definition of AI

Fig. 4: School students' knowledge of AI applications

4.2.2 AI in Palestinian universities

The AI Index 2021 annual report released by Stanford University revealed that there are a total of 1,032 AI programs across the 27 European Union countries (Zhang et al. 2021). The vast majority of academic programs specialized in AI in the European Union are taught at the master's level. The programs aim to provide students with strong competencies for the workforce. Germany provides the highest number of programs specialized in AI, followed by the Netherlands, France, and Sweden.

Palestine has 55 universities and colleges (Palestinian Ministry of Higher Education and Scientific Research 2022), and only 9% of them offer academic programs specialized in AI. Palestine offers six programs specialized in AI, a number comparable to that of some individual European Union countries. These programs constitute only 2.6% of the 224 technological academic programs offered at universities and colleges. The vast majority of these programs (83.3%) are master's programs, and there is still no Ph.D. program specialized in AI.

The results also revealed that the number of graduates from Palestinian colleges and universities specializing in AI between 2016 and 2021 is very low. Table 2 shows that only 28 out of 13,939 students specialized in AI. Moreover, 60.7% of these students were male and 39.3% were female. This indicates low female participation in the field of AI, in contrast to their near-equal participation in various technological disciplines (Fig. 5).

Fig. 5: Percentage of male and female graduates in technological disciplines

In 2022, the number of graduates specializing in AI in Palestine increased by a factor of nearly 2.7 (from 28 to 76 students). However, the number of students enrolled in Palestinian universities specializing in AI constitutes only 0.1% of the 104,499 students enrolled in Palestinian universities and colleges from 2016 to 2021, which is a very small percentage. This contributes to the mismatch between AI skills and industry needs, which is currently a pain point in many countries that have published AI national strategies (Vats et al. 2022).
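As a quick sanity check on the reported growth figure, the factor can be recomputed from the counts given in the text (the snippet is purely illustrative):

```python
# Illustrative check of the reported growth in AI graduates (28 -> 76).
graduates_2016_2021 = 28
graduates_by_2022 = 76

growth_factor = graduates_by_2022 / graduates_2016_2021
print(f"Growth factor: {growth_factor:.2f}")  # ~2.71, i.e. an increase of nearly 2.7 times
```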

The results revealed that Palestine is at a very early stage in terms of the availability of educational resources and trainers. This was confirmed by the interviews with 45 experts and the results of a survey distributed to 240 participants to assess the Palestinian community's awareness of AI and to identify gaps. The results showed that 53.3% of the sample confirmed that AI educational resources are not available, and 46% confirmed a lack of expertise in the field of AI (Table 3).

Further analysis was carried out with a focus group of ten master's students enrolled in AI programs at Palestinian universities. The group confirmed the low awareness of the importance of AI in the educational and technological sectors in Palestine, which is due to the lack of applied AI courses in Palestinian universities. The group also agreed that the labor market in Palestine has become more interested in the field of AI.

4.3 AI in entrepreneurial ecosystem

Palestine has 102 technology-based startups and 94 registered organizations that were active during 2021 and ran at least one program or project focused on empowering startups (Polaris 2021). The vast majority of startups are e-commerce companies, followed by the education and health sectors (see Fig. 6).

Fig. 6: Startups per sector in Palestine (Polaris 2021)

Further analyses were carried out to examine the usage of AI technology in existing startups. The results revealed that only a small percentage of startups (0.09%) use AI. This was confirmed in interviews with experts leading technology incubators and accelerators, who emphasized that the number of AI startups is small and that there is not enough expertise to evaluate or supervise startups during the incubation and acceleration process in Palestine.

4.4 AI in research and development

According to EduRank, there are 1.51 million academic publications in the field of AI, produced by 2,797 universities worldwide (EduRank 2022). Table 4 shows the top universities in the world ranked by their research performance in AI. In the MENA region, the number of publications per university is much lower, and the region accounts for merely 5.5% of peer-reviewed AI publications (Economist Impact 2022). Table 5 shows the universities in the MENA region with the highest number of publications.

In Palestine, the total number of AI publications across all universities is lower than that of the American University of Beirut alone (see Table 6).

The results revealed that Palestine has minimal research output in the field of AI. This was also confirmed in the interviews with experts and the results of the questionnaire distributed to 240 people. Figure 7 shows that 56.5% of the sample confirmed that there is a limited number of research centers and a lack of human resources and expertise in the field of AI. The interviews with experts also emphasized that there are no links between national AI researchers and global AI expertise.

Fig. 7: Status of research and development in Palestine

4.5 AI in governmental sector

The results of the interviews and the questionnaire distributed to 240 participants showed that 31% of the sample believed that the level of governmental participation in topics related to AI is at an early development stage (see Fig. 8).

Fig. 8: Level of government participation in topics related to AI

4.6 AI and privacy and protection

According to the United Nations Conference on Trade and Development (UNCTAD), 128 out of 194 countries have put in place legislation to secure the protection of data and privacy (UNCTAD 2021). Table 7 shows the status of data protection and privacy laws in the MENA region. The status of privacy and data protection laws in Palestine is also at a very early stage. An exploratory study was carried out by "7amleh" to examine the status of privacy and digital data protection in Palestine (7amleh 2021). The results revealed that there are no laws or legislation in Palestine that keep pace with emerging technologies, which leads to privacy and data protection violations.

This was also confirmed by the research results: the questionnaire, focus groups, and interviews with experts all indicated a gap in the development of a legal framework that keeps pace with AI. Of the 240 participants, 83.3% confirmed that legal frameworks have not yet been developed to keep pace with AI in Palestine.

4.7 AI national strategy overview

This section translates the aforementioned findings into a strategic framework that seeks to address weaknesses and minimize threats while building on strengths and opportunities. The government sector in Palestine is currently undergoing a significant digital transformation, which inevitably needs to happen concurrently with the implementation of the AI strategy. Additionally, to demonstrate the value of AI across various domains, it is critical to focus on areas where the greatest gains can be made in the short term, given that the country has relatively few resources. Therefore, the following sections present the AI national strategy vision and mission statements, which spell out precisely what Palestine hopes to accomplish by implementing AI and where tradeoffs will be made. They also illustrate the objectives and the main pillars required to achieve them.

4.7.1 Vision

The AI national strategy vision is “A globally distinguished position in Artificial Intelligence, with sustainable productivity, economic gains, and creation of new areas of growth.”

4.7.2 Mission

The AI national strategy mission is to “Establish an Artificial Intelligence industry in Palestine that includes the development of skills, technology, and infrastructure to ensure its sustainability and competitiveness.”

4.7.3 Goals

To achieve the aforementioned vision and mission, Palestine will work on the following goals:

Support lifelong learning and reskilling programs to contribute to workforce development and sustainability.

Facilitate multi-stakeholder dialogue on the deployment of responsible AI for the benefit of society and encourage relevant policy discussions.

Encourage investment in AI research and entrepreneurship through partnerships between the public and private sectors, initiatives, universities, and research centers.

Make Palestine a regional center and a talent pool in the field of AI by meeting the needs of local and regional markets and attracting international experts and researchers specialized in AI.

Integrate AI technologies into government services to make them more efficient and transparent.

Use AI in the main development sectors to achieve an economic impact and find solutions to local problems in line with the goals of sustainable development.

Create a thriving environment for AI by encouraging and supporting companies, startups, and scientific research.

Promote a human-centered approach in which people’s well-being is a priority and facilitate multi-stakeholder dialogue on the deployment of responsible AI for the benefit of society.

Use AI as an opportunity to include marginalized people in initiatives that promote human advancement and self-development.

Facilitate cooperation at the local, regional, and international levels in the field of AI.

Contribute to global efforts and international forums on AI ethics, the future of work, responsible AI, and the social and economic impact of AI.

Support the research bridges between Palestinian and international universities in the field of AI.

In addition to the aforementioned goals, the AI national strategy will help achieve the following numeric targets in the upcoming 5 years: 300 graduates specialized in AI; 100 specialists working in the field of AI in Palestine; systematic integration of AI into 4 educational sectors; 30% of technology startups in Palestine using AI technology; 10% of private companies in Palestine adopting AI-based solutions; 200 published research papers in the field of AI in Palestine; 20 people specializing in privacy and digital data protection; and 50 datasets uploaded to Palestine's open data portal.
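For monitoring purposes, these targets could also be expressed in a simple machine-readable form. The sketch below is illustrative only: the data structure, field names, and the progress helper are assumptions made for illustration and are not part of the strategy document.

```python
# Illustrative only: the strategy's five-year numeric targets expressed as data,
# with a small helper to report progress. Names and current values are assumptions.
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    goal: float
    current: float = 0.0  # placeholder; real monitoring would update this value

    def progress(self) -> float:
        """Share of the target achieved so far, as a percentage."""
        return 100.0 * self.current / self.goal

targets = [
    Target("Graduates specialized in AI", 300),
    Target("AI specialists in Palestine", 100),
    Target("Educational sectors with AI integrated", 4),
    Target("Technology startups using AI (%)", 30),
    Target("Private companies adopting AI solutions (%)", 10),
    Target("Published AI research papers", 200),
    Target("Privacy and data-protection specialists", 20),
    Target("Datasets on the open data portal", 50),
]

for t in targets:
    print(f"{t.name}: {t.current}/{t.goal} ({t.progress():.0f}%)")
```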

4.7.4 AI national strategy pillars

To achieve the goals above, the strategy has been divided into the following five main pillars:

AI for Government: the rapid adoption of AI technology via the automation of governmental procedures and the integration of AI into the decision-making process to improve productivity and transparency.

AI for Development: apply AI to several industries using a staged strategy to realize efficiencies, achieve more economic growth, and improve competitiveness. This could be achieved through domestic and international partnerships.

AI for Capacity Building: spread awareness and provide personalized training to private, public, and governmental sectors.

AI and Legal Framework: develop a legal framework that enables the use of AI across several sectors.

International Activities: play a key role in fostering cooperation on the regional and international levels by championing relevant initiatives, and actively participating in AI-related discussions and international projects.

These five pillars form a comprehensive approach to the AI national strategy, covering the government's role, industry-specific implementation, workforce development, legal considerations, and international collaboration. By addressing these dimensions, Palestine can establish a solid foundation for responsible, inclusive, and sustainable AI deployment (Chatterjee 2020; Nankervis et al. 2021; Barton et al. 2017).

4.8 AI national strategy governance

AI national strategy governance is essential to ensure the implementation of the AI national strategy. It guides the responsible and effective adoption, development, and use of AI in Palestine. Therefore, on 6/9/2021, the Council of Ministers in Palestine approved the decision to form an AI national team headed by the Ministry of Telecommunications and Information Technology, with 16 representatives from several ministries in Palestine, the private sector, and the educational sector (Telecommunication and Technology 2023). The national team is responsible for implementing and managing the AI national strategy in coordination with relevant experts and agencies. Its responsibilities can be summarized as follows:

Establishing a follow-up mechanism for the implementation of the AI national strategy which is consistent with international best practices in this field.

Setting national priorities in the field of AI applications.

Reviewing any form of cooperation at the regional and international levels, including the exchange of best practices and experiences.

Providing recommendations for national policies and plans related to technical, legal, and economic frameworks for AI applications.

Recommending programs for capacity building and for supporting the AI industry in Palestine.

Reviewing international protocols and agreements in the field of AI.

In addition to the AI national team, an advisory committee has been formed from the private and educational sectors in Palestine to support and assist the AI national team and help them achieve their responsibilities.

5 Conclusion

Sixty countries worldwide have published AI national strategies (Zhang et al. 2021). The approach to AI national strategies differs between developed and developing countries, since developing countries are largely consumers of technologies produced by developed countries (Monasterio Astobiza et al. 2022). Moreover, developing countries have low awareness of the applications of AI across several fields (Kahn et al. 2018). This increases the gap in AI technology development between developed and developing countries (Kahn et al. 2018).

This paper aims to identify AI national strategy pillars in a developing country. Therefore, the paper assessed the status of AI across the education, entrepreneurship, government, and research and development sectors in Palestine (the case of a developing country). In addition, it examined the legal framework and whether it keeps pace with emerging technologies.

Three different groups of participants were involved in the study. The first group included 45 experts (45+ interview hours) from the private, public, governmental, and educational sectors inside and outside Palestine. The second group included three focus groups consisting of MSc students enrolled in AI-related programs, school teachers, and school students. The third group consisted of a sample of 240 participants representing the Palestinian community, as it included representatives from the educational, governmental, and private sectors.

The results revealed that Palestinians have low awareness of AI. Moreover, AI is barely used across several sectors, and the legal framework does not keep pace with emerging technologies. The results helped develop and identify five main pillars Palestine should focus on in the AI national strategy: AI for Government; AI for Development; AI for Capacity Building in the private, public, technical, and governmental sectors; AI and Legal Framework; and International Activities. The pillars will help achieve the following in the upcoming 5 years: 300 graduates specialized in AI; 100 specialists in the field of AI in Palestine; systematic integration of AI into 4 educational sectors; 30% of technology startups in Palestine using AI techniques; 10% of private companies in Palestine adopting AI-based solutions; 200 published research papers in the field of AI in Palestine; 20 people specializing in privacy and digital data protection; and 50 datasets uploaded to Palestine's open data portal. The AI national strategy was approved by the Palestinian cabinet in June 2023.

In the future, further analysis will be carried out to assess Palestinians' awareness of weak and strong AI, as well as the progress and outcomes of the AI national strategy across the education, entrepreneurship, government, and research and development sectors.

Data availability

The data analyzed during the current study are available from the corresponding author on reasonable request.

7amleh (2021) The reality of privacy & digital data protection in Palestine–background and summary. https://privacy.7amleh.org/ . Accessed 29 Nov 2022

Abduljabbar R, Dia H, Liyanage S, Bagloee SA (2019) Applications of artificial intelligence in transport: an overview. Sustainability 11(1):189


Ahmed Z, Bhinder KK, Tariq A, Tahir MJ, Mehmood Q, Tabassum MS, Malik M, Aslam S, Asghar MS, Yousaf Z (2022) Knowledge, attitude, and practice of artificial intelligence among doctors and medical students in Pakistan: a cross-sectional online survey. Ann Med Surg 76:103493

Alelyani M, Alamri S, Alqahtani MS, Musa A, Almater H, Alqahtani N, Alshahrani F, Alelyani S (2021) Radiology community attitude in Saudi Arabia about the applications of artificial intelligence in radiology. In: Healthcare, vol 9. MDPI, p. 834

Arakpogun EO, Elsahn Z, Olan F, Elsahn F (2021) Artificial intelligence in Africa: challenges and opportunities. The fourth industrial revolution: implementation of artificial intelligence for growing business success, pp 375–388

Barton D, Woetzel J, Seong J, Tian Q (2017) Artificial intelligence: implications for China. McKinsey Global Institute, San Francisco

Bhattacharya P, Nakhare S (2019) Exploring AI-enabled intelligent tutoring system in the vocational studies sector in UAE. In: 2019 Sixth HCT information technology trends (ITT). IEEE, pp 230–233

Bourne C (2019) AI cheerleaders: public relations, neoliberalism and artificial intelligence. Public Relat Inquiry 8(2):109–125

Bugami MA (2022) Saudi Arabia’s march towards sustainable development through innovation and technology. In: 2022 9th International conference on computing for sustainable global development (INDIACom). IEEE, pp 01–06

Chatterjee S (2020) AI strategy of India: policy framework, adoption challenges and actions for government. Transform Gov People Process Policy 14(5):757–775


Chatterjee S, Chaudhuri R, Kamble S, Gupta S, Sivarajah U (2022) Adoption of artificial intelligence and cutting-edge technologies for production system sustainability: a moderator-mediation analysis. Inf Syst Front. https://doi.org/10.1007/s10796-022-10317-x

Chung C-S (2020) Developing digital governance: South Korea as a global digital government leader. Routledge, Milton Park


Cohen IG, Evgeniou T, Gerke S, Minssen T (2020) The European artificial intelligence strategy: implications and challenges for digital health. Lancet Digit Health 2(7):376–379

Criado JI, Sandoval-Almazan R, Valle-Cruz D, Ruvalcaba-Gómez EA (2021) Chief information officers’ perceptions about artificial intelligence: a comparative study of implications and challenges for the public sector. First Monday

De Leyn T, De Wolf R, Vanden Abeele M, De Marez L (2021) In-between child’s play and teenage pop culture: Tweens, Tiktok & privacy. J Youth Stud 25:1108–1125

De-Arteaga M, Herlands W, Neill DB, Dubrawski A (2018) Machine learning for the developing world. ACM Trans Manag Inf Syst (TMIS) 9(2):1–14

Dumas S, Pedersen C, Smith S (2022) Hybrid healthcare will lead the way in empowering women across emerging markets: reimagine and revolutionise the future of female healthcare. Impact of women’s empowerment on SDGs in the digital era. IGI Global, Hershey, pp 251–275


Economist Impact (2022) Pushing forward: the future of AI in the middle east and north Africa. https://impact.economist.com . Accessed 12 July 2022

EduRank (2022) World’s best artificial intelligence (AI) universities [Rankings]. https://edurank.org/cs/ai/ . Accessed 29 Oct 2022

Escobar S, Sciortino D (2022) Artificial Intelligence in Canada. In: Munoz J, Maurya A (eds) International Perspectives on Artificial Intelligence. Anthem Press, p 13–22

European Commission (2020) On artificial intelligence—a European approach to excellence and trust. https://ec.europa.eu . Accessed 12 July 2022

Fatima S, Desouza KC, Dawson GS (2020) National strategic artificial intelligence plans: a multi-dimensional analysis. Econ Anal Policy 67:178–194

Fukuyama M (2018) Society 5.0: aiming for a new human-centered society. Jpn Spotlight 27(5):47–50

Ghazwani S, Esch P, Cui YG, Gala P (2022) Artificial intelligence, financial anxiety and cashier-less checkouts: a Saudi Arabian perspective. Int J Bank Market 40:1200–1216

Goralski MA, Tan TK (2020) Artificial intelligence and sustainable development. Int J Manag Educ 18(1):100330

Guo J, Li B (2018) The application of medical artificial intelligence technology in rural areas of developing countries. Health Equity 2(1):174–181


Holon IQ (2020) 50 National AI strategies—the 2020 AI strategy landscape. Holon IQ. Accessed on 12 July 2022

Ilhan B, Guneri P, Wilder-Smith P (2021) The contribution of artificial intelligence to reducing the diagnostic delay in oral cancer. Oral Oncol 116:105254

Johnson J (2019) Artificial intelligence & future warfare: implications for international security. Defense Secur Anal 35(2):147–169

Jorge RR, Van Roy V, Rossetti F, Tangi L (2022) AI Watch. National strategies on Artificial Intelligence: A European perspective. 2022 edition. Luxembourg (Luxembourg): Publications Office of the European Union, (KJ-NA-31083-EN-N (online)). https://doi.org/10.2760/385851

Kahn KM, Megasari R, Piantari E, Junaeti E (2018) AI programming by children using snap! block programming in a developing country. In: Thirteenth European conference on technology enhanced learning

Kazim E, Almeida D, Kingsman N, Kerrigan C, Koshiyama A, Lomas E, Hilliard A (2021) Innovation and opportunity: review of the UK’s national AI strategy. Discov Artif Intell 1(1):1–10

Kshetri N (2021) The role of artificial intelligence in promoting financial inclusion in developing countries. Taylor & Francis, Milton Park

Makridakis S (2017) The forthcoming artificial intelligence (AI) revolution: its impact on society and firms. Futures 90:46–60

Malerbi FK, Melo GB (2022) Feasibility of screening for diabetic retinopathy using artificial intelligence, Brazil. Bull World Health Org 100(10):643–647

Malkawi AR (2022) Science education in the state of Qatar. Science education in countries along the belt & road. Springer, Berlin, pp 151–167

Mhlanga D (2021) Artificial intelligence in the industry 4.0, and its impact on poverty, innovation, infrastructure development, and the sustainable development goals: lessons from emerging economies? Sustainability 13(11):5788

Millington KA (2017) How changes in technology and automation will affect the labour market in Africa. K4D Helpdesk Report. Brighton, UK: Institute of Development Studies

Ministry of Communications and Innovation Technology (Egypt) (2021) Egypt national AI strategy. https://mcit.gov.eg . Accessed 12 Dec 2022

Monasterio Astobiza A, Ausín T, Liedo B, Toboso M, Aparicio M, López D (2022) Ethical governance of AI in the global south: a human rights approach to responsible use of AI. In: Proceedings, vol 81. MDPI, p 136

Nankervis A, Connell J, Montague A, Burgess J (2021) The fourth industrial revolution: what does it mean for Australian industry? Springer, Berlin

Palestinian Ministry of Higher Education and Scientific Research (2022) Palestinian Ministry of Higher Education and Scientific Research. https://www.mohe.pna.ps/ . Accessed 26 Oct 2022

Pham Q-V, Nguyen DC, Huynh-The T, Hwang W-J, Pathirana PN (2020) Artificial intelligence (AI) and big data for coronavirus (COVID-19) pandemic: a survey on the state-of-the-arts. IEEE Access 8:130820

Polaris (2021) Palestine Startups Ecosystem Map—Polaris. https://polaris.ps/palestine-startups-ecosystem-map/06/02/2022/ . Accessed 29 Oct 2022

Radu R (2021) Steering the governance of artificial intelligence: national strategies in perspective. Policy Soc 40(2):178–193

Sharfi M (2021) The GCC and global health diplomacy: the new drive towards artificial intelligence. Artif Intell Gulf. Springer, Berlin, pp 117–139

Sharma M, Luthra S, Joshi S, Kumar A (2022) Implementing challenges of artificial intelligence: evidence from public manufacturing sector of an emerging economy. Gov Inf Q 39(4):101624

Shiroishi Y, Uchiyama K, Suzuki N (2018) Society 5.0: for human security and well-being. Computer 51(7):91–95

Squicciarini M, Nachtigall H (2021) Demand for AI skills in jobs: Evidence from online job postings (No. 2021/03). OECD Publishing

Strusani D, Houngbonon GV (2019) The Role of Artificial Intelligence in Supporting Development in Emerging Markets (No. 32365). The World Bank Group

Su Z, Togay G, Côté A-M (2021) Artificial intelligence: a destructive and yet creative force in the skilled labour market. Hum Resour Dev Int 24(3):341–352

Palestinian Ministry of Telecommunications and Information Technology (2023) Artificial intelligence national platform. https://ai.gov.ps/ar/strategie/show/4 . Accessed 01 Sept 2023

Tizhoosh HR, Pantanowitz L (2018) Artificial intelligence and digital pathology: challenges and opportunities. J Pathol Inform 9(1):38

UNCTAD (2021) Data protection and privacy legislation worldwide|UNCTAD. https://unctad.org . Accessed 28 Nov 2022

Vats A, Natarajan N et al (2022) G20. AI: national strategies, global ambitions. Observer Research Foundation and Observer Research Foundation America, Washington

Wahl B, Cossy-Gantner A, Germann S, Schwalbe NR (2018) Artificial intelligence (AI) and global health: how can AI contribute to health in resource-poor settings? BMJ Glob Health 3(4):000798

Zhang Y, Xiong F, Xie Y, Fan X, Gu H (2020) The impact of artificial intelligence and blockchain on the accounting profession. IEEE Access 8:110461–110477

Zhang D, Mishra S, Brynjolfsson E, Etchemendy J, Ganguli D, Grosz B, Lyons T, Manyika J, Niebles JC, Sellitto M et al (2021) The AI index 2021 annual report. arXiv preprint arXiv:2103.06312


No funding was received to assist with the preparation of this manuscript.

Author information

Authors and affiliations.

Computer Engineering Department, An-Najah National University, Rafidia, Nablus, 9992200, Palestine

Mona Nabil Demaidi


Corresponding author

Correspondence to Mona Nabil Demaidi .

Ethics declarations

Conflict of interest.

The author declares no conflict of interest.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Demaidi, M.N. Artificial intelligence national strategy in a developing country. AI & Soc (2023). https://doi.org/10.1007/s00146-023-01779-x


Received : 08 July 2023

Accepted : 05 September 2023

Published : 01 October 2023

DOI : https://doi.org/10.1007/s00146-023-01779-x


Keywords: Artificial intelligence; National strategies; Developed countries; Developing countries

Is China Emerging as the Global Leader in AI?

Daitian Li, Tony W. Tong, and Yangao Xiao


It has rapidly caught up with the U.S. — but there is no guarantee it’ll pull ahead.

China is quickly closing the once formidable lead the U.S. maintained on AI research. Chinese researchers now publish more papers on AI and secure more patents than U.S. researchers do. The country seems poised to become a leader in AI-empowered businesses, such as speech and image recognition applications. But while China has caught up with impressive speed, the conditions that have allowed it to do so — the open science nature of AI and the nature of the Chinese market, for instance — will likely also prevent it from taking a meaningful lead and leaving the U.S. in the dust.

Twenty years ago, there was a huge gulf between China and the United States on AI research. While the U.S. was witnessing sustained growth in research efforts by both public institutions and private sectors, China was still conducting low-value-added activities in global manufacturing. But in the intervening years, China has surged to rapidly catch up. From a research perspective, China has become a world leader in AI publications and patents. This trend suggests that China is also poised to become a leader in AI-empowered businesses, such as speech and image recognition applications.


  • Daitian Li is an Assistant Professor at the University of Electronic Science & Technology of China. He is an affiliated researcher with the China Institute for Science & Technology Policy at Tsinghua University. He holds a Ph.D. in Business Administration & Management from Bocconi University. His research interests focus on technological catch-up, industry evolution, and technology & innovation management. His research has been published in journals including Research Policy and Tsinghua Business Review.
  • Tony W. Tong is a Professor of Strategy & Entrepreneurship and currently the Senior Associate Dean for Faculty and Research in the Leeds School of Business at the University of Colorado. He studies firm strategy, innovation management, and international business. He has published numerous top journal papers in these areas as well as multiple bestseller case studies in Harvard Business Publishing.
  • Yangao Xiao is a Professor of Management at the University of Electronic Science & Technology of China. His research interests focus on intellectual property rights and latecomer strategies. His research has been published in journals including Research Policy and CEIBS Business Review.



The Global AI Index

The first index to benchmark nations on their level of investment, innovation and implementation of artificial intelligence.


Making sense of artificial intelligence… on a global scale

The artificial intelligence revolution will transform business, government and society – and this year it’s taken a huge leap forward. The rise of ChatGPT and the ensuing arms race between big tech companies to develop their own generative AI models has led to a very public debate about how best to manage the risks of this new technology. There’s been a lot of talk, but little understanding.

The Global AI Index aims to make sense of artificial intelligence in 62 countries that have chosen to invest in it. It's the first-ever ranking of countries based on three pillars of analysis: investment, innovation and implementation. This is the fourth iteration of the index.

Ranking Table

Countries are ranked by their AI capacity at the international level. This is the fourth iteration of the Global AI Index, published on 28 June 2023.

The Global AI Index is underpinned by 111 indicators, collected from 28 different public and private data sources, and 62 governments. These are split across seven sub-pillars: Talent, Infrastructure, Operating Environment, Research, Development, Government Strategy and Commercial.

Implementation

Talent focuses on the availability of skilled practitioners in artificial intelligence solutions.

Infrastructure assesses the reliability and scale of access infrastructure, from electricity and internet to supercomputing capabilities.

Operating Environment focuses on the regulatory context and public opinion on artificial intelligence.

Innovation 

Research looks at the extent of specialist research and researchers, including numbers of publications and citations in credible academic journals.

Development focuses on the development of fundamental platforms and algorithms upon which innovative artificial intelligence projects rely.

Investment

Government Strategy gauges the depth of commitment from national governments to artificial intelligence, investigating spending commitments and national strategies.

Commercial focuses on the level of startup activity, investment and business initiatives based on artificial intelligence.
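Tortoise does not publish its aggregation code alongside this description, but the general approach of combining normalized indicators into weighted pillar scores can be sketched as follows. All indicator names, values, and weights in the snippet are invented for illustration and are not the Index's actual methodology.

```python
# Illustrative composite-index aggregation: min-max normalize indicators,
# average them within sub-pillars, then take a weighted mean across sub-pillars.
# All names, values, and weights below are made up for illustration.
import pandas as pd

indicators = pd.DataFrame(
    {
        "country": ["A", "B", "C"],
        "ai_phds_per_million": [12.0, 4.0, 7.5],     # Talent
        "supercomputer_share": [0.30, 0.05, 0.10],   # Infrastructure
        "top_cited_ai_papers": [900, 150, 400],      # Research
    }
).set_index("country")

sub_pillar_of = {
    "ai_phds_per_million": "Talent",
    "supercomputer_share": "Infrastructure",
    "top_cited_ai_papers": "Research",
}
weights = {"Talent": 0.4, "Infrastructure": 0.3, "Research": 0.3}

# Min-max normalize each indicator to the range [0, 100].
normalized = 100 * (indicators - indicators.min()) / (indicators.max() - indicators.min())

# Average indicators within each sub-pillar, then apply sub-pillar weights.
sub_scores = normalized.T.groupby(sub_pillar_of).mean().T
overall = sum(sub_scores[pillar] * w for pillar, w in weights.items())

print(overall.sort_values(ascending=False))
```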

Country Profiles

Overall rank positions for each sub-pillar of the Index

Explore the 100+ Indicators that make up the Global AI Index


Serena Cesareo and Joseph White



Infographic: AI Research and Development in the U.S., EU and China

As governments and the general public pay closer attention to artificial intelligence (AI) and options for its regulation, we examine U.S., EU and Chinese progress on AI research and development over the last two decades.

AI Research

The U.S. was a leader in AI research in the early 2000s. Its institutions published more AI-related papers than those in any other country. But in 2006, China took the lead as the source of 58,067 AI publications. The U.S. and the EU trailed with, respectively, 52,671 and 49,540. Chinese researchers have since become even more prolific, publishing 155,487 AI papers in 2022, followed by those in the EU with 101,455 and U.S. researchers’ 81,130. The Chinese accounted for nearly 40% of global AI publications in 2021.

In the 2010s, while China's technology industry was still developing, U.S. researchers argued that China's lead in the volume of papers did not translate into comparable quality of research and AI talent. The talent gap continues, as more than half of the best Chinese AI scientists work or pursue graduate degrees outside their country, but the quality of China's AI research papers has improved significantly over the years. A study from Nikkei Asia, which measured the quality of AI research by counting the papers among the top 10% most cited by other papers, shows that China overtook the U.S. in the quality of its AI research by 2019. By 2021, China accounted for 7,401 of the most-cited papers, 70% more than the number of the most-cited U.S. papers.

American technology companies continue to dominate the AI research space, with six corporate giants, including Google, Microsoft and IBM, among the top 10 producers of the most-cited research. Chinese companies are gaining traction, however. Only one Chinese company was in the top 10 in 2012; there were four in 2021. Tencent, Alibaba and Huawei have forged ahead in the number of AI papers that they produce and the citations that these publications receive.

Venture Capital Investment in AI

The U.S. continues to lead in attracting the most venture capital investment in AI and data startups, recording a sharp increase between 2020 and 2021. The bulk of this investment was in mobility and autonomous vehicles (AV), healthcare and biotechnology, and business processes and support services. Investment growth in healthcare and biotechnology is likely a result of the COVID-19 pandemic, as is the growth seen in business services since many workers and students transitioned to hybrid or remote arrangements with the help of platforms like Zoom and Microsoft Teams. Investment in these sectors, however, decreased significantly in 2022 and 2023 and has been overtaken by the $25 billion that is expected to flow into AI-powered marketing on social media platforms in 2023. Financial and insurance services is expected to be the second biggest industry for venture capital investment in 2023, at almost $15 billion.

The EU and China saw a similar trend to that in the U.S. The EU sectors that attracted the most venture capital investment between 2020 and 2022 were business processes and financial and insurance services. The top sector for 2023 is expected to be IT infrastructure, drawing almost $1.5 billion, followed by AI in social media marketing at almost $1 billion, even if both are significantly less than that seen in the U.S. In China, the AV industry has seen a stark increase in investment since 2018. It continues to be the sector attracting the most venture capital there, even if the amount was much lower in 2023 than in 2021. China’s second-biggest sector for such investment in 2021 was robots, sensors and IT hardware, which brought in $10 billion. The figure for 2023 is expected to show a decline to $2 billion.

AI Software Development

The U.S. and EU are ahead of China in AI software development, though not significantly. As the U.S. and EU gradually decrease their contributions to AI software development, the gap with China is closing. This trend is reflected in the software development contributions made to public AI projects by American, European and Chinese developers on GitHub, a platform and cloud-based service for software development and version control. GitHub is the primary platform for developers to store and manage their code and to collaborate.

The Organization for Economic Co-operation and Development (OECD) collects data on the number of GitHub's AI projects, or AI-related GitHub repositories, and the developer contributions made to these projects. Analysis of that data makes it possible to identify AI software developers, their locations, the development tools they use, and the level of impact of their AI projects. All this provides insight into the broader trends in software development and innovation. Level of impact is determined by the number of managed copies, or forks, that other developers make of a given AI project. By this measure, the U.S. and EU shares of high-impact AI projects declined from 40% and 26%, respectively, in 2011 to 20% and 16% in 2022. In the same period, the share of high-impact Chinese AI software projects grew from almost 0% to 11.6%.
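As a concrete illustration of the fork-count measure described above, the GitHub REST API exposes a forks_count field for every public repository. The snippet below is a minimal sketch; the repository list and the "high-impact" threshold are arbitrary examples, not the OECD's actual data pipeline.

```python
# Minimal sketch: fetch fork counts for a few public AI repositories via the
# GitHub REST API and flag "high-impact" ones by an arbitrary fork threshold.
# Repository list and threshold are illustrative, not the OECD's methodology.
import requests

repos = ["pytorch/pytorch", "tensorflow/tensorflow", "huggingface/transformers"]
HIGH_IMPACT_FORKS = 10_000  # arbitrary cut-off chosen for this example

for full_name in repos:
    resp = requests.get(f"https://api.github.com/repos/{full_name}", timeout=10)
    resp.raise_for_status()
    forks = resp.json()["forks_count"]
    label = "high-impact" if forks >= HIGH_IMPACT_FORKS else "lower-impact"
    print(f"{full_name}: {forks} forks ({label})")
```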

Daniela Rojas Medina

Research Analyst, Bertelsmann Foundation



Original Research Article

Artificial intelligence and employment: New cross-country evidence


  • Organisation for Economic Co-operation and Development, Paris, France

Recent years have seen impressive advances in artificial intelligence (AI) and this has stoked renewed concern about the impact of technological progress on the labor market, including on worker displacement. This paper looks at the possible links between AI and employment in a cross-country context. It adapts the AI occupational impact measure developed by Felten, Raj and Seamans—an indicator measuring the degree to which occupations rely on abilities in which AI has made the most progress—and extends it to 23 OECD countries. Overall, there appears to be no clear relationship between AI exposure and employment growth. However, in occupations where computer use is high, greater exposure to AI is linked to higher employment growth. The paper also finds suggestive evidence of a negative relationship between AI exposure and growth in average hours worked among occupations where computer use is low. One possible explanation is that partial automation by AI increases productivity directly as well as by shifting the task composition of occupations toward higher value-added tasks. This increase in labor productivity and output counteracts the direct displacement effect of automation through AI for workers with good digital skills, who may find it easier to use AI effectively and shift to non-automatable, higher-value added tasks within their occupations. The opposite could be true for workers with poor digital skills, who may not be able to interact efficiently with AI and thus reap all potential benefits of the technology 1 .

Introduction

Recent years have seen impressive advances in Artificial Intelligence (AI), particularly in the areas of image and speech recognition, natural language processing, translation, reading comprehension, computer programming, and predictive analytics.

This rapid progress has been accompanied by concern about the possible effects of AI deployment on the labor market, including on worker displacement. There are reasons to believe that its impact on employment may be different from previous waves of technological progress. Autor et al. (2003) postulated that jobs consist of routine (and thus in principle programmable) and non-routine tasks. Previous waves of technological progress were primarily associated with the automation of routine tasks. Computers, for example, are capable of performing routine cognitive tasks including record-keeping, calculation, and searching for information. Similarly, industrial robots are programmable manipulators of physical objects and therefore associated with the automation of routine manual tasks such as welding, painting or packaging ( Raj and Seamans, 2019 ) 2 . These technologies therefore mainly substitute for workers in low- and middle-skill occupations.

Tasks typically associated with high-skilled occupations, such as non-routine manual tasks (requiring dexterity) and non-routine cognitive tasks (requiring abstract reasoning, creativity, and social intelligence) were previously thought to be outside the scope of automation ( Autor et al., 2003 ; Acemoglu and Restrepo, 2020 ).

However, recent advances in AI mean that non-routine cognitive tasks can also increasingly be automated ( Lane and Saint-Martin, 2021 ). In most of its current applications, AI refers to computer software that relies on highly sophisticated algorithmic techniques to find patterns in data and make predictions about the future. Analysis of patent texts suggests AI is capable of formulating medical prognosis and suggesting treatment, detecting cancer and identifying fraud ( Webb, 2020 ). Thus, in contrast to previous waves of automation, AI might disproportionally affect high-skilled workers.

Even if AI automates non-routine, cognitive tasks, this does not necessarily mean that AI will displace workers. In general, technological progress improves labor efficiency by (partially) taking over/speeding up tasks performed by workers. This leads to an increase in output per effective labor input and a reduction in production costs. The employment effects of this process are ex-ante ambiguous: employment may fall as tasks are automated (substitution effect). On the other hand, lower production costs may increase output if there is sufficient demand for the good/service (productivity effect) 3 .
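A stylized one-factor example (not taken from this paper) makes the ambiguity explicit. Suppose output is produced with labor alone, AI raises labor-augmenting productivity A, prices are competitive, and output demand is iso-elastic with elasticity ε; then:

```latex
% Stylized illustration (not the paper's model): labor-only production with
% labor-augmenting productivity A, competitive pricing, and iso-elastic demand.
Y = A L, \qquad p = \frac{w}{A}, \qquad Y = D(p) \propto p^{-\varepsilon}
\quad\Longrightarrow\quad
L = \frac{Y}{A} \propto A^{\varepsilon - 1}
\quad\Longrightarrow\quad
\frac{d \ln L}{d \ln A} = \varepsilon - 1 .
```

In this illustration, employment rises with AI-driven productivity when demand is elastic (ε > 1, the productivity effect dominates) and falls when demand is inelastic (ε < 1, the substitution effect dominates).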

To harness this productivity effect, workers need to both learn to work effectively with the new technology and to adapt to a changing task composition that puts more emphasis on tasks that AI cannot yet perform. Such adaptation is costly and the cost will depend on worker characteristics.

The areas where AI is currently making the most progress are associated with non-routine, cognitive tasks often performed by medium- to high-skilled, white collar workers. However, these workers also rely more than other workers on abilities AI does not currently possess, such as inductive reasoning or social intelligence. Moreover, highly educated workers often find it easier to adapt to new technologies because they are more likely to already work with digital technologies and participate more in training, which puts them in a better position than lower-skilled workers to reap the potential benefits of AI. That being said, more educated workers also tend to have more task-specific human capital 4 , which might make adaption more costly for them ( Fossen and Sorgner, 2019 ).

As AI is a relatively new technology, there is little empirical evidence on its effect on the labor market to date. The literature that does exist is mostly limited to the US and finds little evidence for AI-driven worker displacement ( Lane and Saint-Martin, 2021 ). Felten et al. (2019) look at the effect of exposure to AI 5 on employment and wages in the US at the occupational level. They do not find any link between AI exposure and (aggregate) employment, but they do find a positive effect of AI exposure on wage growth, suggesting that the productivity effect of AI may outweigh the substitution effect. This effect on wage growth is concentrated in occupations that require software skills and in high-wage occupations.

Again for the US, Fossen and Sorgner (2019) look at the effect of exposure to AI 6 on job stability and wage growth at the individual level. They find that exposure to AI leads to higher employment stability and higher wages, and that this effect is stronger for higher educated and more experienced workers, again indicating that the productivity effect dominates and that it is stronger for high-skilled workers.

Finally, Acemoglu et al. (2020) look at hiring in US firms with task structures compatible with AI capabilities 7 . They find that firms' exposure to AI is linked to changes in the structure of skills that firms demand. They find no evidence of employment effects at the occupational level, but they do find that firms that are exposed to AI restrict their hiring in non-AI positions compared to other firms. They conclude that the employment effect of AI might still be too small to be detected in aggregate data (given also how recent a phenomenon AI is), but that it might emerge in the future as AI adoption spreads.

This paper adds to the literature by looking at the links between AI and employment growth in a cross-country context. It adapts the AI occupational impact measure proposed by Felten et al. (2018 , 2019 )—an indicator measuring the degree to which occupations rely on abilities in which AI has made the most progress in recent years—and extends it to 23 OECD countries by linking it to the Survey of Adult Skills, PIAAC. This indicator, which allows for variations in AI exposure across occupations, as well as within occupations and across countries, is matched to Labor Force Surveys to analyse the relationship with employment growth.
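As a rough sketch of this empirical setup, the occupation-level exposure score can be merged onto employment counts and related to subsequent employment growth. The snippet below is illustrative only: the file names, column names, and specification are assumptions, not the authors' actual data or code.

```python
# Illustrative sketch of the empirical setup: regress occupation-level employment
# growth on an AI exposure score, by country. File and column names are assumed
# for illustration and do not reproduce the authors' data or code.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical inputs: one row per (country, occupation).
exposure = pd.read_csv("ai_exposure_by_occupation.csv")    # country, occupation, ai_exposure, high_computer_use
employment = pd.read_csv("employment_by_occupation.csv")   # country, occupation, emp_2012, emp_2019

df = employment.merge(exposure, on=["country", "occupation"])
df["emp_growth"] = (df["emp_2019"] - df["emp_2012"]) / df["emp_2012"]

# Pooled regression with country fixed effects; the interaction term tests whether
# the link differs in occupations where computer use is high.
model = smf.ols(
    "emp_growth ~ ai_exposure * high_computer_use + C(country)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["country"]})
print(model.summary())
```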

The paper finds that, over the period 2012–2019, there is no clear relationship between AI exposure and employment growth across all occupations. Moreover, in occupations where computer use is high, AI appears to be positively associated with employment growth. There is also some evidence of a negative relationship between AI exposure and growth in average hours worked among occupations where computer use is low. While further research is needed to identify the exact mechanisms driving these results, one possible explanation is that partial automation by AI increases productivity directly as well as by shifting the task composition of occupations toward higher value-added tasks. This increase in labor productivity and output counteracts the direct displacement effect of automation through AI for workers with good digital skills, who may find it easier to use AI effectively and shift to non-automatable, higher-value tasks within their occupations. The opposite could be true for workers with poor digital skills, who may be unable to interact efficiently with AI and thus reap all potential benefits of the technology.

The paper starts out by presenting indicators of AI deployment that have been proposed in the literature and discussing their relative merits (Section Indicators of Occupational Exposure to AI). It then goes on to present the indicator developed in this paper and builds some intuition on the channels through which occupations are potentially affected by AI (Section Data). Section Results presents the main results.

Indicators of Occupational Exposure to AI

To analyse the links between AI and employment, it is necessary to determine where in the economy AI is currently deployed. In the absence of comprehensive data on the adoption of AI by firms, several proxies for (potential) AI deployment have been proposed in the literature. They can be grouped into two broad categories. The first group of indicators uses information on labor demand to infer AI activity across occupations, sectors and locations. In practice, these indicators use online job postings that provide information on skills requirements and they therefore will only capture AI deployment if it requires workers to have AI skills. The second group of indicators uses information on AI capabilities—that is, information on what AI can currently do—and links it to occupations. These indicators measure potential exposure to AI and not actual AI adoption. This section presents some of these indicators and discusses their advantages and drawbacks.

Indicators Based on AI-Related Job Posting Frequencies

The first set of indicators uses data on AI-related skill requirements in job postings as a proxy for AI deployment in firms. The main data source for these indicators is Burning Glass Technologies (BGT), which collects detailed information—including job title, sector, and required skills—on online job postings (see Box 1 for details). Because of the rich and up-to-date information BGT data offers, these indicators allow for timely tracking of the demand for AI skills across the labor market.

Box 1. Burning Glass Technologies (BGT) online job postings data

Burning Glass Technologies (BGT) collects data on online job postings by web-scraping 40 000 online job boards and company websites. It claims to cover the near-universe of online job postings. Data are currently available for Australia, Canada, New Zealand, Singapore, the United Kingdom, and the United States for the time period 2012–2020 (2014–2020 for Germany and 2018–2020 for other European Union countries). BGT extracts information such as location, sector, occupation, required skills, education, and experience levels from the text of job postings (deleting duplicates) and organizes it into up to 70 variables that can be linked to labor force surveys, providing detailed and timely information on labor demand.

Despite its strengths, BGT data has a number of limitations:

• It misses vacancies that are not posted online. Carnevale et al. (2014) compare vacancies from survey data according to the Job Openings and Labor Turnover Survey (JOLTS) from the US Bureau of Labor Statistics, a representative survey of 16,000 US businesses, with BGT data for 2013. They find that roughly 70% of vacancies were posted online, with vacancies requiring a college degree significantly more likely to be posted online compared to jobs with lower education requirements.

• There is not necessarily a direct, one-to-one correspondence between an online job ad and an actual vacancy: firms might post one job ad for several vacancies, or post job ads without firm plans to hire, e.g., because they want to learn about available talent for future hiring needs.

• BGT data might over-represent growing firms that cannot draw on internal labor markets to the same extent as the average firm.

• Higher turnover in some occupations and industries can produce a skewed image of actual labor demand since vacancies reflect a mixture of replacement demand as well as expansion.

In addition, since BGT data draws on published job advertisements, it is a proxy of current vacancies, and not of hiring or actual employment. As a proxy for vacancies, BGT data performs reasonably well, although some occupations and sectors are over-represented. Hershbein and Kahn (2018) show for the US that, compared to vacancy data from the U.S. Bureau of Labor Statistics' Job Openings and Labor Turnover Survey (JOLTS), BGT over-represents health care and social assistance, finance and insurance, and education, while under-representing accommodation, food services and construction (where informal hiring is more prevalent) as well as public administration/government. These differences are stable across time, however, such that changes in labor demand in BGT track well with JOLTS data. Regarding hiring, they also compare BGT data with new jobs according to the Current Population Survey (CPS). BGT data strongly over-represents computer and mathematical occupations (by a factor of over four, which is a concern when looking at growth in demand for AI skills as compared to other skills), as well as occupations in management, healthcare, and business and financial operations. It under-represents all remaining occupations, including transportation, food preparation and serving, production, or construction.

Cammeraat and Squicciarini (2020) argue that, because of differences in turnover across occupations, countries and time, as well as differences in the collection of national vacancy statistics, the representativeness of BGT data as an indicator for labor and skills demand should be measured against employment growth. They compare growth rates in employment with growth rates in BGT job postings on the occupational level in the six countries for which a BGT timeline exists. They find that, across countries, the deviation between BGT and employment growth rates by occupation is lower than 10 percentage points for 65% of the employed population. They observe the biggest deviations for agricultural, forestry and fishery workers, as well as community and personal service workers, again occupations where informal hiring may be more prevalent.

Squicciarini and Nachtigall (2021) identify AI-related job postings by using keywords extracted from scientific publications, augmented by text mining techniques and expert validation [see Baruffaldi et al. (2020) for details]. These keywords belong to four broad groups: (i) generic AI keywords, e.g., “artificial intelligence,” “machine learning;” (ii) AI approaches or methods: e.g., “decision trees,” “deep learning;” (iii) AI applications: e.g., “computer vision,” “image recognition;” (iv) AI software and libraries: e.g., Python or TensorFlow. Since some of these keywords may be used in job postings for non-AI-related jobs (e.g., “Python” or “Bayesian”), the authors only tag a job as AI-related if the posting contains at least two AI keywords from at least two distinct concepts. This indicator is available on an annual basis for Canada, Singapore, the United Kingdom and the United States, for 2012–2018 8 .

Acemoglu et al. (2020) take a simpler approach by defining vacancies as AI-related if they contain any keyword belonging to a simple list of skills related to AI 9 . As this indicator will tag any job posting that contains one of the keywords, it is less precise than the indicator proposed by Squicciarini and Nachtigall (2021) , but also easier to reproduce.
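
To make the two tagging rules concrete, the sketch below shows how a posting might be flagged under the stricter two-keyword/two-concept rule and under the simpler any-keyword rule (the keyword lists and postings are illustrative, not those used in the cited studies):

```python
# Illustrative sketch (not the cited studies' code): two ways of tagging a posting
# as AI-related from keyword lists. The keyword lists and postings are made up.
AI_KEYWORDS = {
    "generic":      {"artificial intelligence", "machine learning"},
    "approaches":   {"decision trees", "deep learning"},
    "applications": {"computer vision", "image recognition"},
    "software":     {"python", "tensorflow"},
}

def is_ai_related_strict(posting_text: str) -> bool:
    """Squicciarini and Nachtigall (2021)-style rule: at least two AI keywords
    drawn from at least two distinct keyword groups."""
    text = posting_text.lower()
    hits = {(group, kw) for group, kws in AI_KEYWORDS.items() for kw in kws if kw in text}
    return len(hits) >= 2 and len({group for group, _ in hits}) >= 2

def is_ai_related_simple(posting_text: str) -> bool:
    """Acemoglu et al. (2020)-style rule: any keyword from a flat list."""
    text = posting_text.lower()
    return any(kw in text for kws in AI_KEYWORDS.values() for kw in kws)

print(is_ai_related_strict("Data engineer, Python required"))                  # False
print(is_ai_related_simple("Data engineer, Python required"))                  # True
print(is_ai_related_strict("ML engineer: Python and deep learning required"))  # True
```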

Dawson et al. (2021) develop the skills-space or skills-similarity indicator. This approach defines two skills as similar if they often occur together in BGT job postings and are both simultaneously important for the job posting. A skill is assumed to be less “important” for a particular job posting if it is common across job postings. For example, “communication” and “team work” occur in about a quarter of all job ads, and would therefore be less important than “machine learning” in a job posting requiring both “communication” and “team work 10 .” The idea behind this approach is that, if two skills are often simultaneously required for jobs, (i) they are complementary and (ii) mastery of one skill means it is easier to acquire the other. In that way, similar skills may act as “bridges” for workers wanting to change occupations. It also means that workers who possess skills that are similar to AI skills may find it easier to work with AI, even if they are not capable of developing the technology themselves. For example, the skill “copy writing” is similar to “journalism,” meaning that a copy writer might transition to journalism at a lower cost than, say, a social worker, and that a copy writer might find it comparatively easier to use databases and other digital tools created for journalists.
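
The intuition behind the indicator—frequent co-occurrence, discounted for ubiquitous skills—can be sketched as follows (a minimal illustration, not Dawson et al.'s actual algorithm; the postings and the inverse-document-frequency weighting are assumptions made for the example):

```python
# Illustrative sketch of the intuition only: skills that co-occur across postings
# are similar, and ubiquitous skills such as "communication" are down-weighted by
# an inverse-document-frequency term. The postings below are made up.
import math
from collections import Counter
from itertools import combinations

postings = [
    {"machine learning", "python", "communication"},
    {"machine learning", "deep learning", "communication"},
    {"copy writing", "journalism", "communication"},
    {"journalism", "communication", "team work"},
]

n = len(postings)
doc_freq = Counter(skill for posting in postings for skill in posting)
idf = {s: math.log(n / df) for s, df in doc_freq.items()}  # rarer skill => more "important"

def similarity(a, b):
    """Co-occurrence count, weighted by the importance (IDF) of both skills."""
    co_occurrence = sum(1 for posting in postings if a in posting and b in posting)
    return co_occurrence * idf[a] * idf[b]

for a, b in combinations(sorted(doc_freq), 2):
    s = similarity(a, b)
    if s > 0:
        print(f"{a} ~ {b}: {s:.2f}")  # pairs involving "communication" score zero
```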

Skill similarity allows the identification and tracking of emerging skills: using a short list of “seed skills 11 ,” the indicator can track similar skills as they appear in job ads over time, keeping the indicator up to date. For example, TensorFlow is a deep learning framework released in 2015. Many job postings now list it as a requirement without additionally specifying “deep learning” ( Dawson et al., 2021 ).

The skill similarity approach is preferable to the simple job posting frequency indicators mentioned above ( Acemoglu et al., 2020 ; Squicciarini and Nachtigall, 2021 ) as it picks up not only specific AI job postings, but also job postings with skills that are similar (but not identical) to AI skills and may thus enable workers to work with AI technologies. Another advantage of this indicator is its dynamic nature: as technologies develop and skill requirements evolve, skill similarity can identify new skills that appear in job postings together with familiar skills, keeping the indicator up to date. This indicator is available at the annual level from 2012 to 2019 for Australia and New Zealand 12 .

Task-Based Indicators

Task-based indicators for AI adoption are based on measures of AI capabilities linked to tasks workers perform, often at the occupational level. They identify occupations as exposed to AI if they perform tasks that AI is increasingly capable of performing.

The AI occupational exposure measure developed by Felten et al. (2018 , 2019 ) is based on progress scores in nine AI applications 13 (such as reading comprehension or image recognition) from the AI progress measurement dataset provided by the Electronic Frontier Foundation (EFF). The EFF monitors progress in AI applications using a mixture of academic literature, blog posts and websites focused on AI. Each application may have several progress scores. One example of a progress score would be a recognition error rate for image recognition. The authors rescale these scores to arrive at a composite score that measures progress in each application between 2010 and 2015.

Felten et al. (2018, 2019) then link these AI applications to abilities in the US Department of Labor's O*NET database. Abilities are defined as “enduring attributes of the individual that influence performance,” e.g., “peripheral vision” or “oral comprehension.” They enable workers to perform tasks in their jobs (such as driving a car or answering a call), but are distinct from skills in that they cannot typically be acquired or learned. Thus, linking O*NET abilities to AI applications amounts to linking human abilities to AI capabilities.

The link between O*NET abilities and AI applications (a correlation matrix) is made via an Amazon Mechanical Turk survey of 200 gig workers per AI application, who are asked whether a given AI application—e.g., image recognition—can be used for a certain ability—e.g., peripheral vision 14 . The correlation matrix between applications and abilities is then calculated as the share of respondents who thought that a given AI application could be used for a given ability. These abilities are subsequently linked to occupations using the O*NET database. This indicator is available for the US for 2010–2015 15 .

Similarly, the Suitability for Machine Learning indicator developed by Brynjolfsson and Mitchell (2017) and Brynjolfsson et al. (2018) assigns a suitability for machine learning score to each of the 2,069 narrowly defined work activities from the O*NET database that are shared across occupations (e.g., “assisting and caring for others,” “coaching others,” “coordinating the work of others”). For these scores, they use a Machine Learning suitability rubric consisting of 23 distinct statements describing a work activity. For example, for the statement “Task is describable by rules,” the highest score would be “Task can be fully described by a detailed set of rules (e.g., following a recipe),” whereas the lowest score would be “The task has no clear, well-known set of rules on what is and is not effective (e.g., writing a book).” They use the human intelligence task crowdsourcing platform CrowdFlower to have each direct work activity scored by seven to ten respondents. The direct work activities are then aggregated to tasks (e.g., “assisting and caring for others,” “coaching others,” “coordinating the work of others” aggregate to “interacting with others”), and the tasks to occupations. This indicator is available for the US for the year 2016/2017.

Tolan et al. (2021) introduce a layer of cognitive abilities to connect AI applications (which they call benchmarks) to tasks. The authors define 14 cognitive abilities (e.g., visual processing, planning and sequential decision-making and acting, communication, etc.) from the psychometrics, comparative psychology, cognitive science, and AI literature 16 . They link these abilities to 328 different AI benchmarks (or applications) stemming from the authors' own previous analysis and annotation of AI papers as well as from open resources such as Papers with Code 17 . These sources in turn draw on data from multiple verified sources, including academic literature and review articles on machine learning and AI. They use the research intensity in a specific benchmark (number of publications, news stories, blog entries etc.) obtained from AI topics 18 . Tasks are measured at the worker level using the European Working Conditions Survey (EWCS), PIAAC and the O*NET database. Task intensity is derived as a measure of how much time an individual worker spends on a task and how often the task is performed.

The mapping between cognitive abilities and AI benchmarks, as well as between cognitive abilities and tasks, relies on a correspondence matrix that assigns a value of 1 if the ability is absolutely required to solve a benchmark or complete a task, and 0 if it is not necessary at all. This correspondence matrix was populated by a group of multidisciplinary researchers for the mapping between tasks and cognitive abilities, and by a group of AI-specialized researchers for the mapping between AI benchmarks and cognitive abilities. This indicator is available from 2008 to 2018, at the ISCO-3 level, and constructed to be country-invariant (as it combines data covering different countries).
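
Schematically, the two binary matrices and the benchmark-level research intensity combine as in the toy sketch below (matrix entries and intensity figures are invented, and the dimensions are far smaller than the 14 abilities and 328 benchmarks used by Tolan et al.):

```python
# Illustrative sketch of the two-layer mapping with invented toy data.
import numpy as np

abilities = ["visual processing", "planning", "communication"]
benchmarks = ["image classification", "game playing", "question answering"]
tasks = ["driving", "teaching"]

# Binary correspondence matrices: 1 if the ability is required for the
# benchmark / task, 0 otherwise
ability_x_benchmark = np.array([[1, 1, 0],   # visual processing
                                [0, 1, 0],   # planning
                                [0, 0, 1]])  # communication
ability_x_task = np.array([[1, 0],           # visual processing
                           [1, 1],           # planning
                           [0, 1]])          # communication

# Research intensity per benchmark (e.g., publication counts), invented here
research_intensity = np.array([300.0, 150.0, 120.0])

ability_progress = ability_x_benchmark @ research_intensity  # intensity attributed to abilities
task_exposure = ability_x_task.T @ ability_progress          # propagated to tasks
print(dict(zip(tasks, task_exposure)))
```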

Webb (2020) constructs an indicator of the exposure of occupations to a given technology by directly comparing the text of patents from Google Patents public data with the text of job descriptions from the O*NET database, quantifying the overlap between patent descriptions and job task descriptions. By limiting the patents to AI patents (using a list of keywords), this indicator can be narrowed to apply only to AI. Each task is then assigned a score according to the prevalence of patents that mention this task; tasks are then aggregated to occupations.
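
One simple way to approximate this kind of patent-task text comparison is a token-overlap score, sketched below with invented texts (Webb's actual procedure is more refined; the Jaccard similarity used here only illustrates the idea of quantifying textual overlap):

```python
# Illustrative sketch: overlap between AI patent text and task descriptions.
# All texts are invented; a simple token-overlap (Jaccard) score stands in for
# Webb's measure.
STOPWORDS = {"for", "in", "of", "to", "a", "the", "from", "one", "using", "and", "with"}

def tokens(text):
    return {w for w in text.lower().split() if w not in STOPWORDS}

ai_patents = [
    "method for recognizing objects in digital images using a neural network",
    "system for translating text from one language to another language using machine learning",
]
task_descriptions = {
    "inspect products": "examine digital images of products to detect defects",
    "translate documents": "convert documents from one language to another language",
    "repair plumbing": "install and repair pipes and plumbing fixtures",
}

def overlap(task_text, patent_text):
    a, b = tokens(task_text), tokens(patent_text)
    return len(a & b) / len(a | b)  # Jaccard similarity of the two word sets

# Task-level exposure: average overlap with the AI patent corpus; in Webb (2020),
# task scores are then aggregated to occupations.
for task, desc in task_descriptions.items():
    score = sum(overlap(desc, p) for p in ai_patents) / len(ai_patents)
    print(f"{task}: {score:.3f}")
```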

What Do These Indicators Measure?

To gauge the link between AI and employment, the chosen indicator for this study should proxy actual AI deployment in the economy as closely as possible. Furthermore, it should proxy AI deployment at the occupation level because switching occupations is more costly for workers than switching firms or sectors, making the occupation the relevant level for the automation risk of individual workers.

Task-based approaches measure potential automatability of tasks (and occupations), so they are measures of AI exposure, not deployment. Because task-based measures look at potential automatability, they cannot capture uneven adoption of AI across occupations, sectors or countries. Thus, in a cross-country analysis, the only sources of variation in a task-based indicator are differences in the occupational task composition across countries, as well as cross-country differences in the occupational distribution.

Indicators based on job posting data measure demand for AI skills (albeit with some noise, see Box 1), as opposed to AI use. Thus, they rely on the assumption that AI use in a firm, sector or occupation will lead to employer demand for AI skills in that particular firm, sector, or occupation. This is not necessarily the case, however:

• Some firms will decide to train workers in AI rather than recruit workers with AI skills; their propensity to do so may vary across occupations.

• Many AI applications will not require AI skills to work with them.

• Even where AI skills are needed, many firms, especially smaller ones, are likely to outsource AI development and support with its adoption to specialized AI development firms. In this case, vacancies associated with AI adoption would emerge in a different firm or sector to where the technology was actually being deployed.

• The assumption that AI deployment requires hiring of staff with AI skills is even more problematic when the indicator is applied at the occupation level. Firms that adopt AI may seek workers with AI skills in completely different occupations than the workers whose tasks are being automated by AI. For instance, an insurance company wanting to substitute or enhance some of the tasks of insurance clerks with AI would not necessarily hire insurance clerks with AI skills, but AI professionals to develop or deploy the technology. Insurance clerks may only have to interact with this technology, which might not require AI development skills (but may well require other specialized skills). Thus, even with broad-based deployment of AI in the financial industry, this indicator may not show an increasing number of job postings for insurance clerks with AI skills. This effect could also be heterogeneous across countries and time. For example, Qian et al. (2020) show that law firms in the UK tend to hire AI professionals without legal knowledge, while law firms in Singapore and the US do advertise jobs with hybrid legal-AI skillsets.

Thus, indicators based on labor demand data are a good proxy for AI deployment at the firm and sector level as long as there is no significant outsourcing of AI development and maintenance, and the production process is such that using the technology requires specialized AI skills. If these assumptions do not hold, these indicators will be incomplete. Whether or not this is the case is an empirical question that requires further research. To date, the only empirical reference on this question is Acemoglu et al. (2020), who show for the US that the share of job postings that require AI skills increases faster in firms that are heavily exposed to AI (according to task-based indicators). For example, a one standard deviation increase in the measure of AI exposure according to Felten et al. (2018, 2019) leads to a 15% increase in the number of published AI vacancies.

To shed further light on the relationship between the two types of indicators, Figure 1 plots the 2012–2019 percentage point change in the share of BGT job postings that require AI skills 19 across 36 sectors against a sector-level task-based AI exposure score, similar to the occupational AI exposure score developed in this paper (see Section Construction of the AI Occupational Exposure Measure) 20 . This analysis only covers the United Kingdom and the United States 21 because of data availability. For both countries, a positive relationship is apparent, suggesting that, overall, (i) the two measures are consistent and (ii) AI deployment does require some AI talent at the sector level. Specifically, a one standard deviation increase in AI exposure (approximately the difference in exposure between finance and public administration) is associated with a 0.33 percentage point larger change in the share of job postings that require AI skills in the United Kingdom; a similar relationship emerges in the United States 22 .


Figure 1 . Sectors with higher exposure to AI saw a higher increase in their share of job postings that require AI skills. Percentage point* change in the share of job postings that require AI skills (2012–2019) vs. average exposure to AI (2012), by sector. The share of job postings that require AI skills in a sector is the number of job postings requiring such skills in that sector divided by the total number of job postings in that same sector. Not all sectors have marker labels due to space constraints. *Percentage point changes are preferred over percentage changes because the share of job postings that require AI skills is equal to zero in some sectors in 2012. Source: Authors' calculations using data from Burning Glass Technologies, PIAAC and Felten et al. (2019) . (A) United Kingdom and (B) United States.

While it is reassuring that, at the sector level, the two measures appear consistent, it is also clear that job postings that require AI skills fail to identify certain sectors that are, from a task perspective, highly exposed to AI, such as education, the energy sector, the oil industry, public administration and real estate activities. This suggests that AI development and support may be outsourced and/or that the use of AI does not require AI skills in these sectors.
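
For concreteness, the sector-level comparison behind Figure 1 can be sketched as follows, under assumed data structures (hypothetical column names; a simple OLS fit stands in for the authors' calculations):

```python
# Illustrative sketch of the Figure 1 exercise. `postings` is assumed to hold one
# row per BGT job posting with columns sector, year and is_ai (0/1); `exposure`
# holds the 2012 sector-level AI exposure score. These names are hypothetical.
import pandas as pd
import statsmodels.api as sm

def sector_ai_share(postings: pd.DataFrame, year: int) -> pd.Series:
    sub = postings[postings["year"] == year]
    return sub.groupby("sector")["is_ai"].mean() * 100  # share in per cent

def figure1_slope(postings: pd.DataFrame, exposure: pd.Series) -> float:
    # Percentage-point change in the share of AI postings, 2012-2019, by sector
    change = sector_ai_share(postings, 2019) - sector_ai_share(postings, 2012)
    df = pd.concat({"pp_change": change, "ai_exposure": exposure}, axis=1).dropna()
    ols = sm.OLS(df["pp_change"], sm.add_constant(df["ai_exposure"])).fit()
    return ols.params["ai_exposure"]  # pp change per unit of AI exposure
```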

In addition, and as stated above, there is a priori no reason that demand-based indicators would pick up AI deployment at the occupational level, as firms that adopt AI may seek workers with AI skills in completely different occupations than the workers whose tasks are being automated by AI. This is also borne out in the analysis in this paper (see Section Exposure to AI and Demand for AI-Related Technical Skills: A Weak but Positive Relationship Among Occupations Where Computer Use is High). Thus, labor demand-based indicators are unlikely to be good proxies for AI deployment at the occupational level and, in the analysis described in this paper, preference will be given to task-based measures even though they, too, are only an imperfect proxy for AI adoption.

Which Employment Effects Can These Indicators Capture?

This paper analyses the relationship between AI adoption and employment at the occupational level, since it is automation risk at the occupational level that is most relevant for individual workers. The analysis will therefore require a measure of AI adoption at the occupational level and this section assesses which type of indicator might be best suited to that purpose.

It is useful to think of AI-driven automation as having two possible, but opposed, employment effects. On the one hand, AI may depress employment via automation/substitution. On the other, it may increase it by raising worker productivity.

Focusing on the substitution effect first, task-based indicators will pick up such effects since they measure what tasks could potentially be automated by AI. By contrast, labor-demand based indicators identify occupational AI exposure only if AI skills are mentioned in online job postings for a particular occupation. Thus, they will only pick up substitution effects (that is, a subsequent decline in employment for a particular occupation) if the production process is such that workers whose tasks are being automated need AI skills to interact with the technology.

Regarding the productivity effect, there are several ways in which AI might increase employment. The most straightforward way is that AI increases productivity in a given task, and thus lowers production costs, which can lead to increased employment if demand for a product or service is sufficiently price elastic. This was the case, for example, for weavers in the industrial revolution [see Footnote 4, Bessen (2016) ].

In addition, technological progress may allow workers to focus on higher value-added tasks within their occupation that the technology cannot (yet) perform. For example, AI is increasingly deployed in the financial services industry to forecast stock performance. Grennan and Michaely (2017) show that stock analysts have shifted their attention away from stocks for which an abundance of data is available (which lends itself to analysis by AI) toward stocks for which data is scarce. To predict the performance of “low-AI” stocks, analysts gather “soft” information directly from companies' management, suppliers and clients, thus concentrating on tasks requiring a capacity for complex human interaction, of which AI is not (yet) capable.

Task-based indicators will pick up these productivity effects (as they identify exposed occupations directly via their task structure), while labor-demand based indicators will only do so if workers whose tasks are being automated need to interact with the technology, and interacting with the technology requires specialized AI skills.

AI can also be used to augment other technologies that subsequently automate certain tasks. For example, in robotics, AI supports the efficient automation of physical tasks by improving the vision of robots, or by enabling robots to “learn” from the experience of other robots, e.g., by facilitating the exchange of information on the layout of rooms between cleaning robots ( Nolan, 2021 ). While these improvements to robotics are connected to AI applications (in this example: image recognition and sensory perception of room layouts), the tasks that are being automated (cleaning of rooms) mostly consist of the physical manipulation of objects and thus pertain to the field of robotics. Thus, AI improves the effectiveness of robots in performing tasks associated with cleaners, without itself performing physical cleaning tasks. As task-based indicators only identify tasks that AI itself can perform (and not tasks that it merely facilitates), they would not capture this effect. In robotics, this would mostly affect physical tasks often performed by low- and medium-skilled workers. Indicators based on online vacancies would also be unlikely to capture AI augmenting other technologies at the occupation level—unless cleaners require AI skills to work with cleaning robots.

Finally, AI could enable the launch of completely new products or services that lead to job creation, e.g., in marketing or sales of AI-based products and services ( Acemoglu et al., 2020 ). Neither task-based nor labor-demand-based indicators can generally measure this effect (unless marketing or selling AI products requires AI skills).

To conclude, both types of indicators are likely to understate actual AI deployment at the occupational level (see Table 1 ). Labor-demand based indicators in particular will miss a significant part of AI deployment if workers whose tasks are being automated do not need to interact with AI or if the use of AI does not require any AI skills. Task-based indicators, on the other hand, are not capable of picking up differences in actual AI deployment across time and space (this is because they only measure exposure, not actual adoption). Finally, neither indicator will capture AI augmenting other automating technologies, such as robotics, which is likely to disproportionally affect low-skilled, blue collar occupations.


Table 1 . Which potential employment effects of AI can task-based and labor-demand based indicators capture?

On the whole, for assessing the links between AI and employment at the occupational level, indicators based on labor demand data are likely to be incomplete. Task-based indicators are therefore more appropriate for the analysis carried out in this paper. Keeping their limitations in mind, however, is crucial.

Data

This paper extends the occupational exposure measure proposed by Felten et al. (2018, 2019) to 23 OECD countries 23 to look at the links between AI and labor market outcomes for 36 occupations 24 , 25 in recent years (2012–2019). The measure of occupational exposure to AI proxies the degree to which tasks in those occupations can be automated by AI. Thus, the analysis compares occupations with a high degree of automatability by AI to those with a low degree.

This section presents the data used for the analysis. It begins by describing the construction of the measure of occupational exposure to AI developed and used in this paper, and builds some intuition as to why some occupations are exposed to a higher degree of potential automation by AI than others. It then shows some descriptive statistics for AI exposure and labor market outcomes: employment, working hours, and job postings that require AI skills. Finally, it describes different measures of the task composition of occupations, which will help shed light on the relationship between AI exposure and labor market outcomes.

Occupational Exposure to AI

Several indicators for (potential) AI deployment have been proposed in the literature (see Section Indicators of Occupational Exposure to AI), most of them geared to the US. Since this paper looks at the links between AI and employment across several countries, country coverage is a key criterion for the choice of indicator. This excludes indicators based on AI-related job-posting frequencies, as pre-2018 BGT data is only available for English-speaking countries 26 . In addition to data availability issues, indicators based on labor demand data are also likely to be less complete than task-based indicators (see Section What Do These Indicators Measure?). Among the task-based measures, the suitability for machine learning indicator (Brynjolfsson and Mitchell, 2017; Brynjolfsson et al., 2018) was not publicly accessible at the time of publication. Webb's (2020) indicator captures the stock of patents until 2020, and is therefore too recent for an analysis of the observation period (2012–2019), particularly since major advancements in AI occurred between 2015 and 2020 and technology diffuses slowly through the economy. The paper therefore uses the occupational exposure measure (Felten et al., 2018, 2019), which has the advantage of capturing AI developments until 2015, leaving some time for the technology to be deployed in the economy. It is also based on actual scientific progress in AI, as opposed to research activity, as is the indicator proposed by Tolan et al. (2021).

While the preferred measure for this analysis is the AI occupational exposure measure proposed by Felten et al. (2018, 2019), the paper also presents additional results using the job-posting indicator of Agrawal, Gans and Goldfarb (2019), as well as robustness checks using the task-based indicators of Webb (2020) and Tolan et al. (2021) 27 . This section describes the construction of the main indicator and some descriptive statistics.

Construction of the AI Occupational Exposure Measure

The AI occupational exposure measure links progress in nine AI applications to 52 abilities in the US Department of Labor's O*NET database (see Section What Do These Indicators Measure? for more details). This paper extends it to 23 OECD countries by mapping the O*NET abilities to tasks from the OECD's Survey of Adult Skills (PIAAC), and then back to occupations (see Figure 2 for an illustration of the link). Specifically, instead of using the O*NET US-specific measures of an ability's “prevalence” and “importance” in an occupation, country-specific measures have been developed based on data from PIAAC, which reports the frequency with which a number of tasks are performed on the job by each surveyed individual. This information was used to measure the average frequency with which workers in each occupation (classified using two-digit ISCO-08) perform 33 tasks, and this was done separately for each country. Each O*NET ability was then linked to each of these 33 tasks, based on the authors' binary assessments of whether the ability is needed to perform the task or not 28 .


Figure 2 . Construction of the measure of occupational exposure to AI. Adaptation from Felten et al. (2018) to 23 OECD countries. The authors link O*NET abilities and PIAAC tasks manually by asking whether a given ability is indispensable for performing a given task. The link between O*NET abilities and AI applications (a correlation matrix) is taken from Felten et al. (2019) . The matrix was built by an Amazon Mechanical Turk survey of 200 gig workers per AI application, who were asked whether a given AI application can be used for a certain ability. The correlation matrix between applications and abilities is then calculated as the share of respondents who thought that a given AI application could be used for a given ability. This chart is for illustrative purposes and is not an exhaustive representation of the links between the tasks, abilities and AI applications displayed.

This allows for task-content variations in AI exposure across occupations, as well as within occupations and across countries that may arise because of institutional or socio-economic differences across countries. Thus, the indicator proposed in this paper differs from that of Felten et al. (2019) only in that it relies on PIAAC data to take into account occupational task-content heterogeneity across countries. That is, the indicator adopted in this paper is defined at the occupation-country cell level rather than at the occupation level [as in Felten et al. (2019)]. It is scaled such that the minimum is zero and the maximum is one over the full sample of occupation-country cells. It indicates relative exposure to AI, and no other meaningful interpretation can be given to its actual values.

In this paper, the link between O*NET abilities and PIAAC tasks is performed manually by asking whether a given ability is indispensable for performing a given task, e.g., is oral comprehension absolutely necessary to teach people? A given O*NET ability can therefore be linked to several PIAAC tasks, and conversely, a given PIAAC task can be linked to several O*NET abilities. This link was made by the authors of the paper and, in case of diverging answers, agreement was reached through an iterative discussion and consensus method, similar to the Delphi method described in Tolan et al. (2021). Of the 52 O*NET abilities, 35 are related to at least one task in PIAAC. Thus, the indicator loses 17 abilities compared to Felten et al.'s (2018, 2019) measure. All the abilities that are lost in this way are physical, psychomotor or sensory, as there are no tasks requiring these abilities in PIAAC 29 . As a result, the occupational intensity of physical, psychomotor, or sensory abilities is poorly estimated using PIAAC data. Therefore, whenever possible, robustness checks use O*NET scores of “prevalence” and “importance” of abilities within occupations for the United States (as in Felten et al., 2018 ) instead of PIAAC-based measures. These robustness tests necessarily assume that the importance and prevalence of abilities are the same in other countries as in the United States. Another approach would have been to assign the EFF applications directly to the PIAAC tasks. However, we preferred to preserve the robustly established mapping of Felten et al. (2018).
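
Schematically, the construction chains three ingredients: the EFF progress scores for AI applications, the application-ability relatedness matrix from Felten et al. (2019), and the PIAAC-based ability intensities by occupation-country cell. The toy sketch below illustrates the arithmetic (all numbers are invented, and the dimensions are far smaller than the real 9 applications, 52 abilities and 33 tasks):

```python
# Illustrative sketch of the construction with tiny invented matrices.
import numpy as np

# (i) EFF progress scores for AI applications, 2010-2015 (rescaled, invented here)
app_progress = np.array([0.8, 0.5])

# (ii) Application-ability relatedness: share of survey respondents saying the
#      application can be used for the ability (structure as in Felten et al., 2019)
app_x_ability = np.array([[0.9, 0.1, 0.6],
                          [0.2, 0.8, 0.3]])
ability_progress = app_progress @ app_x_ability   # AI progress attributed to each ability

# (iii) Binary ability-task link: 1 if the ability is indispensable for the PIAAC task
ability_x_task = np.array([[1, 0],
                           [0, 1],
                           [1, 1]])

# (iv) Average PIAAC task frequencies by occupation-country cell (invented values)
task_freq = {("Business professionals", "FIN"): np.array([0.9, 0.8]),
             ("Cleaners and helpers", "LTU"): np.array([0.2, 0.1])}

raw = {cell: (ability_x_task @ f) @ ability_progress for cell, f in task_freq.items()}

# (v) Min-max scaling over all occupation-country cells: the indicator only
#     measures relative exposure to AI
lo, hi = min(raw.values()), max(raw.values())
exposure = {cell: float((v - lo) / (hi - lo)) for cell, v in raw.items()}
print(exposure)
```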

The level of exposure to AI in a particular occupation reflects: (i) the progress made by AI in specific applications and (ii) the extent to which those applications are related to abilities required in that occupation. Like all task-based measures, it is at its core a measure of potential automation of occupations by AI, as it indicates which occupations rely most on abilities in which AI has made progress in recent years. It should capture potential positive productivity effects of AI, as well as negative substitution effects caused by (partial) automation of tasks by AI. However, it cannot capture any effects of AI progress on occupations when these effects do not rely on worker abilities that are directly related to the capabilities of AI, such as when AI augments other technologies, which then advance in the abilities that workers need in their jobs (see also Section What Do These Indicators Measure?). Section Occupational Exposure to AI shows AI exposure across occupations and builds some intuition on why the indicator identifies some occupations as more exposed to AI than others.

AI Progress and Abilities

Over the period 2010–2015, AI has made the most progress in applications that affect abilities required to perform non-routine cognitive tasks, in particular: information ordering, memorisation, perceptual speed, speed of closure, and flexibility of closure ( Figure 3 ) 30 . By contrast, AI has made the least progress in applications that affect physical and psychomotor abilities 31 . This is consistent with emerging evidence that AI is capable of performing cognitive, non-routine tasks ( Lane and Saint-Martin, 2021 ).


Figure 3 . AI has made the most progress in abilities that are required to perform non-routine, cognitive tasks. Progress made by AI in relation to each ability, 2010–2015. The link between O*NET abilities and AI applications (a correlation matrix) is taken from Felten et al. (2019) . The matrix was built by an Amazon Mechanical Turk survey of 200 gig workers per AI application, who were asked whether a given AI application—e.g., image recognition—can be used for a certain ability—e.g., near vision. The correlation matrix between applications and abilities is then calculated as the share of respondents who thought that a given AI application could be used for a given ability. To obtain the score of progress made by AI in relation to a given ability, the shares corresponding to that ability are first multiplied by the Electronic Frontier Foundation (EFF) progress scores in the AI applications; these products are then summed over all nine AI applications. Authors' calculations using data from Felten et al. (2019) .

The kinds of abilities in which AI has made the most progress are disproportionately used in highly educated, white-collar occupations. As a result, white-collar occupations requiring high levels of formal education are among the occupations with the highest exposure to AI: Science and Engineering Professionals, but also Business and Administration Professionals; Managers; Chief Executives; and Legal, Social, and Cultural Professionals ( Figure 4 ). By contrast, occupations with the lowest exposure include occupations with an emphasis on physical tasks: Cleaners and Helpers; Agricultural, Forestry and Fishery Laborers; Food Preparation Assistants and Laborers 32 .


Figure 4 . Highly educated white-collar occupations are among the occupations with the highest exposure to AI. Average exposure to AI across countries by occupation, 2012. The averages presented are unweighted. Cross-country averages are taken over the 23 countries included in the analysis. Authors' calculations using data from the Programme for the International Assessment of Adult Competencies (PIAAC) and Felten et al. (2019) .

The occupational intensity of some abilities is poorly estimated due to PIAAC data limitations. In particular, the 33 PIAAC tasks used in the analysis include only two non-cognitive tasks, and some of the O*NET abilities are not related to any of these tasks. Therefore, as a robustness exercise, Figure A.1 displays the level of exposure to AI obtained when using O*NET scores of “prevalence” and “importance” of abilities within occupations for the United States (as in Felten et al., 2018 ) instead of the PIAAC-based measures. That is, the robustness test assumes that the importance and prevalence of abilities are the same in other countries as in the United States. The robustness test shows the same patterns in terms of AI exposure by occupation, suggesting that the PIAAC-based measure is adequate.

Cleaners and Helpers, the least exposed occupation according to this measure, have a low score of occupational exposure to AI because they rely less than other workers on cognitive abilities (including those in which AI has made the most progress), whereas they rely more on physical and psychomotor abilities (in which AI has made little progress). Figure 5A illustrates this by plotting the extent to which Cleaners and Helpers use any of the 35 abilities (relative to the average use of that ability across all occupations) against AI progress in that ability. Compared to the average worker, Cleaners and Helpers rely heavily on physical abilities such as dynamic/static/trunk strength and dexterity, areas in which AI has made the least progress in recent years. They rely less than other occupations on abilities with the fastest AI progress, such as information ordering and memorisation. Business Professionals, in contrast, are heavily exposed to AI because they rely more than other workers on cognitive abilities, and less on physical and psychomotor abilities ( Figure 5B ).


Figure 5 . Cross-occupation differences in AI exposure are caused by differences in the intensity of use of abilities. Intensity of use of an ability relative to the average across occupations, and progress made by AI in relation to that ability, 2012. Ability intensity represents the cross-country average frequency of the use of an ability among Cleaners and helpers (top) or Business professionals (bottom) minus the cross-country average frequency of the use of that ability, averaged across the 36 occupations in the sample. Authors' calculations using data from the Programme for the International Assessment of Adult Competencies (PIAAC) and Felten et al. (2019) . (A) Cleaners and helpers and (B) Business and administration professionals.

As a robustness check, Figure A.2 replicates this analysis using O*NET scores of “prevalence” and “importance” of abilities within occupations instead of PIAAC-based measures, and it shows the same patterns.

As abilities are the only link between occupations and progress in AI, the occupational exposure measure cannot detect any effects of AI that do not work directly through AI capabilities, for example if AI is employed to make other technologies more efficient. Consider the example of drivers, an occupation often discussed as at risk of being substituted by AI. Drivers receive a below-average score in the AI occupational exposure measure (see Figure 4 ). This is because the driving component of autonomous vehicle technologies relies on the physical manipulation of objects, which falls within the realm of robotics rather than AI. AI does touch upon some abilities needed to drive a car—such as the ability to plan a route or perceive and distinguish objects at a distance—but the majority of tasks performed when driving a car are physical. AI might well be essential for driverless cars, but mainly by enabling robotic technology, which possesses the physical abilities necessary to drive a vehicle. Thus, this indicator can be seen as isolating the “pure” effects of AI ( Felten et al., 2019 ).

Cross-Country Differences in Occupational Exposure to AI

On average, an occupation's exposure to AI varies little across countries—differences across occupations tend to be greater. The average score of AI exposure across occupations ranges from 0.52 (Lithuania) to 0.72 (Finland, Figure 6 ) among the 23 countries analyzed 33 . By contrast, the average score across countries for the 36 occupations ranges from 0.26 (cleaners and helpers) to 0.87 (business professionals). Even the most exposed cleaners and helpers (in Finland) are only about half as exposed to AI as the least exposed business professionals (in Lithuania) ( Figure A.3 ). That being said, occupations tend to be slightly more exposed to AI in Northern European countries than in Eastern European ones ( Figure 6 ).


Figure 6 . Cross-country differences in exposure to AI for a given occupation are small compared to cross-occupation differences. Average exposure to AI across occupations by country, 2012. The averages presented are unweighted averages across the 36 occupations in the sample. Authors' calculations using data from the Programme for the International Assessment of Adult Competencies (PIAAC) and Felten et al. (2019) .

A different way of showing that AI exposure varies more across occupations than across countries for a given occupation is by contrasting the distribution of exposure to AI across occupations in the most exposed country in the sample (Finland) with that in the least exposed country (Lithuania, Figure 7 ). The distributions are very similar. In both countries, highly educated white-collar occupations have the highest exposure to AI and non-office-based, physical occupations have the lowest exposure.


Figure 7 . The distribution of AI exposure across occupations is similar in Finland and Lithuania. Exposure to AI, 2012. Authors' calculations using data from the Programme for the International Assessment of Adult Competencies (PIAAC) and Felten et al. (2019) .

Differences in exposure to AI between Finland and Lithuania are greater for occupations in the lower half of the distribution of exposure to AI ( Figure 7 ). For example, Food Preparation Assistants in Finland are more than twice as exposed to AI as Food Preparation Assistants in Lithuania, while the score for Business and Administration Professionals is only 12% higher in Finland than in Lithuania.

This is because, while occupations across the entire spectrum of exposure to AI rely more on physical than on cognitive abilities in Lithuania than in Finland, this reliance is more pronounced at the low end of the exposure spectrum. Figure 8 illustrates this for the least (Cleaners and Helpers) and the most exposed occupations (Business and Administration Professionals). The top panel displays: (i) the difference in the intensity of use of each ability by Cleaners and Helpers between Finland and Lithuania; and (ii) the progress made by AI in relation to that ability. The bottom panel shows the same for Business and Administration Professionals.


Figure 8 . Cross-country differences in occupational AI exposure are caused by differences in the intensity of use of abilities. Intensity of use of an ability in Finland relative to Lithuania and progress made by AI in relation to that ability, 2012. Ability intensity represents the difference in the frequency of the use of an ability among Cleaners and helpers (top) or Business professionals (bottom) between Finland and Lithuania. Authors' calculations using data from the Programme for the International Assessment of Adult Competencies (PIAAC) and Felten et al. (2019) . (A) Cleaners and helpers and (B) Business and administration professionals.

For both occupations, workers in Lithuania tend to rely more on physical and psychomotor abilities (which are little exposed to AI), and less on cognitive abilities, including cognitive abilities in which AI has made the most progress. The differences in the intensity of use of cognitive, physical, and psychomotor abilities between Finland and Lithuania are however greater for Cleaners and Helpers than they are for Business and Administration Professionals ( Figure 8 ). As an example of how cleaners may be more exposed to AI in Finland than in Lithuania, consider AI navigation tools that help cleaning robots map out their route: such tools could substitute for cleaners in the task of supervising cleaning robots, especially in countries where cleaning robots are more prevalent (probably the case in Finland 34 ). More generally, it is likely that cleaners in Finland use more sophisticated equipment and protocols, resulting in a greater reliance on more exposed cognitive abilities. That being said, even in Finland, the least exposed occupation remains Cleaners and Helpers ( Figure 7 ).

Workers in Lithuania may rely more on physical abilities than in Finland because, in 2012, when these ability requirements were measured, technology adoption was more advanced in Finland than in Lithuania. That is, in 2012, technology may have already automated some physical tasks (e.g., cleaning) and created more cognitive tasks (e.g., reading instructions, filling out documentation, supervising cleaning robots) in Finland than in Lithuania, and this might have had a bigger effect on occupations that rely more on physical tasks (like cleaning).

Occupational Exposure to AI and Education

Section Occupational Exposure to AI showed that white-collar occupations requiring high levels of formal education are the most exposed to AI, while low-educated physical occupations are the least exposed 35 . Figure 9 confirms this pattern. It shows a clear positive relationship between the share of highly educated workers within an occupation in 2012 and the AI exposure score in that occupation in that year (red line). By contrast, low-educated workers were less likely to work in occupations with high exposure to AI (blue line). The relationship is almost flat for middle-educated workers. In 2012, 82% of highly educated workers were in the most exposed half of occupations, compared to 37% of middle-educated and only 16% of low-educated workers 36 .


Figure 9 . Highly educated workers are disproportionately exposed to AI. Average share of workers with low, medium or high education within occupations vs. average exposure to AI, across countries (2012). For each education group, occupation shares represent the share of workers of that group in a particular occupation. Each dot reports the unweighted average across the 23 countries analyzed of the share of workers with a particular education in an occupation. Authors' calculations using data from the European Union Labor Force Survey (EU-LFS), the Mexican National Survey of Occupation and Employment (ENOE), the US Current Population Survey (US-CPS), PIAAC, and Felten et al. (2019) .

Labor Market Outcomes

The analysis links occupational exposure to AI to a number of labor market outcomes: employment 37 , average hours worked 38 , the share of part-time workers, and the share of job postings that require AI-related technical skills. This section presents some descriptive statistics on labor market outcomes for the period 2012–2019. The year 2012 is chosen as the start of the period of analysis because it ensures consistency with the measure of occupational exposure to AI, for two reasons. First, the measure of exposure to AI is based on the task composition of occupations in 2012 for most countries 39 . Second, progress in AI applications is measured over the period 2010–2015. As a result, AI, as proxied by the occupational AI exposure indicator, could affect the labor market starting from 2010 and fully from 2015 onwards. Starting in 2012 provides a long enough observation period, while closely tracking the measure of recent developments in AI.

Employment and Working Hours

Overall, in most occupations and on average across the 23 countries, employment grew between 2012 and 2019, a period that coincides with the economic recovery from the global financial crisis. Employment grew by 10.8% on average across all occupations and countries in the sample ( Figure 10 ). Average employment growth was negative for only four occupations: Other Clerical Support Workers (−9.2%), Skilled Agricultural Workers (−8.2%), Handicraft and Printing Workers (−7.9%), and Metal and Machinery Workers (−1.7%).


Figure 10 . Employment has grown in most occupations between 2012 and 2019. Average percentage change in employment level across countries by occupation, 2012–2019. Occupations are classified using two-digit ISCO-08. The averages presented are unweighted averages across the 23 countries analyzed. Source: ENOE, EU-LFS, and US-CPS.

By contrast, average usual weekly hours declined by 0.40% (equivalent to 9 minutes per week 40 ) on average over the same period ( Figure 11 ) 41 . On average across countries, working hours declined in most occupations. Occupations with the largest drops in working hours include (but are not limited to) occupations that most often use part-time employment, such as Sales Workers (−2.0%); Legal, Social, Cultural Related Associate Professionals (−1.8%); and Agricultural, Forestry, Fishery Laborers (−1.8%).


Figure 11 . Average usual working hours have decreased in most occupations between 2012 and 2019. Average percentage change in average usual weekly hours across countries by occupation, 2012–2019. Occupations are classified using two-digit ISCO-08. The averages presented are unweighted averages across the 22 countries analyzed (Mexico is excluded from the analysis of working time due to data availability). Usual weekly working hours by country-occupation cell are calculated by taking the average across individuals within that cell. Source: ENOE, EU-LFS, and US-CPS.

Job Postings That Require AI Skills

Beyond its effects on job quantity, AI may transform occupations by changing their task composition, as certain tasks are automated and workers are increasingly expected to focus on other tasks. This may result in a higher demand for AI-related technical skills as workers interact with these new technologies. However, it is not necessarily the case that working with AI requires technical AI skills. For example, a translator using an AI translation tool does not necessarily need any AI technical skills.

This section looks at the share of job postings that require AI-related technical skills ( AI skills ) by occupation using job postings data from Burning Glass Technologies 42 for the United Kingdom and the United States 43 . AI-related technical skills are identified based on the list provided in Acemoglu et al. (2020) 44 .

In the United States, the share of job postings requiring AI skills has increased in almost all occupations between 2012 and 2019 ( Figure 12 ). Science and Engineering Professionals experienced the largest increase, but growth was also substantial for Managers, Chief Executives, Business and Administration Professionals, and Legal, Social, Cultural Professionals. That being said, the share of job postings that require AI skills remains very low overall, with an average across occupations of 0.24% in 2019 (against 0.10% in 2012). These orders of magnitude are in line with Acemoglu et al. (2020) and Squicciarini and Nachtigall (2021) .


Figure 12 . Nearly all occupations have increasingly demanded AI skills between 2012 and 2019 in the United States. Percentage point* change in the share of job postings that require AI skills, 2012–2019, USA. The share of job postings that require AI skills in an occupation is the number of job postings requiring such skills in that occupation divided by the total number of job postings in that same occupation. *Percentage point changes are preferred over percentage changes because the share of job postings that require AI skills is equal to zero in some occupations in 2012. Source: Burning Glass Technologies.

Results

This section looks at the link between an occupation's exposure to AI in 2012 and changes in employment, working hours, and the demand for AI-related technical skills between 2012 and 2019. Exposure to AI appears to be associated with greater employment growth in occupations where computer use is high, and larger reductions in hours worked in occupations where computer use is low. So, even though AI may substitute for workers in certain tasks, it also appears to create job opportunities in occupations that require digital skills. In addition, there is some evidence that greater exposure to AI is associated with a greater increase in demand for AI-related technical skills (such as natural language processing, machine translation, or image recognition) in occupations where computer use is high. However, as the share of jobs requiring AI skills remains very small, this increase in jobs requiring AI skills cannot account for the additional employment growth observed in computer-intensive occupations that are exposed to AI.

Empirical Strategy

The analysis links changes in employment levels within occupations and across countries to AI exposure 45 . The regression equation is the following:

Y_ij = β AI_ij + X_ij′γ + α_j + u_ij    (1)

where Y_ij is the percentage change in the number of workers (both dependent employees and self-employed) in occupation i in country j over the period 2012–2019 46 ; AI_ij is the index of exposure to AI for occupation i in country j as measured in 2012; X_ij is a vector of controls including exposure to other technological advances (software and industrial robots), offshorability, exposure to international trade, and 1-digit occupational ISCO dummies; α_j are country fixed effects; and u_ij is the error term. The coefficient of interest, β, captures the link between exposure to AI and changes in employment. The inclusion of country fixed effects means that the analysis only exploits within-country variation in AI exposure to estimate the parameter of interest. The specifications that include 1-digit occupational dummies only exploit variation within broad occupational groups, thereby controlling for any factors that are constant across these groups.

To control for the effect of non-AI technologies, the analysis includes measures of exposure to software and industrial robots developed by Webb (2020) based on the overlap between the text of job descriptions provided in the O*NET database and the text of patents in the fields corresponding to each of these technologies 47 . Offshoring is proxied by an index of offshorability developed by Firpo et al. (2011) and made available by Autor and Dorn (2013), which measures the potential offshoring of job tasks as the average of the two variables “Face-to-Face Contact” and “On-Site Job” that Firpo et al. (2011) derive from the O*NET database 48 . This measure captures the extent to which an occupation requires direct interpersonal interaction or proximity to a specific work location 49 .

The three indices above are occupation-level task-based measures derived from the O*NET database for the United States; this analysis uses those measures for all 23 countries, assuming that the cross-occupation distribution of these indicators is similar across countries 50 . Exposure to international trade is proxied by the share of employment within occupations that is in tradable sectors 51 . These shares are derived from the European Union Labor Force Survey (EU-LFS), the Mexican National Survey of Occupation and Employment (ENOE), and the US Current Population Survey (US-CPS).
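The trade-exposure proxy is a simple employment share. As a hedged sketch, the snippet below computes it from labour-force-survey-style microdata with hypothetical column names, values, and survey weights; it is not the authors' code.

```python
import pandas as pd

# Hypothetical LFS-style microdata: one row per worker, with a 2-digit ISCO
# occupation and an indicator for employment in a tradable sector
# (agriculture, industry, financial and insurance activities).
lfs = pd.DataFrame({
    "country":  ["DE", "DE", "DE", "ES", "ES", "ES"],
    "isco2":    [21, 21, 52, 21, 52, 52],
    "tradable": [1, 0, 0, 1, 0, 1],               # 1 if the worker's sector is tradable
    "weight":   [1.0, 2.0, 1.5, 1.0, 1.0, 0.5],   # survey weights
})

# Weighted share of employment in tradable sectors, by occupation-country cell.
def weighted_share(g):
    return (g["tradable"] * g["weight"]).sum() / g["weight"].sum()

trade_exposure = lfs.groupby(["country", "isco2"]).apply(weighted_share)
print(trade_exposure)
```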

Exposure to AI and Employment: A Positive Relationship in Occupations Where Computer Use Is High

As discussed in Section Introduction, the effect of exposure to AI on employment is theoretically ambiguous. On the one hand, employment may fall as tasks are automated (substitution effect). On the other hand, productivity gains may increase labor demand (productivity effect) ( Acemoglu and Restrepo, 2019a , b ; Bessen, 2019 ; Lane and Saint-Martin, 2021 ) 52 . The labor market impact of AI on a given occupation is likely to depend on the task composition of that occupation—the prevalence of high-value added tasks that AI cannot automate (e.g., tasks that require creativity or social intelligence) or the extent to which the occupation already uses other digital technologies [since AI applications are often similar to software in their use, workers with digital skills may find it easier to use AI effectively ( Felten et al., 2019 )]. Therefore, the following analysis will not only look at the entire sample of occupation-country cells, but will also split the sample according to what people do in these occupations and countries.

In particular, the level of computer use within an occupation is proxied by the share of workers reporting the use of a computer at work in that occupation, calculated for each of the 23 countries in the sample. It is based on individuals' answers to the question “Do you use a computer in your job?,” taken from the Survey of Adult Skills (PIAAC). Occupation-country cells are then classified into three categories of computer use (low, medium, and high), where the terciles are calculated based on the full sample of occupation-country cells 53 . Another classification used is the country-invariant classification developed by Goos et al. (2014) , which classifies occupations based on their average wage, relying on European Community Household Panel (ECHP) data. For example, occupations with an average wage in the middle of the occupation-wage distribution are classified as middle-wage occupations under this classification 54 . Finally, the prevalence of creative and social tasks is derived from PIAAC data. PIAAC data include the frequency with which a number of tasks are performed at the individual level. Respondents' self-assessments are based on a 5-point scale ranging from “Never” to “Every day.” This information is used to measure the average frequency with which workers in each occupation perform creative or social tasks, and this is done separately for each country 55 .
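A minimal sketch of the tercile classification, assuming a pandas DataFrame of occupation-country cells with a hypothetical computer_use column (in the paper, these shares come from PIAAC):

```python
import pandas as pd

# Hypothetical occupation-country cells with the share of workers who report
# using a computer at work.
cells = pd.DataFrame({
    "country":      ["DE", "DE", "ES", "ES", "US", "US"],
    "isco2":        [21, 83, 21, 83, 21, 83],
    "computer_use": [0.95, 0.20, 0.90, 0.25, 0.97, 0.30],
})

# Terciles are computed on the full sample of occupation-country cells,
# so the low/medium/high cut-offs are common to all countries.
cells["computer_group"] = pd.qcut(
    cells["computer_use"], q=3, labels=["low", "medium", "high"]
)
print(cells)
```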

While employment grew faster in occupations more exposed to AI, this relationship is not robust. There is stronger evidence that AI exposure is positively related to employment growth in occupations where computer use is high. Table 2 displays the results of regression equation (1) without controls. When looking at the entire sample, the coefficient on AI exposure is both positive and statistically significant (Column 1), but the coefficient is no longer statistically significant as soon as any of the controls described in Section Empirical Strategy are included (with the exception of offshorability) 56 . When the sample is split by level of computer use (low, medium, high), the coefficient on AI exposure remains positive and statistically significant only for the subsample where computer use is high (Columns 2–4). It remains so after successive inclusion of controls for international trade (i.e., shares of workers in tradable sectors), offshorability, exposure to other technological advances (software and industrial robots) and 1-digit occupational dummies ( Table 3 ) 57 . In occupations where computer use is high, a one standard deviation increase in AI exposure is associated with 5.7 percentage points higher employment growth ( Table 2 , Column 4) 58 .


Table 2 . Exposure to AI is positively associated with employment growth in occupations where computer use is high.


Table 3 . The relationship between exposure to AI and employment growth is robust to the inclusion of a number of controls.

By contrast, the average wage level of the occupation or the prevalence of creative or social tasks matters little in the link between exposure to AI and employment growth. Table A.1 in the Appendix shows the results obtained when replicating the analysis on the subsamples obtained by splitting the overall sample by average wage level, prevalence of creative tasks, or prevalence of social tasks. All coefficients on exposure to AI remain positive, but are only weakly statistically significant and of lower magnitude than those obtained on the subsample of occupations where computer use is high ( Table 3 ).

As a robustness check, Table A.2 in the Appendix replicates the analysis in Table 2 using the score of exposure to AI obtained when using O*NET scores of “prevalence” and “importance” of abilities within occupations instead of PIAAC-based measures. The results remain unchanged. Table A.3 replicates the analysis using the alternative indicators of exposure to AI constructed by Webb (2020) and Tolan et al. (2021) , described in Section What Do These Indicators Measure? 59 While the Webb (2020) indicator confirms the positive relationship between employment growth and exposure to AI in occupations where computer use is high, the coefficient obtained with the Tolan et al. (2021) indicator is positive but not statistically significant. This could be due to the fact that the Tolan et al. (2021) indicator reflects different aspects of AI advances, as it focuses more on cognitive abilities and is based on research intensity rather than on measures of progress in AI applications.

The examples of the United Kingdom and the United States illustrate these findings clearly 60 . Figure 13 shows the percentage change in employment from 2012 to 2019 for each occupation against that occupation's exposure to AI in 2012, both in the United Kingdom ( Figure 13A ) and the United States ( Figure 13B ). Occupations are classified according to their level of computer use. The relationship between exposure to AI and employment growth within computer use groups is generally positive, but the correlation is stronger in occupations where computer use is high. For occupations with high computer use, the most exposed occupations tend to have experienced higher employment growth between 2012 and 2019: Business Professionals; Legal, Social and Cultural Professionals; Managers; and Science & Engineering Professionals. AI applications relevant to these occupations include: identifying investment opportunities, optimizing production in manufacturing plants, identifying problems on assembly lines, analyzing and filtering recorded job interviews, and translation. In contrast, high computer-use occupations with low or negative employment growth were occupations with relatively low exposure to AI, such as clerical workers and teaching professionals.


Figure 13 . Exposure to AI is associated with higher employment growth in occupations where computer use is high. Percentage change in employment level (2012–2019) and exposure to AI (2012). Occupations are classified using two-digit ISCO-08. Not all occupations have marker labels due to space constraints. Skilled forestry, fishery, hunting workers excluded from (A) for readability reasons. Occupation-country cells are classified into low, medium or high computer use by tercile of computer use applied across the full sample of occupation-country cells. Source: Authors' calculations using data from EU-LFS, US-CPS, PIAAC, and Felten et al. (2019) . (A) United Kingdom and (B) United States.

While further research is needed to test the causal nature of these patterns and to identify the exact mechanism behind them, it is possible that a high level of digital skills (as proxied by computer use) indicates a greater ability of workers to adapt to and use new technologies at work and, hence, to reap the benefits that these technologies bring. If these workers are able to interact with AI and substantially increase their productivity and/or the quality of their output, this may, under certain conditions, lead to an increase in demand for their labor 61 .

Exposure to AI and Working Time: A Negative Relationship Among Occupations Where Computer Use Is Low

This subsection extends the analysis by shifting the focus from the number of working individuals (extensive margin of employment) to how much these individuals work (intensive margin).

In general, the higher the level of exposure to AI in an occupation, the greater the drop in average hours worked over the period 2012–2019; and this relationship is particularly marked in occupations where computer use is low. Column (1) of Table 4 presents the results of regression equation (1) using the percentage change in average usual weekly working hours as the variable of interest. The statistically significant and negative coefficient on exposure to AI highlights a negative relationship across the entire sample. Splitting the sample by computer use category shows that this relationship is stronger among occupations with lower computer use (Columns 2–4). The size of the coefficients in Column 2 indicates that, within countries and across occupations with low computer use, a one standard deviation increase in exposure to AI is associated with a 0.60 percentage point greater drop in usual weekly working hours 62 (equivalent to 13 min per week) 63 . Columns 1–4 of Table 5 show that the result is robust to the successive inclusion of controls for international trade, offshorability, and exposure to other technologies. However, the coefficient on exposure to AI loses statistical significance when controlling for 1-digit occupational dummies ( Table 5 , Column 5), which could stem from attenuation bias, as measurement errors may be significant relative to the variation in actual exposure within the 1-digit occupation groups 64 .
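Combining the figures given in footnotes 62 and 63, the magnitude works out as follows (a back-of-the-envelope restatement of the reported numbers, not a new estimate):

\[
0.125 \times (-4.823) \approx -0.60 \ \text{percentage points}, \qquad 0.0060 \times 37.2\,\text{h} \times 60\,\text{min/h} \approx 13\,\text{min per week}.
\]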


Table 4 . Exposure to AI is negatively associated with the growth in average working hours in occupations where computer use is low.


Table 5 . The relationship between exposure to AI and growth in average working hours is robust to the inclusion of a number of controls.

The relationship between exposure to AI and the drop in average hours worked was driven by part-time employment 65 . Columns 5–8 of Table 4 replicate the analysis in Columns 1–4 using the change in the occupation-level share of part-time workers as the variable of interest. The results are consistent with those in Columns 2–4: the coefficient on exposure to AI is positive and statistically significant only for the subsample of occupations where computer use is low (Columns 6–8). The coefficient remains statistically significant and positive when controlling for international trade and offshorability, but loses statistical significance when controlling for exposure to other technological advances and 1-digit occupational dummies ( Table 5 , Columns 6–10) 66 . The results hold when replacing the share of part-time workers with the share of involuntary part-time workers 67 ( Table A.7 ), suggesting that the additional decline in working hours among low computer use occupations that are exposed to AI is not a voluntary choice by workers.

The examples of Germany and Spain provide a good illustration of these results 68 . Figure 14 shows the percentage change in average usual weekly working hours from 2012 to 2019 for each occupation against that occupation's exposure to AI, both in Germany ( Figure 14A ) and in Spain ( Figure 14B ). As before, occupations are classified according to their degree of computer use (low, medium, high). In both countries, there is a clear negative relationship between exposure to AI and the change in working hours among occupations where computer use is low. In particular, within the low computer use category, most occupations with negative growth in working hours are relatively exposed to AI. These occupations include: Drivers and Mobile Plant Operators, Personal Service Workers, and Skilled Agricultural Workers. AI applications relevant to these occupations include route optimization for drivers, personalized chatbots and demand forecasting in the tourism industry 69 , or the use of computer vision in the agricultural sector to identify plants that need special attention. By contrast, low computer use occupations with the strongest growth in working hours are generally less exposed to AI. This is, for example, the case for Laborers (which includes laborers in transport and storage, manufacturing, or mining and construction).


Figure 14 . In occupations where computer use is low, exposure to AI is negatively associated with the growth in average working hours. Percentage change in average usual working hours (2012–2019) and exposure to AI (2012). Occupations are classified using two-digit ISCO-08. Not all occupations have marker labels due to space constraints. Occupation-country cells are classified into low, medium or high computer use by tercile of computer use applied across the full sample of occupation-country cells. Source: Authors' calculations using data from EU-LFS, PIAAC, and Felten et al. (2019) . (A) Germany and (B) Spain.

Again, while further research is required, a lack of digital skills may mean that workers are not able to interact efficiently with AI and thus cannot reap all potential benefits of the technology. The substitution effect of AI in those occupations therefore appears to outweigh the productivity effect, resulting in reduced working hours, possibly as a result of more involuntary part-time employment. However, these results remain suggestive, as they are not robust to the inclusion of the full set of controls and the use of alternative indicators of exposure to AI.

Exposure to AI and Demand for AI-Related Technical Skills: A Weak but Positive Relationship Among Occupations Where Computer Use Is High

Beyond its effects on employment, AI may also transform occupations as workers are increasingly expected to interact with the technology. This may result in a higher demand for AI-related technical skills in affected occupations, although it is not necessarily the case that working with AI requires technical AI skills.

Indeed, exposure to AI is positively associated with the growth in the demand for AI technical skills, especially in occupations where computer use is high. Figure 15 shows the correlation between the growth in the share of job postings that require AI skills from 2012 to 2019 within occupations and occupation-level exposure to AI for the United Kingdom ( Figure 15A ) and the United States ( Figure 15B ), the only countries in the sample with BGT time series available. Occupations are again classified according to their computer use. There is a positive correlation between the growth in the share of job postings requiring AI skills and the AI exposure measure, particularly among occupations where computer use is high. The most exposed of these occupations (Science and Engineering Professionals; Managers; Chief Executives; Business and Administration Professionals; Legal, Social, Cultural professionals) are also experiencing the largest increases in job postings requiring AI skills.


Figure 15 . High computer use occupations with higher exposure to AI saw a higher increase in their share of job postings that require AI skills. Percentage point change in the share of job postings that require AI skills (2012–2019) and exposure to AI (2012). The share of job postings that require AI skills in an occupation is taken as a share of the total number of job postings in that occupation. Occupation-country cells are classified into low, medium or high computer use by tercile of computer use applied across the full sample of occupation-country cells. Source: Authors' calculations using data from Burning Glass Technologies, PIAAC, and Felten et al. (2019) . (A) United Kingdom and (B) United States.

However, the increase in jobs requiring AI skills cannot account for the additional employment growth observed in computer-intensive occupations that are exposed to AI (despite the similarities between the patterns displayed in Figures 13 and 15 ). As highlighted by the different scales in those two charts, the order of magnitude of the correlation between exposure to AI and the percentage change in employment ( Figure 13 ) is more than ten times that of the correlation between exposure to AI and the percentage point change in the share of job postings requiring AI skills ( Figure 15 ) 70 . This is because job postings requiring AI skills remain a very small share of overall job postings. In 2019, on average across the 36 occupations analyzed, job postings that require AI skills accounted for only 0.14% of overall postings in the United Kingdom and 0.24% in the United States. By contrast, across the same 36 occupations, employment grew by 8.82% on average in the United States and 11.15% in the United Kingdom between 2012 and 2019.

Conclusion

Recent years have seen impressive advances in artificial intelligence (AI), and this has stoked renewed concern about the impact of technological progress on the labor market, including on worker displacement.

This paper looks at the possible links between AI and employment in a cross-country context. It adapts the AI occupational impact measure developed by Felten et al. (2018 , 2019) —an indicator measuring the degree to which occupations rely on abilities in which AI has made the most progress—and extends it to 23 OECD countries. The indicator, which allows for variations in AI exposure across occupations, as well as within occupations and across countries, is then matched to Labor Force Surveys to analyze the relationship with employment.

Over the period 2012–2019, employment grew in nearly all occupations analyzed. Overall, there appears to be no clear relationship between AI exposure and employment growth. However, in occupations where computer use is high, greater exposure to AI is linked to higher employment growth. The paper also finds suggestive evidence of a negative relationship between AI exposure and growth in average hours worked among occupations where computer use is low.

While further research is needed to identify the exact mechanisms driving these results, one possible explanation is that partial automation by AI increases productivity directly as well as by shifting the task composition of occupations toward higher value-added tasks. This increase in labor productivity and output counteracts the direct displacement effect of automation through AI for workers with good digital skills, who may find it easier to use AI effectively and shift to non-automatable, higher value-added tasks within their occupations. The opposite could be true for workers with poor digital skills, who may not be able to interact efficiently with AI and thus may not reap all the potential benefits of the technology.

Data Availability Statement

Publicly available datasets were analyzed in this study. This data can be found at: https://www.oecd.org/skills/piaac/data/ .

Author Contributions

Both authors contributed to the article and approved the submitted version.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Acknowledgments

Special thanks must go to Stijn Broecke for his supervision of the project and to Mark Keese for his guidance and support throughout the project. The report also benefitted from helpful comments provided by colleagues from the Directorate for Employment, Labour and Social Affairs (Andrew Green, Marguerita Lane, Luca Marcolin, and Stefan Thewissen) and from the Directorate for Science, Technology and Innovation (Lea Samek). Thanks to Katerina Kodlova for providing publication support. The comments and feedback received from participants in the February 2021 OECD Expert Meeting on AI indicators (Nik Dawson, Joe Hazell, Manav Raj, Robert Seamans, Alina Sorgner, and Songul Tolan) and the March 2021 OECD Future of Work Seminar are also gratefully acknowledged.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/frai.2022.832736/full#supplementary-material

1. ^ This publication contributes to the OECD's Artificial Intelligence in Work, Innovation, Productivity and Skills (AI-WIPS) programme, which provides policymakers with new evidence and analysis to keep abreast of the fast-evolving changes in AI capabilities and diffusion and their implications for the world of work. The programme aims to help ensure that adoption of AI in the world of work is effective, beneficial to all, people-centred and accepted by the population at large. AI-WIPS is supported by the German Federal Ministry of Labour and Social Affairs (BMAS) and will complement the work of the German AI Observatory in the Ministry's Policy Lab Digital, Work & Society. For more information, visit https://oecd.ai/work-innovation-productivity-skills and https://denkfabrik-bmas.de/ .

2. ^ AI may however be used in robotics (“smart robots”), which blurs the line between the two technologies ( Raj and Seamans, 2019 ). For example, AI has improved the vision of robots, enabling them to identify and sort unorganised objects such as harvested fruit. AI can also be used to transfer knowledge between robots, such as the layout of hospital rooms between cleaning robots ( Nolan, 2021 ).

3. ^ This can only be the case if an occupation is only partially automated, but depending on the price elasticity of demand for a given product or service, the productivity effect can be strong. For example, during the nineteenth century, 98% of the tasks required to weave fabric were automated, decreasing the price of fabric. Because of highly price elastic demand for fabric, the demand for fabric increased as did the number of weavers ( Bessen, 2016 ).

4. ^ Education directly increases task-specific human capital as well as the rate of learning-by-doing on the job, at least some of which is task-specific ( Gibbons and Waldman, 2004 , 2006 ). This can be seen by looking at the likelihood of lateral moves within the same firm: lateral moves have a direct productivity cost to the firm as workers cannot utilise their entire task-specific human capital stock in another area (e.g., when moving from marketing to logistics). However, accumulating at least some task-specific human capital in a lateral position makes sense if a worker is scheduled to be promoted to a position that oversees both areas. If a worker's task-specific human capital is sufficiently high, however, the immediate productivity loss associated with a lateral move is higher than any expected productivity gain from the lateral move following a promotion. For example, in academic settings, Ph.D. economists are not typically moved to the HR department prior to becoming the dean of a department. Using a large employer-employee linked dataset on executives at US corporations, Jin and Waldman (2019) show that workers with 17 years of education were twice as likely to be laterally moved before promotion as workers with 19 years of education.

5. ^ An occupation is “exposed” to AI if it has a high intensity in skills that AI can perform, see section What Do These Indicators Measure? for details.

6. ^ Fossen and Sorgner (2019) use the occupational impact measure developed by Felten et al. (2018 , 2019) and the Suitability for Machine Learning indicator developed by Brynjolfsson and Mitchell (2017) and Brynjolfsson et al. (2018) discussed in Section What Do These Indicators Measure?

7. ^ Acemoglu et al. (2020) use data from Brynjolfsson and Mitchell (2017) , Brynjolfsson et al. (2018) , Felten et al. (2018 , 2019) , and Webb (2020) to identify tasks compatible with AI capabilities; and data from online job postings to identify firms that use AI, see Section Indicators of Occupational Exposure to AI for details.

8. ^ Sectors are available according to the North American Industry Classification System (NAICS) for the US and Canada, and according to the UK Standard Industrial Classification (SIC) and the Singapore Standard Industrial Classification (SSIC) for the UK and Singapore. Occupational codes are available according to the O*NET classification for Canada, SOC for the UK and the US, and SSOC for Singapore. These codes can be converted to ISCO at the one-digit level.

9. ^ This paper uses the same list of skills to look at AI job-postings, see Footnote 44 for the complete list of skills.

10. ^ To measure the importance of skills in job ads, the authors use the Revealed Comparative Advantage (RCA) measure, borrowed from trade economics, which gives a skill more weight in a job posting if that posting lists few other skills, and less weight if the skill is ubiquitous across all job ads. That is, the skill “team work” will generally be less important given its ubiquity in job ads, but its importance in an individual job posting would increase if only a few other skills were required for that job.
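A common way to write such a skill-level RCA (the exact functional form used in the underlying studies may differ from this standard formulation) is

\[
\mathrm{RCA}(s, j) \;=\; \frac{x_{sj} \big/ \sum_{s'} x_{s'j}}{\sum_{j'} x_{sj'} \big/ \sum_{s'} \sum_{j'} x_{s'j'}},
\]

where x_sj equals 1 if skill s appears in job posting j and 0 otherwise. A skill listed in a posting with few other skills gets a large numerator, while a skill that appears in most postings gets a large denominator and hence a lower RCA, matching the verbal description above.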

11. ^ “Artificial Intelligence,” “Machine Learning,” “Data Science,” “Data Mining,” and “Big Data”.

12. ^ The indicator is calculated at the division level (19 industries) according to the Australian and New Zealand Standard Industrial Classification Level (ANZSIC).

13. ^ Abstract strategy games, real-time video games, image recognition, visual question answering, image generation, reading comprehension, language modelling, translation, and speech recognition. Abstract strategy games, for example, are defined as “the ability to play abstract games involving sometimes complex strategy and reasoning ability, such as chess, go, or checkers, at a high level.” While the EFF tracks progress on 16 applications, AI has not made any progress on 7 of these over the relevant time period ( Felten et al., 2021 ).

14. ^ The background of the gig workers is not known, and so they may not necessarily be AI experts. This could be a potential weakness of this indicator. In contrast, Tolan et al. (2021) rely on expert assessments for the link between AI applications and worker abilities.

15. ^ At the six digit SOC 2010 occupational level, this can be aggregated across sectors and geographical regions, see Felten et al. (2021) .

16. ^ The abilities are chosen from Hernández-Orallo (2017) to be at an intermediate level of detail, excluding very general abilities that would influence all others, such as general intelligence, and too specific abilities and skills, such as being able to drive a car or music skills. They also exclude any personality traits that do not apply to machines. The abilities are: Memory processing, Sensorimotor interaction, Visual processing, Auditory processing, Attention and search, Planning, sequential decision-making and acting, Comprehension and expression, Communication, Emotion and self-control, Navigation, Conceptualisation, learning and abstraction, Quantitative and logical reasoning, Mind modelling and social interaction, and Metacognition and confidence assessment.

17. ^ A free and open repository of machine learning code and results, which includes data from several repositories (including EFF, NLP-progress, etc.).

18. ^ An archive kept by the Association for the Advancement of Artificial Intelligence (AAAI).

19. ^ AI-related technical skills are identified based on the list provided in Acemoglu et al. (2020) , and detailed in Footnote 44.

20. ^ As with occupations, the industry-level scores are derived using the average frequency with which workers in each industry perform a set of 33 tasks, separately for each country.

21. ^ The United Kingdom and the United States are the only countries in the sample analysed (see Section Construction of the AI Occupational Exposure Measure) with 2012 Burning Glass Technologies data available, thereby allowing for the examination of trends over the past decade.

22. ^ The standard deviation of exposure to AI is 0.083 in the United Kingdom and 0.075 in the United States. These values are multiplied by the slopes of the linear relationships displayed in Figure 1 : 3.90 and 4.95, respectively. The average share of job postings that require AI skills was 0.14% in the United Kingdom and 0.26% in the United States in 2012, and this has increased to 0.67 and 0.94%, respectively, in 2019.

23. ^ The 23 countries are Austria, Belgium, the Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Lithuania, Mexico, the Netherlands, Norway, Poland, Slovenia, the Slovak Republic, Spain, Sweden, United Kingdom, and the United States.

24. ^ This paper aims to explore the links between employment and AI deployment in the economy, rather than the direct employment increase due to AI development. Two occupations are particularly likely to be involved in AI development: IT technology professionals and IT technicians. These two occupations both have high levels of exposure to AI and some of the highest employment growth over this paper's observation period, which may be partly related to increased activity in AI development. These occupations may bias the analysis and they are therefore excluded from the sample. Nevertheless, the results are not sensitive to the inclusion of IT technology professionals and IT technicians in the analysis.

25. ^ A few occupation/country cells are missing due to data unavailability for the construction of the indicator of occupational exposure to AI: Skilled forestry, fishery, hunting workers in Belgium and Germany; Assemblers in Greece; Agricultural, forestry, fishery labourers in Austria and France, and Food preparation assistants in the United Kingdom.

26. ^ This paper uses BGT data for additional results for the countries for which they are available.

27. ^ While the three task-based indicators point to the same relationships between exposure to AI and employment, the results are less clearcut for the relationship between exposure to AI and average working hours.

28. ^ The 33 tasks were then grouped into 12 broad categories to address differences in data availability between types of task. For example, “read letters,” “read bills,” and “write letters” were grouped into one category (“literacy–business”), so that this type of task does not weigh more in the final score than task types associated with a single PIAAC task (e.g., “dexterity” or “management”). For each ability and each occupation, 12 measures were constructed to reflect the frequency with which workers use the ability in the occupation to perform tasks under the 12 broad task categories. This was done by taking, within each category of tasks, the sum of the frequencies of the tasks assigned to the ability divided by the total number of tasks in the category. Finally, the frequency with which workers use the ability at the two-digit ISCO-08 level and by country was obtained by taking the sum of these 12 measures. The methodology, including the definition of the broad categories of tasks, is adapted from Fernández-Macías and Bisello (2020) and Tolan et al. (2021) .
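As a hedged illustration of this aggregation, the sketch below uses invented task names, category sizes, and ability-task assignments; the real mapping follows the sources cited in the footnote.

```python
import pandas as pd

# Hypothetical inputs for two occupations and two tasks (real data come from PIAAC).
task_freq = pd.DataFrame({          # average task frequency (1-5 scale) by occupation
    "occupation": ["managers", "managers", "drivers", "drivers"],
    "task":       ["read_letters", "negotiate", "read_letters", "negotiate"],
    "freq":       [4.5, 4.0, 2.0, 1.5],
})
task_category = {"read_letters": "literacy_business", "negotiate": "influence"}
tasks_per_category = {"literacy_business": 3, "influence": 2}   # size of each broad category
ability_tasks = {"comprehension_expression": ["read_letters"],  # tasks mapped to each ability
                 "communication": ["negotiate", "read_letters"]}

def ability_score(occ_df, ability):
    """Sum over categories of (sum of assigned-task frequencies / category size)."""
    rows = occ_df[occ_df["task"].isin(ability_tasks[ability])].copy()
    rows["category"] = rows["task"].map(task_category)
    per_cat = rows.groupby("category")["freq"].sum()
    return sum(per_cat[c] / tasks_per_category[c] for c in per_cat.index)

for occ, occ_df in task_freq.groupby("occupation"):
    print(occ, ability_score(occ_df, "communication"))
```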

29. ^ The 17 lost abilities are: control prevision, multilimb coordination, response orientation, reaction time, speed of limb movement, explosive strength, extent flexibility, dynamic flexibility, gross body coordination, gross body equilibrium, far vision, night vision, peripheral vision, glare sensitivity, hearing sensitivity, auditory attention, and sound localization.

30. ^ Perceptual speed is the ability to quickly and accurately compare similarities and differences among sets of letters, numbers, objects, pictures, or patterns. Speed of closure is the ability to quickly make sense of, combine, and organize information into meaningful patterns. Flexibility of closure is the ability to identify or detect a known pattern (a figure, object, word, or sound) that is hidden in other distracting material.

31. ^ Only one psychomotor ability has an intermediate score: rate control, which is the ability to time one's movements or the movement of a piece of equipment in anticipation of changes in the speed and/or direction of a moving object or scene.

32. ^ To get results at the ISCO-08 2-digit level, scores were mapped from the SOC 2010 6-digit classification to the ISCO-08 4-digit classification, and aggregated at the 2-digit level by using average scores weighted by the number of full-time equivalent employees in each occupation in the United States, as provided by Webb (2020) and based on American Community Survey 2010 data.

33. ^ Averages are unweighted averages across occupations, so that cross-country differences only reflect differences in the ability requirements of occupations between countries, not differences in the occupational composition across countries.

34. ^ Although specific data on cleaning robots are not available, data from the International Federation of Robotics show that, in 2012, industrial robots were more prevalent in Finland than in Lithuania in all areas for which data are available.

35. ^ Again, as in the rest of the paper, exposure to AI specifically refers to potential automation of tasks, as this is primarily what task-based measures of exposure capture.

36. ^ On average across countries, there is no clear relationship between AI exposure and gender and age, see Figures A.4 and A.5 in the Annex.

37. ^ Employment includes all people engaged in productive activities, whether as employees or self-employed. Employment data is taken from the Mexican National Survey of Occupation and Employment (ENOE), the European Union Labour Force Survey (EU-LFS), and the US Current Population Survey (US-CPS). The occupation classification was mapped to ISCO-08 where necessary. More specifically, the ENOE SINCO occupation code was directly mapped to the ISCO-08 classification. The US-CPS occupation census code variable was first mapped to the SOC 2010 classification. Next, it was mapped to the ISCO-08 classification.

38. ^ Hours worked refer to the average of individuals' usual weekly hours, which include the number of hours worked during a normal week without any extraordinary events (such as leave, public holidays, strikes, sickness, or extraordinary overtime).

39. ^ 2012 is available in PIAAC for most countries except Hungary (2017), Lithuania (2014), and Mexico (2017).

40. ^ Estimated at the average over the sample (37.7 average usual weekly hours).

41. ^ Mexico is excluded from the analysis of working time due to lack of data.

42. ^ See Box 1 for more details on Burning Glass Technologies data. The Burning Glass Occupation job classification (derived from SOC 2010) was directly mapped to the ISCO-08 classification.

43. ^ United Kingdom and the United States are the only countries in the sample with 2012 Burning Glass Technologies data available, thereby allowing for the examination of trends over the past decade.

44. ^ Job postings that require AI-related technical skills are defined as those that include at least one keyword from the following list: Machine Learning, Computer Vision, Machine Vision, Deep Learning, Virtual Agents, Image Recognition, Natural Language Processing, Speech Recognition, Pattern Recognition, Object Recognition, Neural Networks, AI ChatBot, Supervised Learning, Text Mining, Support Vector Machines, Unsupervised Learning, Image Processing, Mahout, Recommender Systems, Support Vector Machines (SVM), Random Forests, Latent Semantic Analysis, Sentiment Analysis/Opinion Mining, Latent Dirichlet Allocation, Predictive Models, Kernel Methods, Keras, Gradient boosting, OpenCV, Xgboost, Libsvm, Word2Vec, Chatbot, Machine Translation, and Sentiment Classification.
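As an illustration, a posting can be flagged by simple keyword matching over this list. The snippet below uses only a subset of the keywords and toy posting texts, and assumes case-insensitive matching; it is not the vendor's or the authors' actual procedure.

```python
import re

# Subset of the AI-skill keyword list from footnote 44 (full list in the text).
AI_KEYWORDS = [
    "machine learning", "computer vision", "deep learning",
    "natural language processing", "speech recognition", "neural networks",
    "word2vec", "xgboost", "machine translation",
]
pattern = re.compile("|".join(re.escape(k) for k in AI_KEYWORDS))

def requires_ai_skills(posting_text: str) -> bool:
    """A posting counts as AI-related if it mentions at least one keyword."""
    return bool(pattern.search(posting_text.lower()))

postings = [
    "Seeking analyst with experience in natural language processing and Python.",
    "Warehouse operative, forklift licence required.",
]
share_ai = sum(requires_ai_skills(p) for p in postings) / len(postings)
print(share_ai)  # -> 0.5 in this toy example
```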

45. ^ The analysis is performed at the 2-digit level of the International Standard Classification of Occupations 2008 (ISCO-08).

46. ^ In a second step, Y ij will stand for the percentage change in average weekly working hours and the percentage change in the share of part-time workers.

47. ^ To select software patents, Webb uses an algorithm developed by Bessen and Hunt (2007) which requires one of the keywords “software,” “computer,” or “programme” to be present, but none of the keywords “chip,” “semiconductor,” “bus,” “circuity,” or “circuitry.” To select patents in the field of industrial robots, Webb develops an algorithm that results in the following search criteria: the title and abstract should include “robot” or “manipulate,” and the patent should not fall within the categories: “medical or veterinary science; hygiene” or “physical or chemical processes or apparatus in general”.
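A hedged sketch of this kind of keyword rule, applied to a patent's title and abstract (toy strings; the original implementation works on full patent text and patent classification categories):

```python
import re

# Keyword rule described in footnote 47 for selecting software patents.
INCLUDE = ["software", "computer", "programme"]
EXCLUDE = ["chip", "semiconductor", "bus", "circuity", "circuitry"]

def has_word(text, words):
    # Whole-word matching to avoid, e.g., "bus" matching "business".
    return any(re.search(rf"\b{re.escape(w)}\b", text) for w in words)

def is_software_patent(text: str) -> bool:
    t = text.lower()
    return has_word(t, INCLUDE) and not has_word(t, EXCLUDE)

print(is_software_patent("A computer-implemented method for scheduling tasks"))  # True
print(is_software_patent("A semiconductor chip with improved bus circuitry"))    # False
```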

48. ^ They reverse the sign to measure offshorability instead of non-offshorability.

49. ^ Firpo et al. (2011) define “face-to-face contact” as the average value between the O*NET variables “face-to-face discussions,” “establishing and maintaining interpersonal relationships,” “assisting and caring for others,” “performing for or working directly with the public”, and “coaching and developing others.” They define “on-site job” as the average between the O*NET variables “inspecting equipment, structures, or material,” “handling and moving objects,” “operating vehicles, mechanized devices, or equipment,” and the mean of “repairing and maintaining mechanical equipment” and “repairing and maintaining electronic equipment”.
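A minimal sketch of this construction, with made-up O*NET-style scores for a single occupation; the sign reversal at the end follows footnote 48.

```python
import numpy as np

# Hypothetical O*NET-style scores for one occupation (values invented for illustration).
onet = {
    "face_to_face_discussions": 4.5,
    "establishing_interpersonal_relationships": 4.0,
    "assisting_and_caring_for_others": 3.5,
    "performing_for_or_working_with_public": 3.0,
    "coaching_and_developing_others": 3.8,
    "inspecting_equipment": 2.0,
    "handling_and_moving_objects": 1.5,
    "operating_vehicles_or_equipment": 1.0,
    "repairing_mechanical_equipment": 1.2,
    "repairing_electronic_equipment": 1.4,
}

face_to_face = np.mean([
    onet["face_to_face_discussions"],
    onet["establishing_interpersonal_relationships"],
    onet["assisting_and_caring_for_others"],
    onet["performing_for_or_working_with_public"],
    onet["coaching_and_developing_others"],
])
on_site = np.mean([
    onet["inspecting_equipment"],
    onet["handling_and_moving_objects"],
    onet["operating_vehicles_or_equipment"],
    np.mean([onet["repairing_mechanical_equipment"],
             onet["repairing_electronic_equipment"]]),
])
# Firpo et al. measure non-offshorability; the sign is reversed (footnote 48).
offshorability = -np.mean([face_to_face, on_site])
print(offshorability)
```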

50. ^ All three indices are available by occupation based on U.S. Census occupation codes. They were first mapped to the SOC 2010 6-digit classification and then to the ISCO-08 4-digit classification. They were finally aggregated at the 2-digit level using average scores weighted by the number of full-time equivalent employees in each occupation in the United States, as provided by Webb (2020) and based on American Community Survey 2010 data.
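Sketched with pandas and hypothetical occupation codes, scores, and employment weights, the final weighted aggregation step could look like this (not the authors' code):

```python
import pandas as pd

# Hypothetical mapping table: each SOC 2010 6-digit code already mapped to an ISCO-08
# 2-digit code, with an exposure score and full-time-equivalent employment as weight.
mapping = pd.DataFrame({
    "soc6":     ["11-1011", "11-1021", "13-1111"],
    "isco2":    [11, 11, 24],
    "exposure": [0.72, 0.65, 0.58],
    "fte":      [200_000, 2_300_000, 700_000],   # ACS 2010-style employment weights
})

# Employment-weighted average of the occupation-level score at the ISCO 2-digit level.
def weighted_avg(g):
    return (g["exposure"] * g["fte"]).sum() / g["fte"].sum()

isco2_scores = mapping.groupby("isco2").apply(weighted_avg)
print(isco2_scores)
```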

51. ^ The tradable sectors considered are agriculture, industry, and financial and insurance activities.

52. ^ Partial worker substitution in an occupation may increase worker productivity and employment in the same occupation, but also in other occupations and sectors ( Autor and Salomons, 2018 ). These AI-induced productivity effects are relevant to the present cross-occupation analysis to the extent that they predominantly affect the same occupation where AI substitutes for workers. For example, although AI translation algorithms may substitute for part of the work of translators, they may increase the demand for translators by significantly reducing translation costs.

53. ^ Data are from 2012, with the exception of Hungary (2017), Lithuania (2014), and Mexico (2017).

54. ^ Low-skill occupations include the ISCO-08 1-digit occupation groups: Services and Sales Workers; and Elementary Occupations. Middle-skill occupations include the groups: Clerical Support Workers; Skilled Agricultural, Forestry, and Fishery Workers; Craft and Related Trades Workers; and Plant and Machine Operators and Assemblers. High-skill occupations include: Managers; Professionals; and Technicians and Associate Professionals.

55. ^ In line with Nedelkoska and Quintini (2018) , creative tasks include: problem solving—simple problems, and problem solving—complex problems; and social tasks include: teaching, advising, planning for others, communicating, negotiating, influencing, and selling. For each measure, occupation-country cells are then classified into three categories depending on the average frequency with which these tasks are performed (low, medium, and high). These three categories are calculated by applying terciles across the full sample of occupation-country cells. Data are from 2012, with the exception of Hungary (2017), Lithuania (2014), and Mexico (2017).

56. ^ These results are not displayed but are available on request.

57. ^ Tables 2 , 3 correspond to unweighted regressions, but the results hold when each observation is weighted by the inverse of the number of country observations in the subsample considered, so that each country has the same weight. These results are not displayed but are available on request.

58. ^ The standard deviation of exposure to AI is 0.067 among high computer use occupations. Multiplying this by the coefficient in Column 4 gives 0.067 * 85.73 = 5.74.

59. ^ The Webb (2020) indicator is available by occupation based on U.S. Census occupation codes. It was first mapped to the SOC 2010 6-digit classification and then to the ISCO-08 4-digit classification. It was finally aggregated at the 2-digit level by using average scores weighted by the number of full-time equivalent employees in each occupation in the United States, as provided by Webb (2020) and based on American Community Survey 2010 data. The Tolan et al. (2021) indicator is available at the ISCO-08 3-digit level and was aggregated at the 2-digit level by taking average scores.

60. ^ Although statistically significant on aggregate, the relationships between employment growth and exposure to AI suggested by Table 2 are not visible for some countries.

61. ^ For productivity-enhancing technologies to have a positive effect on product and labour demand, product demand needs to be price elastic ( Bessen, 2019 ).

62. ^ The standard deviation of exposure to AI is 0.125 among low computer use occupations. Multiplying this by the coefficient in Column 2 gives 0.125 * (−4.823) = −0.60.

63. ^ Estimated at the average working hours among low computer use occupations (37.2 h).

64. ^ Tables 4 , 5 correspond to unweighted regressions, but most of the results hold when each observation is weighted by the inverse of the number of country observations in the subsample considered, so that each country has the same weight. These results are not displayed but are available on request.

65. ^ Part-time workers are defined as workers usually working 30 hours or less per week in their main job.

66. ^ As an additional robustness exercise, Table A.4 in the Appendix replicates the analysis using the score of exposure to AI obtained when using O*NET scores of “prevalence” and “importance” of abilities within occupations instead of PIAAC-based measures. The results remain qualitatively unchanged, but the coefficients on exposure to AI are no longer statistically significant on the subsample of occupations where computer use is low, when using working hours as the variable of interest. Tables A.5 and A.6 replicate the analysis using the alternative indicators of exposure to AI constructed by Webb (2020) and Tolan et al. (2021) . When using the Webb (2020) indicator, the results hold on the entire sample but are not robust on the subsample of occupations where computer use is low. Using the Tolan et al. (2021) indicator, the results by subgroups hold qualitatively but the coefficients are not statistically significant.

67. ^ Involuntary part-time workers are defined as part-time workers (i.e., workers working 30 h or less per week) who report either that they could not find a full-time job or that they would like to work more hours.

68. ^ Although statistically significant on aggregate, the relationships between the percentage change in average usual weekly working hours and exposure to AI suggested by Table 4 are not visible for some countries.

69. ^ For example, personalised chatbots can partially substitute for travel attendants. Demand forecasting algorithms may facilitate the operation of hotels, including the work of housekeeping supervisors. Travel Attendants and Housekeeping Supervisors both fall into the Personal Service Workers category.

70. ^ The results of the regression equation (1) on the subsample (of only 26 observations) of high computer use occupations in the United Kingdom and the United States give a coefficient on exposure to AI equal to 151.4 when using percentage employment growth as the variable of interest, which is about forty times greater than the 4.1 obtained when using percentage point change in the share of job postings that require AI skills as the variable of interest.

Acemoglu, D., Autor, D., Hazell, J., and Restrepo, P. (2020). AI and Jobs: Evidence from Online Vacancies. Cambridge, MA: National Bureau of Economic Research. doi: 10.3386/w28257


Acemoglu, D., and Restrepo, P. (2019a). The wrong kind of AI? Artificial intelligence and the future of labour demand. Cambridge J. Reg. Econ. Soc. 13, 25–35. doi: 10.1093/cjres/rsz022

Acemoglu, D., and Restrepo, P. (2019b). Automation and new tasks: how technology displaces and reinstates labor. J. Econ. Perspect. 33, 3–30. Available online at: https://pubs.aeaweb.org/doi/pdfplus/10.1257/jep.33.2.3


Acemoglu, D., and Restrepo, P. (2020). Robots and jobs: evidence from us labor markets. J. Polit. Econ. 128, 2188–2244. doi: 10.1086/705716

Agrawal, A., Gans, J., and Goldfarb, A. (eds.). (2019). Artificial Intelligence, Automation, and Work. University of Chicago Press.

Autor, D., and Dorn, D. (2013). The growth of low-skill service jobs and the polarization of the US labor market. Am. Econ. Rev. 103, 1553–1597. doi: 10.1257/aer.103.5.1553

Autor, D., Levy, F., and Murnane, R. (2003). The skill content of recent technological change: an empirical exploration. Q. J. Econ. 118, 1279–1333. doi: 10.1162/003355303322552801


Autor, D., and Salomons, A. (2018). Is Automation Labor-Displacing? Productivity Growth, Employment, and the Labor Share. Cambridge, MA: National Bureau of Economic Research. doi: 10.3386/w24871

Baruffaldi, S., van Beuzekom, B., Dernis, H., Harhoff, D., Rao, N., Rosenfeld, D., et al. (2020). Identifying and Measuring Developments in Artificial Intelligence: Making the Impossible Possible . Available online at: https://www.oecd-ilibrary.org/science-and-technology/identifying-and-measuring-developments-in-artificial-intelligence_5f65ff7e-en

Bessen, J. (2016). How Computer Automation Affects Occupations: Technology, Jobs, and Skills. Boston University School of Law, Law and Economics Research Paper 15-49 . Available online at: https://scholarship.law.bu.edu/faculty_scholarship/813

Bessen, J. (2019). Automation and jobs: when technology boosts employment. Econ. Policy 34, 589–626. doi: 10.1093/epolic/eiaa001

Bessen, J., and Hunt, R. (2007). An empirical look at software patents. J. Econ. Manage. Strat. 16, 157–189. doi: 10.1111/j.1530-9134.2007.00136.x

Brynjolfsson, E., and Mitchell, T. (2017). What can machine learning do? Workforce implications. Science 358, 1530–1534. doi: 10.1126/science.aap8062


Brynjolfsson, E., Mitchell, T., and Rock, D. (2018). What can machines learn and what does it mean for occupations and the economy? AEA Pap. Proc. 108, 43–47. doi: 10.1257/pandp.20181019

Cammeraat, E., and Squicciarini, M. (2020). Assessing the Properties of Burning Glass Technologies' Data to Inform Use in Policy Relevant Analysis . OECD.

Carnevale, A. P., Jayasundera, T., and Repnikov, D. (2014). Understanding online job ads data. Center on Education and the Workforce, Georgetown University, Washington, DC, United States . Available online at: https://cew.georgetown.edu/wp-content/uploads/2014/11/OCLM.Tech_.Web_.pdf

Dawson, N., Rizoiu, M., and Williams, M. (2021). Skill-driven recommendations for job transition pathways. PLoS ONE 16, e0254722. doi: 10.1371/journal.pone.0254722

Felten, E., Raj, M., and Seamans, R. (2018). A method to link advances in artificial intelligence to occupational abilities. AEA Pap. Proc. 108, 54–57. doi: 10.1257/pandp.20181021

Felten, E., Raj, M., and Seamans, R. (2021). Occupational, industry, and geographic exposure to artificial intelligence: a novel dataset and its potential uses. Strat. Manage. J. 42, 2195–2217. doi: 10.1002/smj.3286

Felten, E. W., Raj, M., and Seamans, R. (2019). The Occupational Impact of Artificial Intelligence: Labor, Skills, and Polarization . NYU Stern School of Business.

Fernández-Macías, E., and Bisello, M. (2020). A Taxonomy of Tasks for Assessing the Impact of New technologies on Work (No. 2020/04) . JRC Working Papers Series on Labour, Education and Technology.

Firpo, S., Fortin, N. M., and Lemieux, T. (2011). Occupational tasks and changes in the wage structure . Available online at: https://ftp.iza.org/dp5542.pdf

Fossen, F., and Sorgner, A. (2019). New Digital Technologies and Heterogeneous Employment and Wage Dynamics in the United States: Evidence From Individual-Level Data. IZA Discussion Paper 12242 . Available online at: https://www.iza.org/publications/dp/12242/new-digital-technologies-and-heterogeneous-employment-and-wage-dynamics-in-the-united-states-evidence-from-individual-level-data

Gibbons, R., and Waldman, M. (2004). Task-specific human capital. Am. Econ. Rev. 94, 203–207. doi: 10.1257/0002828041301579

Gibbons, R., and Waldman, M. (2006). Enriching a theory of wage and promotion dynamics inside firms. J. Lab. Econ. 24, 59–107. doi: 10.1086/497819

Goos, M., Manning, A., and Salomons, A. (2014). Explaining job polarization: routine-biased technological change and offshoring. Am. Econ. Rev. 104, 2509–2526. doi: 10.1257/aer.104.8.2509

Grennan, J., and Michaely, R. (2017). Artificial Intelligence and the Future of Work: Evidence From Analysts . Available online at: https://conference.nber.org/conf_papers/f130049.pdf

Hernández-Orallo, J. (2017). The Measure of All Minds: Evaluating Natural and Artificial Intelligence. Cambridge University Press. Available online at: https://www.cambridge.org/core/books/measure-of-all-minds/DC3DFD0C1D5B3A3AD6F56CD6A397ABCA

Hershbein, B., and Kahn, L. (2018). Do recessions accelerate routine-biased technological change? Evidence from vacancy postings. Am. Econ. Rev. 108, 1737–1772, doi: 10.1257/aer.20161570

Jin, X., and Waldman, M. (2019). Lateral moves, promotions, and task-specific human capital: theory and evidence. J. Law Econ. Organ. 36, 1–46. doi: 10.1093/jleo/ewz017

Lane, M., and Saint-Martin, A. (2021). The Impact of Artificial Intelligence on the Labour Market: What Do We Know So Far? OECD Social, Employment and Migration Working Papers, No. 256. Paris: OECD Publishing.

Nedelkoska, L., and Quintini, G. (2018). Automation, Skills Use and Training. OECD Social, Employment and Migration Working Papers, No. 202 . Paris: OECD Publishing.

Nolan, A. (2021). Making life easier, richer and healthier: Robots, their future and the roles of public policy.

Qian, M., Saunders, A., and Ahrens, M. (2020). “Mapping legaltech adoption and skill demand,” in The Legal Tech Book: The Legal Technology Handbook for Investors, Entrepreneurs and FinTech Visionaries , eds S. Chishti, S. A. Bhatti, A. Datoo, and D. Indjic (John Wiley and Sons), 211–214.

Raj, M., and Seamans, R. (2019). Primer on artificial intelligence and robotics. J. Organ. Des. 8, 11. doi: 10.1186/s41469-019-0050-0

Squicciarini, M., and Nachtigall, H. (2021). Demand for AI skills in jobs: Evidence from online job postings, OECD Science, Technology and Industry Working Papers, No. 2021/03 . Paris: OECD Publishing. doi: 10.1787/3ed32d94-en

Tolan, S., Pesole, A., Martínez-Plumed, F., Fernández-Macías, E., Hernández-Orallo, J., and Gómez, E. (2021). Measuring the occupational impact of AI: tasks, cognitive abilities and AI benchmarks. J. Artif. Intell. Res. 71, 191–236. doi: 10.1613/jair.1.12647

Webb, M. (2020). The Impact of Artificial Intelligence on the Labor Market. Working Paper. Stanford University. Available online at: https://web.stanford.edu/

Keywords: J21, J23, J24, O33, artificial intelligence

Citation: Georgieff A and Hyee R (2022) Artificial Intelligence and Employment: New Cross-Country Evidence. Front. Artif. Intell. 5:832736. doi: 10.3389/frai.2022.832736

Received: 10 December 2021; Accepted: 05 April 2022; Published: 10 May 2022.


Copyright © 2022 Georgieff and Hyee. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Alexandre Georgieff, alexandre.georgieff@oecd.org

This article is part of the Research Topic

Artificial Intelligence and the Future of Work: Humans in Control

The present and future of AI

Finale Doshi-Velez on how AI is shaping our lives and how we can shape AI


Finale Doshi-Velez, the John L. Loeb Professor of Engineering and Applied Sciences. (Photo courtesy of Eliza Grinnell/Harvard SEAS)

How has artificial intelligence changed and shaped our world over the last five years? How will AI continue to impact our lives in the coming years? Those were the questions addressed in the most recent report from the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted at Stanford University that will study the status of AI technology and its impacts on the world over the next 100 years.

The 2021 report is the second in a series that will be released every five years until 2116. Titled “Gathering Strength, Gathering Storms,” the report explores the various ways AI is increasingly touching people's lives in settings that range from movie recommendations and voice assistants to autonomous driving and automated medical diagnoses.

Barbara Grosz, the Higgins Research Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is a member of the standing committee overseeing the AI100 project, and Finale Doshi-Velez, Gordon McKay Professor of Computer Science, is part of the panel of interdisciplinary researchers who wrote this year's report.

We spoke with Doshi-Velez about the report, what it says about the role AI is currently playing in our lives, and how it will change in the future.  

Q: Let's start with a snapshot: What is the current state of AI and its potential?

Doshi-Velez: Some of the biggest changes in the last five years have been how well AIs now perform in large data regimes on specific types of tasks.  We've seen [DeepMind’s] AlphaZero become the best Go player entirely through self-play, and everyday uses of AI such as grammar checks and autocomplete, automatic personal photo organization and search, and speech recognition become commonplace for large numbers of people.  

In terms of potential, I'm most excited about AIs that might augment and assist people.  They can be used to drive insights in drug discovery, help with decision making such as identifying a menu of likely treatment options for patients, and provide basic assistance, such as lane keeping while driving or text-to-speech based on images from a phone for the visually impaired.  In many situations, people and AIs have complementary strengths. I think we're getting closer to unlocking the potential of people and AI teams.


Q: Over the course of 100 years, these reports will tell the story of AI and its evolving role in society. Even though there have only been two reports, what's the story so far?

There's actually a lot of change even in five years.  The first report is fairly rosy.  For example, it mentions how algorithmic risk assessments may mitigate the human biases of judges.  The second has a much more mixed view.  I think this comes from the fact that as AI tools have come into the mainstream — both in higher stakes and everyday settings — we are appropriately much less willing to tolerate flaws, especially discriminatory ones. There's also been questions of information and disinformation control as people get their news, social media, and entertainment via searches and rankings personalized to them. So, there's a much greater recognition that we should not be waiting for AI tools to become mainstream before making sure they are ethical.

Q: What is the responsibility of institutes of higher education in preparing students and the next generation of computer scientists for the future of AI and its impact on society?

First, I'll say that the need to understand the basics of AI and data science starts much earlier than higher education!  Children are being exposed to AIs as soon as they click on videos on YouTube or browse photo albums. They need to understand aspects of AI such as how their actions affect future recommendations.

But for computer science students in college, I think a key thing that future engineers need to realize is when to demand input and how to talk across disciplinary boundaries to get at often difficult-to-quantify notions of safety, equity, fairness, etc.  I'm really excited that Harvard has the Embedded EthiCS program to provide some of this education.  Of course, this is in addition to standard good engineering practices like building robust models, validating them, and so forth, which is all a bit harder with AI.


Q: Your work focuses on machine learning with applications to healthcare, which is also an area of focus of this report. What is the state of AI in healthcare? 

A lot of AI in healthcare has been on the business end, used for optimizing billing, scheduling surgeries, that sort of thing.  When it comes to AI for better patient care, which is what we usually think about, there are few legal, regulatory, and financial incentives to do so, and many disincentives. Still, there's been slow but steady integration of AI-based tools, often in the form of risk scoring and alert systems.

In the near future, two applications that I'm really excited about are triage in low-resource settings — having AIs do initial reads of pathology slides, for example, if there are not enough pathologists, or provide an initial check of whether a mole looks suspicious — and ways in which AIs can help identify promising treatment options for discussion with a clinician team and patient.

Q: Any predictions for the next report?

I'll be keen to see where currently nascent AI regulation initiatives have gotten to. Accountability is such a difficult question in AI; it's tricky to nurture both innovation and basic protections. Perhaps the most important innovation will be in approaches for AI accountability.

Doing more, but learning less: The risks of AI in research

Artificial intelligence (AI) is widely heralded for its potential to enhance productivity in scientific research. But with that promise come risks that could narrow scientists’ ability to better understand the world, according to a new paper co-authored by a Yale anthropologist.

Some future AI approaches, the authors argue, could constrict the questions researchers ask, the experiments they perform, and the perspectives that come to bear on scientific data and theories.

All told, these factors could leave people vulnerable to “illusions of understanding” in which they believe they comprehend the world better than they do.

The paper was published March 7 in Nature.

“There is a risk that scientists will use AI to produce more while understanding less,” said co-author Lisa Messeri, an anthropologist in Yale’s Faculty of Arts and Sciences. “We’re not arguing that scientists shouldn’t use AI tools, but we’re advocating for a conversation about how scientists will use them and suggesting that we shouldn’t automatically assume that all uses of the technology, or the ubiquitous use of it, will benefit science.”

The paper, co-authored by Princeton cognitive scientist M. J. Crockett, sets a framework for discussing the risks involved in using AI tools throughout the scientific research process, from study design through peer review.

“We hope this paper offers a vocabulary for talking about AI’s potential epistemic risks,” Messeri said.

Added Crockett: “To understand these risks, scientists can benefit from work in the humanities and qualitative social sciences.”

Messeri and Crockett classified the proposed visions of AI currently creating buzz among researchers into four archetypes spanning the scientific process:

  • In study design, they argue, “AI as Oracle” tools are imagined as being able to objectively and efficiently search, evaluate, and summarize massive scientific literatures, helping researchers to formulate questions in their project’s design stage.
  • In data collection, “AI as Surrogate” applications, it is hoped, allow scientists to generate accurate stand-in data points, including as a replacement for human study participants, when data is otherwise too difficult or expensive to obtain.
  • In data analysis, “AI as Quant” tools seek to surpass the human intellect’s ability to analyze vast and complex datasets.
  • And “AI as Arbiter” applications aim to objectively evaluate scientific studies for merit and replicability, thereby replacing humans in the peer-review process.   

The authors warn against treating AI applications from these four archetypes as trusted partners, rather than simply tools, in the production of scientific knowledge. Doing so, they say, could make scientists susceptible to illusions of understanding, which can crimp their perspectives and convince them that they know more than they do.

The efficiencies and insights that AI tools promise can weaken the production of scientific knowledge by creating “monocultures of knowing,” in which researchers prioritize the questions and methods best suited to AI over other modes of inquiry, Messeri and Crockett state. A scholarly environment of that kind leaves researchers vulnerable to what they call “illusions of exploratory breadth,” where scientists wrongly believe that they are exploring all testable hypotheses, when they are only examining the narrower range of questions that can be tested through AI.

For example, “Surrogate” AI tools that seem to accurately mimic human survey responses could make experiments that require measurements of physical behavior or face-to-face interactions increasingly unpopular because they are slower and more expensive to conduct, Crockett said.

The authors also describe the possibility that AI tools become viewed as more objective and reliable than human scientists, creating a “monoculture of knowers” in which AI systems are treated as a singular, authoritative, and objective knower in place of a diverse scientific community of scientists with varied backgrounds, training, and expertise. A monoculture, they say, invites “illusions of objectivity” where scientists falsely believe that AI tools have no perspective or represent all perspectives when, in truth, they represent the standpoints of the computer scientists who developed and trained them.

“There is a belief around science that the objective observer is the ideal creator of knowledge about the world,” Messeri said. “But this is a myth. There has never been an objective ‘knower,’ there can never be one, and continuing to pursue this myth only weakens science.”

There is substantial evidence that human diversity makes science more robust and creative, the authors add.

“Acknowledging that science is a social practice that benefits from including diverse standpoints will help us realize its full potential,” Crockett said. “Replacing diverse standpoints with AI tools will set back the clock on the progress we’ve made toward including more perspectives in scientific work.”

It is important to remember AI’s social implications, which extend far beyond the laboratories where it is being used in research, Messeri said.

“We train scientists to think about technical aspects of new technology,” she said. “We don’t train them nearly as well to consider the social aspects, which is vital to future work in this domain.”


Generative AI @ Harvard

Generative AI tools are changing the way we teach, learn, research, and work. Explore Harvard's work on the frontier of GenAI.

Resources for the Harvard community

  • Teach with GenAI
  • Learn with GenAI
  • Research with GenAI
  • Work with GenAI

A broader scope

AI @ Harvard

Generative AI is only part of the fascinating world of artificial intelligence.


The outsourced mind: AI, democracy, and the future of human control

The pace of change in the development of Artificial Intelligence is breathtaking, and we are rapidly delegating more and more tasks to it. In this talk two philosophers explore some aspects of these trends: the role of AI in democratic decision making, and its role in a range of areas where human control has so far seemed essential, such as in the military and in criminal justice.

Harvard Health Systems Innovation Lab Hackathon 2024

The Harvard Health Systems Innovation Lab is organising its 5th Health Systems Innovation Hackathon. This year’s theme is "Building High-Value Health Systems: Harnessing Digital Health and Artificial Intelligence."

Kempner Institute Open House

An open house to celebrate the opening of the Kempner Institute's new space on the 6th floor of the SEC. Open to everyone in the Harvard community; please use your Harvard email address to register.

Evaluating the Science of Geospatial AI

Hosted by the Center for Geographic Analysis, Harvard University, this conference will bring together GIScientists with expertise and interest in AI to examine the current conditions, opportunities, and connections between Artificial Intelligence and Geographic Information Science.


Computer Science > Computation and Language

Title: Dialect prejudice predicts AI decisions about people's character, employability, and criminality

Abstract: Hundreds of millions of people now interact with language models, with uses ranging from serving as a writing aid to informing hiring decisions. Yet these language models are known to perpetuate systematic racial prejudices, making their judgments biased in problematic ways about groups like African Americans. While prior research has focused on overt racism in language models, social scientists have argued that racism with a more subtle character has developed over time. It is unknown whether this covert racism manifests in language models. Here, we demonstrate that language models embody covert racism in the form of dialect prejudice: we extend research showing that Americans hold raciolinguistic stereotypes about speakers of African American English and find that language models have the same prejudice, exhibiting covert stereotypes that are more negative than any human stereotypes about African Americans ever experimentally recorded, although closest to the ones from before the civil rights movement. By contrast, the language models' overt stereotypes about African Americans are much more positive. We demonstrate that dialect prejudice has the potential for harmful consequences by asking language models to make hypothetical decisions about people, based only on how they speak. Language models are more likely to suggest that speakers of African American English be assigned less prestigious jobs, be convicted of crimes, and be sentenced to death. Finally, we show that existing methods for alleviating racial bias in language models such as human feedback training do not mitigate the dialect prejudice, but can exacerbate the discrepancy between covert and overt stereotypes, by teaching language models to superficially conceal the racism that they maintain on a deeper level. Our findings have far-reaching implications for the fair and safe employment of language technology.


21 August 2019

Will China lead the world in AI by 2030?

Sarah O’Meara


China not only has the world’s largest population and looks set to become the largest economy — it also wants to lead the world when it comes to artificial intelligence (AI).


Nature 572, 427–428 (2019)

doi: https://doi.org/10.1038/d41586-019-02360-7

Additional research by Kevin Schoenmakers.


A generative AI reset: Rewiring to turn potential into value in 2024

It’s time for a generative AI (gen AI) reset. The initial enthusiasm and flurry of activity in 2023 is giving way to second thoughts and recalibrations as companies realize that capturing gen AI’s enormous potential value is harder than expected.

With 2024 shaping up to be the year for gen AI to prove its value, companies should keep in mind the hard lessons learned with digital and AI transformations: competitive advantage comes from building organizational and technological capabilities to broadly innovate, deploy, and improve solutions at scale—in effect, rewiring the business  for distributed digital and AI innovation.

About QuantumBlack, AI by McKinsey

QuantumBlack, McKinsey’s AI arm, helps companies transform using the power of technology, technical expertise, and industry experts. With thousands of practitioners at QuantumBlack (data engineers, data scientists, product managers, designers, and software engineers) and McKinsey (industry and domain experts), we are working to solve the world’s most important AI challenges. QuantumBlack Labs is our center of technology development and client innovation, which has been driving cutting-edge advancements and developments in AI through locations across the globe.

Companies looking to score early wins with gen AI should move quickly. But those hoping that gen AI offers a shortcut past the tough—and necessary—organizational surgery are likely to meet with disappointing results. Launching pilots is (relatively) easy; getting pilots to scale and create meaningful value is hard because they require a broad set of changes to the way work actually gets done.

Let’s briefly look at what this has meant for one Pacific region telecommunications company. The company hired a chief data and AI officer with a mandate to “enable the organization to create value with data and AI.” The chief data and AI officer worked with the business to develop the strategic vision and implement the road map for the use cases. After a scan of domains (that is, customer journeys or functions) and use case opportunities across the enterprise, leadership prioritized the home-servicing/maintenance domain to pilot and then scale as part of a larger sequencing of initiatives. They targeted, in particular, the development of a gen AI tool to help dispatchers and service operators better predict the types of calls and parts needed when servicing homes.

Leadership put in place cross-functional product teams with shared objectives and incentives to build the gen AI tool. As part of an effort to upskill the entire enterprise to better work with data and gen AI tools, they also set up a data and AI academy, which the dispatchers and service operators enrolled in as part of their training. To provide the technology and data underpinnings for gen AI, the chief data and AI officer also selected a large language model (LLM) and cloud provider that could meet the needs of the domain as well as serve other parts of the enterprise. The chief data and AI officer also oversaw the implementation of a data architecture so that the clean and reliable data (including service histories and inventory databases) needed to build the gen AI tool could be delivered quickly and responsibly.

Our book Rewired: The McKinsey Guide to Outcompeting in the Age of Digital and AI (Wiley, June 2023) provides a detailed manual on the six capabilities needed to deliver the kind of broad change that harnesses digital and AI technology. In this article, we will explore how to extend each of those capabilities to implement a successful gen AI program at scale. While recognizing that these are still early days and that there is much more to learn, our experience has shown that breaking open the gen AI opportunity requires companies to rewire how they work in the following ways.

Figure out where gen AI copilots can give you a real competitive advantage

The broad excitement around gen AI and its relative ease of use has led to a burst of experimentation across organizations. Most of these initiatives, however, won’t generate a competitive advantage. One bank, for example, bought tens of thousands of GitHub Copilot licenses, but since it didn’t have a clear sense of how to work with the technology, progress was slow. Another unfocused effort we often see is when companies move to incorporate gen AI into their customer service capabilities. Customer service is a commodity capability, not part of the core business, for most companies. While gen AI might help with productivity in such cases, it won’t create a competitive advantage.

To create competitive advantage, companies should first understand the difference between being a “taker” (a user of available tools, often via APIs and subscription services), a “shaper” (an integrator of available models with proprietary data), and a “maker” (a builder of LLMs). For now, the maker approach is too expensive for most companies, so the sweet spot for businesses is implementing a taker model for productivity improvements while building shaper applications for competitive advantage.

Much of gen AI’s near-term value is closely tied to its ability to help people do their current jobs better. In this way, gen AI tools act as copilots that work side by side with an employee, creating an initial block of code that a developer can adapt, for example, or drafting a requisition order for a new part that a maintenance worker in the field can review and submit (see sidebar “Copilot examples across three generative AI archetypes”). This means companies should be focusing on where copilot technology can have the biggest impact on their priority programs.

Copilot examples across three generative AI archetypes

  • “Taker” copilots help real estate customers sift through property options and find the most promising one, write code for a developer, and summarize investor transcripts.
  • “Shaper” copilots provide recommendations to sales reps for upselling customers by connecting generative AI tools to customer relationship management systems, financial systems, and customer behavior histories; create virtual assistants to personalize treatments for patients; and recommend solutions for maintenance workers based on historical data.
  • “Maker” copilots are foundation models that lab scientists at pharmaceutical companies can use to find and test new and better drugs more quickly.

Some industrial companies, for example, have identified maintenance as a critical domain for their business. Reviewing maintenance reports and spending time with workers on the front lines can help determine where a gen AI copilot could make a big difference, such as in identifying issues with equipment failures quickly and early on. A gen AI copilot can also help identify root causes of truck breakdowns and recommend resolutions much more quickly than usual, as well as act as an ongoing source for best practices or standard operating procedures.

The challenge with copilots is figuring out how to generate revenue from increased productivity. In the case of customer service centers, for example, companies can stop recruiting new agents and use attrition to potentially achieve real financial gains. Defining the plans for how to generate revenue from the increased productivity up front, therefore, is crucial to capturing the value.

Upskill the talent you have but be clear about the gen-AI-specific skills you need

By now, most companies have a decent understanding of the technical gen AI skills they need, such as model fine-tuning, vector database administration, prompt engineering, and context engineering. In many cases, these are skills that you can train your existing workforce to develop. Those with existing AI and machine learning (ML) capabilities have a strong head start. Data engineers, for example, can learn multimodal processing and vector database management, MLOps (ML operations) engineers can extend their skills to LLMOps (LLM operations), and data scientists can develop prompt engineering, bias detection, and fine-tuning skills.

A sample of new generative AI skills needed

The following are examples of new skills needed for the successful deployment of generative AI tools:

  • Data scientist:
    • prompt engineering
    • in-context learning
    • bias detection
    • pattern identification
    • reinforcement learning from human feedback
    • hyperparameter/large language model fine-tuning; transfer learning
  • Data engineer:
    • data wrangling and data warehousing
    • data pipeline construction
    • multimodal processing
    • vector database management
The learning process can take two to three months to get to a decent level of competence because of the complexities in learning what various LLMs can and can’t do and how best to use them. The coders need to gain experience building software, testing, and validating answers, for example. It took one financial-services company three months to train its best data scientists to a high level of competence. While courses and documentation are available—many LLM providers have boot camps for developers—we have found that the most effective way to build capabilities at scale is through apprenticeship, training people to then train others, and building communities of practitioners. Rotating experts through teams to train others, scheduling regular sessions for people to share learnings, and hosting biweekly documentation review sessions are practices that have proven successful in building communities of practitioners (see sidebar “A sample of new generative AI skills needed”).

It’s important to bear in mind that successful gen AI skills are about more than coding proficiency. Our experience in developing our own gen AI platform, Lilli , showed us that the best gen AI technical talent has design skills to uncover where to focus solutions, contextual understanding to ensure the most relevant and high-quality answers are generated, collaboration skills to work well with knowledge experts (to test and validate answers and develop an appropriate curation approach), strong forensic skills to figure out causes of breakdowns (is the issue the data, the interpretation of the user’s intent, the quality of metadata on embeddings, or something else?), and anticipation skills to conceive of and plan for possible outcomes and to put the right kind of tracking into their code. A pure coder who doesn’t intrinsically have these skills may not be as useful a team member.

While current upskilling is largely based on a “learn on the job” approach, we see a rapid market emerging for people who have learned these skills over the past year. That skill growth is moving quickly. GitHub reported that developers were working on gen AI projects “in big numbers,” and that 65,000 public gen AI projects were created on its platform in 2023—a jump of almost 250 percent over the previous year. If your company is just starting its gen AI journey, you could consider hiring two or three senior engineers who have built a gen AI shaper product for their companies. This could greatly accelerate your efforts.

Form a centralized team to establish standards that enable responsible scaling

To ensure that all parts of the business can scale gen AI capabilities, centralizing competencies is a natural first move. The critical focus for this central team will be to develop and put in place protocols and standards to support scale, ensuring that teams can access models while also minimizing risk and containing costs. The team’s work could include, for example, procuring models and prescribing ways to access them, developing standards for data readiness, setting up approved prompt libraries, and allocating resources.

While developing Lilli, our team had its mind on scale when it created an open plug-in architecture and set standards for how APIs should function and be built. They developed standardized tooling and infrastructure where teams could securely experiment and access a GPT LLM, a gateway with preapproved APIs that teams could access, and a self-serve developer portal. Our goal is that this approach, over time, can help shift “Lilli as a product” (that a handful of teams use to build specific solutions) to “Lilli as a platform” (that teams across the enterprise can access to build other products).
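
To make the gateway idea concrete, here is a minimal sketch (in Python) of a central gateway that exposes only preapproved models and logs usage. The ModelGateway and ApprovedModel names, the guardrail fields, and the stub model are illustrative assumptions, not McKinsey's actual Lilli implementation.

```python
# Minimal sketch of a "gateway with preapproved APIs" (hypothetical names).
# Teams request a model through one central gateway that only exposes
# approved models and records usage for cost and risk tracking.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class ApprovedModel:
    name: str
    call: Callable[[str], str]      # provider client call, injected by the platform team
    max_tokens: int = 1024          # example guardrail set centrally


@dataclass
class ModelGateway:
    registry: Dict[str, ApprovedModel] = field(default_factory=dict)
    audit_log: List[dict] = field(default_factory=list)

    def register(self, model: ApprovedModel) -> None:
        """Platform team adds a model after security/legal approval."""
        self.registry[model.name] = model

    def complete(self, team: str, model_name: str, prompt: str) -> str:
        """Teams call approved models only through this single entry point."""
        if model_name not in self.registry:
            raise PermissionError(f"{model_name} is not an approved model")
        model = self.registry[model_name]
        self.audit_log.append({"team": team, "model": model_name, "chars": len(prompt)})
        return model.call(prompt)


# Usage: the platform team registers a stub model; a product team calls it.
gateway = ModelGateway()
gateway.register(ApprovedModel(name="internal-gpt", call=lambda p: f"[stub answer to: {p}]"))
print(gateway.complete(team="maintenance-copilot", model_name="internal-gpt", prompt="Summarize ticket 1234"))
```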

For teams developing gen AI solutions, squad composition will be similar to AI teams but with data engineers and data scientists with gen AI experience and more contributors from risk management, compliance, and legal functions. The general idea of staffing squads with resources that are federated from the different expertise areas will not change, but the skill composition of a gen-AI-intensive squad will.

Set up the technology architecture to scale

Building a gen AI model is often relatively straightforward, but making it fully operational at scale is a different matter entirely. We’ve seen engineers build a basic chatbot in a week, but releasing a stable, accurate, and compliant version that scales can take four months. That’s why, our experience shows, the actual model costs may be less than 10 to 15 percent of the total costs of the solution.

Building for scale doesn’t mean building a new technology architecture. But it does mean focusing on a few core decisions that simplify and speed up processes without breaking the bank. Three such decisions stand out:

  • Focus on reusing your technology. Reusing code can increase the development speed of gen AI use cases by 30 to 50 percent. One good approach is simply creating a source for approved tools, code, and components. A financial-services company, for example, created a library of production-grade tools, which had been approved by both the security and legal teams, and made them available in a library for teams to use. More important is taking the time to identify and build those capabilities that are common across the most priority use cases. The same financial-services company, for example, identified three components that could be reused for more than 100 identified use cases. By building those first, they were able to generate a significant portion of the code base for all the identified use cases—essentially giving every application a big head start.
  • Focus the architecture on enabling efficient connections between gen AI models and internal systems. For gen AI models to work effectively in the shaper archetype, they need access to a business’s data and applications. Advances in integration and orchestration frameworks have significantly reduced the effort required to make those connections. But laying out what those integrations are and how to enable them is critical to ensure these models work efficiently and to avoid the complexity that creates technical debt (the “tax” a company pays in terms of time and resources needed to redress existing technology issues). Chief information officers and chief technology officers can define reference architectures and integration standards for their organizations. Key elements should include a model hub, which contains trained and approved models that can be provisioned on demand; standard APIs that act as bridges connecting gen AI models to applications or data; and context management and caching, which speed up processing by providing models with relevant information from enterprise data sources (see the sketch after this list).
  • Build up your testing and quality assurance capabilities. Our own experience building Lilli taught us to prioritize testing over development. Our team invested in not only developing testing protocols for each stage of development but also aligning the entire team so that, for example, it was clear who specifically needed to sign off on each stage of the process. This slowed down initial development but sped up the overall delivery pace and quality by cutting back on errors and the time needed to fix mistakes.
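
As referenced in the second decision above, here is a minimal sketch of a standard "bridge" that assembles enterprise context for a model and caches repeated lookups. The fetch_service_history and call_llm functions are hypothetical placeholders rather than any specific vendor API.

```python
# Minimal sketch of "standard API bridge plus context caching".
# All names here are illustrative assumptions.

from functools import lru_cache


@lru_cache(maxsize=1024)
def fetch_service_history(equipment_id: str) -> str:
    """Pretend enterprise-data call; cached so repeated questions about the
    same equipment don't hit the source system every time."""
    # In a real system this would query a service-history database or API.
    return f"Service history for {equipment_id}: 3 breakdowns, last repair 2024-01-12."


def call_llm(prompt: str) -> str:
    """Placeholder for whatever approved model the gateway exposes."""
    return f"[model answer based on prompt of {len(prompt)} characters]"


def answer_with_context(question: str, equipment_id: str) -> str:
    """The 'bridge': assemble enterprise context, then call the model."""
    context = fetch_service_history(equipment_id)
    prompt = f"Context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)


print(answer_with_context("What parts usually fail on this unit?", "TRUCK-042"))
```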

Ensure data quality and focus on unstructured data to fuel your models

The ability of a business to generate and scale value from gen AI models will depend on how well it takes advantage of its own data. As with technology, targeted upgrades to existing data architecture  are needed to maximize the future strategic benefits of gen AI:

  • Be targeted in ramping up your data quality and data augmentation efforts. While data quality has always been an important issue, the scale and scope of data that gen AI models can use—especially unstructured data—has made this issue much more consequential. For this reason, it’s critical to get the data foundations right, from clarifying decision rights to defining clear data processes to establishing taxonomies so models can access the data they need. The companies that do this well tie their data quality and augmentation efforts to the specific AI/gen AI application and use case—you don’t need this data foundation to extend to every corner of the enterprise. This could mean, for example, developing a new data repository for all equipment specifications and reported issues to better support maintenance copilot applications.
  • Understand what value is locked into your unstructured data. Most organizations have traditionally focused their data efforts on structured data (values that can be organized in tables, such as prices and features). But the real value from LLMs comes from their ability to work with unstructured data (for example, PowerPoint slides, videos, and text). Companies can map out which unstructured data sources are most valuable and establish metadata tagging standards so models can process the data and teams can find what they need; tagging is particularly important to help companies remove data from models as well, if necessary (a minimal tagging sketch follows this list). Be creative in thinking about data opportunities. Some companies, for example, are interviewing senior employees as they retire and feeding that captured institutional knowledge into an LLM to help improve their copilot performance.
  • Optimize to lower costs at scale. There is often as much as a tenfold difference between what companies pay for data and what they could be paying if they optimized their data infrastructure and underlying costs. This issue often stems from companies scaling their proofs of concept without optimizing their data approach. Two costs generally stand out. One is storage costs arising from companies uploading terabytes of data into the cloud and wanting that data available 24/7. In practice, companies rarely need more than 10 percent of their data to have that level of availability, and accessing the rest over a 24- or 48-hour period is a much cheaper option. The other costs relate to computation with models that require on-call access to thousands of processors to run. This is especially the case when companies are building their own models (the maker archetype) but also when they are using pretrained models and running them with their own data and use cases (the shaper archetype). Companies could take a close look at how they can optimize computation costs on cloud platforms—for instance, putting some models in a queue to run when processors aren’t being used (such as when Americans go to bed and consumption of computing services like Netflix decreases) is a much cheaper option.
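
The tagging sketch referenced in the second bullet above is a minimal illustration of how metadata tags let teams find the unstructured data a model may use and pull it back out when needed. The Document class and tag names are illustrative assumptions, not a standard.

```python
# Minimal sketch of metadata tagging for unstructured data (hypothetical names).

from dataclasses import dataclass, field


@dataclass
class Document:
    doc_id: str
    text: str
    tags: dict = field(default_factory=dict)   # e.g. source, owner, sensitivity


corpus = [
    Document("spec-001", "Pump X torque specification ...", {"source": "equipment_specs", "sensitivity": "internal"}),
    Document("exit-017", "Retiree interview transcript ...", {"source": "exit_interviews", "sensitivity": "confidential"}),
]


def find(corpus, **required_tags):
    """Teams locate the unstructured data a model may use."""
    return [d for d in corpus if all(d.tags.get(k) == v for k, v in required_tags.items())]


def remove(corpus, **required_tags):
    """Tagging also makes it possible to pull data back out of a model's
    retrieval or training set when necessary."""
    return [d for d in corpus if not all(d.tags.get(k) == v for k, v in required_tags.items())]


print([d.doc_id for d in find(corpus, source="equipment_specs")])
corpus = remove(corpus, sensitivity="confidential")
print([d.doc_id for d in corpus])
```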

Build trust and reusability to drive adoption and scale

Because many people have concerns about gen AI, the bar on explaining how these tools work is much higher than for most solutions. People who use the tools want to know how they work, not just what they do. So it’s important to invest extra time and money to build trust by ensuring model accuracy and making it easy to check answers.

One insurance company, for example, created a gen AI tool to help manage claims. As part of the tool, it listed all the guardrails that had been put in place, and for each answer provided a link to the sentence or page of the relevant policy documents. The company also used an LLM to generate many variations of the same question to ensure answer consistency. These steps, among others, were critical to helping end users build trust in the tool.
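
A minimal sketch of the consistency check described above, assuming hypothetical ask_model and paraphrase helpers in place of a real LLM client: several paraphrases of the same question are asked, and an answer is surfaced only when most variants agree.

```python
# Minimal sketch of answer-consistency checking via question variants.
# ask_model() and paraphrase() are stand-ins, not a real claims system.

from collections import Counter


def paraphrase(question: str, n: int = 3) -> list[str]:
    """Stand-in for using an LLM to generate question variants."""
    return [f"{question} (variant {i})" for i in range(n)]


def ask_model(question: str) -> str:
    """Stand-in for the claims-handling model; returns a short answer."""
    return "Covered up to $5,000 per incident."


def consistent_answer(question: str, threshold: float = 0.8):
    variants = [question] + paraphrase(question)
    answers = [ask_model(v) for v in variants]
    top_answer, count = Counter(answers).most_common(1)[0]
    agreement = count / len(answers)
    # Only surface the answer if most variants agree; otherwise escalate to a human.
    return top_answer if agreement >= threshold else None


print(consistent_answer("Does the policy cover water damage from burst pipes?"))
```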

Part of the training for maintenance teams using a gen AI tool should be to help them understand the limitations of models and how best to get the right answers. That includes teaching workers strategies to get to the best answer as fast as possible by starting with broad questions then narrowing them down. This provides the model with more context, and it also helps remove any bias of the people who might think they know the answer already. Having model interfaces that look and feel the same as existing tools also helps users feel less pressured to learn something new each time a new application is introduced.

Getting to scale means that businesses will need to stop building one-off solutions that are hard to use for other similar use cases. One global energy and materials company, for example, has established ease of reuse as a key requirement for all gen AI models, and has found in early iterations that 50 to 60 percent of its components can be reused. This means setting standards for developing gen AI assets (for example, prompts and context) that can be easily reused for other cases.
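
One lightweight way to treat prompts as reusable assets, in the spirit of the reuse standard described above, is to store them as shared templates that multiple use cases can fill in. The template text and field names below are illustrative assumptions, not a prescribed format.

```python
# Minimal sketch of a reusable prompt asset shared across use cases.

from string import Template

# A prompt stored as a shared, versioned asset rather than hard-coded in one app.
SUMMARIZE_DOC_V1 = Template(
    "You are assisting a $role.\n"
    "Summarize the following document for a $audience audience:\n\n$document"
)


def render(template: Template, **fields) -> str:
    return template.substitute(**fields)


# Two different use cases reuse the same asset with different fields.
print(render(SUMMARIZE_DOC_V1, role="claims handler", audience="customer", document="Policy text ..."))
print(render(SUMMARIZE_DOC_V1, role="maintenance planner", audience="technician", document="Repair manual ..."))
```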

While many of the risk issues relating to gen AI are evolutions of discussions that were already brewing—for instance, data privacy, security, bias risk, job displacement, and intellectual property protection—gen AI has greatly expanded that risk landscape. Just 21 percent of companies reporting AI adoption say they have established policies governing employees’ use of gen AI technologies.

Similarly, a set of tests for AI/gen AI solutions should be established to demonstrate that data privacy, debiasing, and intellectual property protection are respected. Some organizations, in fact, are proposing to release models accompanied with documentation that details their performance characteristics. Documenting your decisions and rationales can be particularly helpful in conversations with regulators.

In some ways, this article is premature—so much is changing that we’ll likely have a profoundly different understanding of gen AI and its capabilities in a year’s time. But the core truths of finding value and driving change will still apply. How well companies have learned those lessons may largely determine how successful they’ll be in capturing that value.

Eric Lamarre

The authors wish to thank Michael Chui, Juan Couto, Ben Ellencweig, Josh Gartner, Bryce Hall, Holger Harreis, Phil Hudelson, Suzana Iacob, Sid Kamath, Neerav Kingsland, Kitti Lakner, Robert Levin, Matej Macak, Lapo Mori, Alex Peluffo, Aldo Rosales, Erik Roth, Abdul Wahab Shaikh, and Stephen Xu for their contributions to this article.

This article was edited by Barr Seitz, an editorial director in the New York office.

"We decide when and how to use AI tools in our work." —

Producing more but understanding less: The risks of AI for scientific research

A psychologist and an anthropologist ponder the epistemic risks AI could pose for science.

Jennifer Ouellette - Mar 6, 2024 6:08 pm UTC


Last month, we witnessed the viral sensation of several egregiously bad AI-generated figures published in a peer-reviewed article in Frontiers, a reputable scientific journal. Scientists on social media expressed equal parts shock and ridicule at the images, one of which featured a rat with grotesquely large and bizarre genitals.

As Ars Senior Health Reporter Beth Mole reported , looking closer only revealed more flaws, including the labels "dissilced," "Stemm cells," "iollotte sserotgomar," and "dck." Figure 2 was less graphic but equally mangled, rife with nonsense text and baffling images. Ditto for Figure 3, a collage of small circular images densely annotated with gibberish.

The paper has since been retracted, but that eye-popping rat penis image will remain indelibly imprinted on our collective consciousness. The incident reinforces a growing concern that the increasing use of AI will make published scientific research less trustworthy, even as it increases productivity. While the proliferation of errors is a valid concern, especially in the early days of AI tools like ChatGPT, two researchers argue in a new perspective published in the journal Nature that AI also poses potential long-term epistemic risks to the practice of science.

Molly Crockett is a psychologist at Princeton University who routinely collaborates with researchers from other disciplines in her research into how people learn and make decisions in social situations. Her co-author, Lisa Messeri , is an anthropologist at Yale University whose research focuses on science and technology studies (STS), analyzing the norms and consequences of scientific and technological communities as they forge new fields of knowledge and invention—like AI.


The original impetus for their new paper was a 2019 study published in the Proceedings of the National Academy of Sciences claiming that researchers could use machine learning to predict the replicability of studies based only on an analysis of their texts. Crockett and Messeri co-wrote a letter to the editor disputing that claim, but shortly thereafter, several more studies appeared, claiming that large language models could replace humans in psychological research. The pair realized this was a much bigger issue and decided to work together on an in-depth analysis of how scientists propose to use AI tools throughout the academic pipeline.

They came up with four categories of visions for AI in science. The first is AI as Oracle, in which such tools can help researchers search, evaluate, and summarize the vast scientific literature, as well as generate novel hypotheses. The second is AI as Surrogate, in which AI tools generate surrogate data points, perhaps even replacing human subjects. The third is AI as Quant. In the age of big data, AI tools can overcome the limits of human intellect by analyzing vast and complex datasets. Finally, there is AI as Arbiter, relying on such tools to more efficiently evaluate the scientific merit and replicability of submitted papers, as well as assess funding proposals.

Each category brings undeniable benefits in the form of increased productivity—but also certain risks. Crockett and Messeri particularly caution against three distinct "illusions of understanding" that may arise from over-reliance on AI tools, which can exploit our cognitive limitations. For instance, a scientist may use an AI tool to model a given phenomenon and believe they, therefore, understand that phenomenon more than they actually do (an illusion of explanatory depth). Or a team might think they are exploring all testable hypotheses when they are only really exploring those hypotheses that are testable using AI (an illusion of exploratory breadth). Finally, there is the illusion of objectivity: the belief that AI tools are truly objective and do not have biases or a point of view, unlike humans.

This error-ridden AI-generated image, published in the journal Frontiers, is supposed to show spermatogonial stem cells, isolated, purified, and cultured from rat testes.

The paper's tagline is "producing more while understanding less," and that is the central message the pair hopes to convey. "The goal of scientific knowledge is to understand the world and all of its complexity, diversity, and expansiveness," Messeri told Ars. "Our concern is that even though we might be writing more and more papers, because they are constrained by what AI can and can't do, in the end, we're really only asking questions and producing a lot of papers that are within AI's capabilities."

Neither Crockett nor Messeri are opposed to any use of AI tools by scientists. "It's genuinely useful in my research, and I expect to continue using it in my research," Crockett told Ars. Rather, they take a more agnostic approach. "It's not for me and Molly to say, 'This is what AI ought or ought not to be,'" Messeri said. "Instead, we're making observations of how AI is currently being positioned and then considering the realm of conversation we ought to have about the associated risks."

Ars spoke at length with Crockett and Messeri to learn more.
