
177 Great Artificial Intelligence Research Paper Topics to Use


In this post, we will look at the definition of artificial intelligence, its applications, and tips on how to come up with AI topics. Finally, we shall look at top artificial intelligence research topics for your inspiration.

What Is Artificial Intelligence?

It refers to intelligence demonstrated by machines, unlike the intelligence displayed by animals and humans, which involves emotionality and consciousness. The field of AI has grown rapidly in recent years, with many scientists investing their time and effort in research.

How To Develop Topics in Artificial Intelligence

Developing AI topics is a critical thinking process that also incorporates a lot of creativity. Due to the ever-dynamic nature of the discipline, most students find it hard to develop impressive topics in artificial intelligence. However, here are some general rules to get you started:

  • Read widely on the subject of artificial intelligence
  • Have an interest in news and other current updates about AI
  • Consult your supervisor

Once you have taken these steps, nothing is holding you back from developing top-rated topics in artificial intelligence. Now let’s look at what the pros have in store for you.

Artificial Intelligence Research Paper Topics

  • The role of artificial intelligence in evolving the workforce
  • Are there tasks that require uniquely human abilities that machines lack?
  • The transformative economic impact of artificial intelligence
  • Managing a global autonomous arms race in the face of AI
  • The legal and ethical boundaries of artificial intelligence
  • Is the destructive role of AI more than its constructive role in society?
  • How to build AI algorithms to achieve the far-reaching goals of humans
  • How privacy gets compromised with the everyday collection of data
  • How businesses and governments can suffer at the hands of AI
  • Is it possible for AI to devolve into social oppression?
  • Augmentation of the work humans do through artificial intelligence
  • The role of AI in monitoring and diagnosing capabilities

Artificial Intelligence Topics For Presentation

  • How AI helps to uncover criminal activity and solve serial crimes
  • The place of facial recognition technologies in security systems
  • How to use AI without crossing an individual’s privacy
  • What are the disadvantages of using a computer-controlled robot in performing tasks?
  • How to develop systems endowed with intellectual processes
  • The challenge of programming computers to perform complex tasks
  • Discuss some of the mathematical theorems for artificial intelligence systems
  • The role of computer processing speed and memory capacity in AI
  • Can computer machines achieve the performance levels of human experts?
  • Discuss the application of artificial intelligence in handwriting recognition
  • A case study of the key people involved in developing AI systems
  • Computational aesthetics when developing artificial intelligence systems

Topics in AI For Tip-Top Grades

  • Describe the necessities for artificial programming language
  • The impact of American companies possessing about 2/3 of investments in AI
  • The relationship between human neural networks and AI
  • The role of psychologists in developing human intelligence
  • How to apply past experiences to analogous new situations
  • How machine learning helps in achieving artificial intelligence
  • The role of discernment and human intelligence in developing AI systems
  • Discuss the various methods and goals in artificial intelligence
  • What is the relationship between applied AI, strong AI, and cognitive simulation
  • Discuss the implications of the first AI programs
  • Logical reasoning and problem-solving in artificial intelligence
  • Challenges involved in controlled learning environments

AI Research Topics For High School Students

  • How quantum computing is affecting artificial intelligence
  • The role of the Internet of Things in advancing artificial intelligence
  • Using Artificial intelligence to enable machines to perform programming tasks
  • Why do machines learn automatically without human hand-holding?
  • Implementing decisions based on data processing in the human mind
  • Describe the web-like structure of artificial neural networks
  • Machine learning algorithms for optimal functions through trial and error
  • A case study of Google’s AlphaGo computer program
  • How robots solve problems in an intelligent manner
  • Evaluate the significant role of M.I.T.’s artificial intelligence lab
  • A case study of Robonaut developed by NASA to work with astronauts in space
  • Discuss natural language processing where machines analyze language and speech

Argument Debate Topics on AI

  • How chatbots use ML and NLP to interact with users
  • How do computers use and understand images?
  • The impact of genetic engineering on the life of man
  • Why are microchips not recommended in human body systems?
  • Can humans work alongside robots in a workplace system?
  • Have computers contributed to the intrusion of privacy for many?
  • Why artificial intelligence systems should not be made accessible to children
  • How artificial intelligence systems are contributing to healthcare problems
  • Does artificial intelligence alleviate human problems or add to them?
  • Why governments should put more stringent measures for AI inventions
  • How artificial intelligence is affecting the character traits of children
  • Is virtual reality taking people out of the real-world situation?

Quality AI Topics For Research Paper

  • The use of recommender systems in choosing movies and series
  • Collaborative filtering in designing systems
  • How do developers arrive at a content-based recommendation
  • Creation of systems that can emulate human tasks
  • How IoT devices generate a lot of data
  • Artificial intelligence algorithms convert data to useful, actionable results.
  • How AI is progressing rapidly with the 5G technology
  • How to develop robots with human-like characteristics
  • Developing Google search algorithms
  • The role of artificial intelligence in developing autonomous weapons
  • Discuss the long-term goal of artificial intelligence
  • Will artificial intelligence outperform humans at every cognitive task?

Computer Science AI Topics

  • Computational intelligence magazine in computer science
  • Swarm and evolutionary computation procedures for college students
  • Discuss computational transactions on intelligent transportation systems
  • The structure and function of knowledge-based systems
  • A review of the artificial intelligence systems in developing systems
  • Conduct a review of the expert systems with applications
  • Critique the various foundations and trends in information retrieval
  • The role of specialized systems in transactions on knowledge and data engineering
  • An analysis of a journal on ambient intelligence and humanized computing
  • Discuss the various computer transactions on cognitive communications and networking
  • What is the role of artificial intelligence in medicine?
  • Computer engineering applications of artificial intelligence

AI Ethics Topics

  • How the automation of jobs is going to make many jobless
  • Discuss inequality challenges in distributing wealth created by machines
  • The impact of machines on human behavior and interactions
  • How artificial intelligence is going to affect the way we act and interact
  • The process of eliminating bias in Artificial intelligence: A case of racist robots
  • Measures that can keep artificial intelligence safe from adversaries
  • Protecting artificial intelligence discoveries from unintended consequences
  • How humans can stay in control despite complex, intelligent systems
  • Robot rights: A case of how humans mistreat and misuse robots
  • The balance between mitigating suffering and interfering with set ethics
  • The role of artificial intelligence in negative outcomes: Is it worth it?
  • How to ethically use artificial intelligence for bettering lives

Advanced AI Topics

  • Discuss how long it will take until machines greatly supersede human intelligence
  • Is it possible to achieve superhuman artificial intelligence in this century?
  • The impact of techno-skeptic predictions on the performance of AI
  • The role of quarks and electrons in the human brain
  • The impact of artificial intelligence safety research institutes
  • Will robots be disastrous for humanity in the near future?
  • Robots: A concern about consciousness and evil
  • Discuss whether a self-driving car has a subjective experience or not
  • Should humans worry about machines turning evil in the end?
  • Discuss how machines exhibit goal-oriented behavior in their functions
  • Should humanity continue to develop lethal autonomous weapons?
  • What is the implication of machine-produced wealth?

AI Essay Topics Technology

  • Discuss the implications of the fourth technological revolution for cloud computing
  • Big database technologies used in sensors
  • The combination of technologies typical of the technological revolution
  • Key determinants of the civilization process of industry 4.0
  • Discuss some of the concepts of technological management
  • Evaluate the creation of internet-based companies in the U.S.
  • The most dominant scientific research in the field of artificial intelligence
  • Discuss the application of artificial intelligence in the literature
  • How enterprises use artificial intelligence in blockchain business operations
  • Discuss the various immersive experiences as a result of digital AI
  • Elaborate on various enterprise architects and technology innovations
  • Mega-trends that are future impacts on business operations

Interesting Topics in AI

  • The role of the industrial revolution of the 18th century in AI
  • The electricity era of the late 19th century and its contribution to the development of robots
  • How the widespread use of the internet contributes to the AI revolution
  • The short-term economic crisis as a result of artificial intelligence business technologies
  • Designing and creating artificial intelligence production processes
  • Analyzing large collections of information for technological solutions
  • How biotechnology is transforming the field of agriculture
  • Innovative business projects that work using artificial intelligence systems
  • Process and marketing innovations in the 21st century
  • Medical intelligence in the era of smart cities
  • Advanced data processing technologies in developed nations
  • Discuss the development of stelliform technologies

Good Research Topics For AI

  • Development of new technological solutions in IT
  • Innovative organizational solutions that develop machine learning
  • How to develop branches of a knowledge-based economy
  • Discuss the implications of advanced computerized neural network systems
  • How to solve complex problems with the help of algorithms
  • Why artificial intelligence systems are coming to predominate over their creators
  • How to determine artificial emotional intelligence
  • Discuss the negative and positive aspects of technological advancement
  • How internet technology companies like Facebook are managing large social media portals
  • The application of analytical business intelligence systems
  • How artificial intelligence improves business management systems
  • Strategic and ongoing management of artificial intelligence systems

Graduate AI NLP Research Topics

  • Morphological segmentation in artificial intelligence
  • Sentiment analysis and breaking machine language
  • Discuss input utterance for language interpretation
  • Festival speech synthesis system for natural language processing
  • Discuss the role of the Google language translator
  • Evaluate the various analysis methodologies in NLP
  • Native language identification procedure for deep analytics
  • Modular audio recognition framework
  • Deep linguistic processing techniques
  • Fact recognition and extraction techniques
  • Dialogue and text-based applications
  • Speaker verification and identification systems

Controversial Topics in AI

  • Ethical implication of AI in movies: A case study of The Terminator
  • Will machines take over the world and enslave humanity?
  • Does human intelligence paint a dark future for humanity?
  • Ethical and practical issues of artificial intelligence
  • The impact of mimicking human cognitive functions
  • Why the integration of AI technologies into society should be limited
  • Should robots get paid hourly?
  • What if AI is a mistake?
  • Why did Microsoft shut down its chatbot so quickly?
  • Should there be AI systems for killing?
  • Should machines be created to do what they want?
  • Is the computerized gun ethical?

Hot AI Topics

  • Why predator drones should not exist
  • Do U.S. laws restrict meaningful innovation in AI?
  • Why did the campaign to stop killer robots fail in the end?
  • Fully autonomous weapons and human safety
  • How to deal with rogue artificial intelligence systems in the United States
  • Is it okay to have a monopoly and control over artificial intelligence innovations?
  • Should robots have human rights or citizenship?
  • Biases when detecting people’s gender using Artificial intelligence
  • Considerations for the adoption of a particular artificial intelligence technology

Are you a university student seeking research paper writing services or dissertation proposal help? We offer custom help for college students in any field of artificial intelligence.


12 Best Artificial Intelligence Topics for Research in 2024

Explore the "12 Best Artificial Intelligence Topics for Research in 2024." Dive into the top AI research areas, including Natural Language Processing, Computer Vision, Reinforcement Learning, Explainable AI (XAI), AI in Healthcare, Autonomous Vehicles, and AI Ethics and Bias. Stay ahead of the curve and make informed choices for your AI research endeavours.


Table of Contents

1) Top Artificial Intelligence Topics for Research

     a) Natural Language Processing

     b) Computer Vision

     c) Reinforcement Learning

     d) Explainable AI (XAI)

     e) Generative Adversarial Networks (GANs)

     f) Robotics and AI

     g) AI in healthcare

     h) AI for social good

     i) Autonomous vehicles

     j) AI ethics and bias

     k) Future of AI

     l) AI and education

2) Conclusion

Top Artificial Intelligence Topics for Research   

This section of the blog will expand on some of the best Artificial Intelligence Topics for research.


Natural Language Processing   

Natural Language Processing (NLP) is centred around empowering machines to comprehend, interpret, and even generate human language. Within this domain, three distinctive research avenues beckon: 

1) Sentiment analysis: This entails the study of methodologies to decipher and discern emotions encapsulated within textual content. Understanding sentiments is pivotal in applications ranging from brand perception analysis to social media insights. 

2) Language generation: Generating coherent and contextually apt text is an ongoing pursuit. Investigating mechanisms that allow machines to produce human-like narratives and responses holds immense potential across sectors. 

3) Question answering systems: Constructing systems that can grasp the nuances of natural language questions and provide accurate, coherent responses is a cornerstone of NLP research. This facet has implications for knowledge dissemination, customer support, and more. 
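To make the sentiment analysis idea concrete, here is a minimal lexicon-based scorer in Python. The tiny word lists and the sign-of-score rule are illustrative placeholders only; research-grade systems use large lexicons or trained models.

```python
POSITIVE = {"good", "great", "excellent", "love", "happy"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "sad"}

def sentiment(text: str) -> str:
    # Count matches against each lexicon; the sign of the score gives the label.
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this great product"))    # positive
print(sentiment("What an awful, terrible day"))  # negative
```

Even this crude baseline makes the research questions visible: negation ("not good"), sarcasm, and domain-specific vocabulary all break it, which is precisely what sentiment analysis research tries to address.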

Computer Vision   

Computer Vision, a discipline that bestows machines with the ability to interpret visual data, is replete with intriguing avenues for research: 

1) Object detection and tracking: The development of algorithms capable of identifying and tracking objects within images and videos finds relevance in surveillance, automotive safety, and beyond. 

2) Image captioning: Bridging the gap between visual and textual comprehension, this research area focuses on generating descriptive captions for images, catering to visually impaired individuals and enhancing multimedia indexing. 

3) Facial recognition: Advancements in facial recognition technology hold implications for security, personalisation, and accessibility, necessitating ongoing research into accuracy and ethical considerations. 

Reinforcement Learning   

Reinforcement Learning revolves around training agents to make sequential decisions in order to maximise rewards. Within this realm, three prominent Artificial Intelligence Topics emerge: 

1) Autonomous agents: Crafting AI agents that exhibit decision-making prowess in dynamic environments paves the way for applications like autonomous robotics and adaptive systems. 

2) Deep Q-Networks (DQN): Deep Q-Networks, a class of reinforcement learning algorithms, remain under active research for refining value-based decision-making in complex scenarios. 

3) Policy gradient methods: These methods, aiming to optimise policies directly, play a crucial role in fine-tuning decision-making processes across domains like gaming, finance, and robotics.  
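The value-based learning that DQNs scale up can be sketched in its tabular form. The corridor environment, reward scheme, and hyperparameters below are invented for illustration; a DQN replaces the Q-table with a neural network over large state spaces.

```python
import random

random.seed(0)

N_STATES, GOAL = 5, 4          # corridor of 5 cells, reward at the right end
ACTIONS = [-1, +1]             # move left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

# Q-table: expected discounted reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == GOAL else 0.0)

for _ in range(200):           # training episodes with epsilon-greedy exploration
    s = 0
    while s != GOAL:
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)  # the learned greedy policy: always move right
```

The agent discovers the optimal policy purely from reward signals, which is the core idea all three research avenues above build on.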


Explainable AI (XAI)   

The pursuit of Explainable AI seeks to demystify the decision-making processes of AI systems. This area comprises Artificial Intelligence Topics such as: 

1) Model interpretability: Unravelling the inner workings of complex models to elucidate the factors influencing their outputs, thus fostering transparency and accountability. 

2) Visualising neural networks: Transforming abstract neural network structures into visual representations aids in comprehending their functionality and behaviour. 

3) Rule-based systems: Augmenting AI decision-making with interpretable, rule-based systems holds promise in domains requiring logical explanations for actions taken. 
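Rule-based decision logic of the kind described in point 3 can be sketched as a list of (condition, label, reason) triples, so that every prediction carries a human-readable explanation. The loan-screening rules and thresholds here are entirely made up for illustration.

```python
# Each rule: (condition, label, explanation). Rules are tried in order.
RULES = [
    (lambda x: x["income"] < 20_000, "reject", "income below 20,000"),
    (lambda x: x["debt_ratio"] > 0.6, "reject", "debt ratio above 0.6"),
    (lambda x: x["years_employed"] >= 2, "approve", "stable employment (2+ years)"),
]

def classify(applicant):
    for cond, label, reason in RULES:
        if cond(applicant):
            return label, f"rule fired: {reason}"
    return "review", "no rule fired; escalate to a human"

print(classify({"income": 45_000, "debt_ratio": 0.3, "years_employed": 5}))
```

Unlike a neural network, every output here can be traced to a single named rule, which is exactly the transparency property XAI research tries to recover for more powerful models.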

Generative Adversarial Networks (GANs)   

The captivating world of Generative Adversarial Networks (GANs) unfolds through the interplay of generator and discriminator networks, birthing remarkable research avenues: 

1) Image generation: Crafting realistic images from random noise showcases the creative potential of GANs, with applications spanning art, design, and data augmentation. 

2) Style transfer: Enabling the transfer of artistic styles between images, merging creativity and technology to yield visually captivating results. 

3) Anomaly detection: GANs find utility in identifying anomalies within datasets, bolstering fraud detection, quality control, and anomaly-sensitive industries. 

Robotics and AI   

The synergy between Robotics and AI is a fertile ground for exploration, with Artificial Intelligence Topics such as: 

1) Human-robot collaboration: Research in this arena strives to establish harmonious collaboration between humans and robots, augmenting industry productivity and efficiency. 

2) Robot learning: By enabling robots to learn and adapt from their experiences, researchers foster robots' autonomy and their ability to handle diverse tasks. 

3) Ethical considerations: Delving into the ethical implications surrounding AI-powered robots helps establish responsible guidelines for their deployment. 

AI in healthcare   

AI presents a transformative potential within healthcare, spurring research into: 

1) Medical diagnosis: AI aids in accurately diagnosing medical conditions, revolutionising early detection and patient care. 

2) Drug discovery: Leveraging AI for drug discovery expedites the identification of potential candidates, accelerating the development of new treatments. 

3) Personalised treatment: Tailoring medical interventions to individual patient profiles enhances treatment outcomes and patient well-being. 

AI for social good   

Harnessing the prowess of AI for Social Good entails addressing pressing global challenges: 

1) Environmental monitoring: AI-powered solutions facilitate real-time monitoring of ecological changes, supporting conservation and sustainable practices. 

2) Disaster response: Research in this area bolsters disaster response efforts by employing AI to analyse data and optimise resource allocation. 

3) Poverty alleviation: Researchers contribute to humanitarian efforts and socioeconomic equality by devising AI solutions to tackle poverty. 


Autonomous vehicles   

Autonomous Vehicles represent a realm brimming with potential and complexities, necessitating research in Artificial Intelligence Topics such as: 

1) Sensor fusion: Integrating data from diverse sensors enhances perception accuracy, which is essential for safe autonomous navigation. 

2) Path planning: Developing advanced algorithms for path planning ensures optimal routes while adhering to safety protocols. 

3) Safety and ethics: Ethical considerations, such as programming vehicles to make difficult decisions in potential accident scenarios, require meticulous research and deliberation. 
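Path planning in its simplest form is graph search over a discretised map. The sketch below runs breadth-first search on a toy occupancy grid; real planners use A* with cost functions and vehicle kinematics, but the shortest-route idea is the same.

```python
from collections import deque

GRID = [          # '.' = free cell, '#' = obstacle
    "....#",
    ".##.#",
    "....#",
    ".#...",
]

def plan(start, goal):
    rows, cols = len(GRID), len(GRID[0])
    parent = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:            # reconstruct the route by walking parents
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and GRID[nr][nc] == "." and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                     # goal unreachable

route = plan((0, 0), (3, 4))
print(len(route) - 1, "moves:", route)  # BFS guarantees a shortest route: 7 moves
```

Because BFS explores cells in order of distance, the first route it finds is guaranteed shortest, a property research-grade planners must preserve while also handling costs, dynamics, and moving obstacles.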

AI ethics and bias   

Ethical underpinnings in AI drive research efforts in these directions: 

1) Fairness in AI: Ensuring AI systems remain impartial and unbiased across diverse demographic groups. 

2) Bias detection and mitigation: Identifying and rectifying biases present within AI models guarantees equitable outcomes. 

3) Ethical decision-making: Developing frameworks that imbue AI with ethical decision-making capabilities aligns technology with societal values. 
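One widely used fairness check, demographic parity, simply compares a model's favourable-outcome rate across groups. The decision lists and the 0.1 tolerance below are illustrative assumptions, not a recommended standard.

```python
# Demographic-parity check: compare the favourable-outcome rate per group.
def positive_rate(outcomes):
    return sum(outcomes) / len(outcomes)

# 1 = favourable decision, 0 = unfavourable, for two demographic groups
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6/8 favourable
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3/8 favourable

gap = abs(positive_rate(group_a) - positive_rate(group_b))
print(f"demographic parity gap: {gap:.3f}")  # 0.375

if gap > 0.1:  # the tolerance is context-dependent, not universal
    print("possible bias: audit features correlated with group membership")
```

Demographic parity is only one of several competing fairness criteria, and they cannot all be satisfied at once in general, which is what keeps bias detection and mitigation an open research area.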

Future of AI  

The vanguard of AI beckons researchers to explore these horizons: 

1) Artificial General Intelligence (AGI): Speculating on the potential emergence of AI systems capable of emulating human-like intelligence opens dialogues on the implications and challenges. 

2) AI and creativity: Probing the interface between AI and creative domains, such as art and music, unveils the coalescence of human ingenuity and technological prowess. 

3) Ethical and regulatory challenges: Researching the ethical dilemmas and regulatory frameworks underpinning AI's evolution fortifies responsible innovation. 

AI and education   

The intersection of AI and Education opens doors to innovative learning paradigms: 

1) Personalised learning: Developing AI systems that adapt educational content to individual learning styles and paces. 

2) Intelligent tutoring systems: Creating AI-driven tutoring systems that provide targeted support to students. 

3) Educational data mining: Applying AI to analyse educational data for insights into learning patterns and trends. 


Conclusion  

The domain of AI is ever-expanding, rich with intriguing topics in Artificial Intelligence that beckon researchers to explore, question, and innovate. Through the pursuit of these twelve diverse Artificial Intelligence Topics, we pave the way for not only technological advancement but also a deeper understanding of the societal impact of AI. By delving into these realms, researchers stand poised to shape the trajectory of AI, ensuring it remains a force for progress, empowerment, and positive transformation in our world. 



Grad Coach

Research Topics & Ideas

Artificial Intelligence (AI) and Machine Learning (ML)


If you’re just starting out exploring AI-related research topics for your dissertation, thesis or research project, you’ve come to the right place. In this post, we’ll help kickstart your research topic ideation process by providing a hearty list of research topics and ideas, including examples from past studies.

PS – This is just the start…

We know it’s exciting to run through a list of research topics, but please keep in mind that this list is just a starting point. To develop a suitable research topic, you’ll need to identify a clear and convincing research gap, and a viable plan to fill that gap.

If this sounds foreign to you, check out our free research topic webinar that explores how to find and refine a high-quality research topic, from scratch. Alternatively, if you’d like hands-on help, consider our 1-on-1 coaching service.


AI-Related Research Topics & Ideas

Below you’ll find a list of AI and machine learning-related research topic ideas. These are intentionally broad and generic, so keep in mind that you will need to refine them a little. Nevertheless, they should inspire some ideas for your project.

  • Developing AI algorithms for early detection of chronic diseases using patient data.
  • The use of deep learning in enhancing the accuracy of weather prediction models.
  • Machine learning techniques for real-time language translation in social media platforms.
  • AI-driven approaches to improve cybersecurity in financial transactions.
  • The role of AI in optimizing supply chain logistics for e-commerce.
  • Investigating the impact of machine learning in personalized education systems.
  • The use of AI in predictive maintenance for industrial machinery.
  • Developing ethical frameworks for AI decision-making in healthcare.
  • The application of ML algorithms in autonomous vehicle navigation systems.
  • AI in agricultural technology: Optimizing crop yield predictions.
  • Machine learning techniques for enhancing image recognition in security systems.
  • AI-powered chatbots: Improving customer service efficiency in retail.
  • The impact of AI on enhancing energy efficiency in smart buildings.
  • Deep learning in drug discovery and pharmaceutical research.
  • The use of AI in detecting and combating online misinformation.
  • Machine learning models for real-time traffic prediction and management.
  • AI applications in facial recognition: Privacy and ethical considerations.
  • The effectiveness of ML in financial market prediction and analysis.
  • Developing AI tools for real-time monitoring of environmental pollution.
  • Machine learning for automated content moderation on social platforms.
  • The role of AI in enhancing the accuracy of medical diagnostics.
  • AI in space exploration: Automated data analysis and interpretation.
  • Machine learning techniques in identifying genetic markers for diseases.
  • AI-driven personal finance management tools.
  • The use of AI in developing adaptive learning technologies for disabled students.


AI & ML Research Topic Ideas (Continued)

  • Machine learning in cybersecurity threat detection and response.
  • AI applications in virtual reality and augmented reality experiences.
  • Developing ethical AI systems for recruitment and hiring processes.
  • Machine learning for sentiment analysis in customer feedback.
  • AI in sports analytics for performance enhancement and injury prevention.
  • The role of AI in improving urban planning and smart city initiatives.
  • Machine learning models for predicting consumer behaviour trends.
  • AI and ML in artistic creation: Music, visual arts, and literature.
  • The use of AI in automated drone navigation for delivery services.
  • Developing AI algorithms for effective waste management and recycling.
  • Machine learning in seismology for earthquake prediction.
  • AI-powered tools for enhancing online privacy and data protection.
  • The application of ML in enhancing speech recognition technologies.
  • Investigating the role of AI in mental health assessment and therapy.
  • Machine learning for optimization of renewable energy systems.
  • AI in fashion: Predicting trends and personalizing customer experiences.
  • The impact of AI on legal research and case analysis.
  • Developing AI systems for real-time language interpretation for the deaf and hard of hearing.
  • Machine learning in genomic data analysis for personalized medicine.
  • AI-driven algorithms for credit scoring in microfinance.
  • The use of AI in enhancing public safety and emergency response systems.
  • Machine learning for improving water quality monitoring and management.
  • AI applications in wildlife conservation and habitat monitoring.
  • The role of AI in streamlining manufacturing processes.
  • Investigating the use of AI in enhancing the accessibility of digital content for visually impaired users.

Recent AI & ML-Related Studies

While the ideas we've presented above are a decent starting point for finding a research topic in AI, they are fairly generic. So, it helps to look at actual studies in the AI and machine learning space to see how this all comes together in practice.

Below, we've included a selection of AI-related studies to help refine your thinking. Because these are published studies, they offer useful insight into what a viable research topic looks like in practice.

  • An overview of artificial intelligence in diabetic retinopathy and other ocular diseases (Sheng et al., 2022)
  • How Does Artificial Intelligence Help Astronomy? A Review (Patel, 2022)
  • Editorial: Artificial Intelligence in Bioinformatics and Drug Repurposing: Methods and Applications (Zheng et al., 2022)
  • Review of Artificial Intelligence and Machine Learning Technologies: Classification, Restrictions, Opportunities, and Challenges (Mukhamediev et al., 2022)
  • Will digitization, big data, and artificial intelligence – and deep learning–based algorithm govern the practice of medicine? (Goh, 2022)
  • Flower Classifier Web App Using Ml & Flask Web Framework (Singh et al., 2022)
  • Object-based Classification of Natural Scenes Using Machine Learning Methods (Jasim & Younis, 2023)
  • Automated Training Data Construction using Measurements for High-Level Learning-Based FPGA Power Modeling (Richa et al., 2022)
  • Artificial Intelligence (AI) and Internet of Medical Things (IoMT) Assisted Biomedical Systems for Intelligent Healthcare (Manickam et al., 2022)
  • Critical Review of Air Quality Prediction using Machine Learning Techniques (Sharma et al., 2022)
  • Artificial Intelligence: New Frontiers in Real–Time Inverse Scattering and Electromagnetic Imaging (Salucci et al., 2022)
  • Machine learning alternative to systems biology should not solely depend on data (Yeo & Selvarajoo, 2022)
  • Measurement-While-Drilling Based Estimation of Dynamic Penetrometer Values Using Decision Trees and Random Forests (García et al., 2022)
  • Artificial Intelligence in the Diagnosis of Oral Diseases: Applications and Pitfalls (Patil et al., 2022)
  • Automated Machine Learning on High Dimensional Big Data for Prediction Tasks (Jayanthi & Devi, 2022)
  • Breakdown of Machine Learning Algorithms (Meena & Sehrawat, 2022)
  • Technology-Enabled, Evidence-Driven, and Patient-Centered: The Way Forward for Regulating Software as a Medical Device (Carolan et al., 2021)
  • Machine Learning in Tourism (Rugge, 2022)
  • Towards a training data model for artificial intelligence in earth observation (Yue et al., 2022)
  • Classification of Music Generality using ANN, CNN and RNN-LSTM (Tripathy & Patel, 2022)

As you can see, these research topics are a lot more focused than the generic topic ideas we presented earlier. To develop a high-quality research topic, you'll need to get laser-focused on a particular context and a clearly defined set of variables of interest.


Caltech

Artificial Intelligence

Since the 1950s, scientists and engineers have designed computers to "think" by making decisions and finding patterns like humans do. In recent years, artificial intelligence has become increasingly powerful, propelling discovery across scientific fields and enabling researchers to delve into problems previously too complex to solve. Outside of science, artificial intelligence is built into devices all around us, and billions of people across the globe rely on it every day. Stories of artificial intelligence—from friendly humanoid robots to SkyNet—have been incorporated into some of the most iconic movies and books.

But where is the line between what AI can do and what is make-believe? How is that line blurring, and what is the future of artificial intelligence? At Caltech, scientists and scholars are working at the leading edge of AI research, expanding the boundaries of its capabilities and exploring its impacts on society. Discover what defines artificial intelligence, how it is developed and deployed, and what the field holds for the future.



What Is AI?

Artificial intelligence is transforming scientific research as well as everyday life, from communications to transportation to health care and more. Explore what defines AI, how it has evolved since the Turing Test, and the future of artificial intelligence.



What Is the Difference Between "Artificial Intelligence" and "Machine Learning"?

The term "artificial intelligence" is older and broader than "machine learning." Learn how the terms relate to each other and to the concepts of "neural networks" and "deep learning."


How Do Computers Learn?

Machine learning applications power many features of modern life, including search engines, social media, and self-driving cars. Discover how computers learn to make decisions and predictions in this illustration of two key machine learning models.


How Is AI Applied in Everyday Life?

While scientists and engineers explore AI's potential to advance discovery and technology, smart technologies also directly influence our daily lives. Explore the sometimes surprising examples of AI applications.


What Is Big Data?

The increase in available data has fueled the rise of artificial intelligence. Find out what characterizes big data, where big data comes from, and how it is used.



Will Machines Become More Intelligent Than Humans?

Whether or not artificial intelligence will be able to outperform human intelligence—and how soon that could happen—is a common question fueled by depictions of AI in movies and other forms of popular culture. Learn the definition of "singularity" and see a timeline of advances in AI over the past 75 years.


How Does AI Drive Autonomous Systems?

Learn the difference between automation and autonomy, and hear from Caltech faculty who are pushing the limits of AI to create autonomous technology, from self-driving cars to ambulance drones to prosthetic devices.


Can We Trust AI?

As AI is further incorporated into everyday life, more scholars, industries, and ordinary users are examining its effects on society. The Caltech Science Exchange spoke with AI researchers at Caltech about what it might take to trust current and future technologies.


What is Generative AI?

Generative AI applications such as ChatGPT, a chatbot that answers questions with detailed written responses, and DALL-E, which creates realistic images and art based on text prompts, became widely popular beginning in 2022, when companies released versions of their applications that members of the public, not just experts, could easily use.


Ask a Caltech Expert

Where can you find machine learning in finance? Could AI help nature conservation efforts? How is AI transforming astronomy, biology, and other fields? What does an autonomous underwater vehicle have to do with sustainability? Find answers from Caltech researchers.

Terms to Know

Algorithm: A set of instructions or sequence of steps that tells a computer how to perform a task or calculation. In some AI applications, algorithms tell computers how to adapt and refine processes in response to data, without a human supplying new instructions.

Artificial intelligence (AI): An application or machine that mimics human intelligence.

Automation: A system in which machines execute repeated tasks based on a fixed set of human-supplied instructions.

Autonomy: A system in which a machine makes independent, real-time decisions based on human-supplied rules and goals.

Big data: The massive amounts of data that come in quickly and from a variety of sources, such as internet-connected devices, sensors, and social platforms. In some cases, using or learning from big data requires AI methods. Big data also can enhance the ability to create new AI applications.

Chatbot: An AI system that mimics human conversation. While some simple chatbots rely on pre-programmed text, more sophisticated systems, trained on large data sets, are able to convincingly replicate human interaction.

Deep learning: A subset of machine learning. Deep learning uses machine learning algorithms but structures the algorithms in layers to create "artificial neural networks." These networks are modeled after the human brain and are most likely to provide the experience of interacting with a real human.

Human in the loop: An approach that includes human feedback and oversight in machine learning systems. Including humans in the loop may improve accuracy and guard against bias and unintended outcomes of AI.

Model: A computer-generated simplification of something that exists in the real world, such as climate change, disease spread, or earthquakes. Machine learning systems develop models by analyzing patterns in large data sets. Models can be used to simulate natural processes and make predictions.

Neural network: Interconnected sets of processing units, or nodes, modeled on the human brain, that are used in deep learning to identify patterns in data and, on the basis of those patterns, make predictions in response to new data. Neural networks are used in facial recognition systems, digital marketing, and other applications.

Singularity: A hypothetical scenario in which an AI system develops agency and grows beyond human ability to control it.

Training data: The data used to "teach" a machine learning system to recognize patterns and features. Typically, continual training results in more accurate machine learning systems. Likewise, biased or incomplete datasets can lead to imprecise or unintended outcomes.

Turing Test: An interview-based method proposed by computer pioneer Alan Turing to assess whether a machine can think.

Dive Deeper

Artificial Skin Gives Robots Sense of Touch and Beyond

Artificial Intelligence: The Good, the Bad, and the Ugly (Professor Yaser Abu-Mostafa)

The AI Researcher Giving Her Field Its Bitter Medicine

More Caltech Computer and Information Sciences Research Coverage

The present and future of AI

Finale Doshi-Velez on How AI Is Shaping Our Lives and How We Can Shape AI

Finale Doshi-Velez, the John L. Loeb Professor of Engineering and Applied Sciences. (Photo courtesy of Eliza Grinnell/Harvard SEAS)

How has artificial intelligence changed and shaped our world over the last five years? How will AI continue to impact our lives in the coming years? Those were the questions addressed in the most recent report from the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted at Stanford University that will study the status of AI technology and its impacts on the world over the next 100 years.

The 2021 report is the second in a series that will be released every five years until 2116. Titled “Gathering Strength, Gathering Storms,” the report explores the various ways AI is increasingly touching people's lives in settings that range from movie recommendations and voice assistants to autonomous driving and automated medical diagnoses.

Barbara Grosz, the Higgins Research Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is a member of the standing committee overseeing the AI100 project, and Finale Doshi-Velez, Gordon McKay Professor of Computer Science, is part of the panel of interdisciplinary researchers who wrote this year's report.

We spoke with Doshi-Velez about the report, what it says about the role AI is currently playing in our lives, and how it will change in the future.  

Q: Let's start with a snapshot: What is the current state of AI and its potential?

Doshi-Velez: Some of the biggest changes in the last five years have been how well AIs now perform in large data regimes on specific types of tasks.  We've seen [DeepMind’s] AlphaZero become the best Go player entirely through self-play, and everyday uses of AI such as grammar checks and autocomplete, automatic personal photo organization and search, and speech recognition become commonplace for large numbers of people.  

In terms of potential, I'm most excited about AIs that might augment and assist people.  They can be used to drive insights in drug discovery, help with decision making such as identifying a menu of likely treatment options for patients, and provide basic assistance, such as lane keeping while driving or text-to-speech based on images from a phone for the visually impaired.  In many situations, people and AIs have complementary strengths. I think we're getting closer to unlocking the potential of people and AI teams.


Q: Over the course of 100 years, these reports will tell the story of AI and its evolving role in society. Even though there have only been two reports, what's the story so far?

There's actually a lot of change even in five years.  The first report is fairly rosy.  For example, it mentions how algorithmic risk assessments may mitigate the human biases of judges.  The second has a much more mixed view.  I think this comes from the fact that as AI tools have come into the mainstream — both in higher stakes and everyday settings — we are appropriately much less willing to tolerate flaws, especially discriminatory ones. There's also been questions of information and disinformation control as people get their news, social media, and entertainment via searches and rankings personalized to them. So, there's a much greater recognition that we should not be waiting for AI tools to become mainstream before making sure they are ethical.

Q: What is the responsibility of institutes of higher education in preparing students and the next generation of computer scientists for the future of AI and its impact on society?

First, I'll say that the need to understand the basics of AI and data science starts much earlier than higher education!  Children are being exposed to AIs as soon as they click on videos on YouTube or browse photo albums. They need to understand aspects of AI such as how their actions affect future recommendations.

But for computer science students in college, I think a key thing that future engineers need to realize is when to demand input and how to talk across disciplinary boundaries to get at often difficult-to-quantify notions of safety, equity, fairness, etc. I'm really excited that Harvard has the Embedded EthiCS program to provide some of this education. Of course, this is in addition to standard good engineering practices like building robust models, validating them, and so forth, which is all a bit harder with AI.


Q: Your work focuses on machine learning with applications to healthcare, which is also an area of focus of this report. What is the state of AI in healthcare? 

A lot of AI in healthcare has been on the business end, used for optimizing billing, scheduling surgeries, that sort of thing.  When it comes to AI for better patient care, which is what we usually think about, there are few legal, regulatory, and financial incentives to do so, and many disincentives. Still, there's been slow but steady integration of AI-based tools, often in the form of risk scoring and alert systems.

In the near future, two applications that I'm really excited about are triage in low-resource settings — having AIs do initial reads of pathology slides, for example, if there are not enough pathologists, or get an initial check of whether a mole looks suspicious — and ways in which AIs can help identify promising treatment options for discussion with a clinician team and patient.

Q: Any predictions for the next report?

I'll be keen to see where currently nascent AI regulation initiatives have gotten to. Accountability is such a difficult question in AI; it's tricky to nurture both innovation and basic protections. Perhaps the most important innovation will be in approaches for AI accountability.


One Hundred Year Study on Artificial Intelligence (AI100)

AI Research Trends


Until the turn of the millennium, AI’s appeal lay largely in its promise to deliver, but in the last fifteen years, much of that promise has been redeemed. [15]  AI already pervades our lives. And as it becomes a central force in society, the field is now shifting from simply building systems that are intelligent to building intelligent systems that are human-aware and trustworthy.

Several factors have fueled the AI revolution. Foremost among them is the maturing of machine learning, supported in part by cloud computing resources and wide-spread, web-based data gathering. Machine learning has been propelled dramatically forward by “deep learning,” a form of adaptive artificial neural networks trained using a method called backpropagation. [16]  This leap in the performance of information processing algorithms has been accompanied by significant progress in hardware technology for basic operations such as sensing, perception, and object recognition. New platforms and markets for data-driven products, and the economic incentives to find new products and markets, have also contributed to the advent of AI-driven technology.

All these trends drive the “hot” areas of research described below. This compilation is meant simply to reflect the areas that, by one metric or another, currently receive greater attention than others. They are not necessarily more important or valuable than other ones. Indeed, some of the currently “hot” areas were less popular in past years, and it is likely that other areas will similarly re-emerge in the future.

Large-scale machine learning

Many of the basic problems in machine learning (such as supervised and unsupervised learning) are well-understood. A major focus of current efforts is to scale existing algorithms to work with extremely large data sets. For example, whereas traditional methods could afford to make several passes over the data set, modern ones are designed to make only a single pass; in some cases, only sublinear methods (those that only look at a fraction of the data) can be admitted.
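To make the single-pass constraint concrete, here is a minimal sketch of streaming stochastic gradient descent in plain Python. Each example is seen exactly once and then discarded, so memory use stays constant regardless of the size of the data set; the synthetic data stream, learning rate, and function names are illustrative assumptions, not details from the report.

```python
import random

def sgd_single_pass(stream, n_features, lr=0.01):
    """Fit a linear model y ~ w.x with exactly one pass over a data stream.

    Each (x, y) pair is processed once and never revisited, so memory use
    is independent of the stream length.
    """
    w = [0.0] * n_features
    for x, y in stream:
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = pred - y
        for i in range(n_features):
            w[i] -= lr * err * x[i]  # SGD step on the squared-error loss
    return w

# Hypothetical noise-free stream drawn from y = 2*x0 - 3*x1
random.seed(0)

def make_stream(n):
    for _ in range(n):
        x = [random.uniform(-1, 1), random.uniform(-1, 1)]
        yield x, 2 * x[0] - 3 * x[1]

w = sgd_single_pass(make_stream(20000), n_features=2, lr=0.05)
print([round(wi, 2) for wi in w])  # weights approach [2.0, -3.0]
```

Sublinear methods go further still, sampling only a fraction of the stream rather than touching every example.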

Deep learning

The ability to successfully train convolutional neural networks has most benefited the field of computer vision, with applications such as object recognition, video labeling, activity recognition, and several variants thereof. Deep learning is also making significant inroads into other areas of perception, such as audio, speech, and natural language processing.

Reinforcement learning

Whereas traditional machine learning has mostly focused on pattern mining, reinforcement learning shifts the focus to decision making, and is a technology that will help AI to advance more deeply into the realm of learning about and executing actions in the real world. It has existed for several decades as a framework for experience-driven sequential decision-making, but the methods have not found great success in practice, mainly owing to issues of representation and scaling. However, the advent of deep learning has provided reinforcement learning with a “shot in the arm.” The recent success of AlphaGo, a computer program developed by Google Deepmind that beat the human Go champion in a five-game match, was due in large part to reinforcement learning. AlphaGo was trained by initializing an automated agent with a human expert database, but was subsequently refined by playing a large number of games against itself and applying reinforcement learning.
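The core update behind this kind of experience-driven decision making can be sketched with tabular Q-learning. The toy corridor environment, reward scheme, and hyperparameters below are illustrative assumptions; production systems like AlphaGo combine such updates with deep networks and search.

```python
import random

random.seed(1)

N_STATES = 5            # corridor of states 0..4; reward only at the right end
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.3
MOVES = [-1, +1]        # action 0 = step left, action 1 = step right

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Move along the corridor; episode ends on reaching the rightmost state."""
    nxt = min(max(state + MOVES[action], 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for _ in range(500):                  # episodes of self-generated experience
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = random.randrange(2) if random.random() < EPS else Q[s].index(max(Q[s]))
        nxt, r, done = step(s, a)
        # Q-learning update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[s][a])
        s = nxt

policy = [Q[s].index(max(Q[s])) for s in range(N_STATES)]
print(policy)  # states 0-3 learn to move right, toward the reward
```

The agent is never told which action is correct; the reward signal alone, propagated backward through the Q-values, shapes the policy.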

Robotics

Robotic navigation, at least in static environments, is largely solved. Current efforts consider how to train a robot to interact with the world around it in generalizable and predictable ways. A natural requirement that arises in interactive environments is manipulation, another topic of current interest. The deep learning revolution is only beginning to influence robotics, in large part because it is far more difficult to acquire the large labeled data sets that have driven other learning-based areas of AI. Reinforcement learning (see above), which obviates the requirement of labeled data, may help bridge this gap but requires systems to be able to safely explore a policy space without committing errors that harm the system itself or others. Advances in reliable machine perception, including computer vision, force, and tactile perception, much of which will be driven by machine learning, will continue to be key enablers to advancing the capabilities of robotics.

Computer vision

Computer vision is currently the most prominent form of machine perception. It has been the sub-area of AI most transformed by the rise of deep learning. Until just a few years ago, support vector machines were the method of choice for most visual classification tasks. But the confluence of large-scale computing, especially on GPUs, the availability of large datasets, especially via the internet, and refinements of neural network algorithms has led to dramatic improvements in performance on benchmark tasks (e.g., classification on ImageNet [17] ). For the first time, computers are able to perform some (narrowly defined) visual classification tasks better than people. Much current research is focused on automatic image and video captioning.
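At the heart of the convolutional networks driving these vision gains is a simple sliding-window operation. The following pure-Python sketch applies a classic Sobel kernel, a standard hand-designed edge detector chosen here purely for illustration, to a tiny image containing a vertical edge.

```python
def conv2d(image, kernel):
    """Valid-mode 2-D convolution (cross-correlation) of nested-list arrays."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            # Multiply the kernel against the window anchored at (i, j)
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# A vertical edge: dark (0) on the left, bright (1) on the right
image = [[0, 0, 1, 1] for _ in range(4)]
sobel_x = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]   # classic Sobel kernel for horizontal gradients

print(conv2d(image, sobel_x))  # every window spans the edge, so each output is strongly positive
```

Real CNN layers learn their kernel values from data rather than using hand-designed filters like Sobel, and stack many such layers with nonlinearities in between.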

Natural Language Processing

Often coupled with automatic speech recognition, Natural Language Processing is another very active area of machine perception. It is quickly becoming a commodity for mainstream languages with large data sets. Google announced that 20% of current mobile queries are done by voice, [18]  and recent demonstrations have proven the possibility of real-time translation. Research is now shifting towards developing refined and capable systems that are able to interact with people through dialog, not just react to stylized requests.

Collaborative systems

Research on collaborative systems investigates models and algorithms to help develop autonomous systems that can work collaboratively with other systems and with humans. This research relies on developing formal models of collaboration, and studies the capabilities needed for systems to become effective partners. There is growing interest in applications that can utilize the complementary strengths of humans and machines—for humans to help AI systems to overcome their limitations, and for agents to augment human abilities and activities.

Crowdsourcing and human computation

Since human abilities are superior to automated methods for accomplishing many tasks, research on crowdsourcing and human computation investigates methods to augment computer systems by utilizing human intelligence to solve problems that computers alone cannot solve well. Introduced only about fifteen years ago, this research now has an established presence in AI. The best-known example of crowdsourcing is Wikipedia, a knowledge repository that is maintained and updated by netizens and that far exceeds traditionally-compiled information sources, such as encyclopedias and dictionaries, in scale and depth. Crowdsourcing focuses on devising innovative ways to harness human intelligence. Citizen science platforms energize volunteers to solve scientific problems, while paid crowdsourcing platforms such as Amazon Mechanical Turk provide automated access to human intelligence on demand. Work in this area has facilitated advances in other subfields of AI, including computer vision and NLP, by enabling large amounts of labeled training data and/or human interaction data to be collected in a short amount of time. Current research efforts explore ideal divisions of tasks between humans and machines based on their differing capabilities and costs.

Algorithmic game theory and computational social choice

New attention is being drawn to the economic and social computing dimensions of AI, including incentive structures. Distributed AI and multi-agent systems have been studied since the early 1980s, gained prominence starting in the late 1990s, and were accelerated by the internet. A natural requirement is that systems handle potentially misaligned incentives, including self-interested human participants or firms, as well as automated AI-based agents representing them. Topics receiving attention include computational mechanism design (an economic theory of incentive design, seeking incentive-compatible systems where inputs are truthfully reported), computational social choice (a theory for how to aggregate rank orders on alternatives), incentive aligned information elicitation (prediction markets, scoring rules, peer prediction) and algorithmic game theory (the equilibria of markets, network games, and parlor games such as Poker—a game where significant advances have been made in recent years through abstraction techniques and no-regret learning).

Internet of Things (IoT)

A growing body of research is devoted to the idea that a wide array of devices can be interconnected to collect and share their sensory information. Such devices can include appliances, vehicles, buildings, cameras, and other things. While it's a matter of technology and wireless networking to connect the devices, AI can process and use the resulting huge amounts of data for intelligent and useful purposes. Currently, these devices use a bewildering array of incompatible communication protocols. AI could help tame this Tower of Babel.

Neuromorphic Computing

Traditional computers implement the von Neumann model of computing, which separates the modules for input/output, instruction-processing, and memory. With the success of deep neural networks on a wide array of tasks, manufacturers are actively pursuing alternative models of computing—especially those that are inspired by what is known about biological neural networks—with the aim of improving the hardware efficiency and robustness of computing systems. At the moment, such “neuromorphic” computers have not yet clearly demonstrated big wins, and are just beginning to become commercially viable. But it is possible that they will become commonplace (even if only as additions to their von Neumann cousins) in the near future. Deep neural networks have already created a splash in the application landscape. A larger wave may hit when these networks can be trained and executed on dedicated neuromorphic hardware, as opposed to simulated on standard von Neumann architectures, as they are today.

[15]  Appendix I offers a short history of AI, including a description of some of the traditionally core areas of research, which have shifted over the past six decades.

[16]  Backpropagation is an abbreviation for "backward propagation of errors," a common method of training artificial neural networks used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of a loss function with respect to all the weights in the network.

[17]  ImageNet, Stanford Vision Lab, Stanford University, Princeton University, 2016, accessed August 1, 2016,  www.image-net.org/ .

[18]  Greg Sterling, "Google says 20% of mobile queries are voice searches,"  Search Engine Land , May 18, 2016, accessed August 1, 2016,  http://searchengineland.com/google-reveals-20-percent-queries-voice-queries-249917 .


Cite This Report

Peter Stone, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg Hager, Julia Hirschberg, Shivaram Kalyanakrishnan, Ece Kamar, Sarit Kraus, Kevin Leyton-Brown, David Parkes, William Press, AnnaLee Saxenian, Julie Shah, Milind Tambe, and Astro Teller.  "Artificial Intelligence and Life in 2030." One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel, Stanford University, Stanford, CA,  September 2016. Doc:  http://ai100.stanford.edu/2016-report . Accessed:  September 6, 2016.

Report Authors

AI100 Standing Committee and Study Panel 

© 2016 by Stanford University. Artificial Intelligence and Life in 2030 is made available under a Creative Commons Attribution-NoDerivatives 4.0 License (International):  https://creativecommons.org/licenses/by-nd/4.0/ .

Tackling the most challenging problems in computer science

Our teams aspire to make discoveries that positively impact society. Core to our approach is sharing our research and tools to fuel progress in the field, to help more people more quickly. We regularly publish in academic journals, release projects as open source, and apply research to Google products to benefit users at scale.

Featured research developments

Mitigating aviation’s climate impact with Project Contrails

Consensus and subjectivity of skin tone annotation for ML fairness

A toolkit for transparency in AI dataset documentation

Building better pangenomes to improve the equity of genomics

A set of methods, best practices, and examples for designing with AI

Learn more from our research

Researchers across Google are innovating across many domains. We challenge conventions and reimagine technology so that everyone can benefit.

Publications

Google publishes over 1,000 papers annually. Publishing our work enables us to collaborate and share ideas with, as well as learn from, the broader scientific community.

Research areas

From conducting fundamental research to influencing product development, our research teams have the opportunity to impact technology used by billions of people every day.

Tools and datasets

We make tools and datasets available to the broader research community with the goal of building a more collaborative ecosystem.

Meet the people behind our innovations

Our teams collaborate with the research and academic communities across the world


Academia Insider

The best AI tools for research papers and academic research (Literature review, grants, PDFs and more)

As our collective understanding and application of artificial intelligence (AI) continues to evolve, so too does the realm of academic research. Some people are scared by it while others are openly embracing the change. 

Make no mistake, AI is here to stay!

Instead of tirelessly scrolling through hundreds of PDFs, a powerful AI tool comes to your rescue, summarizing key information in your research papers. Instead of manually combing through citations and conducting literature reviews, an AI research assistant proficiently handles these tasks.

These aren’t futuristic dreams, but today’s reality. Welcome to the transformative world of AI-powered research tools!

The influence of AI in scientific and academic research is an exciting development, opening the doors to more efficient, comprehensive, and rigorous exploration.

This blog post will dive deeper into these tools, providing a detailed review of how AI is revolutionizing academic research. We’ll look at the tools that can make your literature review process less tedious, your search for relevant papers more precise, and your overall research process more efficient and fruitful.

I wish these had been around during my time in academia. It can be quite confronting trying to work out which ones you should and shouldn’t use, and a new one seems to come out every day!

Here is everything you need to know about AI for academic research and the ones I have personally trialed on my Youtube channel.

Best ChatGPT interface – Chat with PDFs/websites and more

I get more out of ChatGPT with HeyGPT. It can do things that ChatGPT alone cannot, which makes it really valuable for researchers.

Use your own OpenAI API key (here). No login required. Access ChatGPT anytime, including peak periods, with faster response times. Unlock advanced functionality with HeyGPT Ultra via a one-time lifetime subscription.

AI literature search and mapping – best AI tools for a literature review – elicit and more

Harnessing AI tools for literature reviews and mapping brings a new level of efficiency and precision to academic research. No longer do you have to spend hours looking in obscure research databases to find what you need!

AI-powered tools like Semantic Scholar and elicit.org use sophisticated search engines to quickly identify relevant papers.

They can mine key information from countless PDFs, drastically reducing research time. You can even search with semantic questions rather than having to wrestle with keywords.

With AI as your research assistant, you can navigate the vast sea of scientific research with ease, uncovering citations and focusing on academic writing. It’s a revolutionary way to take on literature reviews.

  • Elicit –  https://elicit.org
  • Supersymmetry.ai: https://www.supersymmetry.ai
  • Semantic Scholar: https://www.semanticscholar.org
  • Connected Papers –  https://www.connectedpapers.com/
  • Research rabbit – https://www.researchrabbit.ai/
  • Laser AI –  https://laser.ai/
  • Litmaps –  https://www.litmaps.com
  • Inciteful –  https://inciteful.xyz/
  • Scite –  https://scite.ai/
  • System –  https://www.system.com

If you like AI tools you may want to check out this article:

  • How to get ChatGPT to write an essay [The prompts you need]

AI-powered research tools and AI for academic research

AI research tools, like Consensus, offer immense benefits in scientific research. Here are some general AI-powered tools for academic research.

These AI-powered tools can efficiently summarize PDFs, extract key information, perform AI-powered searches, and much more. Some are even working towards letting you add your own database of files to ask questions of.

Tools like scite even analyze citations in depth, while AI models like ChatGPT can offer new perspectives.

The result? The research process, previously a grueling endeavor, becomes significantly streamlined, offering you time for deeper exploration and understanding. Say goodbye to traditional struggles, and hello to your new AI research assistant!

  • Bit AI –  https://bit.ai/
  • Consensus –  https://consensus.app/
  • Exper AI –  https://www.experai.com/
  • Hey Science (in development) –  https://www.heyscience.ai/
  • Iris AI –  https://iris.ai/
  • PapersGPT (currently in development) –  https://jessezhang.org/llmdemo
  • Research Buddy –  https://researchbuddy.app/
  • Mirror Think – https://mirrorthink.ai

AI for reading peer-reviewed papers easily

Using AI tools like Explain Paper and Humata can significantly enhance your engagement with peer-reviewed papers. I always used to skip over the details of papers because I had reached a saturation point with the information coming in.

These AI-powered research tools provide succinct summaries, saving you from sifting through extensive PDFs – no more boring nights trying to figure out which papers are the most important ones for you to read!

They not only facilitate efficient literature reviews by presenting key information, but also find overlooked insights.

With AI, deciphering complex citations and accelerating research has never been easier.

  • Open Read –  https://www.openread.academy
  • Chat PDF – https://www.chatpdf.com
  • Explain Paper – https://www.explainpaper.com
  • Humata – https://www.humata.ai/
  • Lateral AI –  https://www.lateral.io/
  • Paper Brain –  https://www.paperbrain.study/
  • Scholarcy – https://www.scholarcy.com/
  • SciSpace Copilot –  https://typeset.io/
  • Unriddle – https://www.unriddle.ai/
  • Sharly.ai – https://www.sharly.ai/

AI for scientific writing and research papers

In the ever-evolving realm of academic research, AI tools are increasingly taking center stage.

Enter Paper Wizard, Jenni.AI, and Wisio – these groundbreaking platforms are set to revolutionize the way we approach scientific writing.

Together, these AI tools are pioneering a new era of efficient, streamlined scientific writing.

  • Paper Wizard –  https://paperwizard.ai/
  • Jenni.AI – https://jenni.ai/ (20% off with code ANDY20)
  • Wisio – https://www.wisio.app

AI academic editing tools

In the realm of scientific writing and editing, artificial intelligence (AI) tools are making a world of difference, offering precision and efficiency like never before. Consider tools such as Paper Pal, Writefull, and Trinka.

Together, these tools usher in a new era of scientific writing, where AI is your dedicated partner in the quest for impeccable composition.

  • Paper Pal –  https://paperpal.com/
  • Writefull –  https://www.writefull.com/
  • Trinka –  https://www.trinka.ai/

AI tools for grant writing

In the challenging realm of science grant writing, two innovative AI tools are making waves: Granted AI and Grantable.

These platforms are game-changers, leveraging the power of artificial intelligence to streamline and enhance the grant application process.

Granted AI, an intelligent tool, uses AI algorithms to simplify the process of finding, applying, and managing grants. Meanwhile, Grantable offers a platform that automates and organizes grant application processes, making it easier than ever to secure funding.

Together, these tools are transforming the way we approach grant writing, using the power of AI to turn a complex, often arduous task into a more manageable, efficient, and successful endeavor.

  • Granted AI – https://grantedai.com/
  • Grantable – https://grantable.co/

Free AI research tools

Many different tools are emerging online to help researchers streamline their research processes. There’s no need for convenience to come at a massive cost and break the bank.

The best free ones at the time of writing are:

  • Elicit – https://elicit.org
  • Connected Papers – https://www.connectedpapers.com/
  • Litmaps – https://www.litmaps.com (10% off Pro subscription using the code “STAPLETON”)
  • Consensus – https://consensus.app/

Wrapping up

The integration of artificial intelligence in the world of academic research is nothing short of revolutionary.

With the array of AI tools we’ve explored today – from research and mapping, literature review, peer-reviewed papers reading, scientific writing, to academic editing and grant writing – the landscape of research is significantly transformed.

The advantages that AI-powered research tools bring to the table – efficiency, precision, time saving, and a more streamlined process – cannot be overstated.

These AI research tools aren’t just about convenience; they are transforming the way we conduct and comprehend research.

They liberate researchers from the clutches of tedium and overwhelm, allowing for more space for deep exploration, innovative thinking, and in-depth comprehension.

Whether you’re an experienced academic researcher or a student just starting out, these tools provide indispensable aid in your research journey.

And with a suite of free AI tools also available, there is no reason to not explore and embrace this AI revolution in academic research.

We are on the precipice of a new era of academic research, one where AI and human ingenuity work in tandem for richer, more profound scientific exploration. The future of research is here, and it is smart, efficient, and AI-powered.

Before we get too excited however, let us remember that AI tools are meant to be our assistants, not our masters. As we engage with these advanced technologies, let’s not lose sight of the human intellect, intuition, and imagination that form the heart of all meaningful research. Happy researching!

Thank you to Ivan Aguilar – Ph.D. Student at SFU (Simon Fraser University), for starting this list for me!

Dr Andrew Stapleton has a Masters and PhD in Chemistry from the UK and Australia. He has many years of research experience and has worked as a Postdoctoral Fellow and Associate at a number of Universities. Although having secured funding for his own research, he left academia to help others with his YouTube channel all about the inner workings of academia and how to make it work for you.

2024 © Academia Insider

Enago Academy

AI-Driven Hypotheses: Real world examples exploring the potential and challenges of AI-generated hypotheses in science

Artificial intelligence (AI) is no longer confined to mere automation; it is now an active participant in the pursuit of knowledge and understanding. In the realm of scientific research, the integration of AI marks a significant paradigm shift, ushering in an era where machines and humans actively collaborate to formulate research hypotheses and questions. While AI systems have traditionally served as powerful tools for data analysis, their evolution now allows them to go beyond analysis and generate hypotheses, prompting researchers to explore uncharted domains of research.

Let’s delve deeper into this transformative capability of AI and the challenges it poses in research hypothesis formation, emphasizing the crucial role of human intervention throughout the AI integration process.

Potential of AI-Generated Research Hypotheses: Is it enough?

The discerning ability of AI, particularly through machine learning algorithms, has demonstrated a unique capacity to identify patterns across vast datasets. This has given rise to AI systems not only proficient in analyzing existing data but also in formulating hypotheses based on patterns that may elude human observation alone. The synergy between machine-driven hypothesis generation and human expertise represents a promising frontier for scientific discovery, underscoring the importance of human oversight and interpretation.

The capability of AI to generate hypotheses raises thought-provoking questions about the nature of creativity in the research process. Although AI can identify patterns within data, the question remains: can it exhibit true creativity in proposing hypotheses, or is it limited to recognizing patterns within existing data?

Furthermore, the intersection of AI and research transcends the generation of hypotheses to include the formulation of research questions. By actively engaging with data and recognizing gaps in knowledge, AI systems can propose insightful questions that guide researchers toward unexplored avenues. This collaborative approach between machines and researchers enhances the scope and depth of scientific inquiry, emphasizing the indispensable role of human insight in shaping the research agenda.

Challenges in AI-Driven Hypothesis Formation

Despite the immense potential, the integration of AI in hypothesis formation is not without its challenges. One significant concern is the “black box” nature of many advanced AI algorithms. As these systems become more complex, understanding the reasoning behind their generated hypotheses becomes increasingly challenging for human researchers. This lack of interpretability can hinder the acceptance of AI-driven hypotheses in the scientific community.

Moreover, biases inherent in the training data of AI models can influence the hypotheses generated. If not carefully addressed, this bias could lead to skewed perspectives and reinforce existing stereotypes. It is crucial to recognize that while AI can process vast amounts of information, it lacks the nuanced understanding and contextual awareness that human researchers bring to the table.

Real Concerns of the Scholarly Community Clouding the Integration of AI in Research Hypothesis Generation

Instance 1: In a paper published in JAMA Ophthalmology, researchers utilized GPT-4, the latest version of the language model powering ChatGPT, in conjunction with Advanced Data Analysis (ADA), a model incorporating Python for statistical analysis. The AI-generated data incorrectly suggested the superiority of one surgical procedure over another in treating keratoconus. The study aimed to demonstrate the ease with which AI could create seemingly authentic but fabricated datasets. Despite flaws detectable under scrutiny, the authors expressed concern over the potential misuse of AI in generating convincing yet false data, raising issues of research integrity. Experts emphasize the need for updated quality checks and automated tools to identify AI-generated synthetic data in scientific publishing.

Instance 2: In early October, a gathering of researchers, including a past Nobel laureate, convened in Stockholm to explore the evolving role of AI in scientific processes. Led by Hiroaki Kitano, a biologist and CEO of Sony AI, the workshop considered introducing awards for AIs and AI-human collaborations producing outstanding scientific contributions. The discussion revolved around the potential of AI in various scientific tasks, including hypothesis generation, a process traditionally requiring human creativity. While AI has long been involved in literature-based discovery and knowledge graph analysis, recent advancements, particularly in large language models, are enabling the generation of hypotheses and the exploration of unconventional ideas.

The potential for AI to make ‘alien’ hypotheses—those unlikely to be conceived by humans—has been demonstrated, raising questions about the interpretability and clarity of AI-generated hypotheses.

Ethical Considerations in AI-Driven Research Hypothesis

As AI takes on a more active role in hypothesis formation, ethical considerations become paramount. The responsible use of AI requires continuous vigilance to prevent unintended consequences. Researchers must be vigilant in identifying and mitigating biases in training data, ensuring that AI systems are not perpetuating or exacerbating existing inequalities.

Additionally, the ethical implications of AI-generated hypotheses, particularly in sensitive areas such as genetics or social sciences, demand careful scrutiny. Transparency in the decision-making process of AI algorithms is essential to build trust within the scientific community and society at large. Striking the right balance between innovation and ethical responsibility is a challenge that requires constant attention as the collaboration between humans and AI evolves.

The Human Touch: A crucial element in AI-driven research

Nuanced thinking, creativity, and contextual understanding that humans possess play a vital role in refining and validating the hypotheses generated by AI. Researchers must act as critical evaluators, questioning the assumptions made by AI algorithms and ensuring that the proposed hypotheses align with existing knowledge. Furthermore, the interpretability challenge can be addressed through interdisciplinary collaboration. Scientists working closely with experts in AI ethics, philosophy, and computer science can develop frameworks to enhance the transparency of AI-generated hypotheses. This not only fosters a better understanding of the underlying processes but also ensures that the generated hypotheses align with ethical and scientific standards.

What the Future Holds

The integration of AI in hypothesis formation is an ongoing journey with vast potential. The collaborative efforts of humans and machines hold the promise of accelerating scientific discovery, unlocking new insights, and addressing complex challenges facing humanity. However, this journey requires a balanced approach, acknowledging the strengths of AI while respecting the unique capabilities and ethical considerations that humans bring to the table.

To Conclude…

The transformative capability of AI in hypothesis formation is reshaping the landscape of scientific research. But this is not possible without a collaborative partnership between humans and machines, one that has the potential to drive unprecedented progress. Thus, it is imperative to navigate the challenges associated with AI integration and embrace a symbiotic relationship between human intellect and AI; with this, we can unlock the full potential of this dynamic collaboration and usher in a new era of scientific exploration and understanding.

AI Research Tools

Julius is an AI data analysis tool that helps you visualize, analyze, and get insights from all kinds of data.

Instabooks AI

Instabooks AI instantly generates customized textbooks on any topic you want to explore in depth.

Grok AI is a large language model chatbot developed by xAI that is currently in early access. It’s designed to be a resourceful AI assistant.

ResearchRabbit

ResearchRabbit is an AI-powered research app that makes discovering and organizing academic papers incredibly easy. It allows you to view interactive visualizations and create collections.

Lumina Chat

Lumina Chat is an AI-powered search engine that lets you instantly get detailed answers from over 1 million journal articles and research papers.

scite is an AI-powered research tool that helps researchers discover and evaluate scientific articles. It analyzes millions of citations and shows how each article has been cited.

SciSpace is an AI research assistant that simplifies researching papers through AI-generated explanations and a network showing connections between relevant papers.

Avidnote is an AI-powered research tool that helps you organize, write, and analyze your academic work more efficiently.

Genei is a research tool that automates the process of summarizing background reading and can also generate blogs, articles, and reports.

Ai Summary Generator

Ai Summary Generator is a text summarization tool that can instantly summarize lengthy texts or turn them into bullet point lists.

Kahubi is an AI assistant that helps researchers write, read, and analyze more effectively. It enables you to draft parts of papers and summarize text.

ChatPDF allows you to talk to your PDF documents as if they were human. It’s perfect for quickly extracting information or answering questions from large documents.

Discover the latest AI research tools to accelerate your studies and academic research. Search through millions of research papers, summarize articles, view citations, and more.


Copyright © 2024 EasyWithAI.com


Front Artif Intell, PMC10358356

Specific challenges posed by artificial intelligence in research ethics

Sarah Bouhouita-Guermech

1 School of Public Health, Université de Montréal, Montréal, QC, Canada

Patrick Gogognon

2 Centre de recherche, CHU Sainte-Justine, Montréal, QC, Canada

Jean-Christophe Bélisle-Pipon

3 Faculty of Health Sciences, Simon Fraser University, Burnaby, BC, Canada

Associated Data

The twenty-first century is often defined as the era of Artificial Intelligence (AI), which raises many questions regarding its impact on society. It is already significantly changing many practices in different fields. Research ethics (RE) is no exception. Many challenges, including responsibility, privacy, and transparency, are encountered. Research ethics boards (REBs) have been established to ensure that ethical practices are adequately followed during research projects. This scoping review aims to bring out the challenges of AI in research ethics and to investigate whether REBs are equipped to evaluate them.

Three electronic databases were selected to collect peer-reviewed articles that fit the inclusion criteria (English or French, published between 2016 and 2021, containing AI, RE, and REB). Two investigators independently reviewed each article, screening with Covidence and then coding with NVivo.

From an initial total of 657 articles to review, we were left with a final sample of 28 relevant papers for our scoping review. The selected literature described AI in research ethics (i.e., views on current guidelines, key ethical concepts and approaches, key issues of the current state of AI-specific RE guidelines) and REBs regarding AI (i.e., their roles, scope and approaches, key practices and processes, limitations and challenges, stakeholder perceptions). However, the literature often described REBs’ ethical assessment practices for AI research projects as lacking knowledge and tools.

Ethical reflection is taking a step forward, while the adaptation of normative guidelines to the reality of AI is still lagging. This impacts REBs and most stakeholders involved with AI. Indeed, REBs are not equipped to adequately evaluate the ethics of AI research and require standard guidelines to help them do so.

1. Introduction

The twenty-first century is often defined as the era of artificial intelligence (AI) (Brynjolfsson and Andrew, 2017). For a long time, humans have been conceptualizing an autonomous entity capable of human-like functions and more. Many innovations have preceded what we now know as AI (Stark and Pylyshyn, 2020). Mathematical and computational progress has had a significant impact on what made today's AI possible and allowed it to flourish so quickly over the last few years (Calmet and John, 1997; Xu et al., 2021). Many place their bets on AI's potential to revolutionize most fields. As ubiquitous as it seems, AI's role in our society remains ambiguous. Although artificial intelligence comes in different forms, essentially, it is predisposed to simulate human intelligence (Mintz and Brodie, 2019). AI takes many forms: voice or facial recognition applications, medical diagnosis systems (radiology, dermatology, etc.), algorithms that improve user service, and more (Copeland, 2022). AI is mainly used to increase productivity and make tasks less burdensome. It has proven able to absorb and analyze more data in a shorter period than humans. Indeed, some have noticed increasing patient satisfaction, better financial performance, and better data management in healthcare (Davenport and Rajeev, 2018). Many innovations emanated from AI's ability to collect large sets of data, which resulted in better predictions on different issues, helping to understand information collected throughout history or to depict puzzling phenomena more efficiently (The Royal Society, The Alan Turing Institute, 2019).

However, advances made in AI come with concerns about ethical, legal, and social issues (Bélisle-Pipon et al., 2021). AI systems (AIS) are part of professionals' decision-making and occasionally take over that role, making us wonder how responsibilities and functions are divided between the participating parties (Dignum, 2018). Another issue worth investigating is data bias. AI is initially programmed by a group of individuals to adhere to a set of pre-established data. This data could already be biased (i.e., favoring one group of people over another based on their race or socioeconomic status) by having one specific group represented and marginalizing the rest (Müller, 2021). Another fundamental issue to consider is data privacy. People are worried about the use of their data, which has become easier for big companies to access (Mazurek and Karolina, 2019). It is now much more strenuous to track where all the existing information goes. The lack of transparency has decreased the public's trust. Many actors, such as industry representatives, governments, academics, and civil society, are working toward building better frameworks and regulations to design, develop, and implement AI efficiently (Cath, 2018). Considering the multidisciplinary aspect of AI, different experts are called upon to provide their knowledge and expertise on the matter (Bélisle-Pipon et al., 2022). Many fields must leave room to adjust their standards of practice. One field that will be discussed in this study is research ethics.

Research ethics boards (REBs; the term REB is used for simplicity and includes RECs, research ethics committees, and IRBs, institutional review boards) have been created to ensure that ethical practices are adequately followed during research projects, so that participants are protected and advantages outweigh the induced harms (Bonnet and Bénédicte, 2009). To achieve this, they follow existing codes and regulations. For instance, REBs in Canada turn to the Canadian Tri-Council Policy Statement (TCPS2) to build their framework in research ethics, whereas the US uses the US Common Rule as a model (Page and Jeffrey, 2017). Many countries have a set of guidelines and laws that are used as a starting point to set boundaries for AI use. However, ordinances and regulations regarding AI are limited (O'Sullivan et al., 2019). The lack of tools makes it harder for REBs to adjust to the new challenges created by AI. This gap reflects the need to better understand the current state of knowledge and findings in research ethics regarding AI.

To inform and assist REBs in their challenges with AI, we conducted a scoping review of the literature on REBs' current practices and the challenges AI may pose during their evaluations. Specifically, this article aims to identify issues and good practices that support REBs' mission in research involving AI. To our knowledge, this is the first review on this topic. After gathering and analyzing the relevant articles, we discuss the critical elements of AI research ethics while considering REBs' role.

2. Methodology

To better understand REBs' current practices toward AI in research, we conducted a scoping review of articles retrieved from PubMed, Ovid, and Web of Science. Since the literature behind our research question is still preliminary, a scoping review seemed the better approach to gather the existing and important papers related to our topic (Colquhoun et al., 2014). A scoping review was preferred over a systematic review since the studied field is not yet clearly defined and the literature behind it is still very limited (Munn et al., 2018); after a preliminary overview of relevant articles showcased how limited that literature is, we opted for the more exploratory approach. A scoping review allows us to collect and assess essential information from the emerging literature and gather it in one place to help advance future studies. We focused on two concepts: AI and REB. Table 1 presents the search equations for each concept, which differ from one search engine to another. We sought to use general terms frequently used in the literature to define both concepts. After validating the search strategy with a librarian, the retrieved articles were imported into Covidence. The exclusion criteria were: articles published before 2016, articles published in a language other than English or French, studies found in books, book chapters, or conference proceedings, and studies that did not address AI, REBs, and research ethics. The inclusion criteria (see Table 2) were: articles published between 2016 and 2021, articles published in English or French, studies published as a peer-reviewed article, commentary, editorial, review, or discussion paper, and studies addressing AI, REBs, and research ethics.
We chose 2016 as the starting year of the review because, while it was a year of significant advancement in AI, many were concerned about its ethical implications (Mills, 2016; Stone et al., 2016; Greene et al., 2019). Since AI is fast evolving, literature from recent years was used to obtain the most emergent and recent results (Nittas et al., 2023; Sukums et al., 2023). Figure 1 presents our review flowchart following the PRISMA guidelines (Moher et al., 2009). The initial number of studies subject to review was 657. In the first step of the review, two investigators screened all 657 articles by carefully reviewing their titles and abstracts against the inclusion and exclusion criteria, excluding 589 irrelevant studies and leaving us with 68. In the next step, two investigators did a full-text reading of the studies assessed for eligibility. This full-text review excluded 40 studies (21 articles with no "research ethics" or "research ethics committee," eight papers with no "REB," "RE," and "AI," five articles with no "artificial intelligence," five pieces that were not research papers, and one unavailable full text). With NVivo (Braun and Victoria, 2006), each article was analyzed according to a set of themes that aimed to answer the questions of the current topic. "REB" is used throughout the article as an umbrella term to include all the variations used to label research ethics boards in different countries.
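As a minimal illustration of the screening logic described above (the record fields and example values are assumptions for this sketch, not the authors' actual Covidence pipeline), the stated inclusion criteria can be expressed as a simple filter, and the PRISMA counts can be checked arithmetically:

```python
# Sketch of the review's eligibility screening. Field names ("year",
# "language", "type", "topics") are illustrative assumptions.

def is_eligible(record):
    """Apply the review's stated inclusion criteria to one candidate record."""
    if not (2016 <= record["year"] <= 2021):
        return False  # outside the 2016-2021 publication window
    if record["language"] not in {"English", "French"}:
        return False  # other languages excluded
    if record["type"] in {"book", "book chapter", "conference"}:
        return False  # books, chapters, and conference papers excluded
    # Record must address all three concepts: AI, REB, research ethics.
    required = {"artificial intelligence", "research ethics board", "research ethics"}
    return required <= {t.lower() for t in record["topics"]}

records = [
    {"year": 2019, "language": "English", "type": "peer-reviewed article",
     "topics": ["Artificial Intelligence", "Research Ethics Board", "Research Ethics"]},
    {"year": 2014, "language": "English", "type": "peer-reviewed article",
     "topics": ["Artificial Intelligence", "Research Ethics Board", "Research Ethics"]},
    {"year": 2020, "language": "German", "type": "peer-reviewed article",
     "topics": ["Artificial Intelligence", "Research Ethics Board", "Research Ethics"]},
]
print([is_eligible(r) for r in records])  # → [True, False, False]

# The reported PRISMA counts are internally consistent:
assert 657 - 589 == 68           # title/abstract screening
assert 21 + 8 + 5 + 5 + 1 == 40  # full-text exclusion reasons
assert 68 - 40 == 28             # implied final sample
```

The arithmetic checks also make explicit that the 40 full-text exclusions imply a final sample of 28 articles.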

Table 1. Search strategy.

PB, PubMed; EMB, Embase; WoS, Web of Science.

Table 2. Selection criteria.

Figure 1. PRISMA flowchart. AI, artificial intelligence; REB, research ethics board; REC, research ethics committee; IRB, institutional review board; RE, research ethics.

3. Results

The following section presents the results based on the thematic coding grid used to create the different sections relevant to our topic (see Figure 2). The results come from our final sample of articles.

Figure 2. Architecture illustrating the article's results structure, starting with the two main domains: (A) AI and research ethics and (B) research ethics boards.

3.1. AI and research ethics

Researchers face several ethical quandaries while navigating research projects. When working with human research participants, they are urged to safeguard those participants' protection. However, it is not always simple to balance the common good (i.e., developing solutions for the wider population) and individual interests (i.e., research participants' safety) (Ford et al., 2020; Battistuzzi et al., 2021). Researchers are responsible for anticipating and preventing risks from harming participants while advancing scientific knowledge, which requires maintaining an adequate risk-benefit ratio (Sedenberg et al., 2016; Ford et al., 2020). With AI's fast growth, another set of issues is added to the existing ones: data governance, consent, responsibility, justice, transparency, privacy, safety, reliability, and more (Samuel and Derrick, 2020; Gooding and Kariotis, 2021). This section describes the views on current guidelines to regulate AI, key principles and ethical approaches, and the main issues. In the current climate, we expect continuity on the following concepts: responsibility, explainability, validity, transparency, informed consent, justice, privacy, data governance, benefit and risk assessment, and safety.

3.1.1. Views on current guidelines

3.1.1.1. Existent guidelines that can be used to regulate AI

Current normative guidelines do not compensate for the scarcity of AI-specific guidelines (Aymerich-Franch and Fosch-Villaronga, 2020). However, in addition to the ethical standards used as a basis for guidelines on AI use, the UN published a first set of guidelines to regulate AI (Chassang et al., 2021). Many projects, like the Human Brain Project (HBP), took the initiative to encourage discussions among different parties to anticipate issues that could result from their research (Stahl and Coeckelbergh, 2016; Aicardi et al., 2018, 2020). Researchers and developers can access tools that help orient their reflections on the responsible use of technology (Aymerich-Franch and Fosch-Villaronga, 2020). Furthermore, implementing ethics approval committees (e.g., Human Research Ethics Committees in Australia) that use a soft-governance model, which leans toward ethical regulation and is less restrictive than legal regulation, would help prevent studies or companies from abusing their participants or users (Andreotta et al., 2021). Many are contemplating using the ethical, legal, and social implications (ELSI) of digital health to encourage the implementation of ethical standards in AI where laws and regulations are lacking (Nebeker et al., 2019).

Articles have mentioned many leading countries in AI research. Supplementary Table 1 showcases the progress and effort the European Union (EU) and other countries have made regarding AI regulation. The jurisdictions most often mentioned throughout our final sample were Australia, Canada, China, the EU, the United Kingdom, and the United States. Since this information comes strictly from our selected articles, some information was unavailable. While noticeable progress is being made regarding AI development and regulation, most countries have given little, if any, indication of attention to AI research ethics.

3.1.1.2. Moral status and rights

While guidelines and norms are shifting to fit AI standards, many questions on moral status and rights are raised to adapt to this new reality. Authors argue that we cannot assign moral agency to AI and robots, for multiple reasons: robots do not seem capable of solving problems ethically (Stahl and Coeckelbergh, 2016), AI cannot explain its generated results, and AI lacks the willingness to choose (Farisco et al., 2020), all of which might impact decision-making in research ethics.

Rights are attributed to different living entities. For instance, in the EU, the law protects animals as sentient living organisms and unique tangible goods. Their legal status also obliges researchers not to harm animals during research projects, making us question the status and rights we should assign to AIS or robots (Chassang et al., 2021). Indeed, Miller pointed out that having a machine at one's disposal raises questions about human-machine relationships and the hierarchical power they might induce (Miller, 2020).

3.1.2. Key principles and norms of AI systems in research ethics

We have seen that the lexicon and language used invoke both classical theories and a contextualization of AI ethics benchmarks within the practices and ethos of research ethics.

3.1.2.1. Ethical approaches in terms of AI research ethics

The literature invoked the following classic theories: the Kantian-inspired model, utilitarianism, principlism (autonomy, beneficence, justice, and non-maleficence), and the precautionary principle. Table 3 illustrates these essential ethical approaches found in our final sample, along with their description in terms of AI research ethics.

Table 3. Key ethical approaches raised in the present scoping review and their descriptions in terms of AI research ethics.

3.1.2.2. Responsibility in AI research ethics

Public education and ethics training could help governments spread awareness and sensitize people regarding research ethics in AI (Cath et al., 2018). Accountability for AI regulation and decision-making should not fall solely into stakeholders' hands but should also rest on solid legal grounds (Chassang et al., 2021). Digital mental health apps and other institutions will now be assigned responsibilities that have usually been ascribed to the professionals or researchers using the technology (i.e., decision-making, providing users with enough tools to understand and use products, being able to help when needed, etc.) (Gooding and Kariotis, 2021). Scientists and AI developers must not throw caution to the wind regarding the possibility that biased algorithms could be fed to AI models (Ienca and Ignatiadis, 2020). Clinicians will have to tactfully inform patients of the results generated by machine learning (ML) models while considering their risk of error and bias (Jacobson et al., 2020). It remains vague how to attribute responsibility to specific actors; however, it is necessary to have different groups work together to tackle the problem (Meszaros and Ho, 2021; Samuel and Gemma, 2021). Some consider validity, explainability, and open-source AI systems to be among the defining points that lead to responsibility. As these technologies advance and gain interest, the sense of social responsibility also increases. Indeed, every actor must contribute to making sure that these novel technologies are developed and used in an ethical manner (Nebeker et al., 2019; Aicardi et al., 2020).

3.1.2.3. Explainability and validity

An important issue usually raised with AIS is the explainability of results. Deep learning (DL) is a type of ML with more extensive algorithms that encode data with a broader array of interpretations (Chassang et al., 2021). This makes it harder to explain how DL and AI models reached a particular conclusion (Ienca and Ignatiadis, 2020; Jacobson et al., 2020), posing transparency issues that are challenging for participants (Grote, 2021).

Since AI is known for its "black-box" aspect, where results are difficult to justify, it is difficult to fully validate a model with certainty (Ienca and Ignatiadis, 2020). Closely monitoring research participants could help validate results and, in theory, yield more accurate ones. However, close monitoring could also have a negative effect by influencing participants' behavior depending on whether they mind being monitored, thereby producing less accurate results (Jacobson et al., 2020). Furthermore, it could be more challenging in certain contexts to promote validity when journals and funding bodies favor new and innovative studies over ethical research on AI, even if the latter is being promoted (Samuel and Gemma, 2021).

3.1.2.4. Transparency and informed consent

According to the White House Office of Science and Technology Policy (OSTP), transparency would help solve many ethical issues (Cath et al., 2018). Transparency allows research participants to be aware of a study's different outlooks and to comprehend them (Sedenberg et al., 2016; Grote, 2021). The same goes for new device users (Chassang et al., 2021). AI models (i.e., products, services, apps, sensor-equipped wearable systems, etc.) produce a great deal of data that does not always come from consenting users (Ienca and Ignatiadis, 2020; Meszaros and Ho, 2021). Furthermore, AI's black-box nature makes it challenging to obtain informed consent, since the lack of explainability of AI-generated results might not give participants enough information to provide informed consent (Jacobson et al., 2020; Andreotta et al., 2021). Thus, it is essential to make consent forms easy to understand for the targeted audience (Nebeker et al., 2019).

However, the requirement to obtain informed consent could lead to other, less desirable implications. Some argue that requiring authorization for all data, especially in studies that hold vast datasets, might lead to data bias and a decrease in data quality, because it entices only a specific group of people to give consent, leaving out a significant part of the population (Ford et al., 2020).

3.1.2.5. Privacy

While the level of privacy expected differs from one scholar to another, the concept of privacy remains a fundamental value to human beings (Andreotta et al., 2021). Through AI and robotics, data can be seen as an attractive commodity, which could compromise privacy (Cath et al., 2018). Researchers are responsible for keeping participants unidentifiable while using their data (Ford et al., 2020). However, data collected from many sources carries a higher risk of identifying people, and ML researchers still struggle to comply with privacy guidelines while pursuing their studies.

3.1.2.5.1. Data protection

According to a study, most people do not think data protection is an issue; one explanation for this phenomenon is that people might not fully grasp the magnitude of its impact (Coeckelbergh et al., 2016). Indeed, the effect could be very harmful to some people. For instance, data found about a person could decrease their chances of employment or even of getting insurance (Jacobson et al., 2020). Instead of focusing on data minimization, data protection should be prioritized to ensure ML models get the most relevant data, maintaining data quality while preserving privacy (McCradden et al., 2020b). Another point worth mentioning is that the General Data Protection Regulation (GDPR) allows the reuse of personal data for research purposes, which might allow companies that wish to pursue commercial research to bypass certain ethical requirements (Meszaros and Ho, 2021).

3.1.2.5.2. Privacy vs. science advancement dilemmas

Some technology-based studies face a dichotomy between safeguarding participants' data and making scientific advancements. This does not always come easily since ensuring privacy can compromise data quality, while studies with more accurate data usually lead to riskier privacy settings (Gooding and Kariotis, 2021 ). Indeed, with new data collection methods in public and digital environments, consent and transparency might be overlooked for better research results (Jacobson et al., 2020 ).

3.1.3. Key issues of the current state of AI-specific RE guidelines

Many difficulties have arisen with the soaring evolution of AI: a gap between research ethics and AI research, inconsistent standards regarding AI regulation and guidelines, and a widely noticed lack of knowledge and training in these new technologies. Medical researchers are more familiar with research ethics than computer science researchers and technologists (Nebeker et al., 2019; Ford et al., 2020), which shows a disparity in knowledge between fields.

With new technologies comes the difficulty in assessing them (Aicardi et al., 2018 ; Aymerich-Franch and Fosch-Villaronga, 2020 ; Chassang et al., 2021 ). Research helps follow AI's progress and ensures it does so responsibly and ethically (Cath et al., 2018 ). Unfortunately, applied and research ethics are not always in sync (Gooding and Kariotis, 2021 ). AI standards mostly rely on ethical values rather than concrete normative and legal regulations, which have become insufficient (Samuel and Derrick, 2020 ; Meszaros and Ho, 2021 ). The societal aspects of AI are more discussed amongst researchers than the ethics part of research (Samuel and Derrick, 2020 ; Samuel and Gemma, 2021 ).

Many countries have taken the initiative to regulate AI using ethical standards. However, guidelines vary from one region to another, and it has become a strenuous task to establish a consensus on strategies, turn principles into laws, and make them practical (Chassang et al., 2021). This does not only come down to countries having differing points of view but to journals as well: validation of an AI research project for publication can differ from one journal to another (Samuel and Gemma, 2021). Even though ethical, legal, and social implications (ELSI) are used to help oversee AI, regulations and AI-specific guidelines remain scarce (Nebeker et al., 2019).

3.1.4. When research ethics guidelines are applied to AI

While ethics approval is usually emphasized for research projects, some projects are not required to follow an ethics guideline. In the United Kingdom, some research projects do not require ethics approval (i.e., those using social media data, geolocation data, or anonymous secondary health data with an agreement) (Samuel and Derrick, 2020). A study highlighted that most of the papers it gathered that used available data from social media had no ethics approval (Ford et al., 2020). Some technology-based research projects ask for consent from their participants but skip requesting ethics approval from a committee (Gooding and Kariotis, 2021). Some non-clinical research projects are exempt from an ethics evaluation (Samuel and Gemma, 2021). Tools do not always undergo robust testing before validation either (Nebeker et al., 2019). Of course, ethics evaluation remains essential in multiple other settings: when minors or people lacking the capacity to make an informed decision are involved, when users are recognizable, when researchers seek users' data directly (Ford et al., 2020), when clinical data or applications are used (Samuel and Gemma, 2021), etc.

3.2. Research ethics board

Historically, REBs have focused on protecting human participants in research (e.g., therapeutic, nursing, psychological, or social research) while complying with the requirements of funding or federal agencies like the NIH or FDA (Durand, 2005). This approach has continued, and in many countries REBs are fundamentally essential to ensure that research involving human participants is conducted in compliance with ethics guidelines and national and international regulations.

3.2.1. Roles of REB

The primary goal of REBs focuses on reviewing and overseeing research to provide the necessary protection for research participants. REBs consist of groups of experts and stakeholders (clinicians, scientists, community members) who review research protocols with an eye toward ethical concerns. They ensure that protocols comply with regulatory guidelines and can withhold approval until such matters have been addressed. Also, they were designed to play an anticipatory role, predicting what risks might arise within research and ironing out ethical issues before they appeared (Friesen et al., 2021 ). Accordingly, REBs aim to assess whether the proposed research project meets specific ethical standards regarding the foreseeable impacts on human subjects. However, REBs are less concerned with the broader consequences of research and its downstream applications. Instead, they focus on the direct effects on human subjects during or after the research process (Prunkl et al., 2021 ). Within their established jurisdiction, REBs can develop a review process independently. Considering the specific context of AI research, REBs would aim to mitigate the risks of potential harm possibly caused by technology. This could be done by reviewing scientific questions relating to the origin and quality of the data, algorithms, and artificial intelligence; confirming the validation steps conducted to ensure the prediction models work; requesting further validation to be carried out if required (Samuel and Derrick, 2020 ).

3.2.2. Scope and approaches

AI technologies are rapidly changing health research; these mutations might lead to significant gaps in REB oversight. Some authors who analyzed these challenges suggest an adaptive scope and approach. To achieve an AI-appropriate research ethics review, it is necessary to clearly define the thresholds and characteristics of cardinal research ethics considerations, including "what constitutes a human participant, what is a treatment, what is a benefit, what is a risk, what is considered publicly available information, what is considered an intervention in the public domain, what is medical data, but also what is AI research" (Friesen et al., 2021).

There is an urgent need to tailor oversight to the technology and its development, evaluation, and use contexts (i.e., digital mental health) (Gooding and Kariotis, 2021). Health research involving AI features requires intersectoral and interdisciplinary participatory efforts to develop dynamic, adaptive, and relevant normative guidance. It also requires practice navigating the ethical, legal, and social complexities of patient data collection, sharing, analysis, interpretation, and transfer for decision-making in a natural context (Gooding and Kariotis, 2021). These studies also imply multi-stakeholder participation (such as regulatory actors, education, and social media).

This diversity of actors seems to be a key aspect in this case. Still, it requires transparent, inclusive, and transferable normative guidance and norms to ensure that all parties understand each other and meet the normative demands regarding research ethics. Furthermore, bringing together diverse stakeholders and experts is worthwhile, especially when the impact of research can be significant, difficult to foresee, and unlikely to be understood by any single expert, as with AI-driven medical research (Friesen et al., 2021). To this end, several factors are beneficial for promoting cooperation between academic research and industry: inter-organizational trust, collaboration experience, and the breadth of interaction channels. Partnership strategies like collaborative research, knowledge transfer, and research support may be essential to encourage this in much broader terms than strict technology transfer (Aicardi et al., 2020).

3.2.3. AI research ethics, practices, and governance oversight

According to the results of our review, REBs must assess the following six considerations of importance during AI research ethics review: (1) informed consent, (2) benefit-risk assessment, (3) safety and security, (4) validity and effectiveness, (5) user-centric approach and design, and (6) transparency. In the literature, some authors have pointed out specific questions about considerations REBs should be aware of. Table 4 reports the main highlights REBs might rely on.

Table 4. Main highlights from the reviewed body of literature (divided by key salient ethical considerations).

3.2.3.1. Informed consent

Some authors argue that the priority might be to consider whether predictions from a specific machine learning model are appropriate for informing decisions about a particular intervention (Jacobson et al., 2020 ). Others advocate carefully constructing the planned interventions so research participants can understand them (Grote, 2021 ).

The extent to which researchers should provide extensive information to participants is not evident among stakeholders. So far, research suggests that there is no clear consensus among patients on whether they would want to know this kind of information about themselves (Jacobson et al., 2020). Hence, the question remains whether patients want to know if they are at risk, particularly if they cannot be told why, as factors included in machine learning models generally cannot be interpreted as having a causal impact on outcomes (Jacobson et al., 2020). Therefore, sharing information from an uninterpretable model may adversely affect a patient's perception of their illness, confuse them, and raise immediate concerns about transparency.

3.2.3.2. Benefits/risks assessment

The analysis of harms and potential benefits is critical when assessing human research. REBs are directly concerned with this assessment, preventing unnecessary risks and promoting benefits. Considerations of the potential benefits and harms to patient-participants are necessary for future clinical research, and REBs are optimally positioned to perform this assessment (McCradden et al., 2020c). Additional considerations, like the benefit/risk ratio or effectiveness and the systematic process described previously, are necessary. Risk assessments could have a considerable impact in research involving mobile devices or robotics, because preventive action and safety measures may be required in the case of imminent risks. Thus, REB risk assessment seems very important (Jacobson et al., 2020).

Approaching AI research ethics through user-centered design can represent an interesting avenue to better understand how REBs can conduct risk/benefit assessments. For researchers, involving users in the design of AI research is likely to promote better research outcomes. This can be achieved by investigating how AI research actually meets users' needs and how it may generate intended and unintended impacts on them (Chassang et al., 2021; Gooding and Kariotis, 2021). Indeed, there is insufficient reason to believe that AI research will produce positive benefits unless it is evaluated with a focus on patients and situated in the context of clinical decision-making (McCradden et al., 2020c). Consequently, REBs might focus on the broader societal impact of this research (Chassang et al., 2021).

3.2.3.3. Safety and security

Safety and security are significant concerns for AI and robotics, and their assessment may rely on end-users' perspectives. To address the safety issue, it is not sufficient for robotics researchers to say that their robot is safe based on literature and experimental tests; it is crucial to find out about the perceptions and opinions of end-users of robots and other stakeholders (Coeckelbergh et al., 2016). Testing technology in real-life scenarios is vital for identifying and adequately assessing technology's risks, anticipating unforeseen problems, and clarifying effective monitoring mechanisms (Cath et al., 2018). On the other hand, there is a potential risk that an AIS misleads the user into performing a legal act.

3.2.3.4. Validity and effectiveness

Validity is a crucial consideration, and one on which there is consensus, for appreciating the normative implications of AI technologies. To this end, research ethics requires that researchers' protocols be explicit about many elements and describe their validation model and performance metrics in a way that allows assessment of the clinical applicability of the technology under development (McCradden et al., 2020b). In addition, in terms of validity, simulative models have yet to be appropriately compared with standard medical research models (including in vitro, in vivo, and clinical models) to ensure they are correctly validated and effective (Ienca and Ignatiadis, 2020). Considering the many red flags raised in recent years, AI systems may not work equally well for all sub-populations (racial, ethnic, etc.). Therefore, AI systems must be validated for different subpopulations of patients (McCradden et al., 2020b).

Demonstration of value is essential to ensure the scientific validity of the claims made for technology but also to attest to the proven effectiveness once deployed in a real-world setting and the social utility of a technology (Nebeker et al., 2019 ). When conducting a trial for the given AI system, the main interest should be to assess its overall reliability, while the interaction with the clinician might be less critical (Grote, 2021 ).

3.2.3.5. Transparency

Transparency entails understanding how technology behaves and establishing thresholds for permissible (and impermissible) usages of AI-driven health research. Transparency requires clarifying the reasons and rationales for the technology's design, operation, and impacts (Friesen et al., 2021 ). Identified risks should be accompanied by detailed measures intended to avoid, reduce, or eliminate the risks. The efficiency of such efforts should be assessed upstream and downstream as part of the quality management process. As far as possible, testing methods, data, and assessment results should be public. Transparent communication is essential to make research participants, as well as future users aware of the technology's logic and functioning (Chassang et al., 2021 ).

The implications presented in Table 4 seem to encourage REBs to adopt a more collaborative approach to grasp a better sense of the realities of different fields. The analysis also showed that data bias is a flagrant problem whether AI is used or not, and that this discriminatory component should be addressed to avoid amplifying the problem with AI. Informed consent is another value that REBs prioritize and will have to adapt to AI, because new information might have to be disclosed to participants. Safety and security are always essential to consider; however, with AI, other measures will have to be implemented to ensure that participants are not put in danger. One of the main aspects of AI is data sharing and the risk that it might breach participants' privacy; the methods in place now might not be suitable for AI's fast evolution. The questions of justice, equality, and fairness that have not been resolved in our current society will also have to be investigated in the AI era. Finally, the importance of validity was raised numerous times. Unfortunately, REBs do not have the right tools to evaluate AI, and it will be necessary for AI to meet the population's needs. Furthermore, definitions of specific values and principles that REBs usually respond to will have to be reviewed and adapted for AI.

3.2.4. Limitations and challenges

Our results point to several discrepancies between the critical considerations for AI research ethics and REB review of health research and AI/ML data.

3.2.4.1. Consent forms

According to our review, there is a disproportionate focus on consent before other ethical issues. Authors argue that the main element REBs ask about relates to consent, not the AI aspect of the project. This finding suggests that narrowing AI research ethics around consent concerns remains problematic. In some instances, the disproportionate focus on consent, along with the importance REBs place on consent forms and participant information sheets, has shaped how research ethics is defined, e.g., viewed as a proxy for ethics best practice or, in some cases, as an ethics panacea (Samuel and Gemma, 2021).

3.2.4.2. Safety, security, and validity

Authors report a lack of knowledge for safety review: REBs may not have the experience or expertise to conduct a risk assessment evaluating the probability or magnitude of potential harm. Similarly, the training data used to inform algorithm development are often not considered to qualify as human subjects research, which – even in a regulated environment – makes a prospective review for safety potentially unavailable (Nebeker et al., 2019).

On the other hand, REBs lack appropriate processes for assessing whether AI systems are valid, effective, and apposite. The requirement to evaluate evidence of effectiveness adds to a range of other considerations with which REBs must deal (i.e., the protection of participants and fairness in the distribution of benefits and burdens). Therefore, there is still much to be done to equip REBs to evaluate the effectiveness of AI technologies, interventions, and research (Friesen et al., 2021).

3.2.4.3. Privacy and confidentiality

Researchers point to a disproportionate focus on data privacy and governance over other ethical issues in medical health research with AI tools. This focus warrants further attention, as privacy issues may overshadow other concerns. Indeed, it seems problematic and has led to a narrowing of the ethics and responsibility debates perpetuated throughout the ethics ecosystem, often at the expense of other ethical issues, such as questions around justice and fairness (Samuel and Gemma, 2021). REBs appear to be less concerned about the results themselves. One stakeholder explained that when reviewing AI-associated research ethics applications, REBs focus more on questions of data privacy than on other ethical issues, such as those related to the research and the research findings. Others painted a similar picture of how data governance issues were a central focus when discussing their interactions with their REB. According to these stakeholders, REBs focus less on the actual algorithm than on how the data is handled, and the issue remains data access rather than the software itself (Samuel and Gemma, 2021).

3.2.4.4. Governance, oversight, and process

Lack of expertise appears to be a significant concern in our results. Indeed, even when there is oversight from a research ethics committee, authors observe that REB members often lack the experience or confidence regarding particular issues associated with digital research (Samuel and Derrick, 2020 ).

Some authors advocate that ML researchers should complement the membership of REBs, since they are better situated to evaluate the specific risks and potential unintended harms linked to ML methodology. On the other hand, REBs should be empowered to fulfill their role in protecting the interests of patients and participants and to enable the ethical translation of healthcare ML (McCradden et al., 2020c). However, researchers expressed different views about REBs' expertise: while most acknowledged a lack of AI-specific proficiency, for many this was unproblematic because the ethical issues of their AI research seemed unexceptional compared to other ethics issues raised by “big data” (Samuel and Gemma, 2021).

Limits of process and regulation are another concern faced by REBs, including a lack of consistency in decision-making within and across REBs, a lack of transparency, poor representation of the participants and public they are meant to represent, insufficient training, and a lack of measures to examine their effectiveness (Friesen et al., 2021 ). There are several opinions on the need for and the effectiveness of REBs, with critics lamenting excessive bureaucracy, lack of reliability, inefficiency, and, importantly, high variance in outcomes (Prunkl et al., 2021 ). To address the existing gap of knowledge between different fields, training could be used to help rebalance this and ensure sufficient expertise for all research experts to pursue responsible innovation (Stahl and Coeckelbergh, 2016 ).

Researchers also described the lack of standards and regulations for governing AI at the level of its societal impact: the way ethics committees in institutions work may still be acceptable, but there is a need for another level of thinking that combines everything rather than looking at one project at a time (Samuel and Gemma, 2021).

Finally, researchers have acknowledged the lack of ethical guidance, and some REBs report feeling ill-equipped to keep pace with rapidly changing technologies used in research (Ford et al., 2020 ).

3.2.5. Stakeholder perceptions and engagement

Researchers' perspectives on AI research ethics vary. While some claim that researchers often take action to counteract the adverse outcomes created by their research projects (Stahl and Coeckelbergh, 2016), others argue that researchers do not always notice these outcomes (Aymerich-Franch and Fosch-Villaronga, 2020). When the latter occurs, researchers are pressed to find solutions to deal with those outcomes (Jacobson et al., 2020).

Furthermore, researchers are expected to engage more in AI research ethics. Researchers must demonstrate cooperation with certain institutions (i.e., industries and governments) (Cath et al., 2018). Researchers are responsible for ensuring that their research project is conducted responsibly by considering participants' needs (Jacobson et al., 2020). Research ethics committees usually comprise researchers from multidisciplinary fields who are better equipped to answer further ethical and societal questions (Aicardi et al., 2018). However, there can be a clash of interests between parties when setting goals for a research project (Battistuzzi et al., 2021).

Much of the time, different stakeholders do not necessarily understand other groups' realities. Research is therefore vital to ensure that stakeholders can understand one another and share a common frame of reference, which will help advance AI research ethics (Nebeker et al., 2019).

Responsibility for ensuring a responsible utilization of AI lies with various groups of stakeholders (Chassang et al., 2021). Figure 3 portrays some of the groups most often mentioned throughout the literature and illustrates the number and variety of stakeholders who must collaborate to ensure that AI is used in a responsible manner.

Figure 3. Overview of the stakeholders involved in regulation regarding AI in research ethics: the main active stakeholders (dark blue) and the main passive stakeholders (light blue).

Many others, such as the private sector, can be added to the list. Studies have shown that private companies' main interest is profit rather than improving health with the data collected using AI (McCradden et al., 2020a). Another problematic element is that private-sector actors often do not fall under the regulation of ethical oversight boards, which means that AI systems or robots developed by private companies do not necessarily follow an accepted ethical guideline (Sedenberg et al., 2016). This goes beyond ethical research concerns.

3.2.6. Key practices and processes for AI research

REBs may face new challenges in the context of research involving AI tools. Authors are calling for specific oversight mechanisms, especially for medical research projects.

3.2.6.1. Avoid bias in AI data

While AI tools provide new opportunities to enhance medical health research, there is an emerging consensus among stakeholders regarding bias concerns in AI data, particularly in clinical trials. Since bias can worsen pre-existing disparities, researchers should proactively target a wide range of participants to establish sufficient evidence of an AI system's clinical benefit across different populations. To mitigate selection bias, REBs may require randomization in AI clinical trials. To achieve this, researchers must start by collecting more and better data from social minority groups (Grote, 2021). Bias concerns should also be taken into account in the validation phase, where the performance of the AI system is measured on a benchmark data set; it is therefore crucial to test AI systems on different subpopulations. Affirmative action in recruiting research participants for AI RCTs thus seems ethically permissible (Grote, 2021). However, authors reported that stakeholders might encounter challenges accessing the needed data in a context where severe legal constraints are imposed on sharing medical data (Grote, 2021).

3.2.6.2. Attention to vulnerable populations

Vulnerable populations require particular protection against the risks they may face in research.

When involving vulnerable populations, such as those with a mental health diagnosis, in AI medical health research, additional precautions should be considered to ensure that those involved in the study are duly protected from harm – including stigma and economic and legal implications. In addition, it is essential to consider whether access barriers might exclude some people (Nebeker et al., 2019 ).

3.2.6.3. Diversity, inclusion, and fairness

Another issue to be raised when considering critical practices and scope in AI research relates to fair representation, diversity, and inclusion. According to Grote, one should examine how research participants are distributed and how representative they are of the state, country, or even world region in which the AI system is tested; the author asks whether we should instead aim for a parity distribution across different gender, racial, and ethnic groups. He raises several questions to support REB reflection on diversity, inclusion, and fairness: How should the reference classes for the different subpopulations be determined? What conditions must be met for fair subject selection in AI RCTs? And finally, when, if ever, is it morally justifiable to randomize research participants in medical AI trials? (Grote, 2021).

3.2.6.4. Guidance to assess ethical issues in research involving robotics

The aging population and the scarcity of health resources are significant challenges healthcare systems face today. Consequently, people with disabilities, especially elders with cognitive and mental impairments, are the most affected. The evolving field of research with assistive robots may be useful in providing care and assistance to these people. However, robotics research requires specific guidance when participants have physical or cognitive impairments; particular challenges relate to informed consent, confidentiality, and participant rights (Battistuzzi et al., 2021). According to some authors, REBs should ask several questions to address these issues: Is the research project expected to enhance the quality of care for the research participants? What ethical issue(s) does this study illustrate? What are the facts? Is any important information missing from the research? Who are the stakeholders? Which course of action best fits the recommendations and requirements set out in the “Ethical Considerations” section of the study protocol? How can that course of action be implemented in practice? Could the ethical issue(s) presented in the case have been prevented, and if so, how? (Battistuzzi et al., 2021).

For neurorobotics, two further questions arise: Which ethical and social issues may neurorobotics raise, and are the mechanisms currently implemented sufficient to identify and address them? And is the notion that we may analyze, understand, and reproduce what makes us human rooted in something other than reason (Aicardi et al., 2020)?

3.2.6.5. Understanding of the process behind AI/ML data

A good understanding of the process behind AI/ML tools may be of interest to REBs when assessing the risk/benefit ratio of medical research involving AI. However, there seems to be a lack of awareness of how AI researchers obtain their results. Authors argue, for example, that it would be impossible to induce perception of the external environment in a neuron culture, or to interpret the signals from the neuron culture as motor commands, without a basic understanding of the neural code (Bentzen, 2017). Indeed, when using digital health technologies, the first step is to ask whether the tools – be they apps, sensors, or AI applied to large data sets – have demonstrated value for outcomes. One should ask whether they are clinically effective, whether they measure what they purport to measure (validity) consistently (reliability), and finally, whether these innovations also improve access for those at the highest risk of health disparities (Nebeker et al., 2019).

Indeed, the ethical issues of AI research raise major questions within the literature. What may seem surprising at first sight is that the body of literature is still relatively small and appears to be in an embryonic state regarding the ethics of the development and use of AI (outside the scope of academic research). The literature is thus more concerned with the broad question of what constitutes research ethics in AI-specific research and with pointing out the gaps in normative guidelines, procedures, and infrastructures adapted to the oversight of responsible and ethical research in AI. Perhaps unsurprisingly, most of the questions related to studies within the health sector. This is to be expected given the ascendancy of health within the research ethics field (Faden et al., 2013). Thus, most considerations relate to applied health research, the implications for human participants (whether in digital health issues, research protocols, or interactions with different forms of robots), and whether projects should be subject to ethics review.

Interestingly, in AI-specific research ethics, traditional issues of participant protection (including confidentiality, consent, and autonomy in general) and research involving digital technologies intersect and are furthered by the uses of AI. Indeed, because AI requires big data and behaves very distinctly from other technologies, the primary considerations raised by the body of literature studied were predominantly classical AI ethics concerns, contextualized and exacerbated within research ethics practices. For instance, one of the most prevalent ethical considerations raised and discussed was privacy and the new challenges regarding the massive amount of data collected and its use. If a breach of confidentiality were to happen, or if data collection were to lead to the discovery of further information, this would raise the possibility of harming individuals (Ford et al., 2020; Jacobson et al., 2020). In addition, informed consent was widely mentioned, with a focus on transparency and explainability when the issues were AI-specific. Indeed, AI's black-box problem of explainability was raised many times; it is a challenge because it is not always easy to justify the results generated by AI (Jacobson et al., 2020; Andreotta et al., 2021). This, in turn, poses a problem for transparency: participants expect to have the information relevant to the trial needed to make an informed and conscious decision regarding their participation, and not having adequate knowledge to share with participants might not align with informed consent.

Furthermore, another principle was brought up many times: responsibility. Responsibility is chiefly shared between the researcher and the participant (Gooding and Kariotis, 2021). Now that AI is added to the equation, it has become harder to determine who exactly should be held accountable for certain events (i.e., data errors) and in what context (Meszaros and Ho, 2021; Samuel and Gemma, 2021). While shared responsibility is an idea many share and wish to implement, it is not easy to realize. Indeed, as seen in Figure 3, many stakeholders (e.g., lawmakers, AI developers, AI users) may participate in responsibility sharing. However, much work will have to be put into finding a fair way to share responsibility among the parties involved.

4. Discussion

Our results have implications mainly on three levels, as shown in Figure 4. AI-specific implications for research ethics are addressed first, followed by what these challenges mean for the REBs that must take them on. Finally, new research avenues are discussed before closing with the limitations.

Figure 4. Line of progression on AI ethics resolution in research.

4.1. AI-specific implications for research ethics

The issues raised by AI are eminently global. It is interesting to see in the articles presented in the scoping review that researchers in different countries are asking questions colored by the jurisdictional, social, and normative context in which the authors work. However, there appears to be heterogeneity in the advancement of AI research ethics thinking; this is particularly evident in the progress of research ethics initiatives within countries (see Supplementary Table 1 ). A striking finding is that very little development has been done regarding AI-specific standards and guidelines to frame and support research ethics worldwide.

At this point, the literature does not discuss the content of norms and their application to AI research; rather, it makes initial observations about the issues and challenges AI poses to research ethics. In this sense, the authors indicate new challenges posed by the emergence of AI in research ethics. AI makes many principles more challenging to assess (it seems quite difficult to use the current guidelines to balance risks and benefits). One example is that it has become unclear which level of transparency is adequate (Geis et al., 2019). AI validation, for its part, is not always performed optimally throughout AI's lifecycle (Vollmer et al., 2020). Accountability remains a continuing issue, since it is still unclear who should be held accountable, and to what extent, when AI is in play (Greatbatch et al., 2019). In addition, AI is also known to amplify certain traditional issues in research ethics. For example, AI blurs the notion of free and informed consent, since the information a patient or participant needs regarding AI is yet to be determined (Gerke and Timo Minssen, 2020). Privacy is getting harder to manage, because with AI it has become possible to identify individuals by analyzing all the available data, even after deidentification (Ahuja, 2019). Data bias is another leading example: AI may not only fail to detect bias in the data it is fed but could also generate more biased results (Auger et al., 2020).

Interestingly, the very distinction between new AI-related issues and old, amplified ones is still not entirely clear to researchers. For instance, while AI is quickly blamed for generating biased results, the source of the problem may be the biased data fed to it (Cath et al., 2018; Chassang et al., 2021; Grote, 2021). Another issue is the lack of robustness: it is challenging to rely entirely on AI to always give accurate results (Grote, 2021). However, this issue is also found in human decision-making. Thus, the most efficient use of AI could depend on context; the final decision could be reserved for humans, limiting AI's role to that of an assistive tool (Ienca and Ignatiadis, 2020). Therefore, drawing a clear picture of what is new and what is less so is difficult. There is no doubt, however, that AI is disrupting the field of research ethics, its processes, practices, and standards. This also points to the fact that there are no AI-specific research ethics guidelines to help give a sense of how best to evaluate AI in a way compatible with research ethics guidance.

Another observation is that research ethics (and a fortiori research ethics committees) takes a very limited approach to AI development and research. Research ethics only comes into play at a specific point in the development of AI technologies, interventions, and knowledge, i.e., after an AIS has been developed and before its implementation in a real context. Thus, research ethics, as it has been developed in most countries, focuses on what happens within public organizations and when human participants are involved. This excludes technological developments by industry, which do not require ethical certification. Therefore, the vast majority of AIS outside the health and social services sector will not be subject to research ethics review, for example those relying on data found in social media or geolocation (Samuel and Derrick, 2020). But even within the health sector, AIS that do not directly interact with patients could largely be excluded from the scope of research ethics and the mandate of REBs. This makes the field of AI research ethics very new and small compared to responsible AI innovation.

4.2. What this means for REBs

No author seems to indicate that REBs are prepared and equipped to evaluate research projects involving AI as rigorously, confidently, and consistently as more traditional research protocols (i.e., those not involving AI). One paper, from Sedenberg et al. (2016), explicitly indicates that the current REB model should be replicated in the private sector to help oversee and guide AI development (Sedenberg et al., 2016). Arguably, this call is more about adding an appraising actor to private-sector technology development than about praising REBs for their mastery and competence in AI research ethics review. Yet it still conveys a relatively positive perception of the current readiness and relevance of REBs for research ethics. This may also reflect a lack of awareness (among uninformed stakeholders) of the limitations faced by REBs, which on paper can probably be seen as able to evaluate research protocols involving AI like any other project. This is, however, disputed or refuted by the rest of the literature studied.

The bulk of the body of literature reviewed was more circumspect about the capacity of REBs: not that they are incompetent, but rather that they lack, to start with, a normative framework relevant to AI research that is conceptually rigorous and comprehensive as well as performative and appropriate to the mandates, processes, and practices of REBs. Over the last several decades, REBs have primarily relied on fairly comprehensive and, to some extent, harmonized regulations and frameworks to inform and guide their ethical evaluation. Lacking such a framework, REBs face new challenges without any tools to support their decisions on AI dilemmas. The authors of our body of literature thus seem to place higher expectations on all stakeholders to find solutions addressing the specificities and challenges of AI in research ethics.

One of the first points is quite simple: determining when research involving AI should be subject to research ethics review. Even this simple observation is not consensual. Beyond it, serious concerns can be raised about the current mandate of REBs and their ability to evaluate AI with their current means and framework. Not only do they lack clear guidelines for any kind of standard assessment of AI in research ethics, but their roles are also not clearly defined. Should their role be extended to look not just at research but also at the use of the downstream technology? Or does this require another ethics oversight body that would look more at the technology in real life? This raises the question of how a lifecycle evaluative process can best be structured and how a continuum of evaluation, adapted to this adaptive technology, can be developed.

4.3. New research avenues

After looking at the heterogeneity of norms and regulations regarding AI in different countries, there should be interest in initiating an international comparative analysis. The aim would be to investigate how REBs have adapted their practices for evaluating AI-based research projects without much input and support from norms. This analysis could raise many questions (i.e., are there key issues that are impossible to universalize?).

4.3.1. The scope and approach of ethics review by REBs must be revisited in light of the specificities of research using AI

The primary considerations discussed above raise new challenges regarding the scope and approaches of REB practices when reviewing research with AI. Furthermore, applications developed within the research framework often rely on a population-based system, leading REBs to question whether their assessment should focus on a systematically individual approach or on societal considerations and their underlying foundations.

However, AI research is still emerging, which underlines the difficulty of settling such a debate. Finally, one can question the place of current AI guidelines within the process of ethical evaluation by REBs. Should this reflection be limited to REBs, or should it include other actors, namely scientists or civil society?

AI ethics is not limited to research. While less discussed, AI ethics raises many existential questions. Dynamics such as the patient–physician relationship will have to adapt to a new reality (Chassang et al., 2021). With human tasks being delegated to AI, notions of personhood (Aymerich-Franch and Fosch-Villaronga, 2020), autonomy (Aicardi et al., 2020), and human status in our society (Farisco et al., 2020) are threatened. This leads to the question of what it is to be human. Robots used in therapies to care for patients (i.e., autistic children) could induce attachment issues and other psychological impacts (Coeckelbergh et al., 2016). This points to another issue, overreliance on AI, a problem similar to that raised by current technological tools (i.e., cell phones) (Holte and Richard, 2021).

4.3.2. Updating and adapting processes in ethics committees

AI ethics is still an emerging field. REBs ensure the application of ethical frameworks, laws, and regulations. Our results suggest that research in AI involves complex issues that emerge around new research strategies and methodologies drawing on technologies such as computer science, mathematics, or digital technology. Thus, REBs' concern remains to recognize and assess the ethical issues that arise from these studies and to adapt to rapid changes in this emerging field.

In research ethics, respect for a person's dignity is essential. In several normative frameworks, e.g., the TCPS in Canada, it means respect for persons, concern for wellbeing, and justice. In AI research, REBs might need to reassess the notion of consent or the participant's place in the study. As with all research, REBs must ensure informed consent. However, there does not seem to be a clear consensus on the standard for informed consent in AI research. For example, REBs should consider the issue of AI's interpretability in a research consent form, so that results are conveyed transparently and intelligibly.

Another issue REBs must consider here is the role of participants in AI research. Indeed, active participant involvement is not always necessary in AI research to complete the data collection that meets the research objectives; this is often the case when data are collected from connected digital devices or by querying databases. As a consequence, the phenomenon of dematerialization of research participation has been amplified, while data circulation has been facilitated.

Furthermore, AI research and the use of these new technologies call on REBs to be aware of the changes this implies for the research participant, particularly concerns such as the continuous consent process, management of withdrawal, or the duration of participation in the research.

While protecting the individual participant takes center stage in the evaluation of REBs, research with AI may focus more on using data obtained from databases held by governments, private bodies, institutions, or academics. In this context, should concerns for societal wellbeing prevail over the wellbeing of the individual? There does not appear to be a clear consensus on what principles should be invoked to address this concern.

4.4. Limitations

The focal point of AI evaluation in the literature was often privacy protection and data governance rather than AI ethics itself. While data protection and governance are massively important issues, it should be equally important to investigate AI-specific concerns such as validity, explainability, and transparency, rather than leaving them out. In addition, FAIR and the ethics of care, which are starting to become standard approaches in the field, were not invoked in the articles to inform AI ethics in research. This might be due to the scarcity of literature on AI ethics compared to research ethics in general.

Another limitation worth outlining is that our final sample mainly reflected the reality and issues found in healthcare, despite our scoping review being open to all fields using AI. This could be due to the fact that AI is becoming more prominent in healthcare (Davenport and Kalakota, 2019), a field that is also closely linked to the development and presence of research ethics boards (Edwards and Tracey Stone, 2007). Healthcare's outshining of the other fields in our sample could also be attributed to research ethics mostly stemming from multiple medical research incidents throughout history (Aita and Marie-Claire, 2005).

Furthermore, throughout the studied articles, few if any of the countries mentioned were non-affluent. This raises concerns about widening disparities between developed and developing countries. It is therefore vital to acknowledge the asymmetry of legislative and societal norms between countries, to better serve their needs and avoid colonial practices.

Finally, this topic lacks maturity. This study primarily shows that REBs cannot yet find guidance in the literature. Indeed, findings regarding recommendations and practices to adopt in research using AI are scarce, and findings specifically aimed at equipping REBs are scarcer still. Reported suggestions are often about behaviors that governments or researchers should adopt rather than about establishing the proper criteria REBs should follow during their assessments. Therefore, this study does not lead to findings directly applicable to REB practice and should not be used as a tool for REBs.

5. Conclusion

Every field has its ethical challenges and needs, and the results in this article have shown this reality. Indeed, we have navigated through some general issues of AI ethics before investigating AI-ethics issues specific to research. This allowed us to discern what research ethics boards focus on during their evaluations and the limits imposed on them when evaluating AI ethics in research. While AI is a promising field to explore and invest in, many caveats force us to develop a better understanding of these systems. With AI's development, many societal challenges will come our way, whether ongoing issues, new AI-specific ones, or those that remain unknown to us. Ethical reflection is taking a step forward while the adaptation of normative guidelines to AI's reality is still dawdling; this impacts REBs and most stakeholders involved with AI. However, throughout the literature, many suggestions and recommendations were provided, which could allow us to build a framework with a clear set of practices for real-world use.

Author contributions

SBG: data collection, data curation, writing—original draft, and writing—review and editing. PG: conceptualization, methodology, data collection, writing—original draft, writing—review and editing, supervision, and project administration. JCBP: conceptualization, methodology, data collection, writing—review and editing, supervision, project administration, and funding acquisition. All authors contributed to the article and approved the submitted version.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/frai.2023.1149082/full#supplementary-material

Foundr's Blog


Top 10 AI Tools For Academics: Level Up Your Research

If you aren’t using AI to augment your academic research in 2023, you are wasting a lot of time – time that you could free up from grunt work and invest in the more interesting stuff. 

AI tools have proliferated since the launch of GPT-3 and beyond. The sudden, simultaneous rise of so many consumer-facing AI-powered tools has muddied the waters and made it tiresome, if not impossible, to get your hands on the right set of AI tools for researchers.

Fear not! I have hand-picked (I’ve had help, a lot of it, to be honest) 10 of the best AI tools for researchers. Go through the list, combine multiple tools, and create a customized stack of AI tools to help with your research process.

Top 10 AI tools for researchers

We’ll discuss tools powered by artificial intelligence that can augment your research work, save you a lot of time by automating certain tasks, and help you brainstorm new ideas, avoid plagiarism, and streamline the research process.

1. PDFgear Copilot


This PDF editor with a humble-looking website doesn’t even market itself properly as a top-class AI tool for researchers. It just is. PDFgear offers you some very simple functionalities that are going to save you a lot of time in different stages of your research activities.

It will let you upload PDF files and give you a summary of what’s inside. If you think it has missed something, ask, and it will find the specific piece of information you were looking for. You can even ask PDFgear to compress a file, delete some pages, and perform other small edits via chat.

Now, let’s say you have created a paper and you want to check it for errors before submitting it. Run it through PDFgear. The AI copilot will catch your typos and spelling errors and save you from embarrassment. 

PDFgear is free. It doesn’t matter if you want to summarize one file or 500 files. It’s free and instantaneous. 

The only downside is that this tool is available for download only on Windows 10/11. Nonetheless, the website says a macOS version will arrive soon.
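To demystify the summarization step a little: the simplest summarizers are extractive, scoring each sentence against document-wide word frequencies and keeping the top scorers. PDFgear's actual pipeline is not public, so the following Python sketch illustrates the general technique only, not the product:

```python
import re
from collections import Counter

def summarize(text: str, max_sentences: int = 2) -> str:
    """Naive extractive summary: keep the sentences whose words are
    most frequent across the whole document, in original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = sorted(sentences, key=score, reverse=True)[:max_sentences]
    # Emit the chosen sentences in their original document order.
    return " ".join(s for s in sentences if s in top)

doc = ("AI tools can summarize papers. Summaries save researchers time. "
       "Some tools also answer questions about papers.")
summary = summarize(doc, 1)
```

Real products layer language models on top of this kind of retrieval, which is what makes the chat-style "ask about what you missed" interaction possible.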

2. Consensus


Consensus is useful for everyone and invaluable for researchers. It is an AI-powered search engine that takes questions in natural language and finds evidence-based answers from peer-reviewed research papers . Let that sink in.

While Google invests a lot in understanding the intent behind a search and providing the best answer, as a researcher you know how frustrating Google searches can be. You have to wade through an ocean of unverified content to reach evidence-based answers unless you are a master of keyword matching.

  • Consensus gives you access to information spread across 200 million peer-reviewed papers.
  • It cites its sources while answering your questions.
  • Every answer is evidence-based.
  • The tool offers instant summaries and analyses with the help of GPT-4 and other powerful LLMs.

When it comes to using AI to augment research work, this is the real deal. You can use Consensus for research without paying a dime. The free edition will even let you create 3 summaries a month. For $7.99 per month, you can generate unlimited summaries powered by GPT-4.
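That keyword-matching frustration is easy to reproduce in code. The toy ranker below scores abstracts by raw word overlap, the style of matching this article contrasts with evidence-based question answering. Everything here, paper titles included, is invented for illustration and has nothing to do with Consensus's real retrieval model:

```python
def rank_by_keywords(query: str, abstracts: list) -> list:
    """Order abstracts by how many words they share with the query."""
    terms = set(query.lower().split())

    def overlap(abstract: str) -> int:
        return len(terms & set(abstract.lower().split()))

    # Python's sort is stable, so ties keep their original order.
    return sorted(abstracts, key=overlap, reverse=True)

papers = [
    "a randomized trial of vitamin d supplementation in adults",
    "machine learning for protein structure prediction",
    "vitamin d and bone health in older adults",
]
results = rank_by_keywords("does vitamin d help adults", papers)
```

Note how both vitamin D papers tie on overlap even though they answer different questions; that blindness to meaning is exactly why evidence-aware semantic search is such a relief.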

3. Scite

Scite, just like Consensus and PDFgear, has a very simple but elegant offering for researchers, students, and scholars. It tells you where an article has been cited and whether the citing article affirms or disputes it. As a researcher, you already know why this is incredible. I’ll talk about it a little anyway.

Scite helps you see how older research publications have been cited by newer work through a feature called Smart Citations. This feature lets you visualize a network of citations stemming from a single piece of work; it identifies the context of each citation and classifies it as affirming or disputing.

You can take a glance at the visualization and instantly prioritize the publications you want to go through. Scite is a real stress buster that can also open your eyes to new research angles.

These really are exciting times for researchers.
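Conceptually, such a citation network is just a set of labeled edges between papers. A minimal sketch (the paper names and labels below are invented; in the real product the affirming/disputing labels are inferred by machine learning, not hand-assigned):

```python
from collections import Counter

# Each edge: (citing paper, cited paper, label). In a real system the
# label would come from a classifier reading the citing sentence.
citations = [
    ("paper_B", "paper_A", "affirming"),
    ("paper_C", "paper_A", "disputing"),
    ("paper_D", "paper_A", "affirming"),
    ("paper_D", "paper_B", "affirming"),
]

def citation_profile(paper: str) -> Counter:
    """Tally how later work cites `paper`, by label."""
    return Counter(label for _, cited, label in citations if cited == paper)

profile = citation_profile("paper_A")
```

A glance at such a tally (here, two affirming citations versus one disputing for paper_A) is what lets you prioritize which publications to read first.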

4. SciSpace  


SciSpace is an AI-based tool that simplifies difficult concepts for you. So, if you are in a hurry and need to extract the gist of a sizable scientific paper, drop it into SciSpace and let the Copilot create a summary for you.

What if you have read an entire paper and cannot make sense of a specific section? Upload the file at SciSpace and highlight the section you need help with. The tool will break it down into digestible chunks and even take follow-up questions from you.

SciSpace also helps you with your literature reviews by finding related articles. 

5. Wordvice AI


Wordvice AI is a well-rounded AI-powered writing assistant. It proofreads your work, checking your articles for spelling, punctuation, and style errors. It helps you maintain a flow of writing by analyzing sentence structures and offering sentence-level suggestions.

It will help you choose better words and create better sentences, all while ensuring the correctness of spelling, grammar, and style.  

Wordvice has solid use cases in academic research as well as in the corporate sector. It will help marketers write better copy and sales executives compose better emails. 

If you look closely, most of the AI tools for research can actually be repurposed for other functionalities. Similarly, AI apps meant for business can be repurposed for research. 

6. ChatGPT   


ChatGPT is the OG generative AI chatbot. It took the world by storm and reached 1 million users in 5 days. It represents everything that’s cool about chatbots. But can you use it reliably for research? 

The answer is no. ChatGPT is not considered a credible source for conducting research in any field. It comes up with false citations, offers misinformation, and isn’t up-to-date. 

Then why is ChatGPT included in this list?

For two reasons: 

  • It is excellent at taking scattered information and forming comprehensive summaries.
  • Its capability to adapt to a certain style of writing is almost magical.

So what you can do is gather information from credible sources, tie it together neatly with multiple prompts, and use ChatGPT to transform that information into polished prose.


7. Research Rabbit


They call it “Spotify for Papers”, and for good reason. ResearchRabbit allows you to create a collection of papers much like a Spotify playlist. Then, based on what you add to your collection and how you interact with papers, the platform creates recommendations. How neat is that? It’s like the tool is reading your mind to help you read better.

Paper recommendations aside, ResearchRabbit also creates visualizations featuring your favorite articles showing how they’re cited. It gives you jumping-off points to delve deeper into an idea or to explore a different research angle.   

You cannot call Research Rabbit a research assistant in its traditional sense. It is more like a friend that nudges you to try something new – relevant papers in this case. 

8. Bit.ai

Here is another tool that’s never been marketed as an AI tool for researchers. In fact, Bit.ai is a fully-fledged document-sharing tool designed to cater to corporate needs. Nevertheless, it has certain features that researchers who like to collaborate can leverage.

This tool allows you to integrate a vast range of media items with your document. You can add infographics, create polls, and insert charts and surveys. When you embed a link, Bit creates interactive visual cards visible to everyone sharing a document.

You can save all kinds of digital assets on the platform so that you do not have to search for content from different sources.

Now, imagine a scenario where you are part of a team of researchers who are collaborating on a few papers. You can organize and orchestrate the entire collaborative process with the help of Bit.ai. 

9. Zotero

Zotero is a well-rounded AI research assistant. It helps researchers search better, organize better, and write better. Zotero analyzes your browsing patterns and senses when you are doing research. It then helps you find, sort, and save specific articles.

As you write, it recognizes the sources you are referencing and cites them for you in any of the 10,000 citation styles it supports.

It creates a bibliography of all the resources used in your research paper. It synchronizes your data across devices to ensure access from anywhere at any time.
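Mechanically, a citation style is a template applied to structured metadata. Here is a hand-rolled sketch of one author-year style (Zotero's real styles come from shared style definitions, not code like this; the function name and field layout are my own):

```python
def format_author_year(entry: dict) -> str:
    """Render a reference entry in a simple author-year style."""
    authors = ", ".join(entry["authors"])
    return (f"{authors} ({entry['year']}). {entry['title']}. "
            f"{entry['journal']}, {entry['volume']}, {entry['pages']}.")

ref = {
    "authors": ["Davenport T.", "Kalakota R."],
    "year": 2019,
    "title": "The potential for artificial intelligence in healthcare",
    "journal": "Future Healthc. J.",
    "volume": 6,
    "pages": "94-98",
}
line = format_author_year(ref)
```

Supporting thousands of styles then amounts to maintaining thousands of such templates, which is precisely the drudgery these assistants take off your plate.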

10. Semantic Scholar


Semantic Scholar is a search engine that lets you search a database of 213 million scientific papers for free. It is somewhat similar to Consensus, except that it is completely free. The platform is developed by the Allen Institute for AI and aims to make scientific literature accessible to all scholars.

As a researcher, you can use Semantic Scholar to:

  • Stay up-to-date with the latest scientific breakthroughs
  • Extract meaning and identify connections within papers
  • Find the highly influential citations at a glance
  • Create an online library to organize all your material
  • Get paper recommendations

AI tools for researchers that didn’t make the list

 An AI research assistant that helps you find papers, extract meaning, and summarize articles.

A sentence structure checker and proofreader designed with scholastic compositions in mind.

A tool for conversing with PDFs: upload a file to get summaries, ask questions, and surface insights.

iThenticate

A plagiarism-checker designed specifically for research works and scholastic literature.

Scholarcy is a platform driven by AI that helps you analyze scientific articles, extract key information, create lay summaries, and more.  

Maintaining academic integrity while using AI tools for research

While using AI-powered research tools is hardly a matter of choice anymore, it is important to maintain the ethics and standards we associate with academic research. Despite the use of cutting-edge AI, your research procedures should be transparent. 

Best practices for AI-powered academic research

  • Mention the use of AI in your research and give credit to the developers.
  • Discuss your use of AI tools and how their usage may have impacted the research outcome. 
  • Make sure that AI tools are used adhering to data privacy and informed consent requirements.
  • Do not use AI-generated content in your research work without due attribution.
  • Subject AI-assisted work to rigorous peer review.

FAQs about best AI tools for researchers

Does using AI tools for research raise any ethical concerns?

Yes, there can be concerns regarding data privacy, biased outcomes, attribution of credit, and plagiarism. Researchers must be mindful of these issues when involving AI in their research strategy.

Can AI research assistants be used without AI expertise?

Most AI research assistants come with a conversational AI model that doesn’t require any expertise to use.

Is there an AI-based tool for historical research?

You can use general-purpose AI text analyzers to summarize large volumes of historical text. While there are AI-powered applications trained on historical data, they are mostly gamified and cannot be directly used in historical research.


Saumick Basu

Technical Writer & AI Researcher @ Foundr.AI

Saumick has been writing on technology for half a decade now. He loves talking about cybersecurity, AI, and enjoys diving deep into all disruptive tech. When not writing about tech, he writes songs and plays the drums.



AI Can Create Unique Research Questions


Research questions are the cornerstone of any research report.

A good research question should be clear, purposeful, and able to be answered in the research report. Every researcher struggles to find questions that are on-topic and relevant to the research.

The research questions generator makes it easier for you to come up with research questions. This article takes a closer look at research questions and what makes a great research question.


What Is a Research Question?

A research question is a question formulated for scientific research. It delineates the scope of your research and determines a researcher’s approach to the identified problems.

Research questions can take different forms, reflecting the researcher’s interest. A descriptive research question aims to describe or measure a subject of the researcher’s interest. An explanatory research question, by contrast, aims to expand existing research on a topic.

What Makes a Good Research Question

A good research question must have certain qualities, irrespective of whether the study is a short or long one. Below are the most important qualities expected of your research question.

Clarity

The question has to be clear, devoid of vagueness and ambiguity. Specificity is vital: a specific question ensures the answer comes back with accuracy.

Conciseness

You shouldn’t bore readers with long research questions. The more concise your questions are, the more likely you’ll produce quality research. Phrase the question so that it doesn’t take too much time to understand.

Debatability

When drafting it, choose your words so that your idea remains debatable. Readers shouldn’t be able to predict the result of your research simply by reading the question; leave it open-ended and debatable.

Use an AI Research Questions Generator Today

Instead of spending hours trying to gather your ideas for a research question, an AI tool can help you out. An AI-powered research question generator is typically part of a larger AI writing tool.

AI writing tools help students, academics, and researchers to make their writing process faster. From brainstorming to writing and editing, these tools are always ready to work for you.

Using these tools is no longer a luxury but a necessity. For example, you can create different types of content you want with only a click. To use an AI tool to generate research questions, you’ll have to provide some information. The information includes:

  • Your keyword (terms relevant to your paper)
  • A bit of context
  • Your preferred tone

The tool will typically generate multiple options. Read through and choose the one that best suits you. You can keep repeating the steps until you find the right one you can use.
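As a rough illustration of the generate-review-repeat loop described above, here is a minimal sketch of a template-based question generator. The function name, templates, and tone handling are all invented for illustration; real AI tools use large language models rather than fixed templates.

```python
# Toy sketch of an AI question generator's inputs and outputs.
# Templates and the tone tweak are illustrative assumptions.
def generate_research_questions(keyword, context, tone, n=3):
    """Return n candidate research questions built from simple templates."""
    templates = [
        f"What are the characteristics of {keyword} in {context}?",   # descriptive
        f"How does {keyword} differ across settings within {context}?",  # comparative
        f"Why does {keyword} influence outcomes in {context}?",       # explanatory
    ]
    candidates = templates[:n]
    if tone == "formal":
        # Placeholder for a real tone adjustment step
        candidates = [q.capitalize() for q in candidates]
    return candidates

options = generate_research_questions(
    keyword="remote work", context="software teams", tone="formal"
)
# Read through the options, keep the best fit, and regenerate otherwise.
chosen = options[0]
```

In a real tool the review step matters most: you keep regenerating until one candidate matches your intent, rather than accepting the first output.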

Final Words

AI writer tools are new and exciting. They can free you of many problems related to creating written pieces.

With AI, you don't have to search the ends of the earth for writing ideas anymore. You can trust AI tools to deliver because they learn from real human writing: they identify patterns and generate human-sounding output for users.


Abir Ghenaiet

Abir is a data analyst and researcher. Among her interests are artificial intelligence, machine learning, and natural language processing. As a humanitarian and educator, she actively supports women in tech and promotes diversity.



10 Research Question Examples to Guide your Research Project

Published on October 30, 2022 by Shona McCombes . Revised on October 19, 2023.

The research question is one of the most important parts of your research paper, thesis or dissertation. It's important to spend some time assessing and refining your question before you get started.

The exact form of your question will depend on a few things, such as the length of your project, the type of research you're conducting, the topic, and the research problem. However, all research questions should be focused, specific, and relevant to a timely social or scholarly issue.

Once you've read our guide on how to write a research question, you can use these examples to craft your own.

Note that the design of your research question can depend on what method you are pursuing. Here are a few options for qualitative, quantitative, and statistical research questions.




Many Americans Think Generative AI Programs Should Credit the Sources They Rely On

Overall, 54% of Americans say artificial intelligence programs that generate text and images, like ChatGPT and DALL-E, need to credit the sources they rely on to produce their responses. A much smaller share (14%) says the programs don’t need to credit sources, according to a new Pew Research Center survey. About a third say they’re not sure on this question.

Pew Research Center published this analysis as part of its ongoing work to understand attitudes about artificial intelligence. This analysis draws on a survey of 10,133 U.S. adults conducted from Feb. 7 to 11, 2024.

Everyone who took part in the survey is a member of the Pew Research Center’s American Trends Panel (ATP), an online survey panel that is recruited through national, random sampling of residential addresses. This way, nearly all U.S. adults have a chance of selection. The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education and other categories. Read more about the  ATP’s methodology .
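The weighting step described above can be illustrated with a toy post-stratification sketch, in which each respondent receives a weight equal to their group's population share divided by its sample share. The groups and shares below are invented for illustration, not the ATP's actual weighting categories.

```python
# Toy post-stratification: reweight a sample so group shares match
# known population shares. Groups and shares are invented.
from collections import Counter

def poststratify(sample_groups, population_shares):
    """Return one weight per respondent: population share / sample share."""
    n = len(sample_groups)
    sample_share = {g: c / n for g, c in Counter(sample_groups).items()}
    return [population_shares[g] / sample_share[g] for g in sample_groups]

sample = ["college", "college", "college", "no_college"]   # 75% / 25% in sample
population = {"college": 0.35, "no_college": 0.65}         # assumed true shares
weights = poststratify(sample, population)
# The underrepresented group is weighted up, so weighted shares
# now match the population shares.
```

Real survey weighting (raking) adjusts many dimensions at once, but the intuition is the same: overrepresented groups are weighted down and underrepresented ones up.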

Here are the  questions used for the analysis , along with responses, and its  methodology .

A bar chart showing that 54% of adults say generative AI programs need to credit their sources.

A separate Pew Research Center analysis finds growing public engagement with ChatGPT – one of the most well-known examples of generative AI – especially among young people.

Generative AI programs work by reviewing large amounts of information , such as the works of an artist or news organization. That allows them to generate responses when users ask questions.

This process has spurred lawsuits from authors, artists and news organizations , who argue that this is an unauthorized use of copyrighted material. But some technology companies argue that this is fair use under copyright law and that the programs provide a clear public benefit.
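The credit-the-sources behavior most respondents favor can be sketched as a toy retrieval step that returns an answer together with the outlet it came from. The corpus and the word-overlap matching rule here are illustrative assumptions, not how any production system works.

```python
# Toy "answer with attribution": pick the best-matching passage from a
# small corpus and attach the source it relies on. Corpus is invented.
def answer_with_credit(question, corpus):
    """Return the best-matching passage plus the source it came from."""
    q_words = set(question.lower().split())
    best = max(
        corpus,
        key=lambda doc: len(q_words & set(doc["text"].lower().split())),
    )
    return {"answer": best["text"], "credit": best["source"]}

corpus = [
    {"source": "Example Gazette",
     "text": "the city budget grew five percent this year"},
    {"source": "Sample Journal",
     "text": "sea levels rose faster than models predicted"},
]
result = answer_with_credit("How much did the city budget grow?", corpus)
# result["credit"] names the outlet whose reporting the answer relies on.
```

Production generative models do not retrieve passages this way, which is precisely why attributing their outputs to specific sources is technically and legally contested.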

Our survey finds that the public consistently says AI programs should credit sources across seven examples of content they could generate.

A bar chart showing that Americans consistently say generative AI programs should give credit to their sources.

For instance, 75% say AI programs should have to credit the sources they rely on if they provide information that matches what a journalist wrote nearly word-for-word. Just 6% say they shouldn’t have to credit their sources in this scenario, while 19% say they’re not sure.

Majorities of U.S. adults (67% each) also see a need for crediting sources if AI programs generate images that imitate the style of a current artist or text that imitates the style of a current author.

Whether an author is living or dead has little impact on public attitudes: 65% say credit is needed if AI programs imitate the writing style of a famous author who died many years ago.

Similarly, about six-in-ten say generative AI programs should have to credit the sources they rely on if they draft a movie script in the style of a popular movie. Hollywood screenwriters recently secured limits on using AI in script writing , as part of a larger labor agreement.

The view that credit is needed also extends to more general types of information. For instance, 60% of Americans say AI programs should have to credit the sources they use if they summarize information about the U.S. population. And 61% say credit is needed if these programs provide information that was reported by many different news organizations.

How often do Americans think they interact with AI?

A bar chart showing that adults with higher levels of education report more frequent interaction with AI.

Over the years, Center surveys have explored public views on multiple aspects of artificial intelligence , including overall awareness of and engagement with these technologies.

Our new survey finds that 22% of Americans say they interact with artificial intelligence almost constantly or several times a day. Another 27% say they interact with AI about once a day or several times a week. Half of Americans think they interact with AI less often.

Adults with higher levels of education are more likely than those with less education to say they interact with AI frequently. For instance, 63% of postgraduates and 57% of college graduates say they interact with AI at least several times a week. That compares with 50% of those with some college education and 36% of those with a high school diploma or less education.

Younger Americans also are more likely than their older peers to say they interact with AI often. Majorities of those ages 18 to 29 (56%) and 30 to 49 (54%) say they interact with AI at least several times a week. Smaller shares of those ages 50 to 64 (46%) and 65 and older (37%) say the same.

While AI now powers many widely used functions – like personalized online shopping recommendations – its presence may not always be visible to all Americans. For instance, only 30% of U.S. adults correctly identify the presence of AI across six examples in a recent survey about AI awareness . 



About Pew Research Center Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. It conducts public opinion polling, demographic research, media content analysis and other empirical social science research. Pew Research Center does not take policy positions. It is a subsidiary of The Pew Charitable Trusts .

Research: Leaders Undervalue Creative Work from AI-Managed Teams

  • Shane Schweitzer
  • David De Cremer


Beware the unintended consequences of using algorithmic tools for management.

Because of AI's ability to learn from vast amounts of data and maximize efficiency, companies have predicted that humans working with it will be able to free up their time and expand their creative efforts, thereby driving greater innovation. But, despite the enthusiasm of tech gurus and companies alike, is this really how adopting AI tools will play out? A series of experiments examining how algorithmic tools changed the consideration and resources workers were given for creative and innovative work suggests that these tools, specifically the algorithmic tools that oversee employee productivity, could actually undercut employees' ability to do this work, and that companies that deploy these tools haphazardly could find their optimism souring.

How will creative work be impacted by artificial intelligence (AI)? With AI’s immense and growing capabilities — it can do everything from structuring work schedules, managing administrative tasks , and giving advice to decision-makers — industry thought leaders are understandably optimistic about its potential. Much of this optimism hinges on the claim that, because of AI’s ability to learn from vast amounts of data and maximize efficiency, humans working with it will be able to free up their time and expand their creative efforts, thereby driving greater innovation. Numerous analyses and corporate reports have been written in favor of this claim.


  • SS Shane Schweitzer is an Assistant Professor of Management and Organizational Development at the D’Amore-McKim School of Business.
  • David De Cremer is a professor of management and technology at Northeastern University and the Dunton Family Dean of its D’Amore-McKim School of Business. His website is daviddecremer.com .


Will Knight

Apple’s MM1 AI Model Shows a Sleeping Giant Is Waking Up


While the tech industry went gaga for generative artificial intelligence , one giant has held back: Apple. The company has yet to introduce so much as an AI-generated emoji, and according to a New York Times report today and earlier reporting from Bloomberg, it is in preliminary talks with Google about adding the search company’s Gemini AI model to iPhones .

Yet a research paper quietly posted online last Friday by Apple engineers suggests that the company is making significant new investments into AI that are already bearing fruit. It details the development of a new generative AI model called MM1 capable of working with text and images. The researchers show it answering questions about photos and displaying the kind of general knowledge skills shown by chatbots like ChatGPT. The model's name is not explained but could stand for MultiModal 1.

MM1 appears to be similar in design and sophistication to a variety of recent AI models from other tech giants, including Meta's open source Llama 2 and Google's Gemini. Work by Apple's rivals and academics shows that models of this type can be used to power capable chatbots or build "agents" that can solve tasks by writing code and taking actions such as using computer interfaces or websites. That suggests MM1 could yet find its way into Apple's products.

“The fact that they’re doing this, it shows they have the ability to understand how to train and how to build these models,” says Ruslan Salakhutdinov , a professor at Carnegie Mellon who led AI research at Apple several years ago. “It requires a certain amount of expertise.”

MM1 is a multimodal large language model, or MLLM, meaning it is trained on images as well as text. This allows the model to respond to text prompts and also answer complex questions about particular images.

One example in the Apple research paper shows what happened when MM1 was provided with a photo of a sun-dappled restaurant table with a couple of beer bottles and also an image of the menu. When asked how much someone would expect to pay for “all the beer on the table,” the model correctly reads off the correct price and tallies up the cost.
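A multimodal prompt of this kind, interleaving images and text, can be sketched as a simple data structure. The format below is an assumption for illustration, not Apple's actual input format.

```python
# Illustrative sketch of the interleaved image-and-text input an MLLM
# consumes. The dict format is an invented placeholder, not MM1's API.
def build_multimodal_prompt(parts):
    """Flatten text strings and image references into one ordered sequence."""
    sequence = []
    for part in parts:
        if isinstance(part, str):
            sequence.append({"type": "text", "content": part})
        else:  # assume a dict like {"image": "menu.jpg"}
            sequence.append({"type": "image", "content": part["image"]})
    return sequence

# Mirrors the beer-and-menu example: two images followed by a question.
prompt = build_multimodal_prompt([
    {"image": "table_photo.jpg"},
    {"image": "menu.jpg"},
    "How much would all the beer on the table cost?",
])
```

In a real MLLM, each image would be encoded into embedding vectors that sit in the same sequence as the text tokens, which is what lets the model reason over both at once.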

When ChatGPT launched in November 2022, it could only ingest and generate text, but more recently its creator OpenAI and others have worked to expand the underlying large language model technology to work with other kinds of data. When Google launched Gemini (the model that now powers its answer to ChatGPT ) last December, the company touted its multimodal nature as beginning an important new direction in AI. “After the rise of LLMs, MLLMs are emerging as the next frontier in foundation models,” Apple’s paper says.

MM1 is a relatively small model as measured by its number of “parameters,” or the internal variables that get adjusted as a model is trained. Kate Saenko , a professor at Boston University who specializes in computer vision and machine learning, says this could make it easier for Apple’s engineers to experiment with different training methods and refinements before scaling up when they hit on something promising.
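To make "parameters" concrete, here is a back-of-the-envelope count for a toy transformer-style model: embedding weights plus per-layer attention and feed-forward weights. The dimensions are invented and the formula is a simplification that ignores biases, layer norms, and output heads.

```python
# Rough parameter count for a toy transformer configuration.
# Dimensions are invented; this is not MM1's architecture.
def transformer_param_count(d_model, n_layers, vocab_size, ffn_mult=4):
    """Approximate count: embeddings + per-layer attention and FFN weights."""
    embed = vocab_size * d_model
    attn = 4 * d_model * d_model               # Q, K, V, and output projections
    ffn = 2 * d_model * (ffn_mult * d_model)   # up- and down-projection
    return embed + n_layers * (attn + ffn)

small = transformer_param_count(d_model=512, n_layers=8, vocab_size=32000)
large = transformer_param_count(d_model=4096, n_layers=48, vocab_size=32000)
# Width and depth dominate the count, so a narrower, shallower model is far
# cheaper to retrain, which makes small-scale experimentation practical.
```

This is why working at small scale first is attractive: each training run on the small configuration costs a fraction of what the large one would.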

Saenko says the MM1 paper provides a surprising amount of detail on how the model was trained for a corporate publication. For instance, the engineers behind MM1 describe tricks for improving the performance of the model including increasing the resolution of images and mixing text and image data. Apple is famed for its secrecy, but it has previously shown unusual openness about AI research as it has sought to lure the talent needed to compete in the crucial technology.


Saenko says it’s hard to draw too many conclusions about Apple’s plans from the research paper. Multimodal models have proven adaptable to many different use cases. But she suggests that MM1 could perhaps be a step toward building “some type of multimodal assistant that can describe photos, documents, or charts and answer questions about them.”

Apple's flagship product, the iPhone, already has an AI assistant: Siri. The rise of ChatGPT and its rivals has quickly made the once revolutionary helper look increasingly limited and outdated. Amazon and Google have said they are integrating LLM technology into their own assistants, Alexa and Google Assistant. Google allows users of Android phones to replace the Assistant with Gemini.

Reports from The New York Times and Bloomberg that Apple may add Google's Gemini to iPhones suggest Apple is considering expanding the strategy it has used for search on mobile devices to generative AI. Rather than develop web search technology in-house, the iPhone maker leans on Google, which reportedly pays more than $18 billion to make its search engine the iPhone default. Apple has also shown it can build its own alternatives to outside services, even when it starts from behind. Google Maps used to be the default on iPhones, but in 2012 Apple replaced it with its own maps app.

Apple CEO Tim Cook has promised investors that the company will reveal more of its generative AI plans this year. The company faces pressure to keep up with rival smartphone makers, including Samsung and Google, that have introduced a raft of generative AI tools for their devices.

Apple could end up tapping both Google and its own, in-house AI, perhaps by introducing Gemini as a replacement for conventional Google Search while also building new generative AI tools on top of MM1 and other homegrown models. Last September, several of the researchers behind MM1 published details of MGIE , a tool that uses generative AI to manipulate images based on a text prompt.

Salakhutdinov believes his former employer may focus on developing LLMs that can be installed and run securely on Apple devices. That would fit with the company’s past emphasis on using “on-device” algorithms to safeguard sensitive data and avoid sharing it with other companies. A number of recent AI research papers from Apple concern machine-learning methods designed to preserve user privacy. “I think that's probably what Apple is going to do,” he says.

When it comes to tailoring generative AI to devices, Salakhutdinov says, Apple may yet turn out to have a distinct advantage because of its control over the entire software-hardware stack. The company has included a custom “neural engine” in the chips that power its mobile devices since 2017, with the debut of the iPhone X. “Apple is definitely working in that space, and I think at some point they will be in the front, because they have phones, the distribution.”

In a thread on X, Apple researcher Brandon McKinzie, lead author of the MM1 paper wrote : “This is just the beginning. The team is already hard at work on the next generation of models.”




AI Campaigns and Case Studies

By Joanna Fragopoulos, March 29, 2024


Artificial intelligence (AI) and its applications are at the forefront of discussions in many industries and fields, from marketing to tech to healthcare to education to law. Implementing and leveraging these tools in a way that genuinely helps users can be challenging for teams. However, when used well, AI can help save time analyzing data, personalize content and information, enhance creative ideas, and find ways to promote diversity, equality, inclusion, and belonging (DEIB). Below are case studies and campaigns that successfully utilized AI.

Leveraging Chatbots and ChatGPT

Zak Stambor, senior analyst of retail and e-commerce at Insider Intelligence, discussed AI at an ANA event , stating that it is "very clear that marketers will be spending more of their budgets on AI-infused productivity tools in the future." Stambor cited two companies utilizing chatbots to help consumers find what they need. For instance, Instacart started its Ask Instacart tool to help its users "create and refine shopping lists by allowing them to ask questions like, 'What is a healthy lunch option for my kids?' Ask Instacart then provides potential options based on users' past buying habits and provides recipes and a shopping list once users have selected the option they want to try," according to the ANA event recap . Further, Mint Mobile used ChatGPT to write an ad which it later released. The recap , however, stated that the company's CMO "emphasized that there were limitations with the technology and stressed the importance of understanding a brand's DNA before using generative AI. He recommended approaching ChatGPT in the same way successful marketers approach social media."

Smoothing the Request for Proposal (RFP) Process

Creating campaigns that genuinely interest and engage people is, of course, every marketer's dream. ZS, a consulting and technology firm focused on transforming global healthcare, worked with Stein IAS to create its campaign " Data Connects Us ," which provided client services teams with content, case studies, reports, ZS's Future of Health survey, and data to help with the RFP process. The campaign leveraged AI to create "futuristic AI generated images — such as a futuristic hospital — and coupled it with copy communicating how ZS is positioned to help connect data with people and support real innovation. By leveraging emotionally engaging, distinct, and memorable creative, ZS was able to invite consumers to learn more about the company," as described in the ANA event recap .

Fostering DEIB

Google sought to promote DEIB practices as well as combat stereotypes and bias; the company was able to do this through the use of AI in the photography space. In 2018, the company established the Google Image Equity initiative, which enlisted experts on "achieving fairness, accuracy, and authenticity in camera and imaging tools," according to the ANA event recap . This resulted in Real Tone, which is a "collection of improvements focused on building camera and imaging products that worked equally for people of color" and became a consideration for people potentially buying a Google Pixel. As part of this process, the company collaborated with Harvard professor Dr. Ellis Monk; together, they released a 10-shade skin tone scale that was more inclusive of diverse skin tones. This scale helps "train and evaluate AI models for fairness, resulting in products that work better for people of all skin tones."

Unearthing Creativity

Michelob ULTRA partnered with agency CB New York to create a virtual tennis match with John McEnroe, both in the past and present. McEnroe's past self was created using motion-capture technology and AI. Moreover, the brand also created a campaign called "Dreamcaster" with Cameron Black, who has been blind since birth, who longed to be a sports broadcaster "but felt he would never get the opportunity due to his disability," as explained in the ANA event recap . The recap went on to explain that Michelob worked with Black for an entire year to "create a spatial audio portal, complete with 62 surround sound speakers and more than 1,000 unique sounds, that 'placed' him at center court and told him what was occurring during a basketball game in real time. The portal featured a vest, designed with its own haptic language, to further assist Black in following the action by allowing him to feel the game's action. After 12 months of development and training, Black became the first-ever visually impaired person to broadcast an NBA game on live TV."

Deepening Personalization

To enhance personalization, Panera Bread created a loyalty program called "My Panera" in 2010. The program gives customers rewards based on visits; the rewards are personalized, which boosts the program's engagement. Recently, Panera worked with ZS Associates to utilize machine learning to create an automated "best next action" program to enable "true one-to-one interactions with My Panera members," as described in the ANA event recap , which went on to say that the company uses a "time-based criterion, combine[s] it with several other variables identified and sorted by AI, and serve[s] more than 100 different offers to the same audience. Panera can also leverage the technology to develop multiple email subjects or coupon headlines, make product recommendations based on past purchases, and even customize colors and copy within the communication to suit the sensibilities of the customer being targeted. Overall, there are more than 4,000 unique combinations of offer and product recommendations that a customer can receive."
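A "best next action" pick of this kind can be sketched as a recency filter combined with model-style scores, mirroring the time-based criterion plus AI-sorted variables described above. The offers, eligibility windows, and scores below are invented for illustration, not Panera's actual program.

```python
# Toy next-best-offer selection: filter by a recency window (the
# time-based criterion), then pick the highest model score.
# Offers and scores are invented placeholders.
def next_best_offer(days_since_visit, offers):
    """Filter offers by a recency window, then pick the highest-scored one."""
    eligible = [o for o in offers
                if o["min_days"] <= days_since_visit <= o["max_days"]]
    if not eligible:
        return None
    return max(eligible, key=lambda o: o["score"])

offers = [
    {"name": "free_bagel",         "min_days": 0,  "max_days": 7,  "score": 0.4},
    {"name": "we_miss_you_coupon", "min_days": 14, "max_days": 90, "score": 0.9},
    {"name": "new_menu_promo",     "min_days": 0,  "max_days": 90, "score": 0.6},
]
pick = next_best_offer(days_since_visit=30, offers=offers)
# A lapsed customer (30 days) gets the win-back coupon, not the generic promo.
```

In a real program the scores would come from a trained model per customer, and the same audience could receive thousands of distinct offer combinations, as the recap notes.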

The views and opinions expressed in Industry Insights are solely those of the contributor and do not necessarily reflect the official position of the ANA or imply endorsement from the ANA.

Joanna Fragopoulos is a director of editorial and content development at ANA.



KPMG U.S. survey: Executives expect generative AI to have enormous impact on business, but unprepared for immediate adoption

Generative AI is here, and executives expect it to have an enormous impact on business, but most say they are unprepared for immediate adoption, according to a new survey by KPMG U.S. 

Almost two-thirds (65%) of the 225 U.S. executives [1]  surveyed in the last two weeks of March believe generative AI will have a high or extremely high impact on their organization in the next three to five years, far above every other emerging technology. Yet nearly the same percentage, 60%, say they are still a year or two away from implementing their first generative AI solution.  

While generative AI has rapidly entered the vocabulary of executives and boards given the accessibility of the technology, organizations are challenged to keep pace. Fewer than half of respondents say they have the right technology, talent, and governance in place to successfully implement generative AI. Respondents anticipate spending the next 6-12 months focused on increasing their understanding of how generative AI works, evaluating internal capabilities, and investing in generative AI tools. 

“CEOs and board members must personally invest time in understanding generative AI, and they must demand the same of their teams,” said Atif Zaim, National Managing Principal, Advisory. “They have a responsibility to understand how generative AI and other emerging technologies will change their business and their workforce and to ensure they have sustainable and responsible innovation strategies that will provide a competitive advantage and maintain trust in their organization.”  

Companies often find it difficult to get the value they want from emerging technologies when they take a siloed approach. Yet 68% of respondents have not appointed a central person or team to organize their response to the emergence of generative AI. For the time being, the IT function is leading the charge. 

The respondents, who are from businesses with revenue of $1 billion and above, cite cost and the lack of a clear business case as the two biggest barriers to implementing generative AI. Cybersecurity and data privacy are currently the most top-of-mind concerns for leaders, at 81% and 78%, respectively.

“Generative AI has the potential to be the most disruptive technology we’ve seen to date,” said Steve Chase, U.S. Consulting Leader. “It will fundamentally change business models, providing new opportunities for growth, efficiency, and innovation, while surfacing significant risks and challenges. For leaders to harness the enormous potential of generative AI, they must set a clear strategy that quickly moves their organization from experimentation into industrialization.” 

A transformative technology and competitive differentiator 

According to the KPMG survey, 77% of executives believe that generative AI will have a bigger impact on broader society in the next three to five years than any other emerging technology.

“2023 has proven to be a game changer, moving AI from the minds of a few thousand data scientists into the hands of 100 million people. This dramatic evolution erupts at the intersection of several slower-moving trends: 1) the plummeting cost of computing power, driven by gaming chips and hyperscale cloud providers; 2) the massive amounts of internet-based training data readily available; and 3) a handful of companies willing to spend billions of dollars to build and run massive ‘AI learning factories’ (aka large language models),” said Pär Edin, Innovation Leader for Deal Advisory & Strategy.

Executives expect the impact to be highest in enterprise-wide areas — driving innovation, customer success, tech investment, and sales and marketing. 

When it comes to the impact of generative AI on the enterprise level, 78% of respondents say that it will have a high or extremely high impact on driving innovation, followed closely by technology investment at 74%, and customer success at 73%. And they believe it will have the greatest transformational impact on research and development, product development, and operations — also the functions where the largest number of respondents are currently exploring the implementation of generative AI.  

“There is no doubt that generative AI could be truly revolutionary for both businesses and society. More than two-thirds of leaders say that changing customer demands and market competition are among the largest factors influencing the need for generative AI,” said U.S. Technology Consulting Leader, Todd Lohr. “There is a true first-mover advantage with the pace of generative AI innovation. Winning organizations will establish their competitive advantage by taking decisive action now, while ensuring they are taking the proper steps toward mitigating risk and implementing responsible AI.”

Key Findings from the Survey:

  • 65% believe generative AI will have a high or extremely high impact on their organization in the next 3-5 years
  • 60% say they are 1-2 years away from implementing their first generative AI solution
  • 72% agree that generative AI can play a critical role in building and maintaining stakeholder trust
  • 45% say it can have a negative impact on their organizations' trust if the appropriate risk management tools are not implemented
  • Executives are most optimistic about the opportunities to increase productivity (72%), change the way people work (65%), and encourage innovation (66%)

Industries diverge 

Executive prioritization of generative AI varies significantly by sector. [2]  Most executives (71%) in technology, media, and telecommunications (TMT) and 67% in healthcare and life sciences (HCLS) feel they have appropriately prioritized generative AI, while only 30% in consumer and retail say it is a priority.

Furthermore, 60% of respondents in TMT say that researching generative AI applications is a high or extremely high priority in the next 3-6 months, the highest of all industries. 

Respondents from TMT and financial services are the most likely to say that the recent focus on tools such as ChatGPT has had a large impact on their digital and innovation strategies.

Maintaining trust through risk management and responsible AI    

The majority of executives (72%) agree that generative AI can play a critical role in building and maintaining stakeholder trust, but almost half (45%) say that generative AI can have a negative impact on their organization’s trust if the appropriate risk management tools are not implemented. 

Most executives (79%) also believe that organizations that leverage generative AI will have a competitive advantage in risk management compared with their peers.  

Along with their overall strategies for generative AI, organizations are still in the early stages of designing and implementing risk and responsible use programs, despite the expected magnitude of the impact generative AI will have on their organizations and customers.  

Only 6% of organizations report having a dedicated team in place for evaluating risk and implementing risk mitigation strategies as part of their overall generative AI strategy. Another 25% of organizations are putting risk management strategies in place, but it is a work in progress.  

Meanwhile, nearly half (47%) say they are in the initial stages of evaluating risk and mitigation strategies, and nearly a quarter (22%) have not yet started evaluating risk and mitigation strategies.  

Similarly, only 5% report having a mature responsible AI governance program in place, and nearly half (49%) say they intend to stand one up but have not done so yet. Another 19% say an AI governance program is in process or has been partially implemented. Interestingly, more than a quarter (27%) say they do not currently see a need, or have not reached enough scale, to merit a responsible AI governance program.

“Generative AI, like many technologies, creates great opportunities for organizations,” said KPMG U.S. Trusted Imperative Leader Emily Frolick. “However, the ease of use and open nature of generative AI amplifies the risk. As organizations are exploring potential use cases, giving attention to the risks or exposures associated with generative AI should be equally prioritized.”

The critical talent questions  

Respondents largely seem to believe that we are heading into a new era for the workforce, one that combines the work of humans with generative AI. Executives are most optimistic about the opportunities to increase productivity (72%), change the way people work (66%), and encourage innovation (62%). However, they are also mindful of the potential negative implications.

Almost 4 in 10 executives (39%) believe that generative AI could lead to decreased social interactions and human connections with coworkers, which may have negative impacts on the workforce. Another 32% are concerned that they will see increased mental health issues among their workforce due to the stress of job loss and uncertainty about the future.

Companies are focused on a hybrid approach of hiring and capability building among their teams across industry and function.  

The bottom line 

Generative AI has the potential to transform businesses across industries. However, executives still see major hurdles to adoption, such as determining clear business cases and installing the right technology, talent, and governance.  

As generative AI evolves, executives must prioritize its rapid implementation to stay competitive, while ensuring that it is deployed in a responsible and ethical manner. 

This survey was completed as part of KPMG U.S.’ research initiative focused on generative AI and the speed of modern technology. In the coming weeks, more data will be released with detailed analyses and a deeper dive into functional and industry views.

[1] The survey polled 300 global C-suite and senior executives, of which 225 were US-based.

[2] Industry data is global, and based on a sample size of >50.


